PDSF Shutdown Is The End Of The Beginning For Cloud Computing


Posted on Thursday, May 9, 2019

Late last month, Berkeley Lab powered down its Parallel Distributed Systems Facility (PDSF) cluster for the final time. Relatively unknown outside academia, the PDSF had been an important tool for advanced scientific research since the 1990s.

Much like a traditional mainframe, the PDSF cluster could be accessed remotely by researchers from across the world, giving them processing power that was unavailable in their own data centers. As a result, the PDSF was instrumental in many award-winning scientific discoveries, including analyzing the structure of neutrinos, proving the existence of the Higgs boson and calculating the accelerating expansion rate of the universe.

Not a supercomputer

The PDSF cluster was relatively small compared with a supercomputer, but its main strength was its flexibility. In many ways, the cluster was well ahead of its time, offering cloud-like features and functions long before AWS even existed.

The PDSF cluster was built around the principle of shared capacity: researchers could co-opt unused processing power from a central pool to run their experiments. And the custom Chroot OS (CHOS) was an early forerunner of containerization, allowing researchers to create and control their own operating environments, with data securely segregated from other users of the shared platform.
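To make that idea concrete, here is a minimal sketch of chroot-based isolation, the general Unix technique that CHOS built on. It is illustrative only, not PDSF's actual CHOS code: the image path is hypothetical, and the script must run as root on a Unix system.

```python
# Minimal sketch of chroot-based environment isolation -- the general Unix
# technique behind CHOS, not PDSF's actual implementation. Requires root.
import os

def enter_environment(root_path: str) -> None:
    """Confine this process to an alternate OS image rooted at root_path."""
    os.chroot(root_path)  # re-root this process's view of the filesystem
    os.chdir("/")         # step inside the new root

if __name__ == "__main__":
    # Hypothetical path to a prepared OS image (e.g. a Scientific Linux tree).
    enter_environment("/srv/images/sl6")
    # From here on, the process sees the image as "/": its own operating
    # environment, even though the hardware is shared with other users.
    os.execv("/bin/bash", ["/bin/bash"])
```

In practice, a tool like CHOS wraps this kind of switch into the login and batch workflow, so each user lands in their chosen environment automatically rather than running a script by hand.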

A commodity-based cluster

The PDSF cluster was also constructed from off-the-shelf hardware, helping to reduce costs in both system design and spares. A 1994 report on the cluster noted that it had been built from non-proprietary systems and equipment from different manufacturers “at a fraction of the cost” of supercomputers. Hardware was typically upgraded twice a year to keep up with demand, but the commodity approach kept these costs manageable.

In many ways, the concept of commodity computing lives on with modern software-defined storage and networking, and cloud data centers themselves.

After more than two decades of service, however, the PDSF cluster has finally been retired. Researchers' projects have since been migrated to the Cori supercomputer, which will now provide the compute needed to probe the most complex scientific mysteries. And with the increased processing power of a true supercomputer, future findings are likely to be even more impressive.

For more information on how CDS can keep your non-proprietary systems and multi-vendor equipment running in peak condition, contact us today!

