Using redeployed hardware to meet the big data challenge

Posted on Tuesday, August 8, 2017

For most enterprises, big data analytics is now a key aspect of strategic planning. And as IoT deployments reach scale, the need to process enormous data sets becomes even more pressing. Indeed, supercomputer-class processing may even become mainstream for large businesses in the near future.

Previously, the cost of specifying, building and deploying supercomputers meant that only the very largest organizations even considered such systems. But news from Durham University in England may be about to change that.

Introducing COSMA6

Realizing that its current supercomputer was unable to cope with increased theoretical modelling workloads, Durham's Institute for Computational Cosmology (ICC) began looking for an upgrade or replacement. At the same time, the Hartree Centre at Daresbury was looking to retire its own HPC system.

Realizing that this redundant system was similar to its existing COSMA5 HPC system, the ICC agreed to take it. The hardware was reassembled in the Durham University data centre and used to extend the existing DiRAC (Distributed Research utilising Advanced Computing) platform.

Why does this matter?

The ICC already had a supercomputer, so why does this example matter to applications outside academic research? Because by thinking outside the vendor-defined upgrade and migration cycle, the ICC was able to extend its computing capacity at far lower cost.

On a smaller scale, businesses can extend their own storage and processing capacity through intelligent redeployment of existing assets. Even post-warranty units can be re-integrated into big data platforms to provide additional number-crunching power. Further savings can be realized by upgrading older systems to fill all available disk shelves or processor sockets – again at far lower cost than replacing units with all-new equivalents.

Like the ICC's, the future of your big data analytics programme could be driven by legacy hardware rather than a brand new supercomputer. To learn more about increasing your ROI on existing assets, or redeploying them more effectively, please get in touch.
