Economic growth and IT spending have taken a backseat amid the ongoing COVID-19 pandemic. However, demand for global data storage is not only here to stay but promises steady growth in the coming years. According to the IDC’s Global Storage Sphere report, the amount of data stored is expected to increase from 6.8 zettabytes (ZB) in 2019 to 8.9 ZB by 2024, an increase of roughly 30 per cent. Hard-disk drives (HDDs) are expected to remain the dominant storage devices.
Last month, Google Cloud announced that it had developed a predictive machine learning (ML) system in partnership with Seagate. The system predicts and identifies potential HDD failures, bringing greater reliability and efficiency to the many data centres across the globe that rely on HDD storage.
Why is it needed?
In the past, once a problem was detected on a disk, the drive’s data had to be drained and the drive isolated, diagnostic tests had to be run, and the device then re-introduced to traffic; repairs had to be carried out on-site. A failure missed at the right time had the potential to cause severe outages across products and services.
With more enterprises running data centres, millions of disks are deployed in operation, generating terabytes (TB) of data, including billions of rows of SMART (self-monitoring, analysis and reporting technology) data, host metadata, and manufacturing data about each disk drive. It is impossible to deploy a workforce to monitor the hundreds of parameters and factors across all HDDs, hence the need for a machine learning-based solution that can execute the task remotely. Google Cloud and Seagate together created a machine learning system to predict HDD health in data centres. Additionally, when an HDD is flagged for repair, the model uses the repair data to predict future failures as well.
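To make the idea concrete, here is a minimal sketch of the kind of model such a system might use: a classifier trained on SMART counters to estimate a drive's failure risk. This is not Google and Seagate's actual system; the feature names, synthetic data, and thresholds below are illustrative assumptions only.

```python
# Illustrative sketch only: a tiny logistic-regression "drive health" classifier
# trained on hypothetical SMART attributes. The real Google/Seagate system is
# far larger and uses proprietary features; everything here is synthetic.
import math
import random

random.seed(0)

# Hypothetical SMART attributes often associated with drive wear.
FEATURES = ["reallocated_sectors", "pending_sectors", "read_error_rate"]

def synth_drive(failing):
    # Synthetic data: failing drives tend to show elevated SMART counters.
    base = 5.0 if failing else 1.0
    return [random.gauss(base, 1.0) for _ in FEATURES], 1 if failing else 0

data = [synth_drive(i % 4 == 0) for i in range(400)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Train logistic regression with plain stochastic gradient descent.
w = [0.0] * len(FEATURES)
b = 0.0
lr = 0.05
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of the log-loss w.r.t. the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def risk(x):
    """Estimated probability that a drive with these counters will fail."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

healthy = [1.0, 0.8, 1.2]   # low SMART counters
suspect = [5.5, 4.9, 6.0]   # elevated SMART counters
```

A drive whose `risk` score crosses an operator-chosen threshold would be flagged for draining and diagnostics, which is the workflow the Google Cloud–Seagate system automates at fleet scale.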
Data Storage matters
With AI penetrating industries rapidly, companies are now focusing on how to better their AI systems. Unfortunately, while concentrating on the computing side of AI, some businesses tend to ignore the storage side. This singular emphasis can, and often does, result in the disruption or failure of AI projects.
All four stages of an AI project (ingestion, preparation, training and inference) have their own storage requirements. AI projects therefore necessitate a storage infrastructure that is high-performing, scalable and adaptable.
A recent study by researchers from England, Singapore, Switzerland, India and the US shows that ultra-high-density HDDs can store ten times more data when carbon-based overcoats (COCs) are replaced with graphene.
Image Credits: University of Cambridge
HDDs have two major parts: a magnetic head and platters. The head moves above the platters, and the space between the two is primarily occupied by COCs, which protect the platters from mechanical damage and corrosion. Researchers at Cambridge experimented with Heat-Assisted Magnetic Recording (HAMR), a technology that allows for higher storage density by heating the recording layer to extremely high temperatures. COCs cannot withstand these temperatures, but graphene can. Thus graphene, along with HAMR, can provide ten times higher density.
While HDDs stand to benefit from the predictive-analysis model, SSDs are set to give them a tough time. Since SSDs use electrical circuitry rather than physical moving parts, they outperform HDDs, offering faster startup times and fewer delays when opening applications or performing heavy computing tasks. The absence of moving parts also makes SSDs less prone to vibration and thermal issues; they use less power and are thus better suited to longer battery life. The market value of HDDs is expected to decline between 2019 and 2024, while the SSD market is projected to reach $80.3 billion by 2026.