Open Hardware Designs Make AI Systems More Efficient: Steve Helvie, Open Compute Project


In 2011, Facebook, along with Intel, Rackspace and Goldman Sachs, started the Open Compute Project (OCP), a collaborative community focused on redesigning hardware technology to efficiently support the growing demands on computing infrastructure. Microsoft joined later, and more recently Google has also joined the OCP board of directors. But what is so special about OCP that the world's tech giants are coming together to spur open-source hardware?

Steve Helvie, VP of Channel for the Open Compute Project (OCP), told the story while speaking at AIM's recently concluded virtual event, Plugin. Steve said, “When Facebook was going to build its data centres, as its infrastructure was growing quite rapidly, they got together with manufacturers. They asked: if they were going to start from the beginning, what could they remove from the system, and what did they not need in an IT rack or a data centre facility?”

This led Facebook to look for a solution, and to make its systems more efficient in terms of both operations and energy, it open-sourced its data centre designs at the facility, networking and server levels.

Ten years later, OCP has over 150 member companies working on multiple projects covering networking, servers, storage, rack and power, advanced cooling, data centre facilities, telcos, high-performance computing, and open system firmware and security.

Open Hardware For Better Integration & Interoperability

These projects bring together over 6,000 engineers, who create designs and specifications that can then be taken to market and used by end customers. End customers, suppliers and data centre consultants all work together in a collaborative environment.

For its Open Accelerator Infrastructure (OAI) project, OCP member companies identified six building blocks: the Open Accelerator Module (OAM), Universal Baseboard (UBB), PCIe Switch Board (PSB), Secure Control Module (SCM), Tray and Chassis. Across these areas, members work on common specifications and common designs to speed the pace of engineering. Microsoft and Baidu joined Facebook on the OAI project in March 2019, and within a year many more companies were working on OAI, including Intel, Qualcomm, Lenovo, IBM, Tencent, Inspur, Habana, AMD, Alibaba and others.

“As AI infrastructure matures, more companies are producing accelerators, which is great but creates some challenges in integration and interoperability. A lot of this takes 6-12 months to make sure they all work together, which is a very long time in the world of AI,” said Steve. Today, OCP has four different OAM accelerator modules from four different suppliers (Intel, Habana, AMD and Nvidia), all based on the same spec and interoperable, which helps drive engineering and solutions faster.

And it’s not just at the core data centre level; it’s also about what’s happening with AI at the edge. “We have a group, similar to the OAI group, working on emerging techniques, including member companies like Asperitas, Submer and DCX. We are covering everything from the core all the way to the edge and thinking about open-source hardware and its impact on AI,” said Steve.

Why Open Compute May Be More Efficient Than Traditional Servers

According to Steve, open hardware can make you dramatically more cost-competitive than running everything in the public cloud. Enterprises running private clouds on Open Compute designs are finding better cost-efficiency than they get from public cloud platforms.

“We have a provider in Africa right now who is offering cloud-based open compute services at a fraction of what people pay for Azure and AWS. Because with open compute, you gain on energy efficiency and advanced cooling systems. You have tremendous cost benefits because you are not buying a lot of the stuff that you don’t need with an OCP server.”

But what impact will open-source hardware have on the tech industry? According to Steve, there will be an impact on both the end-customer side and the vendor-supply side. In fact, many public sector tenders across the world have specified open compute designs. This is because end customers like the idea of a multi-sourcing strategy, which reduces vendor lock-in for particular workloads and data centre specifications, from the network switch up to the AI level.


Vishal Chawla
Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India.
