
Open Hardware Designs Make AI Systems More Efficient: Steve Helvie, Open Compute Project


In 2010, Facebook, along with Microsoft, Intel, Rackspace and Goldman Sachs, started the Open Compute Project, a collaborative community focused on redesigning hardware to efficiently support the growing demands on computing infrastructure. Recently, Google also joined the OCP board of directors. But what is so special about OCP that the world's tech giants are coming together to spur open-source hardware?

Steve Helvie, VP of Channel for the Open Compute Project (OCP), told the story while speaking at AIM's recently concluded virtual event, Plugin. Steve said, "When Facebook was building its data centres, as its infrastructure was growing quite rapidly, they got together with manufacturers and asked: if they were starting from the beginning, what could they remove from the system, and what did they not need in an IT rack or a data centre facility?"

This led Facebook to a solution: to make its systems far more efficient, both operationally and in energy use, it open-sourced its data centre designs at the facility, networking and server levels.

Ten years later, OCP has over 150 companies working across multiple projects, covering networking, server, storage, rack and power, advanced cooling, data centres, telcos, high-performance computing, and open system firmware and security.

Open Hardware For Better Integration & Interoperability

The OCP projects span over 6,000 engineers, who create designs and specifications that can then be taken to market and used by end customers. Beyond end customers, suppliers and data centre consultants all work together in a collaborative environment.

The OCP member companies identified six areas: Open Accelerator Module (OAM), Universal Baseboard (UBB), PCIe Switch Board (PSB), Secure Control Module (SCM), Tray and Chassis. In these areas, OCP members work on common specifications and common designs to speed up the pace of engineering. Microsoft and Baidu joined Facebook on the Open Accelerator Infrastructure (OAI) project in March 2019. Within a year, many companies were working on OAI, including the likes of Intel, Qualcomm, Lenovo, IBM, Tencent, Inspur, Habana, AMD, Alibaba and others.

"As AI infrastructure matures, more companies are producing accelerators, which is great but creates some challenges in integration and interoperability. A lot of this takes 6-12 months to make sure all work together, which is a very long time in the world of AI," said Steve. Today, OCP has four different OAM accelerator modules from four different suppliers, Intel, Habana, AMD and Nvidia, all based on the same specs and all interoperable, driving engineering and solutions faster.

And it's not just at the core data centre level; it's also about what's happening with AI at the edge. "We have a group working, similar to the OAI group, on emerging techniques, including member companies like Asperitas, Submer and DCX. We are covering everything from the core all the way to the edge and thinking about open-source hardware and its impact on AI," said Steve.

Why Open Compute May Be More Efficient Than Traditional Servers

According to Steve, open hardware can make you dramatically more competitive than running things in the public cloud. With the Open Compute Project, enterprises running private clouds are finding better cost-efficiency than public cloud platforms offer.

"We have a provider in Africa right now who is offering cloud-based open compute services at a fraction of what people pay for Azure and AWS. Because with open compute, you gain on energy efficiency and advanced cooling systems. You have tremendous cost benefits because you are not buying a lot of the stuff that you don't need with an OCP server."

But what impact will open-source hardware have on the tech industry? According to Steve, there will be an impact on both the end-customer side and the vendor-supply side. In fact, many public sector tenders across the world have specified open compute designs. This is because end customers like a multi-sourcing strategy, which reduces vendor lock-in for a given workload and data centre specification, from the network switch up to the AI level.


Vishal Chawla

Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast called Simulated Reality- featuring tech leaders, AI experts, and innovative startups of India.