On November 14, 2022, American AI startup Cerebras Systems introduced its AI supercomputer, Andromeda, which is now available for academic and commercial research. Cerebras Systems was founded in 2016 by Andrew Feldman, Gary Lauterbach, Jean-Philippe Fricker, Michael James and Sean Lie.
Cerebras Systems is best known for its dinner-plate-sized chip built for artificial intelligence workloads. Andromeda links 16 Cerebras CS-2 systems, the startup's latest AI computers, each built around that large chip, the Wafer-Scale Engine 2.
Andromeda is capable of performing one quintillion operations per second, or 1 exaflop, of AI compute in the 16-bit floating-point (FP16) format.
Earlier this year, 'Frontier', the fastest US supercomputer, housed at Oak Ridge National Laboratory and capable of running nuclear weapons simulations, breached 1 exaflop of performance in the 64-bit double-precision (FP64) format. When asked about Frontier, Cerebras founder and CEO Andrew Feldman said, "They're a bigger machine. We're not beating them. They cost $600 million to build. This is less than $35 million."
Feldman noted that while complicated weather and nuclear simulations run on computers in 64-bit double precision, that is a computationally expensive format. He said researchers are also looking at whether AI algorithms running at lower precision can eventually match comparable outcomes.
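The trade-off between the two formats comes down to how many significant digits each can represent. A minimal Python sketch (using only the standard library's `struct` module, which supports IEEE 754 half precision via the `'e'` format character) illustrates why FP16 is far cheaper but less precise than FP64:

```python
import struct

# Python's built-in float is FP64 (double precision): roughly 15-16
# significant decimal digits, so a tiny increment survives.
x = 1.0 + 1e-10
print(x > 1.0)  # True: FP64 resolves the 1e-10 increment

# Round-trip the same value through FP16 ('e' format): half precision
# carries only ~3 significant decimal digits, so the increment is lost.
x16 = struct.unpack('e', struct.pack('e', x))[0]
print(x16 == 1.0)  # True: the value collapsed back to exactly 1.0
```

Because each FP16 value needs only 16 bits instead of 64, hardware can move and multiply four times as many of them per cycle, which is why AI-oriented machines quote their exaflop figures in FP16 while scientific simulations stick with FP64.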
Owned by Cerebras, the supercomputer is hosted at Colovore, a high-performance data centre in Santa Clara, California. Feldman said, "Companies and researchers, including those from US national labs, can access it remotely."
Read: Cerebras Unveils World’s Largest AI Chip
In a bid to support the largest models, Cerebras last year introduced the world's first multi-million-core AI cluster architecture, capable of handling neural networks of up to 120 trillion parameters. The cluster is said to approach the computing power of a human brain.
The startup is backed by Sequoia Capital, SV Angel, Foundation Capital, Benchmark, Coatue, Eclipse Ventures, Altimeter Capital, Vy Capital, Empede Capital, and Abu Dhabi Growth Fund. To date, it has raised a total of $720 million in funding over six rounds.
Large language models such as OpenAI's GPT-3, Microsoft's Turing NLG, and NVIDIA's Megatron have grown exponentially over the years. Running these models at scale requires megawatts of power, clusters of graphics processors, and dedicated teams to operate them. Chips built for AI at scale therefore play a crucial role, combining large amounts of compute, massive memory, and fast communication.
Supercomputers in India
India is also making strides in supercomputing. C-DAC (the Centre for Development of Advanced Computing, backed by the Ministry of Electronics and Information Technology) has been working on a range of supercomputers. In May this year, it launched PARAM Ananta, a supercomputer developed under the National Supercomputing Mission (NSM) by C-DAC and IIT Gandhinagar. It stands at position 102 among the world's top 500 supercomputers, with a peak performance of 3.3 petaflops.
Read: A Complete List of Indian PARAM Supercomputers
The Indian government launched the National Supercomputing Mission (NSM) in 2015 to boost indigenous supercomputing. Under the mission, it announced a seven-year programme worth INR 4,500 crore (about $550 million) to install 73 indigenous supercomputers by 2022.