Moore’s Law ruled the world of computing and electronics for almost half a century. For the uninitiated, Moore’s Law is the prediction that the number of transistors in a dense integrated circuit will double about every two years. The observation is named after Gordon Moore, the co-founder of Intel and Fairchild Semiconductor. But in the last decade, Moore’s Law has slowed down. Technology giants like Google, Microsoft and Tesla have started building their own artificial intelligence chips, and it is now speculated that this will be a long-term trend. The markets for components like processors and AI accelerators are expected to be hit hard.
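The doubling prediction is simple compound growth, and a minimal Python sketch makes the scale of the claim concrete (the starting count below is purely illustrative):

```python
# Moore's Law as compound doubling: transistor counts double roughly
# every two years. The numbers used here are illustrative only.

def projected_transistors(initial: float, years: float, doubling_period: float = 2) -> float:
    """Project a transistor count `years` into the future under Moore's Law."""
    return initial * 2 ** (years / doubling_period)

# A hypothetical chip with 1 million transistors, projected 20 years out:
print(projected_transistors(1_000_000, 20))  # 10 doublings -> ~1.02 billion
```

Ten doublings over twenty years is a roughly thousandfold increase, which is why even a modest slowdown in the doubling period compounds into a dramatic shortfall against expectations.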
At its recent Google Cloud Next conference, the company introduced the Edge TPU, an application-specific integrated circuit (ASIC) that can run TensorFlow Lite machine learning models on mobile and embedded devices. In what is clearly a sign of Google bolstering its internet of things and AI portfolio, the search engine giant brought TPUs to the edge to enable the deployment of high-accuracy AI. Reportedly, Google CEO Sundar Pichai underscored that ML is a major differentiator for Google, and that is true for its Google Cloud customers as well.
AI Edge Processing And Hardware
Several commercial factors are enabling the sudden AI chip-building frenzy. Internal initiatives by the likes of Google and Tesla are upending Intel, AMD and their supply chains. Among the factors that have led to this point:
- Tech giants building scalable IoT applications
- Licensable intellectual property (IP) blocks that make it easier for tech companies to assemble chips
- New research directions based on multi-chip modules (MCM) and system-in-package (SIP) designs
ABI Research says that the share of AI inference performed at the edge will grow from just 6% in 2017 to 43% in 2023. Jack Vernon of ABI Research says, “The shift to the edge for AI processing will be driven by cheaper edge hardware, mission-critical applications, a lack of reliable and cost-effective connectivity options, and a desire to avoid expensive cloud implementation.”
An Intel press statement says that Waymo’s intelligent cars and the self-driving Chrysler Pacifica hybrid minivans feature Intel-based technologies for sensor processing, general computation and connectivity. At the other end, Tesla made the notable decision to build its own AI chips rather than continuing to rely on systems from established chipmakers like NVIDIA.
The Death Of Moore’s Law
Last year, NVIDIA’s CEO Jen-Hsun Huang declared that Moore’s Law was “dead”. According to Huang, while performance growth in CPUs has fallen over the years, GPUs have gotten much faster. It is therefore easy to see why companies are designing and building their own chips: today’s AI workloads demand more than conventional CPUs can deliver, and GPUs do not always provide the needed performance either.
Horst Simon, deputy director of Lawrence Berkeley National Laboratory, says that the world’s most powerful calculating machines already appear to be feeling the effects of Moore’s Law’s end times. He said, “For the last three years we’ve seen a kind of stagnation.”
The most worrying aspect of the death of Moore’s Law is that it will affect many areas like weather forecasting, genomics and energy, among others. The broader progress of technology and the world economy may also be affected. There is a limit to how small a circuit can be made: shrinking circuits makes transistors faster, but physics imposes a hard floor on their size, and we have already reached it.
On the other hand, some commentators say that Moore’s Law is not dead — yet. Joel Hruska said, “I’ve argued in the past Moore’s Law isn’t dead so much as it’s transformed. Rather than focusing strictly on increasing transistor counts and clock speeds, companies now focus on power efficiency and component integration. The explosion of specialised processors for handling AI and deep learning workloads is partly a reaction to the fact that CPUs don’t scale the way they used to.”
Race For Own AI Chips
Tesla hired former Apple executive Pete Bannon to build its own AI chips. Details of the project were released during an AI conference, and there was much debate around why Tesla took the decision.
Facebook’s AI chief remarked, “The amount of infrastructure if we use the current type of CPU is just going to be overwhelming.” Yann LeCun added that he wants to work with microprocessor and chip vendors, but sternly warned, “If they don’t, then we’ll have to go with an industry partner who will build hardware to specs, or we’ll build our own.” Another report suggested that Facebook was building a team to design its own semiconductors to lower its dependence on other chipmakers.
According to a report, Samsung has almost finished developing its own neural processing units (NPUs), or AI chips, for its mobile phones and servers. Google has already thrown its hat in the ring with the launch of the TPU, and Alphabet has funded Google and its partners to run new experiments with cloud technologies and AI chips within its own products.
With Tesla, Facebook and Google working on chips to improve their own products, they have little interest in competing with each other, or even with Intel or NVIDIA. But as bigger companies like Apple and Samsung also move their hardware-making in-house, this is fast becoming an area of concern for chipmakers. On the other hand, the move has given the tech giants more control over their AI applications and products, with reduced dependency on conventional vendors.