
Can There Be A Moore’s Law For Algorithms? OpenAI Says Yes!

Back in 1965, Gordon Moore, co-founder of Intel, posited in his seminal article that the number of transistors in an integrated circuit would double every year, an observation now famously known as Moore's Law. More than 50 years after this statement, Intel's processors deliver roughly 3,500 times the performance of their 1965 counterparts. Few other technologies have improved at such a rate.

As processors became smaller and faster, the world of computers changed dramatically. One important by-product of this innovation is the emergence of artificial intelligence as a domain of its own. Algorithmic advancement, too, has improved at a pace that echoes the success of integrated circuits. However, we still don't talk about algorithms in terms of efficiency the way we do about hardware; their progress is usually measured via accuracy or some other score.


So, is there a Moore's Law equivalent for algorithms, one that would make the progress of AI straightforward to track?

OpenAI, the San Francisco-based AI research lab, has surveyed recent successes in an attempt to trace the progress of AI and has recommended a few measures for tracking that progress.

But, why is it difficult to have a measure to track overall progress? According to the researchers, here is why:

  • It's impractical to perform the kind of exact analysis used for classical algorithms, because deep learning looks for approximate solutions
  • Performance is often measured in different units (accuracy, BLEU, cross-entropy loss, etc.) and gains on many of the metrics are hard to interpret
  • The problems are unique, and their difficulties aren’t comparable quantitatively, so assessment requires gaining an intuition for each problem
  • Most research focuses on reporting overall performance improvements rather than efficiency improvements, so additional work is required to disentangle the gains due to algorithmic efficiency from the gains due to additional computation
  • The rate at which new benchmarks are being solved aggravates the problem. It took 15 years to get to a human-level performance on MNIST, and only seven years on ImageNet

Measuring Algorithmic Efficiency 

Efficiency trends can be compared across domains like DNA sequencing (10-month doubling), solar energy (6-year doubling), and transistor density (2-year doubling).
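To see why a doubling time is a useful cross-domain summary, here is a minimal sketch of how a doubling time compounds into a total improvement factor (the 10-year horizon is an illustrative assumption, not a figure from the research):

```python
def improvement_over(years: float, doubling_months: float) -> float:
    """Total improvement factor over a horizon, assuming steady
    exponential growth with the given doubling time."""
    return 2 ** (years * 12 / doubling_months)

# Doubling times quoted above, compounded over an illustrative decade
for domain, months in [("DNA sequencing", 10),
                       ("transistor density", 24),
                       ("solar energy", 72)]:
    print(f"{domain}: ~{improvement_over(10, months):.0f}x")
```

A 10-month doubling compounds to roughly 4,096x over a decade, while a 6-year doubling yields only about 3x over the same span, which is why the doubling time alone tells you so much about a trend.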

This research by Danny Hernandez and Tom Brown of OpenAI probes the commonly held notion of rapid advancement in AI and asks how much further the domain can improve. The researchers believe that measuring overall progress in AI/ML is a crucial question because it grounds the discussion in evidence.

For their experiments, the researchers leveraged open-source re-implementations to measure progress toward AlexNet-level performance over a long horizon. They observed a similar rate of training-efficiency improvement for ResNet-50-level performance on ImageNet: a 17-month doubling time.


Since 2012, the amount of compute needed to train a neural network to the same performance has been halving every 16 months; it now takes 44 times less compute. By contrast, Moore's Law would yield only an 11x cost improvement over the same period.
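The 16-month figure follows directly from the reported numbers. A minimal sketch of the back-of-the-envelope calculation (the function name is my own):

```python
import math

def doubling_time_months(improvement_factor: float,
                         period_months: float) -> float:
    """Months for efficiency to double, given a total improvement
    factor observed over a period, assuming exponential improvement."""
    return period_months / math.log2(improvement_factor)

# 44x less training compute between 2012 and 2019 (~7 years)
print(f"{doubling_time_months(44, 7 * 12):.1f} months")  # ~15.4, i.e. ~16
```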

The chipmaker Intel has built its success on the foundations of Moore's Law: innovating to keep pace with the prediction has revolutionised the way we live today. Similarly, if algorithmic progress were expressed in terms of efficiency, it would be easier for everyone involved, from developers to policymakers, to make decisions. For instance, an increase in algorithmic efficiency might translate into more experiments for the same budget. Governments may be more willing to fund research whose progress they can understand. And almost everyone grasps an argument framed in terms of efficiency. Measuring overall progress, the researchers believe, will speed up future AI research in a way that's somewhat analogous to having more computation.

Key Takeaways

This work by OpenAI is ambitious yet quite relevant. The researchers pin down a few techniques, such as batch normalisation, that drove the rapid improvement of algorithms. Though their experiments cover key aspects of training, the researchers leave the question of finding an overall measure open. A few of their key findings can be summarised as follows:

  • Hardware and algorithmic efficiency gains multiply, and neither factor is negligible over meaningful horizons, which suggests that a good model of AI progress should integrate measures of both
  • Efficiency is straightforward to measure, as it’s just a meaningful slice of the learning curves that all experiments generate
  • AI tasks with high levels of investment (time and money) can lead to algorithmic efficiency outpacing gains from hardware efficiency (Moore’s Law) 
  • Measuring AI progress is critical for policymakers, economists, industry leaders, and prospective researchers trying to navigate these debates and decide how much money and attention to invest in AI

Track AI progress from here.


Copyright Analytics India Magazine Pvt Ltd
