A Decade Of AI: Most Defining Moments 2010-20

  • From access to world-class courses, platforms, libraries and frameworks to purpose-built hardware, this was the decade when AI went mainstream.

AI, yesteryear’s wildest science fiction, is now an integral part of our lives. It wasn’t like this a decade ago. People were certainly talking about, theorising on and experimenting with AI, but what happened in the last decade made AI tangible. This was the decade when AI went mainstream: access to world-class courses, platforms, libraries, frameworks and hardware all fell into place. And it would not be an exaggeration to say that what was accomplished in the last ten years fortified the foundations of our future.

In this article, we look at a few of the most important breakthroughs that directly or indirectly have made AI a household name.

Convolutions Galore

The year 2012 is considered one of the most important in the history of deep learning. This was the year the power of convolutional neural networks (CNNs) was truly realised at the famous ImageNet competition, where participants were tasked with accurately classifying the objects in images. The winning entry, dubbed AlexNet, was a CNN designed by Alex Krizhevsky and published with Ilya Sutskever and Geoffrey Hinton. It cut the prevailing error rate on ImageNet visual recognition nearly in half, down to 15.3 per cent. In the same year, a Google research team trained a network that taught itself to detect cats with 74.8% accuracy and faces with 81.7% accuracy from unlabelled YouTube videos. The success of the facial recognition in your phone or at the mall can be traced back to this work in 2012. The improved accuracies gradually allowed researchers to deploy models for medical imaging with great confidence. From retinopathy to cancer diagnosis, from kidney disease to AR-assisted surgery, the field of medicine is gearing up for a very exciting decade ahead.
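The core operation AlexNet popularised is the convolution: sliding a small learned filter across an image to produce a feature map. Below is a minimal NumPy sketch of the idea, using a hand-picked edge-detecting filter rather than learned weights:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: the core operation of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Element-wise multiply the filter against one patch, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5x5 image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A Sobel-style vertical-edge detector.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(image, sobel_x)  # strong response where brightness changes
```

In a real CNN the filter values are learned by backpropagation, and many such filters run in parallel over the image.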

Also Read: What Happened When Google Tested Its AI In Real World

Newspeak: In Conversation With AI

The 2017 paper “Attention Is All You Need” by Vaswani et al. created a cascade effect that enabled machines to understand language like never before. Thanks to the Transformer architecture, AI can now write convincing fake news and tweets, and even has the potential to cause political instability. What followed the introduction of Transformers was Google’s release of the BERT model, which the search giant uses to better understand search queries, among many other things. As BERT became the de facto standard for natural language processing (NLP) models, companies such as Microsoft and NVIDIA started catching up by piling on parameters. While NVIDIA’s Megatron came with 8 billion parameters, Microsoft’s Turing-NLG model had 17 billion. Then OpenAI (now partnered with Microsoft) turned the tables with its GPT models. While GPT-2 showed great promise, the real winner was GPT-3.
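The building block shared by the Transformer, BERT and the GPT family is scaled dot-product attention: every token’s query is scored against every key, and the resulting weights blend the values. A toy NumPy sketch, with random matrices standing in for a trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention, as described in 'Attention Is All You Need'."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how relevant each key is to each query
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights          # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
```

In the real architecture this runs in multiple parallel “heads”, with Q, K and V produced by learned linear projections of the token embeddings.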

Source: OpenAI

The famous GPT-3 is again an extension of the Transformer architecture: a huge model trained on a vast swathe of the internet. It can write code and prose, generate business ideas, and its applications seem bound only by human imagination.

Checkmate, Humans

Beating humans at chess is nothing new for AI, but over the decade algorithms ventured into more sophisticated games: Jeopardy, Go, Poker and more. In the first half of the decade, IBM’s Watson outwitted two Jeopardy champions in a three-day showdown, winning $77,147 in prizes against its human opponents’ $24,000 and $21,600. DeepMind’s AlphaGo then became the first program to defeat a professional player at Go, considered the most difficult board game in the world. Facebook’s Pluribus one-upped its contemporaries by trying its hand at multiplayer Poker. And just last week, Alphabet’s DeepMind introduced MuZero, which can master multiple games such as chess, Shogi and even Go without being told their rules.

Decoding Life

The behaviour of every organism can be traced back to its proteins. Proteins hold secrets, and deciphering their structures can help fight pandemics like COVID-19. But the structure of a protein is so complex that simulations take practically forever to run. DeepMind took this challenge upon itself. Within four years, the research lab made a breakthrough on this long-standing grand challenge in biology with AlphaFold, which was trained on the sequences and structures of about a hundred thousand proteins painstakingly mapped out by scientists worldwide. While computer vision proved extremely useful for diagnosis, progress on the protein folding problem can even assist researchers in drug discovery.

AI: The Artist And The Conman

Portrait of Edmond Belamy, sold for $432,500 on 25 October 2018 at Christie’s

Last year, Belgium’s prime minister was seen talking about the urgent need to tackle the economic and climate crises. The very uncharacteristic video was later found to be a deepfake: it had been created from footage of a real speech by the prime minister on the impacts of global warming, with machine learning used to manipulate Wilmès’ voice and speaking style.

The culprit behind such fake imagery is a meticulously designed class of algorithms: Generative Adversarial Networks (GANs), introduced in 2014. Over the next five years, these networks gained such prominence that they encroached on the last frontier of human endeavour, creativity. GANs can generate faces that never existed, swap faces, make presidents appear to talk gibberish, and a lot more. One GAN-generated painting was even sold for a record $432,500 at Christie’s auction. The flip side is that people began exploiting the technology for malicious purposes. The situation grew so dire that companies like Adobe had to develop new techniques to spot the fakes. GANs will be talked about not just this decade but the next as well.

Silicon: The Secret Sauce

Source: Google

The concept of neural networks is at least half a century old, and the backpropagation method that powers many applications today was introduced three decades ago. What was lacking was hardware that could run these computations. Over the last decade, we have witnessed more than a dozen companies working on chips built exclusively for machine learning operations. Today, chip technology has advanced so much that a palm-sized device can perform trillions of operations (think: dot products) per second. These chips power the data centres that stream your favourite Netflix movie, as well as home pods, smartphones and more. Custom AI chips, and chips for the edge, are a multi-billion dollar business opportunity.
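Those dot products are the whole game for AI accelerators, and much of the efficiency comes from doing them in low-precision integer arithmetic rather than 32-bit floats. A rough Python sketch of an 8-bit quantised multiply-accumulate, with the scales picked naively for illustration:

```python
import numpy as np

def quantize(x, scale):
    """Map floats to int8 with a simple symmetric scheme: round(x / scale)."""
    return np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# A tiny activation vector and weight vector (toy values).
x = np.array([0.10, -0.52, 0.33, 0.80])
w = np.array([0.25, 0.40, -0.15, 0.60])

sx, sw = 0.01, 0.01                      # naive fixed scales for the demo
xq, wq = quantize(x, sx), quantize(w, sw)

# Integer multiply-accumulate (what the silicon actually does), then rescale.
acc = np.sum(xq.astype(np.int32) * wq.astype(np.int32))
approx = acc * sx * sw
exact = float(x @ w)                     # full-precision result for comparison
```

Real accelerators calibrate the scales per tensor or per channel, but the principle is the same: cheap integer arithmetic, rescaled back to floats at the end.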

Companies like Apple have already deployed custom ML silicon, such as the A14 Bionic, to offer intelligent services. Even Amazon Web Services, which has long relied on NVIDIA and Intel, is slowly entering the silicon business. This trend will only grow as these chips get tinier. For example, with the NVIDIA Jetson AGX Xavier developer kit shown above, one can easily create and deploy end-to-end AI robotics applications for manufacturing, retail, smart cities and more, while Google’s Coral toolkit can be leveraged to bring machine learning to the edge. Safe, secure, real-time output is the theme of the modern world.

Also Read: Hot AI Chips To Look Forward To In 2021

The Open Source Fraternity

(Source: MIT Tech Review)

TensorFlow was open-sourced by Google in 2015. A year later, in 2016, Facebook AI open-sourced PyTorch, a Python-based deep learning framework supporting dynamic computation graphs. Today, these two are the most widely used frameworks, and by trying to one-up each other with every release, Google and Facebook have brought great ease to the ML community. Custom libraries, packages, frameworks and tools burst onto the AI scene, drawing many people into the field and adding more brainpower to AI research. Open-sourcing is what has defined the last several years: the freeing-up of tools and increased access to resources such as arXiv and Coursera fueled an AI revolution of sorts. Another key catalyst that has drawn the masses towards AI is the popular competition platform Kaggle. The communities nurtured by the likes of Kaggle and GitHub have produced a strong population of high-quality AI developers.
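PyTorch’s “dynamic computation graph” means the graph is recorded as ordinary Python executes, so gradients can flow back through whatever operations actually ran. A toy scalar autograd sketch of that idea (nothing like PyTorch’s real implementation, just the principle):

```python
class Value:
    """A scalar that records how it was computed, so gradients can be traced back."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._grad_fns = grad_fns  # local derivative w.r.t. each parent

    def __add__(self, other):
        # d(a+b)/da = 1 and d(a+b)/db = 1: pass the gradient through unchanged.
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        # d(a*b)/da = b and d(a*b)/db = a.
        return Value(self.data * other.data, (self, other),
                     (lambda g, o=other: g * o.data,
                      lambda g, s=self: g * s.data))

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, fn in zip(self._parents, self._grad_fns):
            parent.backward(fn(grad))

# The graph is built on the fly, as plain Python runs:
x = Value(3.0)
y = Value(4.0)
z = x * y + x      # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
```

Because the graph is just a by-product of running the code, loops and branches work naturally; that flexibility is a large part of why researchers took to PyTorch.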

Also Read: How A Harvard NeuroBiologist Became A Data Scientist: Journey Of Director Of DataRobot DS And Kaggle GM Sergey Yurgenson


More Learning And Less Rules

Popularised by Prof. Jürgen Schmidhuber in the early 90s, meta-learning gained traction only recently. Meta-learning aims to make machine learning models learn new skills and adapt to ever-changing environments from a finite number of training examples. Optimising machine learning models for specific tasks by manipulating hyperparameters demands significant user input, which makes it a tedious process. Meta-learning eases this burden by, in effect, automating the optimisation part of the process. Automatic optimisation has even fed a new industry: MLaaS, machine learning as a service.
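In practice, the automated optimisation behind MLaaS platforms can start as simply as searching hyperparameter space instead of hand-tuning. A minimal random-search sketch, where `validation_score` is a hypothetical stand-in for an expensive train-and-evaluate run:

```python
import random

def validation_score(lr, depth):
    """Hypothetical stand-in for training a model and scoring it on a
    validation set; here, a smooth function peaking at lr=0.1, depth=5."""
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 5) ** 2

def random_search(trials=200, seed=0):
    """Sample hyperparameters at random and keep the best-scoring set."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        params = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(1, 10)}
        score = validation_score(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

score, params = random_search()
```

Production AutoML systems replace the random sampler with smarter strategies such as Bayesian optimisation or learned search policies, but the loop structure is the same.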

Also Read: How To Make Meta-learning More Effective

Future Direction

Source: Gartner

Here are a few other domains within AI that experts forecast will play a major role:

  • Replicability
  • Ethical AI
  • Differential privacy
  • Geometric Deep Learning
  • Neuromorphic Computing
  • Reinforcement learning

Though AI has entered fields that were never imagined, it has yet to deliver on its more popular promised applications, such as self-driving cars. Here, however, the challenge is less on the mathematical end: there are algorithms that make accurate decisions, and there are processors that can power those algorithms, but when it comes to deploying the applications, humans are still divided. Be it healthcare or self-driving cars, AI is yet to prove its mettle, and that can happen only once transparency and reproducibility are established.

Further Reading

Why Is It So Hard To Build An Ethical ML Framework For Healthcare

How To Encourage Reproducibility Within ML Community

Is Reproducibility In AI A Big Deal?


Copyright Analytics India Magazine Pvt Ltd
