In order to make machines more human-like, researchers from industry as well as academia have been striving to develop intelligent systems that can make their own decisions. This year we have witnessed a number of interesting developments in the domain of artificial intelligence and machine learning.
With rapid development and constant digitisation, AI systems this year have proved capable of tackling tasks of remarkable complexity, for instance, beating the champions of games like StarCraft, poker and Go. We have also seen a number of developments in deep learning using algorithms and techniques like reinforcement learning, recurrent neural networks (RNN), convolutional neural networks (CNN), generative adversarial networks (GAN), etc.
In this article, we list down the top 5 algorithms which made a breakthrough in 2019.
1| Multi-Agent Learning Algorithm
This year we have witnessed several projects that used multi-agent learning algorithms. For instance, StarCraft II, one of the most challenging real-time strategy games, was tackled with a multi-agent learning approach: the system plays the full game using a deep neural network that is trained directly from raw game data with supervised learning and reinforcement learning.
During the early months of this year, Google’s DeepMind introduced AlphaStar, a StarCraft II AI program that beat top professional players. In October, DeepMind announced that AlphaStar had reached Grandmaster level, ranking above 99.8% of human players in the real-time strategy game.
Also, a few months back, Facebook AI Research and Carnegie Mellon University developed an AI bot called Pluribus which is capable of defeating human champions at six-player no-limit Texas hold’em poker, a game with more than two players. Pluribus incorporates a new online search algorithm that can efficiently evaluate its options by searching just a few moves ahead rather than all the way to the end of the game. It is based on a form of counterfactual regret minimisation (CFR), an iterative self-play algorithm.
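To give a flavour of how CFR-style self-play works, here is a toy sketch of regret matching, the core update inside CFR, applied to rock-paper-scissors. This is purely illustrative and is not Pluribus's actual code; the game, learning loop and iteration count are all simplifications for demonstration.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def utility(a, b):
    # payoff for playing a against b: win = 1, lose = -1, tie = 0
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    if a == b:
        return 0
    return 1 if (a, b) in wins else -1

def strategy_from_regrets(regrets):
    # regret matching: play each action in proportion to its positive regret
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)
    return [p / total for p in positives]

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for i, s in enumerate(strat):
            strategy_sum[i] += s
        # self-play: both players sample from the same evolving strategy
        my = rng.choices(range(3), weights=strat)[0]
        opp = rng.choices(range(3), weights=strat)[0]
        # accumulate regret: how much better each alternative would have done
        for a in range(3):
            regrets[a] += utility(ACTIONS[a], ACTIONS[opp]) - utility(ACTIONS[my], ACTIONS[opp])
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]

avg = train()
```

In self-play, the average strategy converges toward the game's equilibrium, which for rock-paper-scissors is close to playing each action a third of the time. Pluribus applies this idea, with far more machinery, to the vastly larger game tree of poker.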
2| Neural Machine Translation
One of the most important domains this year has been neural machine translation. Techniques like transfer learning, text-to-text, speech-to-text and text-to-speech translation have been explored extensively by researchers in order to develop intelligent machines. At present, almost all the top organisations in industry and academia are building systems in this domain.
In August, the Facebook AI models achieved first place in several language tasks included in this year’s annual news translation competition. The models, along with the cross-lingual pretraining and self-supervised learning for other modalities, used large-scale sampled back-translation, noisy channel modeling, and data-cleaning techniques to achieve the highest performance for translating from English to German, German to English, English to Russian, and Russian to English.
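The noisy channel modelling mentioned above scores a candidate translation not only by the direct model's probability of the output given the input, but also by a "channel" model of the input given the output and a language model of the output. A minimal sketch of that reranking idea is below; the candidate strings, log-probabilities and interpolation weights are all made up for illustration.

```python
def noisy_channel_score(direct_lp, channel_lp, lm_lp, lam=0.5, mu=0.5):
    # combine direct model log p(y|x), channel model log p(x|y),
    # and language model log p(y); the weights lam and mu are hypothetical
    return direct_lp + lam * channel_lp + mu * lm_lp

# hypothetical candidate translations with made-up log-probabilities:
# (translation, log p(y|x), log p(x|y), log p(y))
candidates = [
    ("the cat sits on the mat", -2.1, -1.8, -5.0),
    ("the cat sat on the mat",  -2.3, -1.2, -4.1),
    ("cat mat sit",             -1.9, -4.5, -9.7),
]

best = max(candidates, key=lambda c: noisy_channel_score(c[1], c[2], c[3]))
```

Note how the fluent candidate wins even though the disfluent one has the highest direct score: the channel and language models penalise outputs that a human would be unlikely to write.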
Also, researchers at tech giant Google recently built an enhanced system for neural machine translation (NMT) which can handle more than 100 languages. The researchers claimed that this model is the largest multilingual NMT system to date in terms of the amount of training data and the number of languages considered.
3| Adversarial Learning
Among deep learning techniques, GANs have been at the forefront of research, powering everything from image synthesis to deepfakes. There has been a lot of GAN research for a few years now, and this year we saw several interesting GAN models. For instance, GauGAN, a deep learning model developed by NVIDIA Research, allows users to draw their own segmentation maps and manipulate the scene, labelling each segment with labels like sand, sky, sea or snow. The model was developed using the PyTorch deep learning framework.
Also, this year at AWS re:Invent, AWS announced a machine-learning-driven keyboard known as DeepComposer. AWS DeepComposer is a 32-key, 2-octave keyboard designed to give developers hands-on experience with generative AI. The keyboard was developed with the aim of helping developers learn machine learning, and one can get started with GANs without any prior knowledge of the domain. A GAN pits two different neural networks against each other in order to compose new and original digital outputs.
4| Image Reconstruction
The first-ever image of a black hole, unveiled in April, was generated using a machine learning algorithm known as CHIRP, which stands for Continuous High-resolution Image Reconstruction using Patch priors.
5| Combinatorial Optimisation
This year, Toshiba made a major breakthrough in combinatorial optimisation, the selection of the best solution from among a huge number of combinatorial patterns, with its Simulated Bifurcation Algorithm. The algorithm quickly obtains highly accurate approximate solutions to complex large-scale combinatorial optimisation problems.
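To see why such heuristics matter, consider a tiny max-cut instance, a classic combinatorial optimisation problem of the kind simulated bifurcation targets. For four nodes we can simply enumerate every partition, but the search space doubles with each node, so exhaustive search becomes hopeless at the scales Toshiba's algorithm addresses. The graph and edge weights below are hypothetical.

```python
from itertools import product

# toy max-cut instance: weighted edges on 4 nodes (hypothetical weights)
edges = {(0, 1): 3, (0, 2): 1, (1, 2): 2, (1, 3): 4, (2, 3): 1}

def cut_value(assignment):
    # total weight of edges whose endpoints land on opposite sides of the cut
    return sum(w for (i, j), w in edges.items() if assignment[i] != assignment[j])

# exhaustive search over all 2^n combinatorial patterns; feasible only for
# tiny n, which is exactly why approximate methods like simulated
# bifurcation are needed for large-scale instances
best = max(product([0, 1], repeat=4), key=cut_value)
best_value = cut_value(best)
```

For this graph the optimum cut has weight 9, but with, say, 2,000 nodes there are more partitions than atoms in the universe, and that is the regime where fast approximate solvers shine.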