Machine learning algorithms perform an astonishing range of tasks today, from mastering board games and identifying faces to automating routine work and making predictive decisions. The past decade has brought countless algorithmic breakthroughs and several controversies. Yet it is hard to believe that this development started less than a century ago, with Walter Pitts and Warren McCulloch. Analytics India Magazine takes you through the history of machine learning algorithms.
Machine learning was first ideated in 1943 by logician Walter Pitts and neuroscientist Warren McCulloch, who published a mathematical paper modelling decision-making in human cognition with networks of neurons. The paper treated every neuron in the brain as a simple digital processor and the brain as a whole as a computing machine. Later, mathematician and computer scientist Alan Turing introduced the Turing test in 1950. In this three-person game, a machine counts as 'intelligent' if it can fool a human interrogator into thinking it, too, is a human being; as of 2022, no machine has reliably mastered the test.
Pioneering machine learning research was conducted in the 1950s using simple algorithms. In 1952, Arthur Samuel at IBM wrote the first computer program that learned to play checkers. The program was built on alpha-beta pruning, a search technique that reduces the number of nodes the minimax algorithm evaluates in a game tree, and which has been used for two-player games ever since. Samuel's program improved over successive games by learning from its winning strategies. In 1957, American psychologist Frank Rosenblatt designed the perceptron, the first neural network, simulating the thought processes of the human brain; the idea remains relevant to date. The nearest neighbour algorithm, introduced in 1967, was one of the first approaches to the 'travelling salesman problem': a salesman starts at an arbitrary city and repeatedly travels to the nearest unvisited city until all cities have been visited.
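The greedy heuristic described above is simple enough to sketch in a few lines of Python. This is a minimal illustration, not code from any of the original papers; the function name and the toy city coordinates are invented for the example.

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: from the current city, always move to the
    nearest city not yet visited, until every city has been seen."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        current = cities[tour[-1]]
        # Pick the closest unvisited city by Euclidean distance.
        nxt = min(unvisited, key=lambda i: math.dist(current, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four cities on a line: the greedy tour simply walks left to right.
cities = [(0, 0), (1, 0), (2, 0), (5, 0)]
print(nearest_neighbour_tour(cities))  # [0, 1, 2, 3]
```

The heuristic is fast but not optimal: a locally nearest city can force a long detour later, which is why it is described as one of the first approaches to the problem rather than a solution.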
The late 1900s
Backpropagation was initially introduced in the 1960s and reintroduced in the 1980s as a way to train the weights of the hidden layers between the input and output layers of a neural network, making such networks fit for commercial use. In 1981, Gerald Dejong introduced Explanation-Based Learning, in which a computer algorithm analyses training data and forms a general rule by discarding unimportant details. The NetTalk neural network, written in 1985 by Terry Sejnowski, learned to pronounce words the way a baby does, taking text and matching phonetic transcripts as input. In 1989, Christopher Watkins developed Q-learning, an algorithm that greatly improved the practicality of reinforcement learning. The 1990s pushed statistical methods to the fore, since neural networks seemed less explainable and demanded more computational power; these methods included support vector machines and random forest algorithms, both introduced in 1995. Following this, one of the biggest AI wins came in 1997, when IBM's Deep Blue beat the world champion at chess.
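Watkins' Q-learning is still taught in essentially its 1989 tabular form, so a toy sketch is easy to give. The corridor environment, hyperparameters and function name below are illustrative assumptions for the example, not drawn from the original thesis:

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy corridor: action 1 moves right,
    action 0 moves left; reaching the last state gives reward 1
    and ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # The Q-learning update: bootstrap from the best next action.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

Q = q_learning()
# The greedy policy read off the table should move right in every interior state.
print([max(range(2), key=lambda a: Q[s][a]) for s in range(4)])
```

The key property, and the reason the algorithm mattered for practical reinforcement learning, is that the update needs no model of the environment: it learns purely from observed transitions and rewards.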
The early 2000s popularised support vector clustering, unsupervised learning and kernel methods. A major breakthrough came in 2009, when Fei-Fei Li, a computer science professor at Stanford University, created ImageNet, a large dataset reflecting the real world; it became the foundation for AlexNet, the convolutional neural network with which Alex Krizhevsky won the ImageNet challenge in 2012. Meanwhile, in 2011, IBM's Watson beat its human competitors in Jeopardy, and Google Brain was founded; in 2012, its researchers trained an algorithm that browsed unlabelled images from YouTube videos and learned to identify cats. Word2vec, introduced in 2013, used neural networks to learn word associations and later became a foundation for large language models. In 2014, Facebook developed DeepFace, an algorithm that made history by beating all previous benchmarks for recognising human faces. 2014 also saw the creation of generative adversarial networks (GANs) by Ian Goodfellow.
Today, the GAN architecture is a backbone of image, video and voice generation, popularly used for deepfakes. One of the most celebrated machine learning victories came in 2016, when DeepMind's AlphaGo beat the world champion at the Chinese board game Go. In 2017, AlphaGo and its successors beat several champions in Go, chess and shogi, and Waymo started testing its autonomous minivans. DeepMind had another victory in 2018 with AlphaFold and its ability to predict protein structure.
2020 and beyond
The years since 2015 have produced some of the most successful algorithms to date while raising major questions about their usage, safety and explainability. In 2020, Facebook AI Research introduced Recursive Belief-based Learning (ReBeL), a general RL+Search algorithm that works in all two-player zero-sum games, even those with imperfect information. DeepMind's Player of Games, introduced in 2021, can similarly play both perfect- and imperfect-information games. DeepMind also introduced Efficient Non-Convex Reformulations in 2020, a verification algorithm based on a novel non-convex reformulation of the convex relaxations used in neural network verification. In 2021, AlphaFold 2 achieved a level of accuracy far higher than any competing group. These years have also produced the largest transformer-based language models, such as GPT-3, Gopher, Jurassic-1, GLaM, MT-NLG and more, built on NLP algorithms. Google also released Switch Transformers, a technique based on Switch Routing, a modified mixture-of-experts (MoE) algorithm, to train language models with over a trillion parameters.