Theoretical Physicists Hold The Key To Some Of The Toughest Problems In Artificial Intelligence

Richa Bhatia

If you have been following the recent high-profile appointments in the artificial intelligence and machine learning industry, you would have noticed a distinct uptick in theoretical physicists snagging top positions at companies like Samsung and Fetch AI. In fact, the move from a background in theoretical physics and maths to machine learning is gaining ground among postdoctoral researchers, since the two fields share much of the same mathematical toolkit.

Physicists excel in ML because many ML methods are inherently stochastic in nature, and physicists already have the foundation in mathematics and statistics needed to understand these complex methods. Physicists also specialise in writing high-performance numerical code, another helpful skill for ML development.

Noted AI scientist Yann LeCun once said that there is a long tradition of theoretical physicists, particularly condensed matter physicists, bringing ideas and mathematical methods to ML, neural networks, probabilistic inference and SAT problems.

In a public lecture, Roger Melko, an associate faculty member at the Perimeter Institute and the University of Waterloo, explained how ML algorithms are accelerating discovery in physics. He noted that DeepMind's ML-driven victory at Go prompted researchers from different fields to think about applying ML algorithms to complexity problems in quantum physics. In a recently published paper, Melko and Juan Carrasquilla described a neural network adapted from software originally used to recognise handwritten digits. Interestingly, with minimal adjustments, the ML algorithm effectively captured and recognised the different phases of matter in a quantum system.

According to LeCun, the wave of interest in neural networks in the 1980s and early 1990s was in part caused by the connection between spin glasses and recurrent nets popularised by John Hopfield. While this caused some physicists to morph into neuroscientists and machine learners, most of them left the field when interest in neural networks waned in the late 1990s. With the prevalence of deep learning and all the theoretical questions that surround it, physicists are staging a comeback. Many young physicists and mathematicians are now working on trying to explain why deep learning works so well.

Rise In Demand For Physicists In AI Industry

For example, Fetch AI, an AI and digital economics company, recently announced the appointment of Marcin Abram, who joined the company as a Machine Learning Scientist. Abram completed his PhD in theoretical physics in 2016; his doctoral research explored coherence and emergent behaviour in quantum systems. Another key appointment was that of Dr Sebastian Seung, hired by Samsung Electronics to bolster its AI R&D and drive business impact. An eminent computational neuroscientist, Dr Seung originally studied theoretical physics at Harvard. He has worked as a researcher at Bell Labs and as a professor at the Massachusetts Institute of Technology (MIT).

Hopfield’s Contribution To AI

One of the biggest contributions by the notable scientist John Hopfield was the formalisation of content-addressable (associative) memory networks. In 1982, Hopfield introduced what is now called the Hopfield network, an artificial neural network that stores and retrieves memories in a manner loosely analogous to the human brain. The Hopfield network is a single-layered, recurrent network: the neurons are fully connected, that is, every neuron is connected to every other neuron.
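To make the idea concrete, here is a minimal sketch of our own (not drawn from Hopfield's paper itself): patterns are stored with a Hebbian outer-product rule, and a corrupted pattern is recalled by repeatedly thresholding each neuron against the weighted sum of all the others.

```python
import numpy as np

def train_hopfield(patterns):
    """Store bipolar (+1/-1) patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, sweeps=10):
    """Deterministic asynchronous updates: each neuron takes the sign of
    its input from all other neurons, sweep after sweep."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-bit pattern, then recover it from a copy with one bit flipped.
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]
print(recall(W, noisy))  # converges back to the stored pattern
```

The update rule only ever lowers the network's energy, which is exactly why the deterministic dynamics can get trapped in local minima, the limitation the next section's Boltzmann machines address.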

From there, Boltzmann machines were invented to add stochasticity to the network so that it would not get stuck in local minima, since Hopfield networks are deterministic in their standard formulation. Restricted Boltzmann machines are now stacked on top of each other to build deep belief networks, and a greedy layer-wise training algorithm made these networks feasible in practice, producing very accurate classifiers as well as useful generative models. This happened in the last decade or so and is a piece of the story that continues to make headlines today.
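As an illustrative sketch (our own toy example, with made-up data and hyperparameters, not from any paper cited here), a single restricted Boltzmann machine can be trained with one-step contrastive divergence (CD-1), the algorithm that made layer-wise training practical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0, 0.1, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)  # visible bias
        self.c = np.zeros(n_hidden)   # hidden bias

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step; the bipartite "restricted"
        # structure makes each half-step a single matrix product.
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

# Toy data: two repeating 6-bit patterns.
data = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]] * 50, dtype=float)
rbm = RBM(n_visible=6, n_hidden=2)
for _ in range(500):
    rbm.cd1_step(data)

# Mean-field reconstruction should now resemble the training patterns.
ph, _ = rbm.sample_h(data[:2])
pv = sigmoid(ph @ rbm.W.T + rbm.b)
print(np.round(pv, 2))
```

A deep belief network stacks such RBMs: each trained layer's hidden activations become the visible data for the next layer, which is the greedy layer-wise scheme the paragraph above refers to.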

These two groundbreaking advances came from statistical physics and mathematics: the first was Hopfield's insight connecting neural networks to spin glasses, and the second, due to Hinton, was the application of simulated annealing to these spin-glass-like systems shortly after the algorithm was invented. Simulated annealing itself evolved from the Metropolis algorithm described by Metropolis, the Rosenbluths and the Tellers in the 1950s, later generalised by Hastings. One of the two independent papers introducing simulated annealing was titled Thermodynamical Approach to the Traveling Salesman Problem, so the statistical mechanics roots are plain.
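The connection is easy to see in code. Below is a hedged sketch of simulated annealing applied to a small travelling salesman instance; the geometric cooling schedule and segment-reversal moves are our own illustrative choices, not details from the papers mentioned above. Tour length plays the role of energy, and a worse tour is accepted with probability exp(-delta/T), which is exactly the Metropolis rule:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of the closed tour through the given points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(pts, T0=10.0, cooling=0.999, steps=20000, seed=0):
    """Simulated annealing: Metropolis acceptance with a slowly
    decreasing 'temperature' T, treating tour length as energy."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    cur_len = tour_length(tour, pts)
    best, best_len = tour[:], cur_len
    T = T0
    for _ in range(steps):
        # Propose a move: reverse a random segment of the tour.
        i, j = sorted(rng.sample(range(len(pts)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(cand, pts)
        delta = cand_len - cur_len
        # Metropolis rule: always accept improvements; accept a worse
        # tour with probability exp(-delta / T).
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling  # geometric cooling schedule
    return best, best_len

# Ten cities on a unit circle: the optimal tour visits them in angular order.
n = 10
pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]
random.Random(1).shuffle(pts)
tour, length = anneal_tsp(pts)
print(round(length, 3))
```

At high temperature the walk explores freely, like a hot spin glass; as T drops, it settles into a low-energy configuration, which is the thermodynamic intuition the paper title advertises.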

Of late, many maths and physics majors are building careers in this booming field. Given the huge demand for the right talent, physicists can add robust value to AI research.



Copyright Analytics India Magazine Pvt Ltd
