Classical computers have made impressive progress, but there are still problems that machines fail to solve. Even with innovations capable of tackling these challenges, inherent limitations around reasoning, context, and scalability continue to cause uncertainty.
The global community of scientists has made concerted efforts to apply AI and machine learning to real-world applications. However, it is believed that laying new groundwork in AI, rather than merely focusing on incremental improvements, may be the best course of action henceforth.
To address such issues, several breakthroughs have been made in engineering computers that utilise the laws of quantum physics to recognise patterns in complex problems. However, being in the nascent stages of development, these computers are sensitive to their surroundings and require extremely low temperatures to function.
In one such breakthrough, researchers from Tohoku University in Japan are exploring something new and different: the concept of ‘probabilistic computing’. The team is developing quantum-compatible computers that apply the principles of quantum physics to the problem of pattern recognition, with the aim of building more advanced machines for processing complex data.
Quantifying uncertainty and interpreting complex data
A research paper titled ‘Local bifurcation with spin-transfer torque in superparamagnetic tunnel junctions’, published in the open-access journal ‘Nature Communications’, is expected to serve as the foundation for engineering more sophisticated computers that can quantify uncertainty and interpret complex data.
The team has discovered a mathematical description of what happens within tiny magnets as electric currents and magnetic fields are applied to them.
Probabilistic computers could function at room temperature and deduce answers from complex input. The task could be as simple as inferring information about a person from their purchasing behaviour. Rather than providing a single, discrete result, the computer would pick out patterns and deliver a good guess at what the answer might be.
From bits to p-bits
Among the several ways to build such computers, the researchers are investigating devices called ‘magnetic tunnel junctions’, which are made of two layers of magnetic metal separated by an ultrathin insulator. Under an applied electric current and magnetic field, the nanomagnets are thermally activated as electrons tunnel through the insulating layer.
Depending on their spin, the electrons can cause changes or fluctuations within the magnets. These fluctuations give rise to ‘p-bits’, an alternative to the binary on/off or 0/1 bits of classical computers, which may form the basis of probabilistic computing.
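As an illustration, a p-bit is often modelled in the probabilistic-computing literature as a stochastic unit that outputs +1 or -1, with the probability of +1 controlled by an input bias. The tanh-based update rule below is a common textbook model, a minimal sketch rather than code from the paper:

```python
import math
import random

def p_bit(bias: float) -> int:
    """Sample one p-bit: returns +1 or -1. The probability of +1 rises
    smoothly from 0 to 1 as the input bias grows (sigmoid-like behaviour),
    mimicking a fluctuating nanomagnet nudged by field or current."""
    return 1 if math.tanh(bias) > random.uniform(-1.0, 1.0) else -1

# An unbiased p-bit fluctuates randomly; many samples average near 0.
unbiased = sum(p_bit(0.0) for _ in range(10_000)) / 10_000
# A strongly biased p-bit is pinned near +1.
pinned = sum(p_bit(5.0) for _ in range(10_000)) / 10_000
```

Networks of such interacting units are what probabilistic hardware would sample from directly, rather than simulating the randomness in software.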
For researchers to engineer this in probabilistic computers, they must be able to describe the physics that takes place within the magnetic tunnel junctions.
The approach builds on the Néel-Arrhenius law, extended to account for spin-transfer torque (STT), using superparamagnetic tunnel junctions that are highly sensitive to external perturbations. The researchers determined the relevant exponents through measurements such as nanosecond STT switching, homodyne-detected ferromagnetic resonance, and random telegraph noise.
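For context, the Néel-Arrhenius law states that the mean time for a thermally activated nanomagnet to switch grows exponentially with the ratio of its energy barrier to the thermal energy. A commonly used form (illustrative, not taken from this paper) also writes the barrier as reduced by an applied field and current:

```latex
\tau = \tau_0 \exp\!\left(\frac{E}{k_B T}\right),
\qquad
E(H, I) \approx E_0 \left(1 - \frac{H}{H_K}\right)^{a} \left(1 - \frac{I}{I_C}\right)^{b}
```

Here \(\tau_0\) is the attempt time, \(H_K\) the anisotropy field, \(I_C\) the critical STT current, and the exponents \(a\) and \(b\) are the kind of ‘switching exponents’ the measurements above are designed to pin down.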
The paper notes, “The findings demonstrate the capability of superparamagnetic tunnel junction as a useful tester for statistical physics as well as sophisticated engineering of probabilistic computing hardware with a rigorous mathematical foundation.”
Prof. Shun Kanai of Tohoku University’s Research Institute of Electrical Communication said, “We have experimentally clarified the ‘switching exponent’ that governs fluctuation under the perturbations caused by magnetic field and spin-transfer torque in magnetic tunnel junctions. This gives us the mathematical foundation to implement magnetic tunnel junctions into the p-bit in order to sophisticatedly design probabilistic computers. Our work has also shown that these devices can be used to investigate unexplored physics related to thermally activated phenomena.”
The big ‘If’
Like all big leaps, this one comes with its own set of problems. Beyond the discourse on p-bits, quantum computers have been claimed to operate 158 million times faster than currently used supercomputers, completing in four minutes a task that would take a traditional supercomputer 10,000 years.
For this to materialise, addressing the underlying physics challenges is imperative. That, however, would also make it an exorbitantly expensive undertaking for most, barring large companies and the best-funded research institutes.
Researchers at big firms such as Google and IBM are already attempting to build these machines, competing to put forth the first practical, general-purpose quantum computers. Google, in collaboration with the University of Waterloo, Alphabet’s X and Volkswagen, is building an open-source library for quantum machine-learning applications. The library, TensorFlow Quantum (TFQ), is used for rapid prototyping of quantum ML models and lets research communities control and model natural or artificial quantum systems.
An early prediction of this approach came when the late Nobel laureate and physicist Richard Feynman envisioned the role of probability in quantum computing; researchers now aim to implement that vision. The key distinction is that probabilistic computing adds probabilities, whereas quantum computing adds complex probability amplitudes. The theoretical promise of p-bits is unequivocal, but even then, practical hurdles seem inevitable.
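A minimal numerical sketch of that distinction (illustrative only, not a simulation of either machine): two equally likely paths to the same outcome always reinforce when real probabilities are summed, but complex amplitudes of opposite phase can cancel.

```python
# Probabilistic computing: sum real, non-negative probabilities.
p1, p2 = 0.5, 0.5
prob_total = p1 + p2               # paths always reinforce

# Quantum computing: sum complex amplitudes, then square the magnitude.
a1 = complex(0.5, 0.5)             # probability of this path alone: |a1|^2 = 0.5
a2 = complex(-0.5, -0.5)           # same magnitude, opposite phase
quantum_total = abs(a1 + a2) ** 2  # destructive interference cancels the outcome
```

This interference between paths is exactly what probabilistic hardware cannot reproduce, which is why p-bits complement rather than replace qubits.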