The last couple of years have seen tremendous progress in quantum computing. Companies like Google have collaborated with the likes of NASA to establish new benchmarks.
According to a report from a leading financial portal, the quantum processor took 200 seconds to sample one instance of a quantum circuit one million times, a task that a state-of-the-art supercomputer was estimated to need roughly 10,000 years to perform.
In the paper that was taken down, the researchers said that, to their knowledge, the experiment “marks the first computation that can only be performed on a quantum processor.”
This news was followed by sweeping claims that no encryption can now withstand a quantum computer, and speculation about the implications for cryptography. This sent ripples through the world of cryptocurrency, among many other fields.
What’s The Big Deal With Quantum Supremacy?
Quantum supremacy means that a quantum computer has completed a well-defined task that no classical computer could finish in any feasible amount of time.
To put it in perspective: by running a huge cluster of classical cores for, say, a month, you can eventually verify the outputs that the quantum computer produced in a few seconds, while also confirming that the quantum computer was many orders of magnitude faster.
According to the computer scientist Scott Aaronson, this means that sampling-based quantum supremacy experiments are almost specifically designed for ~50-qubit devices like the ones being built right now: even with 100 qubits, we wouldn’t know how to verify the results using all the classical computing power available on Earth.
Future Direction
Since the turn of this decade, companies like D-Wave and Google have been trying to develop sophisticated hybrid systems that incorporate quantum effects into classical machine learning problems, and they will continue to do so. With current fabrication techniques, D-Wave has managed to build 2,000-qubit processors.
A system of x qubits is described by 2^x states: two qubits can store four states, three qubits can store eight. Now, imagine the number of states a 100-qubit processor can handle! By this measure, 50 qubits in a quantum computer correspond to roughly 2^50, or about one quadrillion, values in a classical computer.
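The exponential growth above can be tallied directly. This is a minimal sketch that counts the 2^x amplitudes describing an x-qubit register and estimates the classical memory needed to store them (assuming 16 bytes per double-precision complex number); the function name is just for illustration.

```python
# A register of n qubits is described by 2**n complex amplitudes.
# Storing that state vector classically takes 16 bytes per amplitude
# (one double-precision complex number each).

def state_space(n_qubits: int) -> tuple[int, float]:
    """Return (number of amplitudes, gigabytes to store them classically)."""
    amplitudes = 2 ** n_qubits
    gigabytes = amplitudes * 16 / 1e9
    return amplitudes, gigabytes

for n in (2, 3, 50):
    amps, gb = state_space(n)
    print(f"{n:>3} qubits -> {amps:,} amplitudes (~{gb:.3g} GB)")
```

Running it shows why 50 qubits is already near the limit of classical simulation: the state vector alone would occupy on the order of ten petabytes.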
This alone is the single most important reason behind the arduous efforts to improve quantum computing. The exponentially growing set of complex amplitudes can encode a correspondingly large amount of information. This information can be thought of as a large matrix representing a system of equations, and machine learning models run on systems of equations like these.
For example, a hyperplane in Support Vector Machines can be written as,
w · x - b = 0
So as the feature set grows, the complexity of the equation increases, or let’s say the size of the matrix increases, and that complexity is where quantum machine learning comes into the picture.
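The hyperplane equation above works as a simple decision rule: a point is classified by which side of w · x - b = 0 it falls on. This is a minimal sketch with made-up values for w and b (not fitted by any solver) just to show the arithmetic.

```python
import numpy as np

# Hypothetical hyperplane parameters for illustration; a real SVM
# would learn w and b from training data.
w = np.array([2.0, -1.0])
b = 1.0

def side(x: np.ndarray) -> int:
    """Return +1 or -1 depending on which side of the hyperplane x falls."""
    return 1 if w @ x - b >= 0 else -1

print(side(np.array([1.0, 0.5])))   # 2*1 - 0.5 - 1 = 0.5  -> +1
print(side(np.array([0.0, 0.5])))   # 0 - 0.5 - 1 = -1.5   -> -1
```

With millions of features, w and x become enormous vectors and every classification or training step involves correspondingly large linear algebra, which is exactly the bottleneck the text describes.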
The idea is to initialise a quantum state whose amplitudes, which determine the probability density of the quantum state vector, correspond to the feature vectors in the dataset used for training the model. Anyone who has run a standard regression model knows how complex and time-consuming it gets as the number of rows and columns grows into the millions.
The processing power of quantum computers has the potential to unfold a myriad of opportunities. From drug discovery to solving mathematical problems, from machine learning to material sciences, quantum computers can revolutionise many domains.
Researchers have been working on machines that will be easier to build, manage, and scale and some computers are now available via the computing cloud. But it could still be many years before quantum computers become a household name.