Physicists from Ghent University, in collaboration with AI researchers from Ontario, Canada, have demonstrated that their new generative model, RUGAN, can produce snapshots of a doped two-dimensional Fermi-Hubbard model that are indistinguishable from previously reported experimental realisations.
RUGAN, short for regressive upscaling generative adversarial network, is based on generative adversarial networks (GANs).
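At the heart of any GAN is a two-player objective: a discriminator tries to tell real data from generated data, while a generator tries to fool it. The sketch below only illustrates that standard value function on toy one-dimensional data; the generator and discriminator here are deliberately trivial stand-ins, not the networks used in RUGAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "real" data and latent noise (illustrative stand-ins, not the
# actual Fermi-Hubbard snapshots used to train RUGAN).
real = rng.normal(loc=2.0, scale=0.5, size=1000)
noise = rng.normal(size=1000)

def generator(z, a=1.0, b=0.0):
    """A trivially simple generator: an affine map of the latent noise."""
    return a * z + b

def discriminator(x, w=1.0, c=-2.0):
    """A logistic 'real vs. fake' classifier on scalars."""
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

# The two terms of the standard GAN value function
# V(D, G) = E[log D(x_real)] + E[log(1 - D(G(z)))].
d_real = np.mean(np.log(discriminator(real)))
d_fake = np.mean(np.log(1.0 - discriminator(generator(noise))))
value = d_real + d_fake
print(round(value, 3))
```

During training, the discriminator's parameters are updated to increase this value while the generator's are updated to decrease its second term, until generated samples become statistically indistinguishable from real ones.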
Overview Of RUGAN
Before going any further into the workings of RUGAN, one first needs to understand why atomic-level monitoring is such a big deal. Atoms make up matter, so when the arrangement of atoms changes, the properties of the material change too. To determine that arrangement, researchers need to pinpoint the positions of atoms, and since atoms are always in motion, researchers create ultracold conditions to slow them almost to a standstill.
Accomplishing all of this experimentally is exhaustive and expensive. This is where RUGAN comes into the picture.
The new generative approach called “regressive upscaling generative adversarial network” (RUGAN) can create microstates with properties for which no training data is available, and can also create samples at a much larger scale (or ‘upscale’) than the training examples.
A current limitation of quantum gas microscopy experiments on ultracold atoms is the limited number of lattice sites that can be imaged. The upscaling ability of RUGAN overcomes this limitation.
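The property that makes upscaling possible is that convolutional weights are independent of the input size: a filter trained on small lattices can be slid over a larger noise field to produce a proportionally larger output. The snippet below demonstrates this shape behaviour with a single hand-picked 1-D filter; it is a minimal illustration of the principle, not RUGAN's actual architecture.

```python
import numpy as np

kernel = np.array([0.25, 0.5, 0.25])  # a fixed, hand-picked filter

def conv_layer(x, k):
    # 'valid' convolution: output length = len(x) - len(k) + 1,
    # so the same weights work on inputs of any size
    return np.convolve(x, k, mode="valid")

small = np.ones(16)   # training-scale input
large = np.ones(256)  # sampling-scale input: same weights, larger output

print(conv_layer(small, kernel).shape)  # → (14,)
print(conv_layer(large, kernel).shape)  # → (254,)
```

A fully convolutional generator exploits exactly this: train on snapshots of an imageable size, then feed in a larger latent field to sample configurations beyond the experimental field of view.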
RUGAN is made up of deep residual convolutional networks. Convolutional neural networks, by construction, have a limited receptive field defined by the size of the convolutional kernels and the depth of the network.
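The receptive field of such a network, i.e. how far apart two input sites can be and still influence the same output, follows directly from the kernel sizes and strides. A short helper makes the relationship concrete (a standard textbook formula, not code from the paper):

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of a stack of conv layers (strides default to 1)."""
    if strides is None:
        strides = [1] * len(kernel_sizes)
    r, jump = 1, 1  # field size and effective step at the current layer
    for k, s in zip(kernel_sizes, strides):
        r += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s            # striding multiplies the step of later layers
    return r

# Ten stride-1 layers with 3-wide kernels: each widens the field by 2.
print(receptive_field([3] * 10))  # → 21
```

This is why depth matters: with small kernels, only a sufficiently deep (here, residual) stack can capture correlations spanning many lattice sites.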
The output of the RUGAN is a series of synthetic snapshots at prescribed doping values. Researchers then applied the same analysis procedure to these as to the experimentally obtained snapshots.
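One such analysis is the spin-spin correlator mentioned later in this article. Assuming snapshots are stored as 2-D arrays of ±1 spins (a simplification: real quantum-gas-microscope snapshots also record holes and doublons), a connected nearest-neighbour correlator could be computed like this:

```python
import numpy as np

def nn_spin_correlator(snapshot):
    """Connected nearest-neighbour correlator <s_i s_j> - <s_i><s_j>,
    averaged over horizontal and vertical bonds of a 2-D ±1 spin array."""
    s = np.asarray(snapshot, dtype=float)
    h = np.mean(s[:, :-1] * s[:, 1:]) - np.mean(s[:, :-1]) * np.mean(s[:, 1:])
    v = np.mean(s[:-1, :] * s[1:, :]) - np.mean(s[:-1, :]) * np.mean(s[1:, :])
    return 0.5 * (h + v)

# A perfect antiferromagnetic (checkerboard) snapshot:
x, y = np.indices((8, 8))
afm = np.where((x + y) % 2 == 0, 1, -1)
print(nn_spin_correlator(afm))  # → -1.0 (neighbours fully anti-aligned)
```

Running identical code on experimental and synthetic snapshots is what allows a like-for-like comparison of the two.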
As only small-scale samples are required for training, the latter generalisation enables efficient sampling of configurations at scales inaccessible to traditional methods, either due to excessive computational cost or experimental restrictions on the imageable system size.
While current theoretical frameworks of this model often focus on describing a number of specified observables, such as spin-spin correlators or hidden order, the power of generative learning lies in its unbiased learning procedure. Hence, especially at large doping values, the synthetic snapshots created by RUGAN provide a better match with experimental observations than current theoretical predictions.
For many problems in atomic physics, finding a solution is challenging, but verifying a candidate solution once you have one is comparatively inexpensive.
For search problems, one could use the surrogate solution for narrowing down the space of parameters and only do detailed simulations for the cases that are interesting.
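A minimal sketch of that workflow, assuming a cheap surrogate (such as a trained generative model) and an expensive ground-truth simulation; the function names and toy cost functions here are hypothetical placeholders, not code from the paper:

```python
def cheap_surrogate(p):
    # placeholder: a generative model's quick estimate of how
    # interesting parameter p is (lower = more promising)
    return (p - 0.7) ** 2

def expensive_simulation(p):
    # placeholder for a costly quantum Monte Carlo or
    # exact-diagonalisation run at parameter p
    return (p - 0.7) ** 2 + 0.001

grid = [i / 100 for i in range(101)]               # 101 candidate parameters
shortlist = sorted(grid, key=cheap_surrogate)[:5]  # surrogate keeps the top 5
best = min(shortlist, key=expensive_simulation)    # run the real thing on 5, not 101
print(best)  # → 0.7
```

The expensive simulation is invoked only on the shortlisted candidates, which is where the practical savings come from.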
The authors believe that there are still some good use cases for generative models trained on simulated data. They point to how GANs are becoming popular in materials science for recommending new structures, owing to their ability to sample the latent space effectively.
However, they also admit that GANs are not the final step in these experiment-based results but a technique that can help researchers home in on certain solutions and cut short the chase, which usually runs into decades for experiments in atomic physics.
The deep learning method in this work is broadly applicable and can be used for the efficient large-scale simulation of equilibrium and nonequilibrium physical systems.
The Hubbard model discussed above is of great significance to solid-state physics, the branch of physics that covers semiconductors and the properties of materials at the atomic level.