Complex statistical computations have become far more tractable in recent years, thanks to advances in hardware and in the methods available for traditional statistical analysis. With machine learning algorithms and sophisticated hardware at the forefront, statistical computations now require less time and fewer computing resources. In addition, statistical software packages such as Minitab, SPSS and JMP are fuelling novel computing approaches for obtaining statistical results.

One statistical method that has seen a face-lift is the Monte Carlo method (also known as Monte Carlo analysis or Monte Carlo simulation in different fields of study). The updated method mainly incorporates ML techniques to deal with the shortcomings of the conventional approach. In this article, we explore the 'Self-Learning' Monte Carlo (SLMC) method, developed by academics at the Chinese Academy of Sciences in collaboration with the Massachusetts Institute of Technology, which demonstrates a practical improvement over the standard Monte Carlo method/simulation.

What is Monte Carlo method?

Originally developed by Stanislaw Ulam, a Polish-American mathematician, the Monte Carlo (MC) method, or simulation, measures the relationship between two or more mathematical variables using random sampling. The process inevitably involves sampling error, which is significantly reduced when large samples are taken. In other words, the method draws random samples over specific regions (called intervals) to approximate quantities, such as integrals, that are themselves deterministic.

Machine Learning in Monte Carlo method

The basis of SLMC is to perform "update" simulations on an existing MC configuration, producing a large set of configurations that serve as training data for further simulation. In other words, the simulation learns on its own from previous input configurations.
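To make the random-sampling idea from the Monte Carlo section above concrete, here is a minimal sketch (plain Python, not taken from the paper) that estimates a definite integral by averaging a function at uniformly random points; the function name and parameters are illustrative:

```python
import random

def mc_integrate(f, a, b, n_samples=100_000, seed=0):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly random points and scaling by the interval width."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n_samples))
    return (b - a) * total / n_samples

# Example: integrate x^2 over [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
print(estimate)
```

As the article notes, the error shrinks as the sample size grows; for this estimator it falls roughly as 1/sqrt(n_samples).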
The "update" is carried out using a Markov process, in which the transition probability from configuration A to configuration B must obey the detailed balance principle, given by

P(A → B) / P(B → A) = W(B) / W(A)

where W stands for the probability weight of a configuration. The update used throughout the study is labelled 'local' because a specific instance (called a 'site') in the configuration is chosen, and a new configuration is obtained by changing the variable at that site. The new configuration is then tested against the detailed balance principle mentioned earlier: if it satisfies the resulting acceptance criterion, the new configuration is appended to the Markov chain; otherwise the current configuration is kept and replicated. In this way, SLMC can work with most popular types of statistical models by learning from data.

SLMC Procedure

SLMC follows a 4-step procedure.

1. Perform a trial simulation with local updates, which yields a set of configurations that act as training data.
2. Select a Hamiltonian operator H and train it on the configurations provided, giving an effective Hamiltonian Heff.
3. Propose simulation moves according to the effective Hamiltonian Heff.
4. Check whether the proposed moves are accepted with respect to the original Hamiltonian H.

Of the above steps, Steps 1 and 2 form the self-learning component, while Steps 3 and 4 are performed repeatedly to carry out the actual MC simulation.

The implementation in the paper applies the method to a model of ferromagnetic properties from statistical mechanics, examining quantities such as the critical temperature. The simulation checks the interaction among the four spins that appear in the Hamiltonian. The results show the simulation process is roughly 10 times faster than the conventional one.
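The 'local' update and detailed-balance acceptance rule described above can be sketched with the standard Metropolis algorithm on a one-dimensional Ising chain; this is a simplified stand-in model, not the paper's actual Hamiltonian:

```python
import math
import random

def local_metropolis_sweep(spins, beta, rng):
    """One sweep of local updates on a 1D Ising chain (J = 1,
    periodic boundaries): visit each site, propose flipping its spin,
    and accept with probability min(1, W(B)/W(A)) = min(1, exp(-beta*dE)),
    which enforces detailed balance: P(A->B)/P(B->A) = W(B)/W(A)."""
    n = len(spins)
    for i in range(n):
        # Energy change from flipping site i in E = -sum_i s_i * s_{i+1}.
        neighbours = spins[(i - 1) % n] + spins[(i + 1) % n]
        dE = 2.0 * spins[i] * neighbours
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]  # accept: new configuration enters the chain
        # else: reject, and the current configuration is kept (replicated)

rng = random.Random(42)
spins = [rng.choice([-1, 1]) for _ in range(64)]
for _ in range(200):
    local_metropolis_sweep(spins, beta=1.0, rng=rng)
print(sum(spins))  # net magnetisation after the sweeps
```

In the SLMC procedure, the configurations produced by such local sweeps form the training data of Step 1, and the same acceptance test against the original Hamiltonian is what keeps Step 4 exact.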
All of the results are compared against the local update in terms of autocorrelation time, which measures how many steps the Markov chain needs before successive MC configurations become statistically independent.

Throughout the iterative process, the data points of the spin correlation fall close to the self-learning fit, indicating that the simulation yields valid output (the energy difference, in this case) and remains consistent with the configurations obtained from the SLMC model.

Conclusion

The approach and application in the study were limited to spin systems (atomic physics), specifically ferromagnetism. The authors also suggest that SLMC may see further improvements as newer ML techniques are applied to the simulation. The study likewise depends on a Hamiltonian operator, which restricts the method to quantum and statistical mechanics; with ML, this requirement could eventually be relaxed and the approach expanded to other areas of study. Ultimately, this helps bridge the gap between developing a model in theory and its computational performance in practice.