In recent years, machine learning has been successfully used to identify phase transitions and classify phases of matter in a data-driven manner. Neural network (NN)-based approaches are particularly appealing because NNs can approximate arbitrary functions. However, the larger an NN, the more computational resources are needed to train it, and the harder it is to interpret its decision making. As a result, we still understand little about the working principles of such machine learning approaches, when they fail or succeed, and how they differ from traditional approaches. In this talk, I will present analytical expressions for the optimal predictions of three popular NN-based methods for detecting phase transitions, each of which at its core relies on solving a classification or regression task via supervised learning. These predictions are optimal in the sense that they minimize the target loss function; in practice, they are therefore well approximated by high-capacity predictive models, such as large NNs after ideal training. I will show that the analytical expressions we have derived provide a deeper understanding of a variety of previous NN-based studies and enable a more efficient numerical routine for detecting phase transitions from data.
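To make the setting concrete, here is a minimal toy sketch of one popular supervised scheme for detecting phase transitions from data, often called "learning by confusion". Everything below is illustrative and assumed, not taken from the talk: a tiny logistic regression stands in for a large NN, and synthetic one-dimensional samples stand in for physical measurement data. The idea is to scan tentative critical points, label the data by which side of the tentative point it falls on, train a classifier, and record the achievable accuracy; away from the trivial endpoints, the accuracy peaks at the true transition.

```python
import numpy as np

# Toy "learning by confusion" sketch (illustrative assumption, not the
# speaker's code). A tiny logistic regression stands in for a large NN;
# with ideal training, both approach the loss-minimizing optimal predictor.
rng = np.random.default_rng(0)

p_c_true = 0.5                        # true critical parameter of the toy model
params = np.linspace(0.05, 0.95, 19)  # tuning-parameter grid
# Samples concentrate near 0 below the transition and near 1 above it.
X = np.concatenate(
    [rng.normal(0.0 if p <= p_c_true else 1.0, 0.1, 200) for p in params]
)
P = np.repeat(params, 200)

def train_accuracy(x, y, steps=1000, lr=0.5):
    """Train a one-feature logistic regression by gradient descent
    and return its training accuracy."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((pred - y) * x)
        b -= lr * np.mean(pred - y)
    pred = 1.0 / (1.0 + np.exp(-(w * x + b)))
    return float(np.mean((pred > 0.5) == y))

# Scan tentative critical points (excluding the trivial endpoints): label
# each sample by the side of the tentative point its parameter falls on,
# train a classifier, and record the accuracy. Mislabeled samples "confuse"
# the classifier unless the tentative point matches the true transition.
candidates = params[1:-1]
accs = [train_accuracy(X, (P > p_star).astype(float)) for p_star in candidates]
p_best = candidates[int(np.argmax(accs))]
print(f"estimated transition: {p_best:.2f}")
```

In this toy example the accuracy curve is maximal near the true critical parameter of 0.5; the talk's analytical results characterize exactly such loss-minimizing optimal predictions without having to train any network at all.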
Zoom Link: https://pitp.zoom.us/j/91642481966?pwd=alkrWEFFcFBvRlJEbDRBZWV3MFFDUT09