Recently, machine learning has become a popular tool in fundamental science, including lattice field theory. Here I will report on some recent progress, including the Inverse Renormalisation Group and quantum field-theoretical machine learning, which combine insights from lattice field theory and machine learning in a hopefully constructive manner.
In 2008, Masahiro Hotta proposed a protocol for transporting energy between two localized observers, A and B, without any energy propagating from A to B. When this protocol is applied to the vacuum state of a quantum field, the local energy density of the field attains negative values, violating energy conditions.
We will explore the protocol of quantum energy teleportation and show how quantum information techniques can be used to activate thermodynamically passive states. We will review the first experiment showcasing the local activation of ground-state energy (carried out in 2022), and we will discuss the potential of this relativistic quantum information protocol to create exotic distributions of stress-energy density in a quantum field theory, and how spacetime might react to them.
In recent years, machine learning has been successfully used to identify phase transitions and classify phases of matter in a data-driven manner. Neural network (NN)-based approaches are particularly appealing due to the ability of NNs to learn arbitrary functions. However, the larger an NN, the more computational resources are needed to train it, and the more difficult it is to understand its decision making. Thus, we still understand little about the working principles of such machine learning approaches, about when they fail or succeed, and about how they differ from traditional approaches. In this talk, I will present analytical expressions for the optimal predictions of three popular NN-based methods for detecting phase transitions that, at their core, rely on solving classification and regression tasks via supervised learning. These predictions are optimal in the sense that they minimize the target loss function. Therefore, in practice, optimal predictive models are well approximated by high-capacity predictive models, such as large NNs after ideal training. I will show that the analytical expressions we have derived provide a deeper understanding of a variety of previous NN-based studies and enable a more efficient numerical routine for detecting phase transitions from data.
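As a minimal, hypothetical illustration of the supervised-learning scheme this abstract refers to (not the speaker's method or data; the toy model, `sample` function, and all parameters below are invented for the sketch): label samples taken deep inside each phase, train a classifier, and locate the transition where the classifier's mean output crosses 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

p_c = 0.5  # "true" transition point of the toy model
params = np.linspace(0.0, 1.0, 21)

def sample(p, n=200):
    # Toy order-parameter samples: a smooth crossover from 0 to 1 around p_c,
    # standing in for, e.g., the magnetization across a phase transition.
    mean = 1.0 / (1.0 + np.exp(-20.0 * (p - p_c)))
    return mean + 0.1 * rng.standard_normal(n)

# Supervised scheme: label samples taken far below / far above the suspected
# transition and train a binary classifier on them.
X = np.concatenate([sample(0.0), sample(1.0)])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Plain-numpy logistic regression, trained by gradient descent on the
# mean cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(2000):
    prob = 1.0 / (1.0 + np.exp(-(w * X + b)))
    w -= 0.1 * np.mean((prob - y) * X)
    b -= 0.1 * np.mean(prob - y)

# Scan the classifier's mean output over the control parameter; the predicted
# transition sits where that output crosses 1/2.
mean_out = np.array(
    [np.mean(1.0 / (1.0 + np.exp(-(w * sample(p) + b)))) for p in params]
)
p_est = params[np.argmin(np.abs(mean_out - 0.5))]
print(f"estimated transition: {p_est:.2f}")
```

The analytical results the talk announces characterize the loss-minimizing (optimal) version of exactly this kind of classifier output, which a large, well-trained NN approximates.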
In this talk, I will discuss our work on using models inspired by natural language processing in the realm of many-body physics. Specifically, I will demonstrate their utility in reconstructing quantum states and simulating the real-time dynamics of closed and open quantum systems. Finally, I will show the efficacy of using these models for combinatorial optimization, yielding solutions of exceptional accuracy compared to traditional simulated annealing and simulated quantum annealing methods.
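The "language-model" idea underlying such approaches is the autoregressive factorization p(s) = ∏ᵢ p(sᵢ | s₁…sᵢ₋₁), with spins sampled sequentially like tokens. A minimal numpy sketch with a hand-fixed conditional (a stand-in for a trained network; `conditional`, `bias`, and the nearest-neighbor dependence are invented here, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional(prev_spin, bias=2.0):
    # Hypothetical conditional p(s_i = +1 | s_{i-1}): favors aligning with
    # the previous spin. In an actual NLP-inspired ansatz this would be the
    # output of a trained RNN or transformer conditioned on the whole
    # prefix s_1 .. s_{i-1}, not just the last spin.
    return 1.0 / (1.0 + np.exp(-bias * prev_spin))

def sample_chain(n=10):
    # Sequential ("token-by-token") sampling from the autoregressive
    # factorization p(s) = prod_i p(s_i | s_1 .. s_{i-1}).
    spins = [1 if rng.random() < 0.5 else -1]
    for _ in range(n - 1):
        spins.append(1 if rng.random() < conditional(spins[-1]) else -1)
    return spins

print(sample_chain(12))
```

For combinatorial optimization, conditionals of this form are typically parametrized by a network and trained variationally, with the effective temperature annealed toward the cost function, playing the role of the proposal dynamics in simulated annealing.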
This talk will consider a stochastic multiple-timescale dynamical system modeling a biological neuron. With this model, we will separately uncover the mechanisms underlying two different ways biological neurons encode information in the presence of stochastic perturbations: self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR). We will then show that, in the same weak-noise limit, SISR and ISR are related through the relative geometric positioning (and stability) of the fixed point and the generic folded singularity of the model’s critical manifold. This mathematical result could explain the experimental observation that neurons with identical morphological features sometimes encode different information given identical synaptic input. Finally, if time permits, we shall discuss plausible applications of this result in neurobiologically inspired machine learning algorithms, particularly reservoir computing based on liquid-state machines.
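A minimal Euler–Maruyama sketch of the kind of stochastic slow–fast neuron model at play, here a noisy FitzHugh–Nagumo system (all parameter values and the spike-counting heuristic are illustrative choices, not those of the talk):

```python
import numpy as np

def simulate(eps=0.01, a=1.05, sigma=0.05, dt=1e-3, steps=100_000, seed=0):
    # Euler-Maruyama integration of a slow-fast FitzHugh-Nagumo neuron:
    #   eps * dv = (v - v**3/3 - w) dt + sigma dW   (fast voltage variable)
    #        dw = (v + a) dt                        (slow recovery variable)
    # For a > 1 the unique fixed point is stable (excitable regime), so any
    # spiking is noise-driven; counting spikes versus sigma is the raw
    # ingredient of SISR/ISR-type studies.
    rng = np.random.default_rng(seed)
    v, w = -1.0, -0.6
    spikes, above = 0, False
    for _ in range(steps):
        noise = (sigma / eps) * np.sqrt(dt) * rng.standard_normal()
        v += (v - v**3 / 3.0 - w) / eps * dt + noise
        w += (v + a) * dt
        if v > 1.0 and not above:  # crude spike count via threshold crossing
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes
```

In the geometric picture the talk describes, the behavior then hinges on where this stable fixed point sits relative to the folded singularity of the critical manifold w = v - v³/3.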
The importance of the tenfold way in physics was only recognized in this century. Simply put, it implies that there are ten fundamentally different kinds of matter. But it goes back to 1964, when the topologist C. T. C. Wall classified the associative real super division algebras and found ten of them. The three 'purely even' examples were already familiar: the real numbers, complex numbers and quaternions. The rest become important when we classify representations of groups on Z/2-graded Hilbert spaces. We explain this classification, its connection to Clifford algebras, and some of its implications.
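As a concrete anchor for the Clifford-algebra connection mentioned above (a standard fact added here for orientation, using the convention eᵢ² = −1), the real Clifford algebras realize an eightfold pattern:

```latex
% Real Clifford algebras Cl_n, generated by e_1, ..., e_n with e_i^2 = -1 and
% e_i e_j = -e_j e_i for i != j; K(m) denotes the m x m matrices over K.
\begin{aligned}
\mathrm{Cl}_0 &\cong \mathbb{R}, &
\mathrm{Cl}_1 &\cong \mathbb{C}, &
\mathrm{Cl}_2 &\cong \mathbb{H}, &
\mathrm{Cl}_3 &\cong \mathbb{H} \oplus \mathbb{H}, \\
\mathrm{Cl}_4 &\cong \mathbb{H}(2), &
\mathrm{Cl}_5 &\cong \mathbb{C}(4), &
\mathrm{Cl}_6 &\cong \mathbb{R}(8), &
\mathrm{Cl}_7 &\cong \mathbb{R}(8) \oplus \mathbb{R}(8),
\end{aligned}
\qquad
\mathrm{Cl}_{n+8} \cong \mathrm{Cl}_n \otimes \mathbb{R}(16).
```

The period-8 real pattern (Bott periodicity), together with the period-2 complex one, is the source of the "eight plus two equals ten" count behind the tenfold way.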