In the fuzzy dark matter model, dark matter consists of “axion-like” ultra-light scalar particles with mass around 10⁻²² eV. This candidate behaves similarly to cold dark matter on large scales, but exhibits different properties on smaller (galactic) scales due to macroscopic wave effects arising from the extremely light particles’ large de Broglie wavelengths. It has both particle physics motivations and a rich astrophysical phenomenology: the manifestation of wave effects gives rise to notable differences in structures on highly non-linear scales, which can bear on a number of contentious small-scale tensions in the standard cosmological model, ΛCDM. Unique features include transient wave interference patterns and granules, the presence of flat-density cores (solitons) at the centers of dark matter halos, and the formation of quantized vortices. I will present large numerical simulations of cosmic structure formation in this dark matter model – including the full non-linear wave dynamics – obtained with a pseudo-spectral method for solving the Schrödinger–Poisson equations, and describe the significant computational challenges these equations pose. Using results from these simulations, I will then discuss several observables, such as the evolution of the matter power spectrum, the fuzzy dark matter halo mass function, dark matter halo density profiles, and the question of a fuzzy dark matter core–halo mass relation, and contrast them with corresponding results for the cold dark matter model.
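As a rough illustration of the pseudo-spectral approach mentioned above, the sketch below advances a dimensionless Schrödinger–Poisson system on a periodic grid with a split-step (kick-drift-kick) scheme. This is a minimal sketch, not the actual simulation code; the grid size, box length, coupling, time step, and initial condition are illustrative placeholders rather than values used in the talk.

```python
# Minimal sketch of a split-step (kick-drift-kick) pseudo-spectral integrator for a
# dimensionless Schrodinger-Poisson system on a periodic box:
#     i dpsi/dt = -1/2 laplacian(psi) + V psi,   laplacian(V) = 4*pi*G*(|psi|^2 - mean density)
# All parameters below are assumed toy values, not those of the actual simulations.
import numpy as np

N, L, G, dt = 64, 1.0, 1.0, 1e-4                     # grid points per side, box size, coupling, time step (assumed)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)         # angular wavenumbers of the periodic grid
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2

def potential(psi):
    """Solve the Poisson equation spectrally for the zero-mean gravitational potential."""
    rho = np.abs(psi)**2
    rho_hat = np.fft.fftn(rho - rho.mean())
    k2_safe = np.where(k2 > 0.0, k2, 1.0)            # avoid dividing by zero at the k = 0 mode
    V_hat = -4.0 * np.pi * G * rho_hat / k2_safe
    V_hat[0, 0, 0] = 0.0
    return np.real(np.fft.ifftn(V_hat))

def step(psi):
    """One kick-drift-kick step; the free (drift) evolution is exact in Fourier space."""
    psi = np.exp(-0.5j * dt * potential(psi)) * psi                     # half kick
    psi = np.fft.ifftn(np.exp(-0.5j * dt * k2) * np.fft.fftn(psi))      # full drift
    return np.exp(-0.5j * dt * potential(psi)) * psi                    # half kick

rng = np.random.default_rng(0)
psi = 1.0 + 0.01 * rng.standard_normal((N, N, N))    # uniform background plus a small perturbation
for _ in range(10):
    psi = step(psi)
```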
Aside from quantum mechanics, other areas in physics constrain information processing. From relativity, we know information cannot move faster than the speed of light. In quantum gravity, we expect, but do not fully understand, additional constraints, for instance on how densely information can be stored. Can we develop an understanding of information processing in the context of quantum gravity? Towards doing so, we consider quantum information within ``holographic'' spacetimes, which have an alternative, non-gravitational description. In that context, questions about information processing in the presence of gravity can be translated into different questions about quantum information without gravity. As a specific example, we study constraints on computation within a gravitating region, and use the holographic description to argue that gravity constrains the complexity of operations that can happen inside the region. Along the way, we are led to new connections relating quantum gravity, position-based cryptography, quantum secret sharing, and complexity theory.
Topological quantum codes illustrate a variety of concepts in quantum many-body physics. They also provide a realistic and resource-efficient approach to building scalable quantum computers. In this talk, I will focus on fault-tolerant quantum computing with a new topological code, the 3D subsystem toric code (STC). The 3D STC can be realized on the cubic lattice by measuring small-weight local parity checks. The 3D STC is capable of handling measurement errors while performing reliable quantum error correction and implementing logical gates in constant time. This dramatically reduces the computational time overhead and gives the 3D STC a resilience to time-correlated noise.
Reinforcement learning (RL) has made substantial empirical progress in solving hard AI challenges in the past few years. Much of this progress—Go, Dota 2, StarCraft 2, economic simulation, social behavior learning, and so on—comes from multi-agent RL, that is, sequential decision making involving more than one agent. While the theoretical study of single-agent RL has a long history and rapidly growing recent interest, multi-agent RL theory is arguably a newer and less developed field, with its own unique challenges and opportunities.
In this tutorial, we present an overview of recent theoretical developments in multi-agent RL. We will focus on the model of Markov games and cover basic formulations, objectives, and learning algorithms along with their theoretical guarantees. We will discuss the inefficiency of classical algorithms---self-play, fictitious play, etc.---and then proceed to recent advances in provably efficient learning algorithms under various regimes, including two-player zero-sum games, multiplayer general-sum games, exploration, and large state spaces (function approximation).
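To give a concrete flavor of one of the classical algorithms mentioned above (fictitious play), here is a minimal sketch for a two-player zero-sum matrix game; the game (rock-paper-scissors), payoffs, and iteration count are illustrative assumptions, not material from the tutorial itself.

```python
# Minimal sketch of fictitious play in a two-player zero-sum matrix game
# (rock-paper-scissors). Each player best-responds to the opponent's empirical
# mixed strategy; in zero-sum games the empirical frequencies converge to a
# Nash equilibrium. All values below are illustrative assumptions.
import numpy as np

A = np.array([[ 0, -1,  1],      # row player's payoff matrix: rock, paper, scissors
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

counts_row = np.ones(3)          # empirical action counts, initialized uniformly
counts_col = np.ones(3)
for _ in range(10000):
    # best responses to the opponent's empirical mixed strategy
    row_action = np.argmax(A @ (counts_col / counts_col.sum()))
    col_action = np.argmin((counts_row / counts_row.sum()) @ A)
    counts_row[row_action] += 1
    counts_col[col_action] += 1

print(counts_row / counts_row.sum(), counts_col / counts_col.sum())  # both ~ [1/3, 1/3, 1/3]
```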
This tutorial will give an overview of the theoretical foundations of reinforcement learning, a promising paradigm for developing AI systems capable of making data-driven decisions in unknown environments. The first part of the tutorial will cover introductory concepts such as problem formulations, planning in Markov decision processes (MDPs), exploration, and generalization; no prior background will be assumed. Building on these concepts, the main aim of the tutorial will be to give a bird's-eye view of the statistical landscape of reinforcement learning (e.g., what modeling assumptions lead to sample-efficient algorithms), with a focus on algorithmic principles and fundamental limits. Topics covered will range from basic challenges and solutions (exploration in tabular RL, policy gradient methods, contextual bandits) to the current frontier of understanding. A running theme will be connections and parallels between supervised learning and reinforcement learning. Time permitting, we will touch on additional topics such as reinforcement learning with offline data.
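As a small, self-contained illustration of the tabular setting the tutorial starts from, the sketch below runs epsilon-greedy Q-learning on a toy chain MDP; the environment, learning rate, and exploration schedule are illustrative assumptions rather than examples taken from the tutorial.

```python
# Minimal sketch of tabular RL with simple (epsilon-greedy) exploration on a toy
# chain MDP. The environment, horizon, learning rate, and epsilon are assumed
# toy choices, not examples from the tutorial.
import numpy as np

n_states, n_actions, gamma, alpha, eps = 5, 2, 0.9, 0.1, 0.1
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

def chain_step(s, a):
    """Toy chain MDP: action 1 moves right, action 0 resets; reward only at the last state."""
    s_next = min(s + 1, n_states - 1) if a == 1 else 0
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

for episode in range(2000):
    s = 0
    for t in range(20):
        # epsilon-greedy exploration: mostly exploit, occasionally act at random
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = chain_step(s, a)
        # standard Q-learning update toward the bootstrapped target
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.argmax(Q, axis=1))   # learned greedy policy: move right in every state
```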
The formalism of quantum theory over discrete systems is extended in two significant ways. First, tensors and traceouts are generalized, so that systems can be partitioned according to almost arbitrary logical predicates. Second, quantum evolutions are generalized to act over network configurations, so that nodes can merge, split, and reconnect coherently in a superposition. The mathematical framework presented here is anchored on solid ground through numerous lemmas. Indeed, one might have feared that the familiar interrelations between the notions of unitarity, complete positivity, trace preservation, non-signalling causality, locality, and localizability that are standard in quantum theory would be jeopardized as the partitioning of systems becomes both logical and dynamical. These interrelations in fact carry through, although two new notions become instrumental: consistency and comprehension.