Temperature-Resistant Order in 2+1 Dimensions
Fedor Popov Stony Brook University
High temperatures are typically thought to increase disorder. Here we examine this idea in quantum field theory in 2+1 dimensions. To this end, we explore a novel class of tractable models consisting of nearly-mean-field scalars interacting with critical scalars. We identify UV-complete, local, unitary models in this class and show that symmetry breaking $\mathbb{Z}_2 \to \emptyset$ occurs at any temperature in some regions of the phase diagram. This phenomenon, previously observed in models with fractional dimensions, in strict planar limits, or with non-local interactions, is now exhibited in a local, unitary 2+1-dimensional model with a finite number of fields.
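As a toy illustration of the $\mathbb{Z}_2$-breaking phenomenon (the textbook Landau mean-field picture, not the specific models of this work), the following sketch shows how a quartic free energy with $\phi \to -\phi$ symmetry develops degenerate broken-symmetry minima; all couplings here are hypothetical:

```python
import math

def free_energy(phi, r, lam):
    """Toy Landau free energy f(phi) = r*phi^2 + lam*phi^4, Z_2-symmetric under phi -> -phi."""
    return r * phi**2 + lam * phi**4

def order_parameter(r, lam):
    """Minimizer of f: phi* = 0 for r >= 0 (symmetric phase),
    phi* = sqrt(-r / (2*lam)) for r < 0 (broken phase)."""
    if r >= 0:
        return 0.0
    return math.sqrt(-r / (2.0 * lam))

# Symmetric phase: the minimum sits at phi = 0.
assert order_parameter(1.0, 1.0) == 0.0
# Broken phase: two degenerate minima at +/- phi*, lower than f(0).
phi_star = order_parameter(-1.0, 1.0)
assert free_energy(phi_star, -1.0, 1.0) < free_energy(0.0, -1.0, 1.0)
```

In the mean-field caricature, heating corresponds to increasing r through zero, which restores the symmetry; the abstract's point is that in the models studied, the broken phase can persist at every temperature.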
Niko Sarcevic Newcastle University
The Vera C. Rubin Observatory’s LSST promises unprecedented cosmological constraints, but achieving them requires more than just statistical power—it demands forecasting pipelines that can account for complex astrophysical systematics and modeling challenges on small scales. In this talk, I present work within the LSST Dark Energy Science Collaboration (DESC) to develop a modular forecasting framework that connects realistic data modeling with infrastructure built for extensibility and validation. Drawing on my role as forecasting group lead, I will outline key challenges in pipeline design, the role of validation in maintaining forecast credibility, and the use of good coding practices—such as modularization and model registries—to ensure long-term adaptability. I’ll close with a look at new directions, including forecasts at high redshift and multi-probe combinations, underscoring how thoughtful infrastructure enables reliable science in the LSST era.
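The abstract mentions model registries as one of the coding practices that keep a forecasting pipeline extensible. A minimal sketch of that pattern is below; all names are hypothetical and this is not DESC/LSST code:

```python
# Minimal sketch of a model registry (hypothetical names, not DESC code).
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator that records a model class under a string key,
    so pipelines can instantiate models from configuration files."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

@register_model("linear_bias")
class LinearBiasModel:
    """Toy galaxy-bias model: P_gg(k) = b1^2 * P_mm(k)."""
    def __init__(self, b1):
        self.b1 = b1
    def galaxy_power(self, matter_power):
        return self.b1**2 * matter_power

# The pipeline core looks up models by name, so adding a new model
# means registering a new class -- no changes to the core loop.
model = MODEL_REGISTRY["linear_bias"](b1=2.0)
assert model.galaxy_power(1.5) == 6.0
```

The design payoff is the one named in the talk: validation and extensibility, since each registered model can be tested in isolation against the same interface.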
I will present a method developed roughly over the past decade and a half that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for your target problem. This technique turns out to be a powerful tool in algorithm design.
In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from a recent renaissance of efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second will be drawn from combinatorial optimization. It will focus on finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).
Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
Since the early days of property testing, the problem of monotonicity testing has been a central problem of study. Despite the simplicity of the problem, the question has led to a (still continuing) flurry of papers over the past two decades. A long-standing open problem has been to determine the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids.
This talk is about the (almost) resolution of this question, via \sqrt{d}-query "path testers". The path to these results is through a beautiful theory of "directed isoperimetry", showing that classic isoperimetric theorems on the Boolean hypercube extend to the directed setting. This fact is surprising, since directed graphs/random walks are often ill-behaved and rarely yield a nice theory. These directed theorems provide an analysis of directed random walks on product domains, which leads to optimal monotonicity testers.
I will present some of the main tools used in these results, and try to provide an intuitive explanation of directed isoperimetric theorems.
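To make the setting concrete, here is the classic non-adaptive edge tester on the hypercube, a simpler predecessor of the \sqrt{d}-query path testers the talk covers (path testers query pairs further apart along directed random walks). This is an illustrative sketch, not the talk's construction:

```python
import random

def edge_tester(f, d, trials, rng):
    """Non-adaptive, one-sided monotonicity tester on {0,1}^d.

    Samples random directed edges x -> y (y = x with one 0-bit raised
    to 1) and reports a violation whenever f(x) > f(y).  A monotone f
    is never rejected; a function far from monotone is rejected with
    high probability given enough trials.
    """
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(d)]
        i = rng.randrange(d)
        if x[i] == 1:
            continue  # not an increasing edge in direction i; skip
        y = list(x)
        y[i] = 1
        if f(tuple(x)) > f(tuple(y)):
            return False  # monotonicity violation witnessed
    return True  # consistent with monotone

rng = random.Random(0)
# Monotone function (OR of all bits): never rejected.
assert edge_tester(lambda x: int(any(x)), d=8, trials=500, rng=rng)
# Anti-monotone in the first bit: far from monotone, rejected w.h.p.
assert not edge_tester(lambda x: 1 - x[0], d=8, trials=500, rng=rng)
```

The directed-isoperimetry results in the talk are what let one cut the query count from the edge tester's d-type dependence down to roughly \sqrt{d}.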
Markov chains are among the most popular sampling algorithms both in theory and practice. There is a vast theory on understanding the mixing times of Markov chains. But what if the Markov chain does not mix fast? Can we still use such Markov chains in downstream applications of sampling, and what theoretical guarantees can we show about these chains? In this talk, we will define a notion of "locally stationary measure" -- which is an analogue of local optima in convex optimization. We will see some generic methods to analyze the structure of distributions that non-mixing Markov chains sample from, along with applications to finding large independent sets in graphs, and finding planted cuts in random graphs. Finally, we will conclude the talk with a set of open questions on locally stationary measures. Based on joint work with Kuikui Liu, Prasad Raghavendra, Amit Rajaraman and David X. Wu.
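A standard chain in the independent-set application is Glauber dynamics for the hardcore model; the sketch below (an illustration, not the talk's analysis) shows the kind of chain whose non-mixing behavior the "locally stationary measure" framework addresses. At large fugacity on dense graphs, such a chain can sit near a local mode for a very long time:

```python
import random

def hardcore_glauber(adj, lam, steps, rng):
    """Glauber dynamics for the hardcore (independent set) model.

    adj: adjacency list {v: set of neighbors}.  Each step picks a
    vertex v; with probability lam/(1+lam) it tries to occupy v
    (allowed only if no neighbor is occupied), otherwise it vacates v.
    The stationary distribution weights an independent set S
    proportionally to lam**len(S).
    """
    vertices = list(adj)
    occupied = set()
    for _ in range(steps):
        v = rng.choice(vertices)
        if rng.random() < lam / (1.0 + lam):
            if not (adj[v] & occupied):
                occupied.add(v)   # occupy v; set stays independent
        else:
            occupied.discard(v)   # vacate v
    return occupied

# Toy graph: the 4-cycle 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
s = hardcore_glauber(adj, lam=2.0, steps=1000, rng=random.Random(42))
# Invariant of the dynamics: the returned set is always independent.
assert all(not (adj[v] & s) for v in s)
```

Even when this chain does not mix, one can ask what the distribution it actually samples from looks like after a bounded number of steps, which is where locally stationary measures enter.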
I will present a method to reduce extremal combinatorial problems to establishing the unsatisfiability of k-sparse linear equations mod 2 (aka k-XOR formulas) with a limited amount of randomness. This latter task is then accomplished by bounding the spectral norm of certain "Kikuchi" matrices built from the k-XOR formulas. In these talks, I will discuss a couple of applications of this method from the following list.
1. Proving the hypergraph Moore bound (Feige's 2008 conjecture) -- the optimal trade-off between the number of equations in a system of k-sparse linear equations modulo 2 and the size of the smallest linearly dependent subset. This theorem generalizes the famous irregular Moore bound of Alon, Hoory and Linial (2002) for graphs (equivalently, 2-sparse linear equations mod 2).
2. Proving a cubic lower bound on 3-query locally decodable codes (LDCs), improving on a quadratic lower bound of Kerenidis and de Wolf (2004), and its generalization to q-query locally decodable codes for all odd q.
3. Proving an exponential lower bound on linear 3-query locally correctable codes (LCCs). This result establishes a sharp separation between 3-query LCCs and 3-query LDCs, which are known to admit constructions of sub-exponential length. It is also the first result to obtain any super-polynomial lower bound for >2-query local codes.
Time permitting, I may also discuss applications to strengthening Szemerédi's theorem, which asks for the minimal size of a random subset S of the integers such that every dense subset of the integers contains a 3-term arithmetic progression with a common difference from S, and the resolution of Hamada's 1970 conjecture on the algebraic rank of binary 4-designs.
I will include pointers to the many open questions and directions where meaningful progress seems within reach for researchers who may get interested in some of the topics.
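The basic object throughout is a system of k-sparse linear equations over GF(2). As a minimal warm-up sketch (only the trivial first step; the talk's actual method certifies unsatisfiability via spectral norms of Kikuchi matrices, not shown here), here is how satisfiability of such a system is decided exactly by Gaussian elimination over GF(2), using bitmask rows:

```python
def xor_satisfiable(equations):
    """Decide satisfiability of XOR constraints over GF(2).

    Each equation is (indices, rhs): sum of x_i over i in indices = rhs (mod 2).
    Rows are kept as integer bitmasks; the system is unsatisfiable iff
    elimination derives the contradictory row 0 = 1.
    """
    basis = {}  # pivot bit position -> (row mask, rhs bit)
    for idx, rhs in equations:
        mask = 0
        for i in idx:
            mask ^= 1 << i
        # Reduce the new row against the current basis.
        while mask:
            p = mask.bit_length() - 1  # highest set bit = candidate pivot
            if p not in basis:
                basis[p] = (mask, rhs)
                break
            bmask, brhs = basis[p]
            mask ^= bmask
            rhs ^= brhs
        else:
            if rhs == 1:
                return False  # row reduced to 0 = 1: contradiction
    return True

# x1+x2+x3 = 0 together with x1+x2+x3 = 1 is contradictory.
assert not xor_satisfiable([((0, 1, 2), 0), ((0, 1, 2), 1)])
assert xor_satisfiable([((0, 1), 0), ((1, 2), 1)])
```

The interesting regime in the talk is different: certifying unsatisfiability of *random* sparse systems well before elimination finds a contradiction, which is where the spectral Kikuchi-matrix machinery comes in.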
Chong Wang Perimeter Institute for Theoretical Physics
Numerical methods are powerful tools for advancing our understanding of quantum gravity. In this talk, I will introduce two complementary numerical approaches. The first focuses on solving nonlinear partial differential equations that arise in Loop Quantum Gravity (LQG)-inspired effective models. This framework enables us to investigate the formation and evolution of shock waves in spherically symmetric gravitational collapse. The second approach involves the use of complex critical points, Lefschetz thimble techniques, and the Metropolis Monte Carlo algorithm to study the Lorentzian path integral in Spinfoam models and Quantum Regge Calculus. These methods offer new insights into quantum cosmology and black-to-white hole transitions.
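For readers unfamiliar with the Metropolis Monte Carlo ingredient, the sketch below samples the discretized *Euclidean* path integral of a 1D harmonic oscillator, a standard textbook toy, not the Lorentzian spinfoam or Regge integrals of the talk (whose oscillatory integrands are precisely why complex critical points and Lefschetz thimbles are needed). All parameter choices are illustrative:

```python
import math
import random

def metropolis_oscillator(n_sites, a, n_sweeps, step, rng):
    """Metropolis sampling of the discretized Euclidean path integral
    for a 1D harmonic oscillator (mass = omega = 1): n_sites time
    slices, lattice spacing a, periodic boundary conditions."""
    def local_action(x, i):
        # Terms of S_E = sum_j [(x_{j+1}-x_j)^2/(2a) + a*x_j^2/2] involving x_i.
        xp = x[(i + 1) % n_sites]
        xm = x[(i - 1) % n_sites]
        return ((xp - x[i]) ** 2 + (x[i] - xm) ** 2) / (2 * a) + a * x[i] ** 2 / 2

    x = [0.0] * n_sites
    accepted = 0
    for _ in range(n_sweeps):
        for i in range(n_sites):
            old, s_old = x[i], local_action(x, i)
            x[i] = old + rng.uniform(-step, step)  # propose a local update
            ds = local_action(x, i) - s_old
            if ds <= 0 or rng.random() < math.exp(-ds):
                accepted += 1      # accept with probability min(1, e^{-dS})
            else:
                x[i] = old         # reject and restore
    return x, accepted / (n_sweeps * n_sites)

path, acc_rate = metropolis_oscillator(n_sites=16, a=0.5, n_sweeps=50,
                                       step=0.5, rng=random.Random(3))
assert len(path) == 16 and 0.0 < acc_rate <= 1.0
```

The acceptance rule uses the positive Euclidean weight e^{-S_E}; for Lorentzian amplitudes e^{iS} this fails directly, which motivates deforming the integration contour onto Lefschetz thimbles before applying Monte Carlo, as in the talk.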