Combining SAT and Computer Algebra for Circuit Verification
Daniela Kaufmann (Johannes Kepler University Linz)
Constraining Early Dark Energy with Large Scale Structure
Evan McDonough University of Winnipeg
Floquet spin chains and the stability of their edge modes
Aditi Mitra New York University (NYU)
Supersymmetry and RCHO revisited
Paul Townsend University of Cambridge
Searching for Dark Matter with Superconducting Qubits
Akash Dixit University of Chicago
Melonic field theories
Dario Benedetti Ecole Polytechnique - CPHT
Spontaneous black hole scalarization
Hector Okada da Silva Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
Hypotheses about Satisfiability and their Consequences
Russell Impagliazzo (UC San Diego)
The theory of quantum information: channels, capacities, and all that
Graeme Smith University of Colorado Boulder
Prospects in Celestial Holography
Sabrina Pasterski Perimeter Institute for Theoretical Physics
Theoretical Foundation of Solvers: Context, Directions and Open Problems
Vijay Ganesh (University of Waterloo), Laurent Simon (Bordeaux INP), and David Mitchell (Simon Fraser University)
Combining SAT and Computer Algebra for Circuit Verification
Daniela Kaufmann (Johannes Kepler University Linz)
Even more than 25 years after the Pentium FDIV bug, automated verification of arithmetic circuits, most prominently gate-level integer multipliers, still poses a challenge. Approaches that rely purely on SAT solving or on decision diagrams seem unable to solve this problem in a reasonable amount of time. In this talk, we will demonstrate a verification technique that is based on algebraic reasoning and is currently considered one of the most successful techniques for circuit verification. In this approach the circuit is modelled as a set of polynomial equations, and for a correct circuit we need to show that the specification is implied by the polynomial representation of the given circuit. However, parts of the multiplier, in particular the final-stage adders, are hard to verify using computer algebra alone. We will present a hybrid approach that combines SAT and computer algebra to tackle this issue.
Constraining Early Dark Energy with Large Scale Structure
Evan McDonough University of Winnipeg
The Hubble tension is conventionally viewed as a discrepancy between the value of H_0 inferred from the cosmic microwave background (CMB) and the SH0ES measurement. A prominent proposal for resolving this discrepancy is to introduce a new component in the early universe that initially acts as "early dark energy" (EDE), thus decreasing the physical size of the sound horizon imprinted in the CMB and increasing the inferred H_0, bringing it into near agreement with SH0ES. However, this impacts cosmological observables beyond the CMB -- in particular, the large scale structure (LSS) of the universe across a range of redshifts. EDE cosmologies that resolve the H_0 tension produce scale-dependent changes to the matter power spectrum, including 10% more power at k = 1 h/Mpc. Motivated by this, I will present the results of two analyses of LSS constraints on the EDE scenario. Weak lensing and galaxy clustering data (from, e.g., the Dark Energy Survey) significantly constrain the EDE model, and the resulting H_0 is in significant tension with SH0ES. Complementary to this, including data from the Baryon Oscillation Spectroscopic Survey (BOSS), analyzed using the effective field theory (EFT) of LSS, yields an EDE H_0 value that is in significant (3.6σ) tension with SH0ES. These results indicate that current LSS data disfavour the EDE model as a resolution of the Hubble tension and, more generally, that the EDE model fails to restore cosmological concordance. A sensitivity forecast for EUCLID suggests that future LSS surveys can close the remaining parameter space of the model.
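The sound-horizon mechanism described above can be sketched with a deliberately crude scaling argument: the CMB pins down the angular scale θ* = r_s/D_A very precisely, so shrinking r_s at fixed θ* pushes the inferred H_0 up. The fiducial numbers below are illustrative assumptions, not values from the talk:

```python
# Deliberately crude scaling sketch (not a real CMB analysis): the CMB
# fixes the angular sound horizon theta* = r_s / D_A to high precision.
# If, in this toy, D_A scales as 1/H0, then holding theta* fixed while
# shrinking r_s forces the inferred H0 upward.

# Illustrative fiducial numbers (assumptions, not values from the talk):
R_S_LCDM = 147.0   # Mpc, comoving sound horizon in LCDM
H0_LCDM = 67.4     # km/s/Mpc, CMB-inferred value in LCDM

def inferred_H0(r_s_new, r_s_old=R_S_LCDM, H0_old=H0_LCDM):
    """H0 inferred at fixed theta* after the sound horizon shrinks."""
    return H0_old * r_s_old / r_s_new

# A ~7% EDE-style reduction of r_s raises the inferred H0 toward SH0ES:
print(round(inferred_H0(137.0), 1))   # 72.3
```

The real inference involves the full CMB likelihood and the EDE perturbations; this toy only captures the direction and rough size of the shift.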
Floquet spin chains and the stability of their edge modes
Aditi Mitra New York University (NYU)
In this talk I will begin by introducing symmetry protected topological (SPT) Floquet systems in 1D. I will describe the topological invariants that characterize these systems, and highlight their differences from SPT phases arising in static systems. I will also discuss how the entanglement properties of a many-particle wavefunction depend on these topological invariants. I will then show that the edge modes encountered in free fermion SPTs are remarkably robust to adding interactions, even in disorder-free systems where generic bulk quantities can heat to infinite temperatures due to the periodic driving. This robustness of the edge modes to heating can be understood in the language of strong modes for free fermion SPTs, and almost strong modes for interacting SPTs.
I will then outline a tunneling calculation for extracting the long lifetimes of these edge modes by mapping the Heisenberg time-evolution of the edge operator to dynamics of a single particle in Krylov space.
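The Krylov-space mapping mentioned above can be illustrated with a generic Lanczos recursion, which recasts dynamics generated by any Hermitian operator as a single particle hopping on a one-dimensional chain. The random matrix below is only a stand-in for the talk's Liouvillian and edge operator:

```python
import numpy as np

# Minimal Lanczos sketch of the Krylov-space mapping: dynamics generated
# by a Hermitian operator L, seen from a seed vector v0, is recast as a
# single particle hopping on a chain with hoppings b_n and on-site
# terms a_n. Here L is a random Hermitian matrix standing in for the
# Liouvillian and v0 for the (vectorized) edge operator; this is an
# illustration, not the talk's actual construction.

def lanczos(L, v0, m):
    """Return Krylov-chain coefficients (a_n, b_n) for operator L, seed v0."""
    V = [v0 / np.linalg.norm(v0)]
    a, b = [], []
    for _ in range(m):
        w = L @ V[-1]
        a.append(np.real(np.vdot(V[-1], w)))
        for u in V:                      # full reorthogonalization
            w = w - np.vdot(u, w) * u
        bn = np.linalg.norm(w)
        if bn < 1e-10:                   # Krylov space exhausted
            break
        b.append(bn)
        V.append(w / bn)
    return np.array(a), np.array(b)

rng = np.random.default_rng(0)
M = rng.normal(size=(40, 40))
L = (M + M.T) / 2                        # Hermitian stand-in
a, b = lanczos(L, rng.normal(size=40), 40)

# The tridiagonal chain Hamiltonian is unitarily equivalent to L when
# the seed explores the full space, so the spectra must agree.
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
match = np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(L), atol=1e-6)
print(match)
```

In the talk's setting the chain coefficients encode how the edge operator leaks into the bulk, and the edge-mode lifetime follows from a tunneling estimate in this effective single-particle problem.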
Supersymmetry and RCHO revisited
Paul Townsend University of Cambridge
Various links between supersymmetry and the normed division algebras R, C, H, O were found in the 1980s. This talk will focus on the link between K = R, C, H, O and supersymmetric field theories in a Minkowski spacetime of dimension D = 3, 4, 6, 10. The first half will survey the history, starting with a 1944/5 paper of Dirac and heading towards the links found in 1986/7 between R, C, H, O and super-Yang-Mills theories. The second half will review a result from 1993 that connects, via a twistor-type transform, the superfield equations of super-Maxwell theory in D = 3, 4, 6, 10 to a K-chirality constraint on a K-valued worldline superfield of N = 1, 2, 4, 8 worldline supersymmetry. This provides an explicit connection of the octonions to the free-field D = 10 super-Maxwell theory.
Searching for Dark Matter with Superconducting Qubits
Akash Dixit University of Chicago
Detection mechanisms for low-mass bosonic dark matter candidates, such as the axion or hidden photon, leverage potential interactions with electromagnetic fields, whereby the dark matter (of unknown mass) on rare occasion converts into a single photon. Current dark matter searches operating at microwave frequencies use a resonant cavity to coherently accumulate the field sourced by the dark matter and a near standard quantum limited (SQL) linear amplifier to read out the cavity signal. To further increase sensitivity to the dark matter signal, sub-SQL detection techniques are required. Here we report the development of a novel microwave photon counting technique and a new exclusion limit on hidden photon dark matter. We operate a superconducting qubit to make repeated quantum non-demolition measurements of cavity photons and apply a hidden Markov model analysis to reduce the noise to 15.7 dB below the quantum limit, with overall detector performance limited by a residual background of real photons. With the present device, we perform a hidden photon search and constrain the kinetic mixing angle to ≤ 1.68×10⁻¹⁵ in a band around 6.011 GHz (24.86 μeV) with an integration time of 8.33 s. This demonstrated noise reduction technique enables future dark matter searches to be sped up by a factor of 1300. By coupling a qubit to an arbitrary quantum sensor, more general sub-SQL metrology is possible with the techniques presented in this work.
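The hidden Markov model analysis of repeated QND measurements can be sketched with a two-state toy model (photon present or absent, imperfect qubit readout). All probabilities below are illustrative assumptions, not the experiment's parameters:

```python
import numpy as np

# Toy hidden Markov model for repeated QND photon detection, in the
# spirit of the analysis described above. All probabilities below are
# illustrative assumptions, not the experiment's parameters.

p_decay = 0.01                 # chance the photon is lost between readouts
p_err = 0.05                   # qubit readout error probability

# Hidden states: 0 = cavity empty, 1 = one photon in the cavity.
T = np.array([[1.0, 0.0],                 # an empty cavity stays empty
              [p_decay, 1.0 - p_decay]])  # a photon may decay
E = np.array([[1.0 - p_err, p_err],       # P(readout | state)
              [p_err, 1.0 - p_err]])

def likelihood(readouts, prior):
    """Forward algorithm: P(readout sequence) given an initial state prior."""
    alpha = prior * E[:, readouts[0]]
    for r in readouts[1:]:
        alpha = (alpha @ T) * E[:, r]
    return float(alpha.sum())

# Ten consecutive "photon" readouts: compare a real initial photon
# against an initially empty cavity (readout errors only).
clicks = [1] * 10
ratio = (likelihood(clicks, np.array([0.0, 1.0]))
         / likelihood(clicks, np.array([1.0, 0.0])))
print(ratio > 1e9)   # repeated QND readouts crush the false-positive rate
```

The point of the toy: because a QND measurement does not destroy the photon, many readouts of the same photon can be combined, so readout errors are exponentially suppressed relative to a single measurement.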
Melonic field theories
Dario Benedetti Ecole Polytechnique - CPHT
The melonic limit of a field theory is a large-N limit in which melonic diagrams dominate, thus differing significantly from the cactus and planar limits of vector and matrix models. It was first discovered in tensor models in zero dimensions, viewed as an approach to quantum gravity, and later in the SYK model. More recently, it has found applications in quantum field theory on a fixed (flat) background as an analytic tool for the study of new fixed points of the renormalization group, i.e., new conformal field theories. In this talk, I will review the main features of the melonic limit, and in view of the recent developments I will revisit an old model by Amit and Roginsky with SO(3) internal symmetry, which is neither a tensor model nor a disordered model like SYK, and yet has a similar melonic limit. Time permitting, I will also comment on similarities with the fishnet model of Kazakov et al., and on the (in)stability of all such models when complex scaling dimensions appear.
Spontaneous black hole scalarization
Hector Okada da Silva Max Planck Institute for Gravitational Physics (Albert Einstein Institute)
General Relativity remains to this day our best description of gravitational phenomena. Nonetheless, issues such as its quantization and the cosmological constant problem suggest that Einstein's theory might not be the final theory of the gravitational interaction. Motivated by these questions, theorists have proposed a myriad of extensions to General Relativity over the decades. In this seminar, I will focus on theories with extra scalar fields. In particular, I will describe how some of these theories can evade Solar System constraints and yet yield new effects in the strong-gravity regime of compact objects, i.e., neutron stars and black holes. This is achieved through a process known as spontaneous scalarization, in which a compact object grows 'scalar hair' once certain conditions are met and remains 'bald' otherwise. I will review the basics of this effect and then focus on recent efforts to understand it for black holes both in isolation and in binaries.
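The scalarization condition described above can be stated schematically at the linearized level. The scalar-Gauss-Bonnet coupling used here is one common example from the literature, not necessarily the specific model of the talk:

```latex
% Linearizing a coupling f(\varphi)\,\mathcal{G} (with \mathcal{G} the
% Gauss-Bonnet invariant) around a GR background \varphi = \varphi_0
% yields a wave equation with a curvature-induced effective mass:
\Box\,\delta\varphi = \mu_{\rm eff}^2\,\delta\varphi,
\qquad
\mu_{\rm eff}^2 = -f''(\varphi_0)\,\mathcal{G}
```

When μ_eff² becomes sufficiently negative in the large-curvature region near a compact object, the scalar-free GR solution is unstable and 'scalar hair' grows; otherwise the object remains 'bald'.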
Hypotheses about Satisfiability and their Consequences
Russell Impagliazzo (UC San Diego)
No abstract available.
Existentially Polytime Theorems
Jack Edmonds (University of Waterloo)
An EP theorem is an NP predicate that is always true. Many of the most loved discrete theorems are EP. Usually, but not always, an EP theorem can be proved by a polynomial-time algorithm that finds an instance of what the theorem says exists. A few examples are described.
The theory of quantum information: channels, capacities, and all that
Graeme Smith University of Colorado Boulder
Information theory offers a mathematically precise theory of communication and data storage that guided and fueled the information age. Initially, quantum effects were thought to be an annoying source of noise, but we have since learned that they offer new capabilities and vast opportunities. Quantum information theory seeks to identify, quantify, and ultimately harness these capabilities. A basic resource in this context is a noisy quantum communication channel, and a central goal is to figure out its capacities---what can you do with it? I'll highlight the new and fundamentally quantum aspects that arise here, such as the role of entanglement, ways to quantify it, and bizarre new kinds of synergies between resources. These ideas elucidate the nature of communication in a quantum context, as well as revealing new facets of quantum theory itself.
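As a concrete illustration of quantifying what a noisy channel can do, here is a textbook-style sketch (not taken from the talk): the coherent information of the qubit depolarizing channel with half of a Bell pair as input, which lower-bounds the quantum capacity (the "hashing bound"):

```python
import numpy as np

# Sketch: coherent information of the qubit depolarizing channel with a
# maximally entangled input. This is a standard textbook computation,
# given here only to illustrate "what can you do with a channel".

def entropy(eigs):
    """Von Neumann entropy in bits from a list of eigenvalues."""
    eigs = np.array([e for e in eigs if e > 0])
    return float(-(eigs * np.log2(eigs)).sum())

def coherent_information(p):
    """I_c = S(B) - S(AB) for depolarizing probability p, Bell input."""
    # Sending half of a Bell pair through the channel yields a Werner
    # state with eigenvalues (1 - 3p/4, p/4, p/4, p/4).
    s_ab = entropy([1 - 3 * p / 4, p / 4, p / 4, p / 4])
    s_b = 1.0   # the output marginal is maximally mixed
    return s_b - s_ab

print(round(coherent_information(0.0), 3))   # 1.0: noiseless, one qubit
print(coherent_information(0.3) < 0)         # True: too noisy, I_c < 0
```

Positive coherent information guarantees a nonzero quantum capacity; the regime where it goes negative is exactly where the more exotic phenomena the talk alludes to (e.g. superactivation-style synergies) become relevant.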
Prospects in Celestial Holography
Sabrina Pasterski Perimeter Institute for Theoretical Physics
Have you always wanted to know: What the symmetries of nature are? How black holes process quantum information? What the ultimate UV description of our universe is? Then join me as I continue to develop a new framework to describe scattering: Celestial Holography.
The Celestial Holography framework applies the holographic principle to spacetimes with vanishing cosmological constant by mapping 4D S-matrix elements to correlators in a 2D conformal field theory. This map possesses a number of surprising features. For example, it emphasizes infinite dimensional symmetry enhancements, which are typically hidden in IR factorization theorems for amplitudes; reorganizes collinear limits as CFT operator product expansions; and mixes UV and IR behavior in a manner that may allow us to make general claims about scattering not obvious from perturbation theory.
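For massless external states, the 4D-to-2D map described above is commonly implemented by a Mellin transform in the energies. Schematically (this form is standard in the celestial holography literature, not specific to this talk):

```latex
% A 4D massless amplitude in energies \omega_i maps to a 2D celestial
% correlator of boost eigenstates with conformal weights \Delta_i at
% points (z_i, \bar z_i) on the celestial sphere:
\widetilde{\mathcal{A}}(\Delta_i, z_i, \bar z_i)
  = \prod_i \int_0^\infty d\omega_i \, \omega_i^{\Delta_i - 1} \,
    \mathcal{A}(\omega_i, z_i, \bar z_i)
```

Because the Mellin transform integrates over all energies, UV and IR data of the amplitude mix into a single conformal correlator, which is the sense in which the map ties together behavior at both ends of the energy range.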
Can we show that the UV behavior of amplitudes must be stringy? Can we bootstrap celestial CFTs? Can we unify tools from Loop Quantum Gravity and String Theory?
Maybe, with your help!
Theoretical Foundation of Solvers: Context, Directions and Open Problems
Vijay Ganesh (University of Waterloo), Laurent Simon (Bordeaux INP), and David Mitchell (Simon Fraser University)
Topics:
Vijay Ganesh: Perspectives on Practice and Theory of SAT Solving
Abstract: Over the last two decades, SAT solvers have revolutionized many sub-fields of software engineering, security, and AI, largely due to a dramatic improvement in their scalability on large real-world formulas. What is surprising is that the Boolean satisfiability problem is NP-complete and believed to be intractable, and yet these solvers easily solve industrial instances containing millions of variables and clauses. How can that be? In my talk, I will briefly survey what we know about the power of SAT solvers through the lens of parameterized and proof complexity, as well as how we can build better SAT solvers by combining machine learning and proof systems.
Laurent Simon: Towards an (Experimental) Understanding of SAT Solvers
Abstract: Despite important progress in the practical solving of SAT problems, the behavior of CDCL solvers is only partially understood. In this talk, we will review some of the experimental studies that have been conducted so far, uncovering some surprising structures that CDCL solvers work on.
David Mitchell: On Using Structural Properties to Improve CDCL Solver Performance
Abstract: Instance structure is widely discussed and studied in the SAT solving community, but has never been explicitly made use of in dominant "industrial" SAT solvers. We briefly review some structural properties of CNF formulas that have received attention, and some recent efforts to improve CDCL SAT solver performance using these properties.
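The workhorse inside the CDCL solvers discussed in this session is unit propagation. A minimal didactic sketch (no watched literals, conflict analysis, or clause learning, so not a real solver) shows the core loop:

```python
# Minimal unit-propagation sketch -- the core simplification loop that
# CDCL solvers perform millions of times per second. Didactic fragment
# only: no watched literals, no conflict analysis, no clause learning.

def unit_propagate(clauses, assignment):
    """Extend a partial assignment {var: bool} by unit propagation.

    Literals are nonzero ints, negative meaning negated (DIMACS style).
    Returns (assignment, conflict_free).
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    satisfied |= assignment[var] == want
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:          # every literal false: conflict
                return assignment, False
            if len(unassigned) == 1:    # unit clause: forced assignment
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment, True

# (x1 or x2) and (not x1 or x3) and (not x3 or not x2 or x4),
# after the decision x1 = True:
result, ok = unit_propagate([[1, 2], [-1, 3], [-3, -2, 4]], {1: True})
print(ok, result)   # x3 is forced to True; no conflict
```

In a real CDCL solver this propagation alternates with decisions, and any conflict it finds triggers clause learning and backjumping; the structural properties mentioned in the talks above are about steering exactly those steps.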