Reactive Synthesis
Bernd Finkbeiner (CISPA Helmholtz Center for Information Security)
Reactive synthesis is the problem of translating a logical specification into a reactive system that is guaranteed to satisfy the specification for all possible behaviors of its environment. More than 60 years after its introduction by Alonzo Church, the synthesis problem is still one of the most intriguing challenges in the theory of reactive systems. Recent years have brought advances both in reasoning methods that can be used as tools in the synthesis process and in the synthesis approaches themselves. As a result, the complexity of the synthesized systems has risen steadily. However, the logical and algorithmic foundations of the synthesis problem are still far from complete. In this tutorial, I will give an overview of both the classic constructions and more recent ideas such as bounded synthesis and the synthesis of distributed systems. In the latter, we try to find a combination of process implementations that jointly guarantee that a given specification is satisfied. A reduction from multi-player games shows that the problem is in general undecidable. Despite this negative result, there is a long history of discoveries where the decidability of the synthesis problem was established for distributed systems with specific architectures, although usually with high computational cost. Compositional approaches and new specification languages like HyperLTL, which offer a more comprehensive logical representation of the distributed synthesis problem, are potential ways forward.
Using SAT Solvers to Prevent Causal Failures in the Cloud
Ruzica Piskac (Yale University)
Today's cloud systems rely heavily on redundancy for reliability. Nevertheless, as cloud systems become ever more structurally complex, independent infrastructure components may unwittingly share deep dependencies. These unexpected common dependencies may result in causal failures that undermine redundancy efforts. State-of-the-art efforts, e.g., post-failure forensics, not only lead to prolonged failure recovery time for structurally complex systems, but also fail to avoid expensive service downtime. We describe RepAudit, an auditing system that builds on so-called fault graphs to identify causal failures in the cloud. To ensure the practicality, efficiency, and accuracy of our approach, we further equip RepAudit with a domain-specific auditing language framework and a set of high-performance auditing primitives, and we leverage SAT solvers to construct efficient fault graph analysis algorithms. In an empirical evaluation on a real-world cloud storage dataset, we observed that RepAudit is 300 times more efficient than other state-of-the-art auditing tools. The RepAudit technique has been used at Alibaba Group---in a system named CloudCanary---to perform real-time audits of service update scenarios, identify the root causes of correlated failure risks, and generate improvement plans with increased reliability. CloudCanary has successfully detected various correlated failure risks, such as single-point microservices, common power sources, and shared network components, preventing service outages ahead of time. The talk will conclude with an overview of the state of the art in the network verification community, which widely relies on SAT/SMT solvers to check the correctness of network behaviors. This talk is given in collaboration with Ennan Zhai, a researcher at Alibaba Group.
References:
[1] Ennan Zhai, Ruichuan Chen, David Isaac Wolinsky, Bryan Ford: Heading Off Correlated Failures through Independence-as-a-Service. OSDI 2014: 317-334.
[2] Ennan Zhai, Ruzica Piskac, Ronghui Gu, Xun Lao, Xi Wang: An auditing language for preventing correlated failures in the cloud. PACMPL 1(OOPSLA): 97:1-97:28 (2017).
Gravity Gradient Noise from Asteroids
Michael Fedderke Perimeter Institute for Theoretical Physics
The gravitational coupling of nearby massive bodies to test masses in a gravitational wave (GW) detector cannot be shielded, and gives rise to "gravity gradient noise" (GGN) in the detector. In this talk, I will discuss how any GW detector using local test masses in the Inner Solar System is subject to GGN from the motion of the field of ~10^5 Inner Solar System asteroids, which presents an irreducible noise floor for the detection of GWs that rises exponentially at low frequencies. This severely limits prospects for GW detection using local test masses at frequencies below (few) x 10^{-7} Hz. At higher frequencies, I'll show that the asteroid GGN falls rapidly enough that detection may be possible; however, the incompleteness of existing asteroid catalogs with regard to small bodies makes this an open question around microHz frequencies, and I'll outline why further study is warranted here. Additionally, I'll mention some prospects for GW detection in the ~10 nHz-microHz band that could evade this noise source.
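For a feel of the scales involved, here is a back-of-the-envelope sketch (illustrative numbers of my own, not figures from the talk) of the Newtonian acceleration a single asteroid imparts on a free test mass:

```python
# Back-of-the-envelope: Newtonian acceleration G*M/r^2 that a single
# asteroid imparts on a free-falling test mass in the Inner Solar System.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
AU = 1.495978707e11  # astronomical unit, m

def asteroid_acceleration(mass_kg, distance_m):
    """Newtonian acceleration imparted on a test mass, in m/s^2."""
    return G * mass_kg / distance_m**2

# A hypothetical ~10 km asteroid (M ~ 1e15 kg) passing at 0.1 AU.
a = asteroid_acceleration(1e15, 0.1 * AU)
print(f"acceleration ~ {a:.1e} m/s^2")
```

As the asteroid moves on its orbit, this acceleration is modulated at orbital timescales, which is what sources the low-frequency noise floor discussed in the talk.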
Programming Z3
Nikolaj Björner (Microsoft Research)
This tutorial provides a programmer's introduction to Satisfiability Modulo Theories based on Z3. It illustrates how to define and solve SMT problems using scripts and walks through algorithms that use the exposed functionality. Pre-recorded videos are not available for the Friday session.
Approximate Counting and Sampling
Kuldeep Meel (National University of Singapore)
Pre-recorded videos are not available for the Friday session. The availability of efficient SAT solvers presents an opportunity to design techniques for problems "beyond SAT". We will discuss two such problems that have witnessed increasing interest over the past decade: counting and sampling. Given a formula, the counting problem is to compute the total number of solutions, while sampling seeks to draw solutions uniformly at random. Counting and sampling are fundamental problems with applications in probabilistic reasoning, information leakage, verification, and the like. In this talk, we will discuss an approach that combines the classical ideas of universal hashing with CDCL solvers to design scalable approximate counters and samplers.
Einstein's Equivalence principle for superpositions of gravitational fields
Flaminia Giacomini ETH Zurich
The Principle of Equivalence, stating that all laws of physics take their special-relativistic form in any local inertial frame, lies at the core of General Relativity. Because of its fundamental status, this principle could be a very powerful guide in formulating physical laws at regimes where both gravitational and quantum effects are relevant. However, its formulation implicitly presupposes that reference frames are abstracted from classical systems (rods and clocks) and that the spacetime background is well defined. Here, we generalise the Einstein Equivalence Principle to quantum reference frames (QRFs) and to superpositions of spacetimes. We build a unitary transformation to the QRF of a quantum system in curved spacetime, and in a superposition thereof. In both cases, a QRF can be found such that the metric looks locally flat. Hence, one cannot distinguish, with a local measurement, if the spacetime is flat or curved, or in a superposition of such spacetimes. This transformation identifies a Quantum Local Inertial Frame. These results extend the Principle of Equivalence to QRFs in a superposition of gravitational fields. Verifying this principle may pave a fruitful path to establishing solid conceptual grounds for a future theory of quantum gravity.
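For background, the flat-spacetime QRF transformation from the earlier QRF literature (Giacomini, Castro-Ruiz, and Brukner, 2019) can be written schematically as a translation controlled by the new frame's position (a sketch; signs and ordering conventions vary):

```latex
\hat{S}_{C \to A} \;=\; \hat{\mathcal{P}}_{AC}\,
  \exp\!\left( \frac{i}{\hbar}\, \hat{x}_A \otimes \hat{p}_B \right),
```

which maps the description of systems A and B given relative to frame C into the frame of system A, with \(\hat{\mathcal{P}}_{AC}\) a parity-swap operator. The talk generalises such transformations to curved spacetimes and to superpositions thereof.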
Tidal heating: a hunt for the horizon
Sayak Datta IUCAA - The Inter-University Centre for Astronomy and Astrophysics
The defining feature of a classical black hole horizon is that it is a "perfect absorber". Any evidence showing otherwise would indicate a departure from the standard black-hole picture. Due to the presence of the horizon, black holes in binaries exchange energy with their orbit, which backreacts on the orbit. This is called tidal heating. Tidal heating can be used to test the presence of a horizon. I will discuss the prospect of tidal heating as a discriminator between black holes and horizonless compact objects, especially supermassive ones, in LISA. I will also discuss a similar prospect for distinguishing between neutron stars and black holes in the LIGO band.
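For orientation (standard post-Newtonian scalings, not results specific to this talk), the energy flux absorbed by the horizon is a velocity-suppressed correction to the flux radiated to infinity:

```latex
\frac{\dot{E}_{\mathrm{H}}}{\dot{E}_{\infty}} \;\sim\;
\begin{cases}
  \mathcal{O}(v^{5}) & \text{spinning black holes (2.5PN)},\\
  \mathcal{O}(v^{8}) & \text{non-spinning black holes (4PN)},
\end{cases}
\qquad v = (\pi M f_{\mathrm{GW}})^{1/3},
```

so the imprint on the waveform phase is small per orbit but accumulates over the many cycles of the long inspirals accessible to LISA and LIGO.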
Pseudo-Boolean Solving and Optimization
Jakob Nordström (University of Copenhagen & Lund University)
Pseudo-Boolean solving is the task of finding a solution to a collection of (linear) pseudo-Boolean constraints, also known as a 0-1 integer linear program, possibly optimizing some linear objective function. This problem can be approached using methods from conflict-driven clause learning (CDCL) SAT solving as in MaxSAT solvers, "native" pseudo-Boolean reasoning based on variants of the cutting planes proof system, or (mixed) integer linear programming (MIP). The purpose of this tutorial is to provide a brief survey of pseudo-Boolean reasoning, MaxSAT solving, and integer linear programming, focusing on the potential for synergy between these different approaches and highlighting many open problems on both the applied and theoretical sides.
Are we Living in the Matrix?
David Tong University of Cambridge
No. Obviously not. It's a daft question. But, buried underneath this daft question is an extremely interesting one: is it possible to simulate the known laws of physics on a computer? Remarkably, there is a mathematical theorem, due to Nielsen and Ninomiya, that says the answer is no. I'll explain this theorem, the underlying reasons for it, and some recent work attempting to circumvent it.
Proof Complexity
Sam Buss (UC San Diego)
These videos provide an introduction to proof complexity, especially from the point of view of satisfiability algorithms. There are four videos. Part A introduces proof complexity and discusses Frege proofs, abstract proof systems, resolution, and extended Frege proofs and extended resolution. Part B discusses the propositional pigeonhole principle, and upper and lower bounds on the complexity of proofs of the pigeonhole principle in the extended Frege proof system, the Frege proof systems, and resolution. Part C discusses CDCL satisfiability algorithms from the point of view of proof complexity, including discussion of clause learning, trivial resolution, unit propagation, restarts, and RUP and (D)RAT proof traces. Part D discusses cutting planes, the Nullstellensatz and Polynomial Calculus proof systems, and concludes with a short discussion of automatizability. Parts B and C are independent of each other. Part D has a modest dependency on Part B, but can also be watched independently.
Correlators in integrable models with Separation of Variables
Nikolay Gromov King's College London
I will review recent progress in the application of the separation of variables (SoV) method. In particular, I will review the construction for integrable spin chains with gl(N) symmetry. By finding, for the first time, the matrix elements of the SoV measure explicitly, I will show how to compute various correlation functions and wave-function overlaps in a simple determinant form. The general philosophy of applying these methods to problems related to AdS/CFT, N=4 SYM, etc., will also be discussed.
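Schematically, in the simplest rank-one SoV setting the wave function factorises into Baxter Q-functions, and scalar products collapse to determinants of one-dimensional integrals against the SoV measure \(\mu\) (a sketch of the general structure, not the gl(N) formulas from the talk):

```latex
\Psi(x_1,\dots,x_N) \;\propto\; \prod_{k=1}^{N} Q(x_k),
\qquad
\langle \Psi_A \,|\, \Psi_B \rangle \;\propto\;
\det_{1 \le j,k \le N}
  \int \! dx \; \mu(x)\, x^{\,j+k-2}\, Q_A(x)\, Q_B(x).
```

Knowing the matrix elements of the measure explicitly is what makes analogous determinant representations available beyond rank one.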
A 2020s Vision of CMB Lensing
Marius Millea University of California, Berkeley
With much of the cosmological information in the primary CMB having already been mined, the next decade of CMB observations will revolve around the secondary CMB lensing effect, which will touch nearly all aspects of observation in some way. At the same time, the increasingly low noise levels of these future observations will render existing "quadratic estimator" methods for analyzing CMB lensing obsolete. This leaves us in an exciting place where new methods need to be developed to fully take advantage of the upcoming generation of CMB data on our doorstep. I will describe my work developing such new lensing analysis tools, made possible by Bayesian methods, modern statistical techniques, and ideas borrowed from machine learning. I will present the recent first-ever application of such methods to data (from the South Pole Telescope; https://arxiv.org/abs/2012.01709) and discuss prospects for this analysis in the future, with regard not just to lensing but also to primordial B modes, reionization, and extragalactic foreground fields.
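As a toy illustration of the Bayesian flavor of such analyses (a 1D sketch of my own, not the actual SPT pipeline): infer a constant "lensing" displacement of a known field from noisy observations by evaluating the posterior on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "lensing": the data are a known smooth field, remapped by an
# unknown constant displacement phi, plus white noise.
def field(x):
    return np.sin(x) + 0.5 * np.sin(2.3 * x)

x = np.linspace(0.0, 10.0, 200)
phi_true, sigma = 0.4, 0.1
data = field(x + phi_true) + sigma * rng.normal(size=x.size)

# Flat prior on phi; Gaussian noise gives log-likelihood -chi^2/2.
phis = np.linspace(-1.0, 1.0, 401)
loglike = np.array([-0.5 * np.sum((data - field(x + p)) ** 2) / sigma**2
                    for p in phis])
phi_map = phis[np.argmax(loglike)]
print(f"MAP displacement: {phi_map:.3f} (truth {phi_true})")
```

The real problem replaces this single number with an entire lensing potential map, sampled jointly with the unlensed CMB fields, which is what demands the modern statistical machinery described in the talk.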