A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms even in a basic scenario with finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms—V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria, and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that only scales with max_i A_i, where A_i is the number of actions for the i-th player. This is in sharp contrast to the size of the joint action space, which is ∏_i A_i. V-learning (in its basic form) is a new class of single-agent RL algorithms that converts any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Unlike Q-learning, it maintains only estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
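To make the update structure concrete, here is a minimal single-agent sketch of a V-learning-style loop on a toy episodic MDP. The toy environment, the EXP3-style weights used as the adversarial bandit, the constant bandit learning rate, and the omission of the paper's optimism bonuses and output-policy construction are all simplifications for illustration.

```python
# Simplified sketch of a V-learning-style loop on a toy episodic MDP.
# Only V-value estimates are maintained; action selection is delegated to a
# per-state adversarial bandit (here: EXP3-style multiplicative weights).
import numpy as np

rng = np.random.default_rng(0)

H, S, A = 3, 4, 2                              # horizon, states, actions (toy sizes)
P = rng.dirichlet(np.ones(S), size=(H, S, A))  # random transition kernel
R = rng.random((H, S, A))                      # random rewards in [0, 1]

V = np.zeros((H + 1, S))                       # V-value estimates (V[H] == 0)
W = np.ones((H, S, A))                         # per-state bandit weights
N = np.zeros((H, S), dtype=int)                # visit counts
eta = 0.1                                      # bandit learning rate (assumed constant)

for episode in range(5000):
    s = 0
    for h in range(H):
        probs = W[h, s] / W[h, s].sum()        # bandit's action distribution
        a = rng.choice(A, p=probs)
        r = R[h, s, a]
        s_next = rng.choice(S, p=P[h, s, a])

        # Incremental V-value update; no Q-table is ever maintained.
        N[h, s] += 1
        alpha = (H + 1) / (H + N[h, s])        # step size of the form used in such analyses
        V[h, s] = (1 - alpha) * V[h, s] + alpha * (r + V[h + 1, s_next])

        # Feed the observed loss back to the per-state adversarial bandit.
        loss = (H - r - V[h + 1, s_next]) / H  # lies in [0, 1] for this toy setup
        W[h, s, a] *= np.exp(-eta * loss / probs[a])
        W[h, s] /= W[h, s].sum()               # keep weights normalized
        s = s_next

print("estimated value of the initial state:", V[0, 0])
```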
In this talk, I will present a simple reduction that demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise. The proposed "hard" functions, which are well-approximated by one-layer neural networks, take the general form of a univariate periodic function applied to an affine projection of the data. These functions have appeared in previous seminal works demonstrating their hardness against gradient-based (GD) methods (Shamir'18) and Statistical Query (SQ) algorithms (Song et al.'17). Our result shows that if (polynomially) small noise is added to the labels, the intractability of learning these functions applies to all polynomial-time algorithms, beyond gradient-based and SQ algorithms, under the aforementioned cryptographic assumptions. Furthermore, we show that for exponentially small noise, a polynomial-time algorithm based on lattice basis reduction can bypass the SQ and GD hardness. This is based on joint work with Min Jae Song and Joan Bruna.
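As a purely illustrative picture of the function family, the sketch below generates noisy labels of a cosine applied to an affine projection of isotropic Gaussian data. The frequency gamma, the noise level sigma, and all sizes are assumptions for illustration, not the exact parameters of the reduction.

```python
# Illustrative data generator for the "hard" family: a univariate periodic
# function (here cos) applied to an affine projection of Gaussian inputs,
# with small label noise.  Parameter choices are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

d = 64                                  # input dimension
n = 10_000                              # number of samples
gamma = np.sqrt(d)                      # frequency of the periodic function (illustrative)
sigma = 0.01                            # label-noise level (illustrative)

w = rng.normal(size=d)
w /= np.linalg.norm(w)                  # hidden unit direction

X = rng.normal(size=(n, d))             # isotropic Gaussian inputs
y = np.cos(2 * np.pi * gamma * (X @ w)) + sigma * rng.normal(size=n)

# A learner only sees (X, y); the talk's result says that, in the right
# parameter regime, learning from such samples is as hard as the underlying
# lattice problem for all polynomial-time algorithms.
print(X.shape, y.shape)
```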
Current machine learning (ML) systems are remarkably brittle, raising serious concerns about their deployment in safety-critical applications like self-driving cars and predictive healthcare. In such applications, models could encounter test distributions that differ wildly from the training distributions. Trustworthy ML thus requires strong robustness guarantees from learning, including robustness to worst-case distribution shifts. Robustness to worst-case distribution shifts raises several computational and statistical challenges over ‘standard’ machine learning. In this talk, I will present two formal settings of worst-case distribution shifts, motivated by adversarial attacks on test inputs and the presence of spurious correlations like image backgrounds. Empirical observations demonstrate (i) an arms race between attacks and existing heuristic defenses, necessitating provable guarantees much like in cryptography; (ii) increased sample complexity of robust learning; and (iii) a resurgence of the need for regularization in robust learning. We capture each of these observations in simple theoretical models that nevertheless yield principled and scalable approaches to overcome the hurdles in robust learning, particularly via the use of unlabeled data.
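As a minimal illustration of a worst-case distribution shift, the sketch below perturbs test inputs of a linear classifier within an L-infinity ball, where the worst case has a closed form. The data, the model, and the budget eps are made up for illustration and are not from the talk.

```python
# Worst-case (adversarial) test-time shift for a linear classifier under an
# L-infinity budget eps: for a score w.x + b with label y in {-1, +1}, the
# margin-minimizing perturbation is -eps * y * sign(w).
import numpy as np

rng = np.random.default_rng(2)

d, n, eps = 20, 500, 0.1
w_true = rng.normal(size=d)

X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)                          # labels in {-1, +1}

w, b = w_true, 0.0                               # pretend this is the learned model

clean_margin = y * (X @ w + b)
X_adv = X - eps * y[:, None] * np.sign(w)        # worst case within the eps-ball
adv_margin = y * (X_adv @ w + b)

print("clean accuracy:", (clean_margin > 0).mean())
print("robust accuracy:", (adv_margin > 0).mean())
```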
The classical synthesis problem is to find a program that meets a correctness specification given as a logical formula. Recent work on synthesis and optimization illustrates many potential benefits of allowing the user to supplement the logical specification with a syntactic template that constrains the space of allowed implementations. The formulation of the syntax-guided synthesis (SyGuS) problem aims to standardize the core computational problem common to these proposals in a logical framework. In this talk, we first describe how a wide range of problems, such as automatic synthesis of loop invariants, program optimization, program repair to defend against timing-based attacks, and learning programs from examples, can be formalized as SyGuS instances. We then explain the counterexample-guided inductive synthesis (CEGIS) strategy for solving the SyGuS problem, its different instantiations, and experimental evaluation over benchmarks from a variety of applications.
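To illustrate the CEGIS loop in miniature, the toy sketch below synthesizes the constants of a fixed linear template against a doubling specification. The bounded-domain verifier and the brute-force inductive synthesizer are stand-ins for the SMT-based components of real SyGuS solvers.

```python
# Toy counterexample-guided inductive synthesis (CEGIS) loop.
# Template: f(x) = a*x + b with small integer constants.
# Specification: f must double its input.
from itertools import product

def spec(x, fx):
    return fx == 2 * x

DOMAIN = range(-50, 51)              # bounded domain for the toy verifier

def verify(a, b):
    """Return a counterexample input, or None if the candidate is correct."""
    for x in DOMAIN:
        if not spec(x, a * x + b):
            return x
    return None

def synthesize():
    examples = []                    # counterexamples gathered so far
    while True:
        # Inductive step: find template constants consistent with all examples.
        candidate = next(
            ((a, b) for a, b in product(range(-5, 6), repeat=2)
             if all(spec(x, a * x + b) for x in examples)),
            None)
        if candidate is None:
            return None              # the template cannot realize the spec
        cex = verify(*candidate)     # verification step
        if cex is None:
            return candidate
        examples.append(cex)         # refine with the new counterexample

print(synthesize())                  # expected output: (2, 0)
```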
As a concrete application of syntax-guided synthesis, we will describe the Sharingan system, which uses program synthesis to generate network classification programs at the session layer from positive and negative traffic examples. Sharingan accepts raw network traces as input and reports potential patterns of the target traffic in a domain-specific language designed for specifying session-layer quantitative properties. Our experiments show that Sharingan is able to correctly identify patterns from a diverse set of network traces and generates explainable outputs, while achieving accuracy comparable to state-of-the-art learning-based systems.
We conclude by discussing challenges and opportunities for future research in syntax-guided program synthesis.
In the traditional approach to synthesis, the system has to satisfy its specification in all environments. Thus, the components that compose the environment can be seen as if their only objective is to conspire to fail the system. In real life, the components that compose the environment are often entities that have objectives of their own. The approach taken in the field of game theory is to assume that interacting agents are rational and thus act to achieve their own objectives. Adding rationality to the synthesis setting softens the universal quantification over environments and motivates the study of rational synthesis. There, we seek a system that satisfies its specification in all rational environments. This can be formalized in two different ways. The first is cooperative rational synthesis, where the desired output is a stable profile of strategies for all components in which the objective of the system is satisfied. The second is non-cooperative rational synthesis, where the desired output is a strategy for the system such that its objective is satisfied in every stable profile in which the system follows this strategy.
The talk introduces the two types of rational synthesis and surveys their game-theoretic aspects. In particular, we discuss quantitative rational synthesis and relate cooperative and non-cooperative rational synthesis to the two notions of equilibrium inefficiency in game theory, namely the price of stability and the price of anarchy.
The talk is based on joint work with Shaull Almagor, Dana Fisman, Yoad Lustig, Giuseppe Perelli, and Moshe Y. Vardi.
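As a small worked example of the equilibrium-inefficiency notions mentioned above, the sketch below enumerates the pure Nash equilibria of a made-up two-player cost game and computes its price of anarchy and price of stability. The game itself is an illustrative toy, not one from the talk.

```python
# Toy two-player cost game: both (A, A) and (B, B) are pure Nash equilibria,
# the social optimum is (A, A), so the price of stability is 1 while the
# price of anarchy is 2.
from itertools import product

ACTIONS = ["A", "B"]
# cost[(a1, a2)] = (cost to player 1, cost to player 2)
cost = {("A", "A"): (1, 1), ("B", "B"): (2, 2),
        ("A", "B"): (5, 5), ("B", "A"): (5, 5)}

def is_nash(profile):
    for player in (0, 1):
        for dev in ACTIONS:
            alt = list(profile)
            alt[player] = dev
            if cost[tuple(alt)][player] < cost[profile][player]:
                return False            # a profitable deviation exists
    return True

social = {p: sum(cost[p]) for p in product(ACTIONS, repeat=2)}
equilibria = [p for p in social if is_nash(p)]
opt = min(social.values())

print("equilibria:", equilibria)
print("price of anarchy:", max(social[p] for p in equilibria) / opt)    # 2.0
print("price of stability:", min(social[p] for p in equilibria) / opt)  # 1.0
```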
Over the past 15 years or so, the Strong Exponential Time Hypothesis (SETH) has been very useful for proving conditional hardness for many problems: it is the hypothesis at the heart of fine-grained complexity. In this talk, I will discuss another way in which SETH has been useful.
The empty pigeonhole principle asserts that, if there are more pigeonholes than pigeons, one pigeonhole must be empty. The corresponding class of total function problems contains all of FNP, and its natural problems include applications of the union bound and several well-known explicit constructions. Higher up in the polynomial hierarchy, one finds total function problems related to tournaments and the Sauer-Shelah lemma.
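A toy, fully explicit version of the underlying search problem reads as follows; in the complexity-theoretic setting the pigeon-to-hole map is given succinctly (for example, by a circuit), which is what makes the problem interesting.

```python
# Toy instance of the total search problem behind the empty pigeonhole
# principle: n pigeons mapped into more than n holes, find an empty hole.
def empty_hole(assignment, num_holes):
    """assignment[i] is the hole of pigeon i; requires len(assignment) < num_holes."""
    used = set(assignment)
    return next(h for h in range(num_holes) if h not in used)

print(empty_hole([0, 2, 2, 4], 5))   # e.g. hole 1 is empty
```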
Jordi Levy (Artificial Intelligence Research Institute, Spanish National Research Council), Stefan Szeider (TU Wien), and Ralf Rothenberger (University of Potsdam)
Topics:
Jordi Levy: On the Formal Characterization of Industrial SAT Instances
Abstract: Over the last 20 years, SAT solvers have experienced a great improvement in their efficiency when solving practical or industrial SAT problems. This success is surprising, considering that SAT is an NP-complete problem. Over the years, the particular structure of industrial instances has been proposed, informally, as a reason. This has led, for instance, the organizers of the SAT Competition to distinguish two tracks, random and industrial; solvers winning in one track perform poorly on the other, and vice versa. We think that a better characterization of this structure may be decisive in future improvements of practical SAT solving.
During my talk, I'll start by revisiting the notions that try to characterize how difficult a SAT instance is to solve in practice, especially the notion of "hardness", or tree-like resolution space. I'll discuss some properties that are shared by most industrial SAT instances used in the SAT Competition, such as scale-free structure, high modularity, and low fractal dimension, and how these features may influence the performance of practical SAT solvers. I'll finish by introducing some models of random SAT instances that try to reproduce these characteristics.
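For illustration, here is a minimal sketch of a scale-free random k-SAT generator in the spirit of such models: variables are drawn with power-law probabilities instead of uniformly, so a few variables occur in many clauses. The exponent and the sizes are illustrative choices, not the exact model from the talk.

```python
# Sketch of a scale-free random k-SAT generator: variable i is sampled with
# probability proportional to i^(-1/(beta-1)), producing a heterogeneous
# (power-law-like) occurrence distribution.
import numpy as np

rng = np.random.default_rng(3)

def scale_free_ksat(n_vars, n_clauses, k=3, beta=2.5):
    weights = np.arange(1, n_vars + 1, dtype=float) ** (-1.0 / (beta - 1))
    probs = weights / weights.sum()
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.choice(n_vars, size=k, replace=False, p=probs) + 1
        signs = rng.choice([-1, 1], size=k)
        clauses.append(list(signs * vars_))          # DIMACS-style signed literals
    return clauses

cnf = scale_free_ksat(n_vars=100, n_clauses=400)
occurrences = np.bincount([abs(l) for c in cnf for l in c])
print("max / median variable occurrences:",
      occurrences.max(), np.median(occurrences[1:]))
```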
Stefan Szeider: Algorithmic utilization of structure in SAT instances
Abstract: In this talk, I will survey various concepts that have been proposed to capture the hidden structure in SAT instances. I will focus on concepts that provide worst-case runtime guarantees for SAT algorithms and discuss a suitable theoretical framework. I will also discuss how these concepts relate to each other and propose questions for further research.
Some of the talk's content is covered by the extended and updated Chapter 17, Fixed-Parameter Tractability, of the Handbook of Satisfiability, 2nd edition, 2021.
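As one concrete example of such a concept, the sketch below computes an upper bound on the treewidth of a toy formula's primal graph with networkx's min-degree heuristic; SAT is fixed-parameter tractable when parameterized by primal treewidth. The formula is made up for illustration.

```python
# Structural parameter with worst-case guarantees: (an upper bound on) the
# treewidth of the primal graph of a CNF formula, where each clause induces
# a clique on its variables.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree
from itertools import combinations

cnf = [[1, -2, 3], [-1, 4], [2, -4, 5], [-3, -5]]   # toy CNF, literals as ints

primal = nx.Graph()
for clause in cnf:
    variables = [abs(l) for l in clause]
    primal.add_nodes_from(variables)
    primal.add_edges_from(combinations(variables, 2))  # clique per clause

width, decomposition = treewidth_min_degree(primal)
print("primal treewidth upper bound:", width)
```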
Ralf Rothenberger: Structural Parameters - Heterogeneity and Geometry
Abstract: Two characteristic properties seem to be prevalent in the majority of real-world SAT instances: heterogeneous degree distributions and locality. In this talk I present our most recent results on understanding the impact of these two properties on SAT by studying the resolution proof size of random k-SAT models that allow us to control heterogeneity and locality.
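To give a flavour of a locality-controlled model, the sketch below places variables at random positions in the unit square and builds each clause from variables near a random point. The radius and sizes are illustrative assumptions, not the exact model studied in the talk.

```python
# Sketch of a "local" (geometric) random k-SAT model: each clause draws its
# k variables from the neighbourhood of a random point in the unit square.
import numpy as np

rng = np.random.default_rng(4)

def geometric_ksat(n_vars, n_clauses, k=3, radius=0.15):
    pos = rng.random((n_vars, 2))                     # variable positions
    clauses = []
    while len(clauses) < n_clauses:
        center = rng.random(2)
        near = np.flatnonzero(np.linalg.norm(pos - center, axis=1) <= radius)
        if len(near) < k:
            continue                                  # resample a denser spot
        vars_ = rng.choice(near, size=k, replace=False) + 1
        signs = rng.choice([-1, 1], size=k)
        clauses.append(list(signs * vars_))
    return clauses

cnf = geometric_ksat(n_vars=200, n_clauses=800)
print(len(cnf), "clauses, first one:", cnf[0])
```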
Even more than 25 years after the Pentium FDIV bug, automated verification of arithmetic circuits, and most prominently gate-level integer multipliers, still poses a challenge. Approaches that rely purely on SAT solving or on decision diagrams seem unable to solve this problem in a reasonable amount of time.
In this talk, we will demonstrate a verification technique that is based on algebraic reasoning and is currently considered one of the most successful techniques for circuit verification. In this approach, the circuit is modelled as a set of polynomial equations. For a correct circuit, we need to show that the specification is implied by the polynomial representation of the given circuit. However, parts of the multiplier, i.e., final-stage adders, are hard to verify using only computer algebra. We will present a hybrid approach that combines SAT and computer algebra to tackle this issue.
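As a miniature version of the algebraic approach, the sketch below verifies a half adder with sympy: each gate contributes a polynomial, Boolean inputs contribute x^2 - x, and correctness amounts to the specification polynomial reducing to zero modulo a Groebner basis of the ideal. Real multiplier verification follows the same pattern at a much larger scale and, as described above, hands the hard parts to a SAT solver.

```python
# Algebraic verification of a half adder: show that the specification
# polynomial lies in the ideal generated by the gate polynomials and the
# Boolean-value constraints (remainder 0 after Groebner reduction).
from sympy import symbols, groebner

a, b, s, c = symbols('a b s c')

gates = [
    s - (a + b - 2 * a * b),   # XOR gate:  s = a XOR b
    c - a * b,                 # AND gate:  c = a AND b
]
boolean = [a**2 - a, b**2 - b]  # inputs are Boolean

spec = 2 * c + s - (a + b)      # specification: the circuit adds a and b

G = groebner(gates + boolean, s, c, a, b, order='lex')
_, remainder = G.reduce(spec)
print("circuit correct:", remainder == 0)   # True
```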
An EP theorem is an NP predicate that is always true. Most of our most-loved discrete theorems are EP. Usually, but not always, an EP theorem can be proved by a polynomial-time algorithm that finds an instance of what the theorem says exists. A few examples are described.
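One classical example, sketched below: every graph with at least as many edges as vertices contains a cycle, and a simple polynomial-time procedure exhibits one. The example graph is arbitrary and only serves as an illustration.

```python
# Polynomial-time witness finder for an EP theorem: any graph on n vertices
# with at least n edges contains a cycle.  We add edges to a forest; the first
# edge whose endpoints are already connected closes a cycle, which we recover
# by a BFS through the forest.
def find_cycle(n, edges):
    parent = list(range(n))                 # union-find structure
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = {v: [] for v in range(n)}        # forest built from non-cycle edges
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree[u].append(v)
            tree[v].append(u)
            continue
        # u and v are already connected: the tree path u..v plus (u, v) is a cycle.
        pred = {u: None}
        queue = [u]
        while v not in pred:
            x = queue.pop(0)
            for y in tree[x]:
                if y not in pred:
                    pred[y] = x
                    queue.append(y)
        cycle = [v]
        while cycle[-1] != u:
            cycle.append(pred[cycle[-1]])
        return cycle
    return None                             # fewer edges than vertices: no guarantee

print(find_cycle(4, [(0, 1), (1, 2), (2, 3), (3, 1)]))   # e.g. [1, 2, 3]
```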