Stochastic Bin Packing with Time-Varying Item Sizes
Weina Wang (Carnegie Mellon University)

In today's computing systems, there is a strong tension between achieving high server utilization and accommodating the time-varying resource requirements of jobs. Motivated by this problem, we study a stochastic bin packing formulation with time-varying item sizes, where bins and items correspond to servers and jobs, respectively. Our goal is to answer the following fundamental question: how can we minimize the number of active servers (servers running at least one job) given a budget for the cost associated with resource overcommitment on servers? We propose a novel framework for designing job-dispatching policies, which reduces the problem to a policy design problem in a single-server system through policy conversions. Through this framework, we develop a policy that is asymptotically optimal as the job arrival rate increases. This is joint work with Yige Hong at Carnegie Mellon University and Qiaomin Xie at the University of Wisconsin–Madison.
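The conversion framework itself is left to the talk; purely for orientation, here is a minimal toy simulation of the problem setup, with a greedy first-fit dispatcher standing in for a real policy. Every rule and parameter here (capacity, job distribution, the first-fit rule) is an illustrative assumption, not the paper's method; the point is only to expose the two competing quantities, active servers versus overcommitment cost.

```python
import random

CAPACITY = 1.0   # normalized capacity of every server (assumed)
T = 5000         # simulation length in time steps

def new_job():
    """A job with a random mean demand and lifetime; its realized
    usage fluctuates around the mean at every time step."""
    return {"mean": random.uniform(0.1, 0.4), "life": random.randint(10, 100)}

def usage(job):
    # time-varying item size: mean demand plus bounded noise
    return max(0.0, job["mean"] + random.uniform(-0.05, 0.05))

servers = []            # each active server is a list of resident jobs
overcommit_cost = 0.0

for t in range(T):
    # dispatch one arriving job: first server whose mean load still
    # fits, else activate a new server (first-fit stand-in policy)
    job = new_job()
    for s in servers:
        if sum(x["mean"] for x in s) + job["mean"] <= CAPACITY:
            s.append(job)
            break
    else:
        servers.append([job])

    # charge overcommitment whenever realized usage exceeds capacity
    for s in servers:
        load = sum(usage(x) for x in s)
        overcommit_cost += max(0.0, load - CAPACITY)

    # jobs depart when their lifetime ends; empty servers deactivate
    for s in servers:
        for x in s:
            x["life"] -= 1
        s[:] = [x for x in s if x["life"] > 0]
    servers = [s for s in servers if s]

print("active servers:", len(servers), "| overcommit cost:", round(overcommit_cost, 1))
```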
Constant Regret in Exchangeable Action Models: Overbooking, Bin Packing, and Beyond
Daniel Freund (MIT)

Many problems in online decision-making can be viewed through a lens of exchangeable actions: over a long time horizon of length T, some arrival process allows for many opportunities to take a particular action, but the exact timing of actions is irrelevant for the eventual outcome. Examples of this phenomenon include bin packing (where it does not matter when we put an item of a given size into a given bin), knapsack (where it does not matter when we accept an item of a given value), and a range of matching problems, among others (where it does not matter when we assign a request type to a given resource). In this talk we survey a number of results that capture the conditions under which such problems give rise to algorithms with uniform loss guarantees, i.e., where the additive loss relative to an optimal solution in hindsight is bounded independent of the horizon length. Our conditions characterize permissible geometries of the objective function, minimal requirements on the arrival process, and uncertainty about the underlying arrival processes, including the length of the time horizon. Based on joint works with Jiayu (Kamessi) Zhao, Chamsi Hssaine, Sid Banerjee, Thodoris Lykouris, Wentao Weng, Elisabeth Paulson, and Brad Sturt.
On The Exploration In Load-Balancing Under Unknown Service Rates
Weina Wang (CMU)

Today's computing systems consist of servers that offer highly heterogeneous service rates due to the diverse machine types and the dynamic computing environment. In such systems, to guarantee low latency, load-balancing algorithms need to consider not only the amount of work on servers but also their service rates. However, because of the dynamic nature of the computing environment, the service rates cannot be obtained beforehand and thus have to be learned on the fly. In this talk, we consider a load-balancing system that consists of multiple servers with unknown service rates. Each incoming job needs to be assigned to one of the servers upon arrival. Our goal is to develop learning-integrated job-assigning algorithms that perform comparably to their counterparts under known service rates. Although some facets of this problem resemble the well-studied multi-armed bandit problem, our findings reveal that the exploration component is fundamentally different. In particular, we develop a meta-algorithm that integrates learning into a large class of load-balancing algorithms. The proposed algorithm uses almost no exploration, yet it achieves a constant regret in the accumulated latency of jobs. A similar free-exploration phenomenon has been observed in a prior paper, but in a different setting with a single server. This is joint work with Yifei Huang and Zhiyuan Tang, both from Tsinghua University.
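The "almost no exploration" phenomenon is easy to see in a toy version of this setting. The sketch below is my own illustration, not the meta-algorithm from the talk: jobs are routed greedily by estimated expected delay, service rates are estimated only from the completions that routing itself generates, and there is no exploration bonus; the queue dynamics supply the needed samples, since an under-sampled server keeps its optimistic initial estimate and eventually attracts a job. The rates and arrival probability are assumptions for the demo.

```python
import random

K = 5
MU = [0.9, 0.7, 0.5, 0.3, 0.2]   # true service rates, hidden from the policy
T = 50_000

queues = [0] * K
served = [1] * K      # completions seen per server (seeded: estimate starts at 1.0)
busy_time = [1] * K   # steps each server spent busy (same optimistic seed)

for t in range(T):
    # arrival with probability 0.75: route greedily by estimated
    # delay (queue + 1) / mu_hat, with no exploration term at all
    if random.random() < 0.75:
        mu_hat = [served[k] / busy_time[k] for k in range(K)]
        k = min(range(K), key=lambda i: (queues[i] + 1) / mu_hat[i])
        queues[k] += 1
    # service: a busy server finishes its job with probability MU[k]
    for k in range(K):
        if queues[k] > 0:
            busy_time[k] += 1
            if random.random() < MU[k]:
                queues[k] -= 1
                served[k] += 1

print("final queues:", queues)
print("learned rates:", [round(s / b, 2) for s, b in zip(served, busy_time)])
```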
Sample Complexity Of Policy-Based Methods Under Off-Policy Sampling And Linear Function Approximation
Siva Theja Maguluri (Georgia Institute of Technology)

In this work, we study policy-space methods for solving the reinforcement learning (RL) problem. We first consider the setting where the model is known (the MDP setting) and show that the natural policy gradient (NPG) algorithm, and other variants, converge linearly (geometrically) without the need for any regularization. In contrast to the optimization approaches used in the literature, our approach is based on approximate dynamic programming. We then consider a linear function approximation variant and establish its convergence guarantees. Finally, we consider the RL setting, where a critic is deployed for policy evaluation when the model is unknown. We consider a critic based on TD learning that uses off-policy sampling with linear function approximation, which leads to the infamous deadly-triad issue. We propose a generic algorithmic framework of single-time-scale multi-step TD learning with generalized importance sampling ratios that enables us to overcome the high-variance issue in off-policy learning, and we establish its finite-sample guarantees. We show that this leads to an overall sample complexity of O(epsilon^-2).
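The known-model part of this story can be reproduced in a few lines. With softmax policies and exact policy evaluation, the NPG update reduces to a multiplicative step, pi <- pi * exp(eta * Q) followed by renormalization, and converges to the optimum without regularization. The sketch below runs this on a small random MDP and checks the result against value iteration; the MDP, step size, and iteration counts are arbitrary choices for the demo, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, eta = 4, 3, 0.9, 1.0
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(S, A))      # reward R[s, a]

def q_of(pi):
    """Exact policy evaluation: solve (I - gamma P_pi) V = r_pi, return Q."""
    P_pi = np.einsum('sap,sa->sp', P, pi)
    r_pi = (R * pi).sum(axis=1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return R + gamma * np.einsum('sap,p->sa', P, V)

pi = np.full((S, A), 1.0 / A)               # start from the uniform policy
for _ in range(30):
    Q = q_of(pi)
    pi = pi * np.exp(eta * Q)               # NPG step = soft policy improvement
    pi /= pi.sum(axis=1, keepdims=True)

V_star = np.zeros(S)                        # value iteration, as the reference
for _ in range(3000):
    V_star = (R + gamma * np.einsum('sap,p->sa', P, V_star)).max(axis=1)

print("V* (value iteration):", np.round(V_star, 5))
print("V of the NPG policy :", np.round((q_of(pi) * pi).sum(axis=1), 5))
```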
The Compensated Coupling (or Why the Future is the Best Guide for the Present)
Sid Banerjee (Cornell University)

What makes online decision-making different from other decision-making/optimization problems? While it seems clear that the unique features are the sequential nature of taking actions and uncertainty in future outcomes, most techniques for solving such problems tend to obfuscate these features. So are these the best ways to think about these settings? I will present the compensated coupling: a simple paradigm for reasoning about and designing online decision-making policies, based on a sample-pathwise accounting of their performance compared to some benchmark policy. This approach generalizes many standard results used in studying Markov decision processes and reinforcement learning, but also gives us new policies which are much simpler and more effective than existing heuristics. For a large class of widely studied control problems, including online resource allocation, dynamic pricing, generalized assignment, online bin packing, and bandits with knapsacks, I will illustrate how these new algorithms achieve constant regret (i.e., additive loss compared to the hindsight optimal that is independent of the horizon and state space) under a wide range of conditions. Time permitting, I will try to describe how we can use this technique to incorporate side information and historical data in these settings, and achieve constant regret with as little as a single data trace.
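One concrete instance of this recipe, used in the compensated-coupling line of work, is the Bayes selector: at every step, re-solve the hindsight problem with the random future replaced by its expectation and take the action that solution prescribes. The sketch below applies it to a toy multisecretary problem (accept at most B of T arriving rewards); the distributions and problem sizes are my own choices for illustration.

```python
import random

random.seed(7)
REWARDS = [1.0, 0.8, 0.5, 0.2]       # type rewards, sorted in decreasing order
PROBS   = [0.25, 0.25, 0.25, 0.25]   # i.i.d. arrival probabilities
T, B    = 1000, 300                  # horizon and number of accept slots

def bayes_accept(typ, budget, remaining):
    """Accept iff the hindsight LP, with the future replaced by its mean,
    still serves this type: reserve budget for the expected number of
    strictly more valuable future arrivals."""
    better = remaining * sum(PROBS[:typ])
    return budget >= better + 1.0

arrivals = random.choices(range(len(REWARDS)), weights=PROBS, k=T)

budget, online = B, 0.0
for t, typ in enumerate(arrivals):
    if budget > 0 and bayes_accept(typ, budget, T - t - 1):
        online += REWARDS[typ]
        budget -= 1

hindsight = sum(sorted((REWARDS[a] for a in arrivals), reverse=True)[:B])
print("hindsight OPT:", round(hindsight, 1),
      "| online:", round(online, 1),
      "| regret:", round(hindsight - online, 1))
```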
Conclusive Remarks
-
Luca Ciambelli (Perimeter Institute for Theoretical Physics)
-
Celine Zwikel (Perimeter Institute)
-
Wald-Zoupas vs. improved Noether charge: anomalies as soft terms at scri
Simone Speziale (Aix-Marseille University)
In the last few years, various authors have extended the covariant phase space to include arbitrary anomalies, and a notion of improved Noether charge. After reviewing this construction and discussing examples of anomalies, I will point out that the covariance requirements of the seminal Wald-Zoupas paper permit the presence of a special class of anomalies. To illustrate the meaning of such anomalies, one can look at the case of future null infinity, where they take the form of the soft terms in the flux-balance laws.
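For orientation, much of the recent covariant phase space literature packages "anomaly" in an operator of roughly the following form; conventions differ between authors, so this is a common convention rather than necessarily the one used in the talk:

```latex
% \delta_\xi : field-space variation along the diffeomorphism generator \xi
% \pounds_\xi : spacetime Lie derivative
% I_{\delta\xi} : contraction accounting for field-dependence of \xi
\Delta_\xi \;\equiv\; \delta_\xi - \pounds_\xi - I_{\delta\xi}\,, \qquad
\Delta_\xi T = 0 \;\Longleftrightarrow\; T \ \text{transforms covariantly}.
```

In this language, the claim of the abstract is that the Wald-Zoupas covariance requirements do not force all such anomalies to vanish: a special class survives, and at future null infinity it appears as the soft terms in the flux-balance laws.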
Higher-Dimensional Expansion of Random Geometric Complexes
Tselil Schramm (Stanford University)

A graph is said to be a (1-dimensional) expander if the second eigenvalue of its adjacency matrix is bounded away from 1, or, almost equivalently, if it has no sparse vertex cuts. There are several natural ways to generalize the notion of expansion to hypergraphs/simplicial complexes, but one such way is 2-dimensional spectral expansion, in which the expansion of vertex neighborhoods (remarkably) witnesses global expansion. While 1-dimensional expansion is known to be achieved by, e.g., random regular graphs, very few constructions of sparse 2-dimensional expanders are known, and at present all are algebraic. It is an open question whether sparse 2-dimensional expanders are "abundant" and "natural" or "rare." In this talk, we'll give some evidence towards abundance: we show that the set of triangles in a random geometric graph on a high-dimensional sphere yields an expanding simplicial complex of arbitrarily small polynomial degree.
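The construction in the last sentence invites numerical experimentation. The sketch below (arbitrary parameters, nowhere near the regime or the argument of the proof) samples points on a high-dimensional sphere, builds the geometric graph, and computes the spectral quantity twice: globally, and for the link of a vertex (the graph induced on its neighborhood), which is the local quantity that 2-dimensional spectral expansion is built on.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, tau = 3000, 12, 0.45   # number of points, ambient dimension, edge threshold

# random geometric graph on the sphere: connect points with large inner product
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
A = ((X @ X.T) >= tau).astype(float)
np.fill_diagonal(A, 0.0)

def lambda2(adj):
    """Second-largest eigenvalue of the degree-normalized adjacency matrix."""
    deg = adj.sum(axis=1)
    keep = deg > 0                      # drop isolated vertices
    adj, deg = adj[np.ix_(keep, keep)], deg[keep]
    M = adj / np.sqrt(np.outer(deg, deg))
    return np.sort(np.linalg.eigvalsh(M))[-2]

print("global lambda_2:", round(lambda2(A), 3))

# local-to-global: the link of a vertex is the graph on its neighborhood;
# 2-dimensional spectral expansion asks that these links expand
v = int(np.argmax(A.sum(axis=1)))
nbrs = np.flatnonzero(A[v])
print("link size:", len(nbrs),
      "| link lambda_2:", round(lambda2(A[np.ix_(nbrs, nbrs)]), 3))
```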
Carrollian spaces at infinity: an embedding space picture
Jakob Salzer (Université Libre de Bruxelles)
In this talk, I will discuss certain homogeneous spaces of the Poincaré group that correspond to the well-known asymptotic regions of asymptotically flat spacetimes: time-, space-, and light-like infinity. I will then show that all of these spaces admit a uniform description as surfaces embedded in a higher-dimensional space. I will conclude with some comments concerning the computation of correlators of conformal Carrollian theories from this embedding space picture.
On the Power of Preconditioning in Sparse Linear Regression
Raghu Meka (UCLA)

Sparse linear regression is a fundamental problem in high-dimensional statistics, but strikingly little is known about how to solve it efficiently without restrictive conditions on the design matrix. This talk will focus on the (correlated) random design setting, where the covariates are independently drawn from a multivariate Gaussian and the ground-truth signal is sparse. Information-theoretically, one can achieve strong error bounds with O(k log n) samples for arbitrary covariance matrices and sparsity k; however, no efficient algorithms are known to match these guarantees, even with o(n) samples, in general. As for hardness, computational lower bounds are only known for worst-case design matrices. Random-design instances are known which are hard for the Lasso, but these instances can generally be solved by the Lasso after a simple change of basis (i.e., preconditioning). In the talk, we will discuss new upper and lower bounds clarifying the power of preconditioning in sparse linear regression. First, we show that the preconditioned Lasso can solve a large class of sparse linear regression problems nearly optimally: it succeeds whenever the dependency structure of the covariates, in the sense of the Markov property, has low treewidth, even if the covariance matrix is highly ill-conditioned. Second, we construct (for the first time) random-design instances which are provably hard for an optimally preconditioned Lasso. Based on joint works with Jonathan Kelner, Frederic Koehler, and Dhruv Rohatgi.
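Operationally, "preconditioning" here means fitting the Lasso in a transformed basis and mapping the coefficients back: choose a matrix S, regress y on XS, and return S times the fitted coefficients. The sketch below wires this up with naive whitening, S = Sigma^{-1/2}, on an AR(1) design, whose Markov structure is a path and hence has treewidth 1. This choice of S is a stand-in for illustration only (it need not preserve sparsity of the signal), and which preconditioners provably succeed is exactly the subject of the talk.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, k, rho = 150, 400, 5, 0.9

# AR(1) covariance: Sigma_ij = rho^|i-j|, a path-graph Markov structure
Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
w = np.zeros(p)
w[rng.choice(p, size=k, replace=False)] = 1.0   # k-sparse ground truth
y = X @ w + 0.1 * rng.normal(size=n)

plain = Lasso(alpha=0.05, max_iter=50_000).fit(X, y)

# preconditioned Lasso: fit on X @ S, map coefficients back through S
evals, evecs = np.linalg.eigh(Sigma)
S = evecs @ np.diag(evals ** -0.5) @ evecs.T     # naive whitening stand-in
pre = Lasso(alpha=0.05, max_iter=50_000).fit(X @ S, y)
w_pre = S @ pre.coef_

print("plain Lasso error  :", round(float(np.linalg.norm(plain.coef_ - w)), 3))
print("precond Lasso error:", round(float(np.linalg.norm(w_pre - w)), 3))
```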
Dualizability in higher Morita categories
Eilind Karlsson (Technical University of Munich)
The Morita 2-category has associative algebras as objects, bimodules as 1-morphisms, and bimodule homomorphisms as 2-morphisms. Equivalent objects in this category are exactly Morita-equivalent algebras. A vast generalisation of this as a higher category is the so-called higher Morita category, denoted Alg_n. It has two constructions: one due to Haugseng, and one due to Scheimbauer which uses (constructible) factorization algebras. In the latter, Gwilliam and Scheimbauer have proven that every object of Alg_n is n-dualizable. Hence, by the Cobordism Hypothesis, every object gives rise to an n-dimensional (fully extended, framed) topological field theory. A natural question to ask is: which objects of Alg_n are also (n+1)-dualizable? This talk is on work in progress (for n=2) to prove a conjecture due to Lurie that answers this question.
Zoom link: https://pitp.zoom.us/j/98595913913?pwd=Vlo1aWtXVlZBVjYxNFlTM2VZY2s3Zz09
Theory and Phenomenology of Continuous Spin Particles
Massless particles are always assumed to have Lorentz-invariant helicity. However, the generic massless particles allowed by Lorentz invariance and quantum mechanics are "continuous spin particles" (CSPs), whose infinitely many discrete helicity states mix under little-group transformations by an amount determined by the spin scale. Previous work at Perimeter Institute has shown that CSPs have well-behaved soft emission amplitudes, are described by a simple free field theory, and reduce to the photon or graviton at momenta above the spin scale. In this talk, I will show how to couple CSP fields to classical particles. When treating on-shell CSP emission and absorption, the resulting dynamics are causal and readily calculable. Remarkably, they are also unique, depending only on the spin scale. This opens the door to precision experiments that probe the spin scale of the photon and graviton.
Zoom Link: https://pitp.zoom.us/j/94728811371?pwd=Y3RoRnJuUzNzT0FmRVFiazI5czc5Zz09