The Connected Universe: Relating Early, Intermediate and Late Universe with cosmological data
Vivian Miranda University of Arizona
The standard model of cosmology is built upon a series of propositions about how the early, intermediate, and late epochs of the Universe behave. In particular, it predicts that dark energy and dark matter currently pervade the cosmos. Understanding the properties of the dark sector is plausibly the biggest challenge in theoretical physics. There is, however, a broad assumption in cosmology that the Universe in its earlier stages is fully understood and that discrepancies between the standard model of cosmology and current data are suggestive of distinct dark energy properties. Uncertainties in the validity of this hypothesis are not usually taken into account when forecasting survey capabilities, even though our investigations might be obscured if the intermediate and early Universe did behave abnormally. In this colloquium, I propose a program to investigate dark energy and earlier aspects of our Universe simultaneously, through space missions in the 2020s in combination with ground-based observatories. This program will help guide the strategy for the future LSST and WFIRST supernovae and weak lensing surveys. My investigations of how properties of the early and intermediate Universe affect inferences on dark energy (and vice versa) will also support community understanding of how future missions can be employed to test some of the core hypotheses of the standard model of cosmology.
New physics in flat Moire bands
Erez Berg Weizmann Institute of Science
Flat bands in Moire superlattices are emerging as a fascinating new playground for correlated electron physics. I will present the results of several studies inspired by these developments. First, I will address the question of whether superconductivity is possible even in the limit of a perfectly flat band. Then, I will discuss transport properties of a spin-polarized superconductor in the limit of zero spin-orbit coupling, where the topological structure of the order parameter space allows for a new dissipation mechanism not known from conventional superconductors. If time allows, I will also discuss the interpretation of new measurements of the electronic compressibility in twisted bilayer graphene, indicating a cascade of symmetry-breaking transitions as a function of the density of carriers in the system.
References:
https://arxiv.org/abs/2006.10073
Measuring neutrino oscillations with IceCube and beyond
Juan Pablo Yanez
Neutrino oscillations have been probed during the last few decades using multiple neutrino sources and experimental set-ups. In recent years, very large volume neutrino telescopes have started contributing to the field. These large and sparsely instrumented detectors observe atmospheric neutrinos at combinations of baselines and energies inaccessible to other experiments. IceCube, the largest neutrino telescope in operation, has used this to measure standard oscillations and place limits on exotic proposals, such as sterile neutrinos. In this talk, I will go over the newest results from IceCube as well as the improvements expected thanks to a new detector upgrade to be deployed in the near future.
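For context (standard background, not part of the abstract above): in the two-flavour approximation, the muon-neutrino survival probability that drives these atmospheric measurements depends on the baseline L and energy E as

\[ P(\nu_\mu \to \nu_\mu) \;\approx\; 1 - \sin^2(2\theta_{23})\, \sin^2\!\left( \frac{1.27\, \Delta m^2_{32}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right), \]

so a detector like IceCube, with baselines reaching up to the diameter of the Earth and energies from a few GeV upward, samples oscillation phases that accelerator and reactor experiments cannot reach.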
What Are the Statistical Limits of Offline Reinforcement Learning With Function Approximation?
Sham Kakade (University of Washington & Microsoft Research)
The area of offline reinforcement learning seeks to utilize offline (observational) data to guide the learning of (causal) sequential decision making strategies. The hope is that offline reinforcement learning coupled with function approximation methods (to deal with the curse of dimensionality) can provide a means to help alleviate the excessive sample complexity burden in modern sequential decision making problems. As such, the approach is becoming increasingly important to numerous areas of science, engineering, and technology. However, the extent to which this broader approach can be effective is not well understood, where the literature largely consists of sufficient conditions. This work focuses on the basic question of what are necessary representational and distributional conditions that permit provable sample-efficient offline reinforcement learning. Perhaps surprisingly, our main result shows that even if: (i) we have realizability in that the true value function of _every_ policy is linear in a given set of features, and (ii) our off-policy data has good coverage over all features (under a strong spectral condition), then any algorithm still (information-theoretically) requires a number of offline samples that is exponential in the problem horizon in order to non-trivially estimate the value of _any_ given policy. Our results highlight that sample-efficient, offline policy evaluation is simply not possible unless significantly stronger conditions hold; such conditions include either having low distribution shift (where the offline data distribution is close to the distribution of the policy to be evaluated) or significantly stronger representational conditions (beyond realizability). This is joint work with Ruosong Wang and Dean Foster.
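To make the two hypotheses concrete (our notation, not taken verbatim from the talk): with a known feature map φ : S × A → R^d and offline data distribution μ, they read roughly

\[ \text{(realizability)} \quad \forall \pi\ \exists\, \theta_\pi \in \mathbb{R}^d : \; Q^\pi(s,a) = \langle \theta_\pi, \varphi(s,a) \rangle \ \text{ for all } (s,a), \]
\[ \text{(coverage)} \quad \sigma_{\min}\!\big( \mathbb{E}_{(s,a)\sim\mu}\big[ \varphi(s,a)\,\varphi(s,a)^\top \big] \big) \;\ge\; c > 0, \]

and the lower bound says that, even then, estimating the value of a single target policy to non-trivial accuracy needs a number of offline samples growing exponentially in the horizon H.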
An Alternative Softmax Operator for Reinforcement Learning
Michael Littman (Brown University)
A softmax operator applied to a set of values acts somewhat like the maximization function and somewhat like an average. In sequential decision making, softmax is often used in settings where it is necessary to maximize utility but also to hedge against problems that arise from putting all of one's weight behind a single maximum-utility decision. The Boltzmann softmax operator is the most commonly used softmax operator in this setting, but we show that this operator is prone to misbehavior. In this work, we study a differentiable softmax operator that, among other properties, is a non-expansion, ensuring convergent behavior in learning and planning. We introduce a variant of the SARSA algorithm that, by utilizing the new operator, computes a Boltzmann policy with a state-dependent temperature parameter. We show that the algorithm is convergent and that it performs favorably in practice. (With Kavosh Asadi.)
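A minimal numerical sketch of the two operators being contrasted, assuming the "differentiable softmax operator" here is the mellowmax (log-mean-exp) operator from Asadi and Littman's paper of the same title; the code is ours, not the authors' implementation.

import numpy as np

def boltzmann(values, beta):
    # Boltzmann-weighted average of the values; not a non-expansion in general.
    values = np.asarray(values, dtype=float)
    w = np.exp(beta * (values - values.max()))   # shift for numerical stability
    w /= w.sum()
    return float(w @ values)

def mellowmax(values, omega):
    # log-mean-exp operator: (1/omega) * log(mean(exp(omega * x))).
    # A non-expansion for any omega > 0, so Bellman backups using it converge.
    values = np.asarray(values, dtype=float)
    m = values.max()
    return float(m + np.log(np.mean(np.exp(omega * (values - m)))) / omega)

q_values = [1.0, 1.0, 0.0]
print(boltzmann(q_values, beta=10.0))   # close to max(q_values)
print(mellowmax(q_values, omega=10.0))  # also close to max, but contraction-safe

Both operators interpolate between the mean (small beta/omega) and the max (large beta/omega); the difference is that mellowmax keeps the non-expansion property needed for convergent learning and planning.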
Contextuality-by-default for behaviours in compatibility scenarios
Alisson Cordeiro Alves Tezzin Universidade Estadual Paulista (UNESP)
The compatibility-hypergraph approach to contextuality (CA) and the contextuality-by-default approach (CbD) are usually presented as products of entirely different views on how physical measurements and measurement contexts should be understood: the latter is based on the idea that a physical measurement has to be seen as a collection of random variables, one for each context containing that measurement, while the imposition of the non-disturbance condition as a physical requirement in the former precludes such an interpretation of measurements. The aim of our work is to present both approaches as entirely compatible ones and to introduce into the compatibility-hypergraph approach ideas which arise from contextuality-by-default. We introduce in CA the non-degeneracy condition, which is the analogue of consistent connectedness (an important concept from CbD), and prove that this condition is, in general, weaker than non-disturbance. The set of non-degenerate behaviours defines a polytope, so one can characterize non-degeneracy using a finite set of linear inequalities. We introduce extended contextuality for behaviours and prove that a behaviour is non-contextual in the standard sense if and only if it is non-degenerate and non-contextual in the extended sense. Finally, we use extended scenarios and behaviours to shed new light on our results.
On the Global Convergence and Approximation Benefits of Policy Gradient Methods
Daniel Russo (Columbia University)
Policy gradient methods apply to complex, poorly understood control problems by performing stochastic gradient descent over a parameterized class of policies. Unfortunately, due to the multi-period nature of the objective, policy gradient algorithms face non-convex optimization problems and can get stuck in suboptimal local minima even for extremely simple problems. This talk will discuss structural properties, shared by several canonical control problems, that guarantee the policy gradient objective function has no suboptimal stationary points despite being non-convex. Time permitting, I'll then zoom in on the special case of state-aggregated policies and a proof showing that policy gradient converges to better policies than its relative, approximate policy iteration.
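For reference, the objective being optimized and its gradient are the standard ones (notation ours, not the speaker's):

\[ J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t \ge 0} \gamma^t\, r(s_t, a_t) \right], \qquad \nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\!\left[ \sum_{t \ge 0} \gamma^t\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\; Q^{\pi_\theta}(s_t, a_t) \right]. \]

Stochastic gradient methods follow noisy estimates of this gradient; the structural conditions in the talk are what rule out suboptimal stationary points of the non-convex objective J.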
Corruption Robust Exploration in Episodic Reinforcement Learning
Aleksandrs Slivkins (Microsoft Research NYC)
We initiate the study of episodic RL under adversarial corruptions in both the rewards and the transition probabilities of the underlying system. Our solution adapts to an unknown level of corruption, degrading gracefully in the total corruption encountered. In particular, we attain near-optimal regret for a constant level of corruption. We derive results for "tabular" MDPs as well as MDPs that admit a linear representation. Notably, we provide the first sublinear regret guarantee that goes beyond i.i.d. transitions in the bandit-feedback model for episodic RL. We build on a new framework which combines the paradigms of "optimism under uncertainty" and "successive elimination". Neither paradigm alone suffices: "optimism under uncertainty", common in the current work on stochastic RL, cannot handle corruptions, while "successive elimination" works for bandits with corruptions but is provably inefficient even for stochastic RL. Joint work with Thodoris Lykouris, Max Simchowitz and Wen Sun. https://arxiv.org/abs/1911.08689
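As a reminder of what the "successive elimination" paradigm looks like in its simplest form, here is a sketch for a plain stochastic Bernoulli bandit (standard textbook elimination, not the paper's corruption-robust RL algorithm):

import numpy as np

def successive_elimination(means, horizon, delta=0.05, seed=None):
    # Keep a set of active arms; pull each once per round, then drop any arm
    # whose upper confidence bound falls below the best lower confidence bound.
    rng = np.random.default_rng(seed)
    k = len(means)
    active = list(range(k))
    counts, sums = np.zeros(k), np.zeros(k)
    t = 0
    while t < horizon and len(active) > 1:
        for a in active:
            sums[a] += rng.random() < means[a]   # Bernoulli reward
            counts[a] += 1
            t += 1
        est = sums[active] / counts[active]
        rad = np.sqrt(np.log(2 * k * horizon / delta) / (2 * counts[active]))
        best_lcb = np.max(est - rad)
        active = [a for a, e, r in zip(active, est, rad) if e + r >= best_lcb]
    return active

print(successive_elimination([0.2, 0.5, 0.8], horizon=20000))  # typically [2]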
Representation Learning and Exploration in Reinforcement Learning
Akshay Krishnamurthy (Microsoft Research)
I will discuss new provably efficient algorithms for reinforcement learning in rich observation environments with arbitrarily large state spaces. Both algorithms operate by learning succinct representations of the environment, which they use in an exploration module to acquire new information. The first algorithm, called Homer, operates in a block MDP model and uses a contrastive learning objective to learn the representation. The second algorithm, called FLAMBE, operates in a much richer class of low rank MDPs and is model-based. Both algorithms accommodate nonlinear function approximation and enjoy provable sample and computational efficiency guarantees.
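The "low rank MDP" model referenced above is usually written as follows (a sketch of the standard definition, in our notation): the transition kernel factorizes through unknown d-dimensional embeddings,

\[ T(s' \mid s, a) \;=\; \big\langle\, \phi^\star(s,a),\; \mu^\star(s') \,\big\rangle, \qquad \phi^\star(s,a),\ \mu^\star(s') \in \mathbb{R}^d, \]

and the block MDP is the special case in which a small set of latent states generates the rich observations. The "succinct representation" both algorithms learn is, in effect, an approximation to this unknown embedding.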
Special Topics in Astrophysics - Numerical Hydrodynamics - Lecture 13
Daniel Siegel University of Greifswald
Multiplayer Bandit Learning - From Competition to Cooperation
Simina Branzei (Purdue University)
The stochastic multi-armed bandit model captures the tradeoff between exploration and exploitation. We study the effects of competition and cooperation on this tradeoff. Suppose there are k arms and two players, Alice and Bob. In every round, each player pulls an arm, receives the resulting reward, and observes the choice of the other player but not their reward. Alice's utility is Γ_A + λΓ_B (and similarly for Bob), where Γ_A is Alice's total reward and λ ∈ [−1,1] is a cooperation parameter. At λ = −1 the players are competing in a zero-sum game, at λ = 1 they are fully cooperating, and at λ = 0 they are neutral: each player's utility is their own reward. The model is related to the economics literature on strategic experimentation, where usually players observe each other's rewards. With discount factor β, the Gittins index reduces the one-player problem to the comparison between a risky arm, with a prior μ, and a predictable arm, with success probability p. The value of p where the player is indifferent between the arms is the Gittins index g = g(μ,β) > m, where m is the mean of the risky arm. We show that competing players explore less than a single player: there is p* ∈ (m,g) such that for all p > p*, the players stay at the predictable arm. However, the players are not myopic: they still explore for some p > m. On the other hand, cooperating players explore more than a single player. We also show that neutral players learn from each other, receiving strictly higher total rewards than they would playing alone, for all p ∈ (p*, g), where p* is the threshold from the competing case. Finally, we show that competing and neutral players eventually settle on the same arm in every Nash equilibrium, while this can fail for cooperating players. This is based on joint work with Yuval Peres.
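A small numerical sketch of the one-player quantity discussed above: it approximates the Gittins index g(μ, β) for a Bernoulli risky arm with a Beta prior by truncated-horizon dynamic programming against a safe arm paying p forever. The code, the truncation, and the Beta(1,1) example are ours, purely for illustration.

from functools import lru_cache

def start_value(a, b, p, beta, depth=60):
    # Value of optimal play: risky arm with Beta(a, b) posterior vs. safe arm
    # paying p every round, discount beta; horizon truncated at `depth`.
    safe = p / (1 - beta)                       # take the safe arm forever
    @lru_cache(maxsize=None)
    def V(s, f, d):                             # s successes, f failures observed
        if d == 0:
            return safe                         # truncation: fall back to safe arm
        m = (a + s) / (a + b + s + f)           # posterior mean of the risky arm
        risky = m * (1 + beta * V(s + 1, f, d - 1)) + (1 - m) * beta * V(s, f + 1, d - 1)
        return max(safe, risky)
    return V(0, 0, depth)

def gittins_index(a, b, beta, tol=1e-4):
    # Binary search for the safe payoff p at which the player is indifferent
    # between starting on the risky arm and playing the safe arm forever.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        p = 0.5 * (lo + hi)
        if start_value(a, b, p, beta) > p / (1 - beta) + 1e-9:
            lo = p                              # risky arm still worth trying
        else:
            hi = p
    return 0.5 * (lo + hi)

# Uniform prior Beta(1,1), mean m = 0.5, discount 0.9: the index comes out
# noticeably above 0.5, illustrating g > m.
print(gittins_index(1, 1, beta=0.9))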
Multi-Player Multi-Armed Bandit: Can We Still Collaborate at Homes Without "Zoom"?
Yuanzhi Li (Carnegie Mellon University)
Multi-armed bandit is a well-established area in online decision making, where one player makes sequential decisions in a non-stationary environment to maximize his/her cumulative reward. The multi-armed bandit problem becomes significantly more challenging when there are multiple players in the same environment, while only one piece of reward is presented at a time for each arm. In this setting, if two players pick the same arm in the same round, they are only able to get one piece of reward instead of two. To maximize the reward, players need to collaborate to avoid "collisions" -- i.e., they need to make sure that they do not all rush to the same arm (even if it has the highest reward). We consider the even more challenging setting where communication between players is completely disabled: e.g., they are separated in different places of the world without any "Zoom". We show that nearly optimal regret can still be obtained in this setting: players can actually collaborate in a non-stationary environment without any communication (of course, without quantum entanglement either :))
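A sketch of the collision rule described above, under one common convention (when several players pull the same arm, a single reward draw is generated and only one of them receives it; the talk's exact model may differ in details):

import numpy as np

def play_round(choices, means, rng):
    # choices[i] is the arm pulled by player i; each arm yields one reward draw
    # per round, and colliding players compete for that single piece of reward.
    rewards = np.zeros(len(choices))
    for arm in set(choices):
        pullers = [i for i, c in enumerate(choices) if c == arm]
        lucky = rng.choice(pullers)                      # collision: one winner
        rewards[lucky] = float(rng.random() < means[arm])  # Bernoulli draw
    return rewards

rng = np.random.default_rng(0)
means = [0.9, 0.8, 0.3]
print(play_round([0, 0, 1], means, rng))  # players 0 and 1 collide on arm 0

The point of the talk is that, even without communication, the players can coordinate who takes which arm so that such collisions become rare while still identifying the best arms.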