Reinforcement Learning Bootcamp (Online)
Gaurav Mahajan (ICTS:32460)
The course will cover the basics of reinforcement learning theory. We will start by implementing simple gradient-based algorithms in PyTorch and using them to solve standard control problems like CartPole and the Atari 2600 game Pong. Along the way, we will explore how to optimize both the sample complexity (the number of interactions with the environment) and the computational complexity (GPU hours) needed to learn an optimal policy.
Lecture notes and setup instructions: https://gomahajan.github.io/icts/rlbootcamp.html
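As a taste of the gradient-based algorithms mentioned above, here is a minimal REINFORCE-style policy gradient sketch for CartPole written against PyTorch and the Gymnasium API; the network size, learning rate, and episode budget are illustrative choices, not taken from the course materials.

import torch
import torch.nn as nn
import gymnasium as gym

# Small policy network: observation -> action logits (CartPole has 4 obs dims, 2 actions).
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

env = gym.make("CartPole-v1")
for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = torch.distributions.Categorical(
            logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # REINFORCE update: negative log-probability of each action, weighted by reward-to-go.
    returns = torch.tensor([sum(rewards[t:]) for t in range(len(rewards))],
                           dtype=torch.float32)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()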
Statistical Optimal Transport (Online)
Sivaraman Balakrishnan (ICTS:32464)
Optimal transport studies the problem of rearranging one distribution into another while minimizing an associated cost. The past decade has witnessed tremendous progress in our understanding of the computational, methodological and statistical aspects of optimal transport (OT). Recent interest in OT has blossomed due to its close connections with diffusion models.
I will introduce the mathematical framework of OT, and then quickly transition to studying how well various objects in the OT framework (OT distances and OT maps) can be estimated from samples of the underlying distributions.
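As a simple instance of estimating OT objects from samples, the sketch below computes the plug-in estimate of the one-dimensional Wasserstein-1 distance between two empirical samples; the Gaussian example and sample sizes are illustrative, not from the lectures.

import numpy as np

def empirical_w1_1d(x, y):
    # Plug-in estimate of the 1-D Wasserstein-1 distance between two equal-size
    # samples: sort both samples and average the pairwise gaps. In one dimension
    # the optimal coupling is monotone, so this is exact for the empirical measures.
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y))

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=5000)   # samples from P
y = rng.normal(loc=1.0, scale=1.0, size=5000)   # samples from Q
# For two Gaussians differing only by a shift in mean, W1 equals that shift (here 1.0).
print(empirical_w1_1d(x, y))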
Multi-Agent Reinforcement Learning: Theory, Algorithms, and Future Directions.
Eric Mazumdar (ICTS:32462)
Reinforcement learning (RL) has been the driver behind many of the most significant advances in artificial intelligence over the past decade---ranging from achieving superhuman performance in complex games like Go and StarCraft to applications in autonomous driving, robotics, and economic simulations. RL is even playing a crucial role in the fine-tuning of large language models and the training of AI agents more broadly. Many of these problems, however, are fundamentally multi-agent in nature: an agent's success is inextricably linked to the decisions of others. Despite these empirical successes and a wealth of research on "single-agent" RL and its variants, multi-agent reinforcement learning (MARL) remains relatively under-explored theoretically, with the presence of multiple learning agents giving rise to a unique set of challenges for algorithm design and analysis.
This tutorial will give an overview of the research landscape in MARL, aiming to highlight the core theoretical principles that enable agents to learn and adapt in the presence of others. Using the formal framework of Markov games and building on a foundation in game theory, we will explore the different solution concepts and algorithms in the field. The discussion will examine the inherent inefficiencies of classical algorithms like fictitious play and policy gradient methods, and build toward the principles underpinning modern, provably efficient learning methods for large games (i.e., algorithms that make use of function approximation, such as deep neural networks). Ultimately, we will identify key open problems and promising new research directions for the future of multi-agent learning.
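To make the classical baseline concrete, here is a minimal fictitious-play sketch for a two-player zero-sum matrix game (matching pennies); the payoff matrix, starting counts, and iteration budget are illustrative assumptions, not material from the tutorial.

import numpy as np

# Matching pennies payoff for the row player; the column player receives the negative.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

counts_row = np.ones(2)   # empirical action counts (one fictitious play of each action)
counts_col = np.ones(2)

for t in range(10000):
    # Each player best-responds to the opponent's empirical mixed strategy so far.
    row_action = np.argmax(A @ (counts_col / counts_col.sum()))
    col_action = np.argmin((counts_row / counts_row.sum()) @ A)
    counts_row[row_action] += 1
    counts_col[col_action] += 1

# Empirical frequencies approach the mixed Nash equilibrium (0.5, 0.5), but slowly.
print(counts_row / counts_row.sum(), counts_col / counts_col.sum())

In this game the empirical action frequencies do converge to the mixed Nash equilibrium, but only at a slow rate, which is one of the inefficiencies alluded to above.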
Multi-Agent Reinforcement Learning: Theory, Algorithms, and Future Directions.
Eric Mazumdar (ICTS:32457)
Reinforcement learning (RL) has been the driver behind many of the most significant advances in artificial intelligence over the past decade---ranging from achieving superhuman performance in complex games like Go and StarCraft to applications in autonomous driving, robotics, and economic simulations. RL is even playing a crucial role in the fine-tuning of large language models and the training of AI agents more broadly. Many of these problems, however, are fundamentally multi-agent in nature: an agent's success is inextricably linked to the decisions of others. Despite these empirical successes and a wealth of research on "single-agent" RL and its variants, multi-agent reinforcement learning (MARL) remains relatively under-explored theoretically, with the presence of multiple learning agents giving rise to a unique set of challenges for algorithm design and analysis.
This tutorial will give an overview of the research landscape in MARL, aiming to highlight the core theoretical principles that enable agents to learn and adapt in the presence of others. Using the formal framework of Markov games and building on a foundation in game theory, we will explore the different solution concepts and algorithms in the field. The discussion will examine the inherent inefficiencies of classical algorithms like fictitious play and policy gradient methods, and build toward the principles underpinning modern, provably efficient learning methods for large games (i.e., algorithms that make use of function approximation, such as deep neural networks). Ultimately, we will identify key open problems and promising new research directions for the future of multi-agent learning.
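As a companion to the fictitious-play sketch above, the following toy example runs independent policy gradient (simultaneous gradient play) on the same matching-pennies game; the step size and initialization are illustrative assumptions.

import numpy as np

# Matching pennies: with mixed strategies (p, 1-p) and (q, 1-q), the row player's
# expected payoff is u(p, q) = 4*p*q - 2*p - 2*q + 1; the column player receives -u.
p, q = 0.9, 0.2          # illustrative initial mixed strategies
eta = 0.05               # illustrative constant step size
trajectory = []
for t in range(2000):
    grad_p = 4 * q - 2   # d u / d p  (row player ascends its payoff)
    grad_q = 4 * p - 2   # d u / d q  (column player descends, since its payoff is -u)
    p = np.clip(p + eta * grad_p, 0.0, 1.0)
    q = np.clip(q - eta * grad_q, 0.0, 1.0)
    trajectory.append((p, q))

# With a constant step size the strategies cycle around the mixed Nash equilibrium
# (0.5, 0.5) rather than converging to it.
print(trajectory[-5:])

The cycling behavior is the standard illustration of why naive gradient methods need modification (for example optimism or regularization) before they behave well in multi-agent settings.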
Reinforcement Learning Bootcamp (Online)
Gaurav Mahajan (ICTS:32456)
The course will cover the basics of reinforcement learning theory. We will start by implementing simple gradient-based algorithms in PyTorch and using them to solve standard control problems like CartPole and the Atari 2600 game Pong. Along the way, we will explore how to optimize both the sample complexity (the number of interactions with the environment) and the computational complexity (GPU hours) needed to learn an optimal policy.
Lecture notes and setup instructions: https://gomahajan.github.io/icts/rlbootcamp.html
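Since the course tracks sample complexity explicitly, here is a minimal sketch that counts environment interactions for a random baseline policy on CartPole using the Gymnasium API; the environment name and episode count are illustrative.

import gymnasium as gym

env = gym.make("CartPole-v1")
total_steps = 0          # sample complexity: number of environment interactions
total_reward = 0.0

for episode in range(20):
    obs, _ = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()          # random policy as a baseline
        obs, reward, terminated, truncated, _ = env.step(action)
        total_steps += 1
        total_reward += reward
        done = terminated or truncated

print(f"{total_steps} interactions, average return {total_reward / 20:.1f}")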
Public Lecture : Frontiers of Science
David Gross (ICTS:32459)
More information: https://www.icts.res.in/lectures/PL_Aug2025
Multi-Agent Reinforcement Learning: Theory, Algorithms, and Future Directions.
Eric Mazumdar (ICTS:32458)
Reinforcement learning (RL) has been the driver behind many of the most significant advances in artificial intelligence over the past decade---ranging from achieving superhuman performance in complex games like Go and StarCraft to applications in autonomous driving, robotics, and economic simulations. RL is even playing a crucial role in the fine-tuning of large language models and the training of AI agents more broadly. Many of these problems, however, are fundamentally multi-agent in nature: an agent's success is inextricably linked to the decisions of others. Despite these empirical successes and a wealth of research on "single-agent" RL and its variants, multi-agent reinforcement learning (MARL) remains relatively under-explored theoretically, with the presence of multiple learning agents giving rise to a unique set of challenges for algorithm design and analysis.
This tutorial will give an overview of the research landscape in MARL, aiming to highlight the core theoretical principles that enable agents to learn and adapt in the presence of others. Using the formal framework of Markov games and building on a foundation in game theory, we will explore the different solution concepts and algorithms in the field. The discussion will examine the inherent inefficiencies of classical algorithms like fictitious play and policy gradient methods, and build toward the principles underpinning modern, provably efficient learning methods for large games (i.e., algorithms that make use of function approximation, such as deep neural networks). Ultimately, we will identify key open problems and promising new research directions for the future of multi-agent learning.
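For reference, the solution concept these algorithms aim for can be computed exactly in the special case of a zero-sum matrix game by linear programming; the sketch below uses scipy on rock-paper-scissors, an illustrative game not taken from the tutorial.

import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix for rock-paper-scissors (zero-sum, so the column player gets -A).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)
m, n = A.shape

# Variables: x = (p_1, ..., p_m, v). Maximize the game value v subject to the row
# player's mixed strategy p guaranteeing at least v against every column action.
c = np.zeros(m + 1)
c[-1] = -1.0                                           # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])              # v - (p^T A)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to one
b_eq = np.array([1.0])
bounds = [(0, 1)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print("equilibrium strategy:", np.round(p, 3), "game value:", round(v, 3))  # ~(1/3, 1/3, 1/3), 0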
Panel Discussion
- Gilbert Holder, University of Illinois Urbana-Champaign
- Gwen Rudie
Mergers, Radio Jets, and Quenching Star Formation in Massive Galaxies: Quantifying Their Synchronized Cosmic Evolution and Assessing the Energetics
Timothy Heckman
The existence of a population of massive quiescent galaxies with little to no star formation poses a challenge to our understanding of galaxy evolution. The physical process that quenched the star formation in these galaxies is debated, but the most popular possibility is that feedback from supermassive black holes lifts or heats the gas that would otherwise be used to form stars. In this paper, we evaluate this idea in two ways. First, we compare the cumulative growth in the cosmic inventory of the total stellar mass in quiescent galaxies to the corresponding growth in the amount of kinetic energy carried by radio jets. We find that these two inventories are remarkably well-synchronized, with about 50% of the total amounts being created in the epoch from z ≈ 1 to 2. We also show that these agree extremely well with the corresponding growth in the cumulative number of major mergers that result in massive (>10^11 M_⊙) galaxies. We therefore argue that major mergers trigger the radio jets and also transform the galaxies from disks to spheroids. Second, we evaluate the total amount of kinetic energy delivered by jets and compare it to the baryonic binding energy of the galaxies. We find the jet kinetic energy is more than sufficient to quench star formation, and the quenching process should be more effective in more massive galaxies. We show that these results are quantitatively consistent with recent measurements of the Sunyaev–Zel'dovich effect seen in massive galaxies at z ≈ 1.
A hydrosimulations-based approach to relate the Fast Radio Burst dispersion measure -- redshift relation to the suppression of matter power spectrum
Kritti Sharma, California Institute of Technology
PIRSA:25080007
The effects of baryonic feedback on the matter power spectrum are uncertain. Upcoming large-scale structure surveys require percent-level constraints on the impact of baryonic feedback effects on the small-scale ($k \gtrsim 1\,h\,$Mpc$^{-1}$) matter power spectrum to fully exploit weak lensing data. The sightline-to-sightline variance in fast radio burst (FRB) dispersion measures (DMs) correlates with the strength of baryonic feedback and offers unique sensitivity at scales up to $k \sim 100\,h\,$Mpc$^{-1}$. We analytically compute the variance in FRB DMs using the electron power spectrum, which is modeled as a function of cosmological and feedback parameters in the IllustrisTNG suite of simulations from the CAMELS project. We demonstrate its efficacy in capturing baryonic feedback effects across several simulation suites, including SIMBA and Astrid. We show that with 10,000 FRBs, the suppression of the matter power spectrum can be constrained to percent-level precision at large scales ($k < 1\,h\,$Mpc$^{-1}$) and ~10% precision at small scales ($k > 10\,h\,$Mpc$^{-1}$). Insights into the impact of baryons on the small-scale matter power spectrum gained from FRBs can be leveraged to mitigate baryonic uncertainties in cosmic shear analyses.
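For context on the dispersion measure -- redshift relation referenced in the abstract, below is a minimal numerical sketch of the mean cosmic dispersion measure (the Macquart relation), not the sightline-to-sightline variance analysis described above; the cosmological parameters, diffuse-gas fraction fIGM, and electron fraction chi_e are illustrative Planck-like assumptions, not values from the talk.

import numpy as np

# Illustrative Planck-like parameters (assumptions, not values from the talk).
Om, OL, Ob = 0.31, 0.69, 0.049        # matter, dark energy, baryon density parameters
H0 = 70.0 * 1.0e5 / 3.086e24          # 70 km/s/Mpc expressed in s^-1
fIGM, chi_e = 0.84, 0.88              # diffuse-baryon fraction, electrons per baryon
c, G, m_p = 2.998e10, 6.674e-8, 1.673e-24   # speed of light, G, proton mass (cgs)
PC = 3.086e18                         # centimeters per parsec

def mean_cosmic_dm(z, n=2000):
    # Mean cosmic DM (Macquart relation):
    #   DM(z) = [3 c Ob H0 fIGM chi_e / (8 pi G m_p)] * integral_0^z (1+z')/E(z') dz',
    # with E(z) = sqrt(Om (1+z)^3 + OL); result returned in pc cm^-3.
    zs = np.linspace(0.0, z, n)
    integrand = (1.0 + zs) / np.sqrt(Om * (1.0 + zs) ** 3 + OL)
    integral = np.sum((integrand[1:] + integrand[:-1]) * 0.5) * (zs[1] - zs[0])
    prefactor = 3.0 * c * Ob * H0 * fIGM * chi_e / (8.0 * np.pi * G * m_p)
    return prefactor * integral / PC

print(mean_cosmic_dm(1.0))  # roughly 900 pc cm^-3 with these parameter choices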