Search results from ICTS-TIFR
-
Collaborative Prediction via Tractable Agreement Protocols
Surbhi Goel (ICTS:32485)
Designing effective collaboration between humans and AI systems is crucial for leveraging their complementary abilities in complex decision tasks. But how should agents possessing unique, private knowledge (like a human expert and an AI model) interact to reach decisions better than either could alone? If they were perfect Bayesians with a shared prior, Aumann's classical agreement theorem suggests that conversation leads, via agreement, to an accuracy-improving prediction. However, this conclusion relies on implausible assumptions about the agents' knowledge and computational power.
We show how to recover and generalize these guarantees using only computationally and statistically tractable assumptions. We develop efficient "collaboration protocols" in which parties iteratively exchange only low-dimensional information (their current predictions or best-response actions) without needing to share their underlying features. These protocols are grounded in conditions like conversation calibration/swap regret, which relax full Bayesian rationality and are enforceable in a computationally efficient way. First, we prove that this simple interaction leads to fast convergence to agreement, generalizing quantitative bounds even to high-dimensional and action-based settings. Second, we introduce a weak learning condition under which this agreement process inherently aggregates the parties' distinct information: via our protocols, agents arrive at final predictions that are provably competitive with an optimal predictor that has access to their joint features. Together, these results offer a new, practical foundation for building systems that achieve the power of pooled knowledge through tractable interaction alone.
This talk is based on joint work with the amazing Natalie Collina, Varun Gupta, Ira Globus-Harris, Aaron Roth, and Mirah Shi.
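For a concrete feel for the classical Bayesian agreement dynamic that the abstract starts from (not the paper's tractable calibration-based protocol), here is a small, self-contained sketch: two agents with private discrete features alternately announce posterior means of an outcome Y and refine what is common knowledge until their announcements coincide. The joint distribution and feature ranges below are made up purely for illustration.

```python
import numpy as np

# Toy Geanakoplos-Polemarchakis-style agreement dynamic: agents A and B hold
# private features x_a, x_b, share a common prior, and alternately announce
# E[Y | own feature, everything revealed so far]. This is NOT the paper's
# protocol; the distribution below is an arbitrary illustrative choice.

rng = np.random.default_rng(0)
XA_VALS, XB_VALS = range(4), range(4)
prior = rng.random((4, 4)); prior /= prior.sum()   # joint P(x_a, x_b)
p_y = rng.random((4, 4))                           # P(Y = 1 | x_a, x_b)

def cond_mean(xa_set, xb_set):
    """E[Y | X_A in xa_set, X_B in xb_set] under the common prior."""
    idx = np.ix_(list(xa_set), list(xb_set))
    return float((prior[idx] * p_y[idx]).sum() / prior[idx].sum())

def agree(xa, xb, max_rounds=20):
    TA, TB = set(XA_VALS), set(XB_VALS)   # common-knowledge sets for the two features
    for _ in range(max_rounds):
        pa = cond_mean({xa}, TB)          # A announces its posterior mean
        TA = {a for a in TA if np.isclose(cond_mean({a}, TB), pa)}
        pb = cond_mean(TA, {xb})          # B announces its posterior mean
        TB = {b for b in TB if np.isclose(cond_mean(TA, {b}), pb)}
        if np.isclose(pa, pb):            # announcements coincide: agreement reached
            break
    return pa, pb

print(agree(xa=1, xb=2))
```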
-
An Introduction to Diffusion and Flow Models
Dheeraj Nagaraj (ICTS:32477)
In this series of talks, I will introduce the basic elements of generative modeling with diffusion and flow models from first principles. This includes a short introduction to stochastic calculus, ordinary differential equations, the evolution of probability measures, the Fokker-Planck equation, and the continuity equation. We will then apply these ideas to describe training and inference algorithms for diffusion models.
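As a hedged illustration of the inference side of such models, the sketch below integrates the reverse-time probability-flow ODE of a variance-preserving diffusion for a one-dimensional Gaussian target, using the analytically known score in place of a learned network. The noise schedule and target parameters are arbitrary choices, not taken from the lectures.

```python
import numpy as np

# Probability-flow ODE sampler for a 1-D Gaussian "data" distribution, with
# the exact score of the noised marginal standing in for a trained score net.
beta_min, beta_max = 0.1, 20.0      # linear VP-SDE noise schedule (illustrative)
mu, sigma = 2.0, 0.5                # data distribution N(mu, sigma^2)

def beta(t):
    return beta_min + t * (beta_max - beta_min)

def alpha(t):                       # alpha_t = exp(-0.5 * int_0^t beta(s) ds)
    return np.exp(-0.5 * (beta_min * t + 0.5 * t**2 * (beta_max - beta_min)))

def score(x, t):                    # exact score of N(alpha_t*mu, alpha_t^2*sigma^2 + 1 - alpha_t^2)
    var = alpha(t)**2 * sigma**2 + 1.0 - alpha(t)**2
    return -(x - alpha(t) * mu) / var

# Euler integration of dx/dt = -0.5*beta(t)*x - 0.5*beta(t)*score(x, t),
# backwards from t = 1 (approximately standard-normal prior) to t = 0.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
n_steps = 1000
dt = 1.0 / n_steps
for i in range(n_steps):
    t = 1.0 - i * dt
    drift = -0.5 * beta(t) * x - 0.5 * beta(t) * score(x, t)
    x = x - dt * drift              # step backwards in time

print(f"sample mean {x.mean():.3f} (target {mu}), std {x.std():.3f} (target {sigma})")
```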
-
Statistical Optimal Transport (Online)
Sivaraman Balakrishnan (ICTS:32481)
Optimal transport studies the problem of rearranging one distribution into another while minimizing an associated cost. The past decade has witnessed tremendous progress in our understanding of the computational, methodological and statistical aspects of optimal transport (OT). Recent interest in OT has blossomed due to its close connections with diffusion models.
I will introduce the mathematical framework of OT, and then quickly transition to studying how well various objects in the OT framework (OT distances and OT maps) can be estimated from samples of the underlying distributions.
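A minimal example of the plug-in estimation question raised here: in one dimension the optimal coupling is the monotone rearrangement, so the squared 2-Wasserstein distance can be estimated by matching sorted samples. The Gaussian distributions below are an arbitrary choice that happens to allow a closed-form check.

```python
import numpy as np

# Empirical (plug-in) estimate of W_2^2 between two 1-D distributions from
# samples, using the quantile coupling (sorted samples matched in order).
# Gaussians are chosen only so the closed form W_2^2 = (m1-m2)^2 + (s1-s2)^2
# is available for comparison.
rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(loc=0.0, scale=1.0, size=n)   # samples from P = N(0, 1)
y = rng.normal(loc=2.0, scale=3.0, size=n)   # samples from Q = N(2, 9)

w2_sq_hat = np.mean((np.sort(x) - np.sort(y))**2)   # empirical quantile coupling
w2_sq_true = (0.0 - 2.0)**2 + (1.0 - 3.0)**2        # closed form for Gaussians

print(f"estimate {w2_sq_hat:.3f} vs truth {w2_sq_true:.3f}")
```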
-
Data assimilation: theory and practice
Amit Apte (ICTS:32480)
Data assimilation is a set of methods for incorporating sparse observations of a complex dynamical system, either deterministic or stochastic, into incomplete models of that system. Mathematically, this is the problem of nonlinear filtering; computationally, it is addressed using a variety of techniques including Markov chain Monte Carlo, optimization, and importance sampling. This tutorial will begin with a quick introduction to the Bayesian underpinnings of data assimilation, followed by applications to chaotic dynamical systems.
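To make the filtering viewpoint concrete, here is a hedged sketch of a bootstrap particle filter tracking a noisily observed chaotic logistic map; the model and noise levels are illustrative choices, not taken from the tutorial.

```python
import numpy as np

# Bootstrap particle filter (sequential importance resampling) for a noisily
# observed chaotic logistic map -- a toy stand-in for nonlinear filtering.
rng = np.random.default_rng(1)
T, n_particles = 100, 2000
proc_std, obs_std = 0.01, 0.05

def step(x):
    """Chaotic logistic-map dynamics."""
    return 3.9 * x * (1.0 - x)

# Simulate a "true" trajectory and noisy observations of it.
x_true = np.empty(T)
x_true[0] = 0.3
for t in range(1, T):
    x_true[t] = np.clip(step(x_true[t - 1]) + proc_std * rng.standard_normal(), 0.0, 1.0)
obs = x_true + obs_std * rng.standard_normal(T)

# Filter: propagate particles through the dynamics, reweight by the Gaussian
# observation likelihood, then resample.
particles = rng.uniform(0.0, 1.0, n_particles)
estimates = np.empty(T)
for t in range(T):
    if t > 0:
        particles = np.clip(step(particles) + proc_std * rng.standard_normal(n_particles), 0.0, 1.0)
    logw = -0.5 * ((obs[t] - particles) / obs_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)  # multinomial resampling
    estimates[t] = particles.mean()

print(f"mean absolute filtering error: {np.mean(np.abs(estimates - x_true)):.4f}")
```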
-
Data assimilation: theory and practice
Amit Apte (ICTS:32479)
Data assimilation is a set of methods for incorporating sparse observations of a complex dynamical system, either deterministic or stochastic, into incomplete models of that system. Mathematically, this is the problem of nonlinear filtering; computationally, it is addressed using a variety of techniques including Markov chain Monte Carlo, optimization, and importance sampling. This tutorial will begin with a quick introduction to the Bayesian underpinnings of data assimilation, followed by applications to chaotic dynamical systems.
-
An Introduction to Diffusion and Flow Models
Dheeraj Nagaraj (ICTS:32473)
In this series of talks, I will introduce the basic elements of generative modeling with diffusion and flow models from first principles. This includes a short introduction to stochastic calculus, ordinary differential equations, the evolution of probability measures, the Fokker-Planck equation, and the continuity equation. We will then apply these ideas to describe training and inference algorithms for diffusion models.
-
Reinforcement Learning Bootcamp (Online)
Gaurav Mahajan (ICTS:32472)
The course will cover the basics of reinforcement learning theory. We will start by implementing simple gradient-based algorithms in PyTorch and using them to solve standard control problems like CartPole and the Atari 2600 game Pong. Along the way, we will explore how to optimize both the sample complexity (the number of interactions with the environment) and the computational complexity (GPU hours) needed to learn an optimal policy.
Lecture notes and setup instructions: https://gomahajan.github.io/icts/rlbootcamp.html
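In the spirit of the gradient-based algorithms the bootcamp implements, here is a minimal REINFORCE (vanilla policy-gradient) loop for CartPole-v1 in PyTorch; the network size and hyperparameters are illustrative guesses, not the course's own code.

```python
import torch
import torch.nn as nn
import gymnasium as gym

# Minimal REINFORCE on CartPole-v1: sample an episode with a stochastic
# policy, compute discounted returns-to-go, and ascend the policy gradient.
env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted returns-to-go, then the loss -sum_t log pi(a_t | s_t) * G_t.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```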
-
Statistical Optimal Transport (Online)
Sivaraman Balakrishnan (ICTS:32476)
Optimal transport studies the problem of rearranging one distribution into another while minimizing an associated cost. The past decade has witnessed tremendous progress in our understanding of the computational, methodological and statistical aspects of optimal transport (OT). Recent interest in OT has blossomed due to its close connections with diffusion models.
I will introduce the mathematical framework of OT, and then quickly transition to studying how well various objects in the OT framework (OT distances and OT maps) can be estimated from samples of the underlying distributions.
-
An Introduction to Diffusion and Flow Models
Dheeraj Nagaraj (ICTS:32475)
In this series of talks, I will introduce the basic elements of generative modeling with diffusion and flow models from first principles. This includes a short introduction to stochastic calculus, ordinary differential equations, the evolution of probability measures, the Fokker-Planck equation, and the continuity equation. We will then apply these ideas to describe training and inference algorithms for diffusion models.
Title | Speaker(s) | Date
---|---|---
Collaborative Prediction via Tractable Agreement Protocols | Surbhi Goel | 2025-08-10
TBA | Damek Davis | 2025-08-10
Basic learning theory | Karthik Sridharan | 2025-08-08
An Introduction to Diffusion and Flow Models | Dheeraj Nagaraj | 2025-08-08
Statistical Optimal Transport (Online) | Sivaraman Balakrishnan | 2025-08-07
Data assimilation: theory and practice | Amit Apte | 2025-08-07
Poster Session | - | 2025-08-07
Data assimilation: theory and practice | Amit Apte | 2025-08-07
An Introduction to Diffusion and Flow Models | Dheeraj Nagaraj | 2025-08-07
Reinforcement Learning Bootcamp (Online) | Gaurav Mahajan | 2025-08-07
Statistical Optimal Transport (Online) | Sivaraman Balakrishnan | 2025-08-06
An Introduction to Diffusion and Flow Models | Dheeraj Nagaraj | 2025-08-06