Search results from ICTS-TIFR
-
Detection and recovery of latent geometry in random graphs (Online)
Siqi Liu (ICTS:31830)
In recent years, random graph models with latent geometric structures have received increasing attention. These models typically involve sampling random points from a metric space, followed by independently adding edges with probabilities that depend on the distances between the corresponding point pairs. A central computational challenge is to detect the underlying geometry and to recover the latent coordinates of the vertices based solely on the observed graph. Unlike classical random graph models, geometric models exhibit richer structural properties, such as correlations between edges. These features make them more realistic representations of real-world networks and data. However, our current understanding of the information-theoretic and computational thresholds for detection in these models remains limited. In this talk, we will survey known algorithmic results and computational hardness findings for several random geometric graph models. We will also highlight open directions for future research.
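To make the model concrete, here is a minimal sketch of the sampling step described above, under one common instantiation (latent points uniform on the unit sphere, with a hard distance-threshold edge rule; the talk covers a broader family, and these specific choices are illustrative assumptions):

```python
import numpy as np

def sample_geometric_graph(n, d, radius, rng=None):
    """Illustrative random geometric graph: draw n latent points uniformly
    from the unit sphere S^{d-1}, then join each pair whose Euclidean
    distance falls below `radius`. Detection/recovery algorithms would
    see only the adjacency matrix, not the latent coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # project Gaussians to the sphere
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    adj = (dist < radius) & ~np.eye(n, dtype=bool)  # symmetric, no self-loops
    return adj, x

adj, latent = sample_geometric_graph(n=200, d=3, radius=1.0)
print(adj.sum() // 2, "edges")
```

Edge probabilities that decay smoothly with distance (rather than a hard threshold) fit the same template; only the edge rule changes.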
-
Sampling, Privacy, and Spectral Geometry: Insights from Low-Rank Approximation
Nisheeth Vishnoi (ICTS:31848)
This talk explores how problems in private optimization, specifically low-rank matrix approximation, give rise to novel tools and results in sampling and statistical physics. I will present two recent advances:
1. Sampling from Harish-Chandra–Itzykson–Zuber (HCIZ) distributions via private optimization: we introduce an efficient algorithm for computing private low-rank approximations and show how its structure enables efficient sampling from HCIZ measures, which are central to mathematical physics and random matrix theory.
2. Spectral sampling and utility of the Gaussian mechanism: we provide a new analysis of the Gaussian mechanism for differential privacy through the lens of Dyson Brownian motion, yielding refined spectral sampling guarantees and new bounds on eigenvalue gaps in random matrices.
These results illustrate how sampling tasks arising from privacy constraints can lead to powerful connections between random matrix theory, optimization, sampling, and statistical physics.
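As a point of reference for the second advance, here is a minimal sketch of the baseline Gaussian mechanism for private low-rank approximation (assumptions: the input is symmetric with unit L2-sensitivity, and the noise uses the textbook (epsilon, delta) calibration; the talk's refined analysis via Dyson Brownian motion goes well beyond this baseline):

```python
import numpy as np

def private_low_rank(A, k, epsilon, delta, rng=None):
    """Baseline Gaussian mechanism: perturb the symmetric matrix A with a
    symmetric Gaussian noise matrix calibrated for (epsilon, delta)-DP
    (assuming unit L2-sensitivity), then return the rank-k truncation of
    the noisy matrix's eigendecomposition."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon  # textbook noise scale
    g = rng.standard_normal((n, n)) * sigma
    noise = (g + g.T) / np.sqrt(2.0)       # symmetrize; keeps the output symmetric
    w, v = np.linalg.eigh(A + noise)       # eigenvalues in ascending order
    idx = np.argsort(w)[-k:]               # indices of the k largest eigenvalues
    return v[:, idx] @ np.diag(w[idx]) @ v[:, idx].T

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)); A = (A + A.T) / 2
print(np.linalg.matrix_rank(private_low_rank(A, k=5, epsilon=1.0, delta=1e-6)))
```

The utility question is how close this noisy rank-k truncation stays to the true one, which hinges on the eigenvalue gaps of the perturbed matrix.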
-
The Localization Method for Proving High-Dimensional Inequalities
Santosh Vempala (ICTS:31841)
We review the localization method, pioneered by Lovász and Simonovits (1993) and developed substantially by Eldan (2012), to prove inequalities in high dimension. At its heart, the method uses a sequence of transformations to convert an arbitrary instance to a highly structured one (often even one-dimensional). We will work out some illustrative examples.
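One concrete instance of such a reduction, included here from the literature as a reference point (this is the classical formulation from Lovász and Simonovits; the talk may use different variants):

```latex
% Localization Lemma (Lov\'asz--Simonovits, 1993): reducing an
% n-dimensional integral inequality to a one-dimensional ``needle''.
\begin{theorem}[Localization Lemma]
Let $g, h : \mathbb{R}^n \to \mathbb{R}$ be lower semicontinuous integrable
functions with
\[
  \int_{\mathbb{R}^n} g(x)\,dx > 0
  \quad\text{and}\quad
  \int_{\mathbb{R}^n} h(x)\,dx > 0 .
\]
Then there exist points $a, b \in \mathbb{R}^n$ and an affine function
$\ell : [0,1] \to \mathbb{R}_{\ge 0}$ such that
\[
  \int_0^1 \ell(t)^{n-1}\, g\bigl((1-t)\,a + t\,b\bigr)\,dt > 0
  \quad\text{and}\quad
  \int_0^1 \ell(t)^{n-1}\, h\bigl((1-t)\,a + t\,b\bigr)\,dt > 0 .
\]
\end{theorem}
```

Used contrapositively, a counterexample to a high-dimensional inequality yields a counterexample on a needle, so it suffices to verify the inequality on one-dimensional instances.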
-
The Localization Method for Proving High-Dimensional Inequalities
Santosh Vempala (ICTS:31840)
We review the localization method, pioneered by Lovász and Simonovits (1993) and developed substantially by Eldan (2012), to prove inequalities in high dimension. At its heart, the method uses a sequence of transformations to convert an arbitrary instance to a highly structured one (often even one-dimensional). We will work out some illustrative examples.
-
Streaming algorithms: a tutorial
Jelani Nelson (ICTS:31842)
Streaming algorithms make one pass over a massive dataset and should answer queries on the data while maintaining a memory footprint sublinear in the data size. We show non-trivial streaming algorithms, and lower bounds, for computing various statistics of data streams (counts, heavy hitters, and more) as well as for graph problems.
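As a taste of the tutorial's subject matter, here is a minimal sketch of one classical one-pass algorithm for the heavy-hitters problem mentioned above, the Misra-Gries summary (the specific algorithms covered in the tutorial may differ; this one is a standard illustration):

```python
def misra_gries(stream, k):
    """Misra-Gries heavy-hitters summary: one pass, at most k counters.
    Any item occurring more than len(stream)/(k+1) times is guaranteed to
    survive in the returned table; the stored counts are underestimates."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # No room: decrement every counter and drop those reaching zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a"] * 50 + ["b"] * 30 + list("cdefg") * 4
print(misra_gries(stream, k=2))  # {'a': 30, 'b': 10}: both survive
```

The memory footprint is O(k) counters regardless of the stream length, which is the sublinearity the abstract refers to.
-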
Streaming algorithms: a tutorial
Jelani Osei Nelson (ICTS:31831)
Streaming algorithms make one pass over a massive dataset and should answer queries on the data while maintaining a memory footprint sublinear in the data size. We show non-trivial streaming algorithms, and lower bounds, for computing various statistics of data streams (counts, heavy hitters, and more) as well as for graph problems.
-
The Proofs to Algorithms Method in Algorithm Design
Pravesh Kothari (ICTS:31837)
I will present a method, developed roughly over the past decade and a half, that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for your target problem. This technique turns out to be a powerful tool in algorithm design.
In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from a recent renaissance of efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second is drawn from combinatorial optimization: finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).
Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
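To fix ideas, here is the shape of the proofs-to-algorithms template in symbols (illustrative notation, not drawn from the talk): suppose the instance's constraints form a polynomial system A, and uniqueness of the near-optimal solution x* has a degree-d sum-of-squares proof.

```latex
% Hypothesis: a degree-d SoS proof of (list-)uniqueness,
%   \mathcal{A} \vdash_{d} \|x - x^\ast\|^2 \le \varepsilon .
% Algorithm: solve the degree-d SoS relaxation to obtain a pseudo-expectation
% \tilde{\mathbb{E}} satisfying \mathcal{A}; degree-d pseudo-expectations
% respect degree-d SoS proofs, so
\[
  \tilde{\mathbb{E}}\,\|x - x^\ast\|^2 \le \varepsilon ,
\]
% and rounding to \hat{x} = \tilde{\mathbb{E}}[x] gives, using
% \tilde{\mathbb{E}}[p^2] \ge (\tilde{\mathbb{E}}[p])^2 for linear p,
\[
  \|\hat{x} - x^\ast\|^2 \;\le\; \tilde{\mathbb{E}}\,\|x - x^\ast\|^2 \;\le\; \varepsilon .
\]
```

The semidefinite program computing the pseudo-expectation has size polynomial in the instance for constant d, which is where the efficiency comes from.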
-
The Proofs to Algorithms Method in Algorithm Design
Pravesh Kothari (ICTS:31836)
I will present a method, developed roughly over the past decade and a half, that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for your target problem. This technique turns out to be a powerful tool in algorithm design.
In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from a recent renaissance of efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second is drawn from combinatorial optimization: finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).
Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
-
The long path to \sqrt{d} monotonicity testers
C. Seshadhri (ICTS:31839)
Since the early days of property testing, the problem of monotonicity testing has been a central object of study. Despite the simplicity of the problem, the question has led to a (still continuing) flurry of papers over the past two decades. A long-standing open problem has been to determine the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids.
This talk is about the (almost) resolution of this question, by \sqrt{d} query "path testers". The path to these results is through a beautiful theory of "directed isoperimetry", showing that classic isoperimetric theorems on the Boolean hypercube extend to the directed setting. This fact is surprising, since directed graphs/random walks are often ill-behaved and rarely yield a nice theory. These directed theorems provide an analysis of directed random walks on product domains, which lead to optimal monotonicity testers.
I will present some of the main tools used in these results, and try to provide an intuitive explanation of directed isoperimetric theorems.
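For intuition about what such a tester queries, here is a minimal sketch of the classic non-adaptive edge tester on the Boolean hypercube, the ancestor of the \sqrt{d}-query path testers in the talk (path testers compare endpoints of longer directed random walks; this simple variant needs on the order of d queries):

```python
import random

def edge_tester(f, d, num_queries):
    """Non-adaptive edge tester for monotonicity of f : {0,1}^d -> {0,1}.
    Sample a random hypercube edge (lo, hi) with lo < hi pointwise and
    reject on any witnessed violation f(lo) > f(hi)."""
    for _ in range(num_queries):
        x = [random.randint(0, 1) for _ in range(d)]
        i = random.randrange(d)          # direction of the edge
        lo, hi = x.copy(), x.copy()
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return "reject"              # a violated edge certifies non-monotonicity
    return "accept"

# An anti-dictator function is far from monotone and is caught quickly.
f = lambda x: 1 - x[0]
print(edge_tester(f, d=10, num_queries=200))
```

Directed isoperimetry enters when bounding how many violated edges (or walks) a function far from monotone must exhibit.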
-
The long path to \sqrt{d} monotonicity testers
C. Seshadhri (ICTS:31838)
Since the early days of property testing, the problem of monotonicity testing has been a central object of study. Despite the simplicity of the problem, the question has led to a (still continuing) flurry of papers over the past two decades. A long-standing open problem has been to determine the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids.
This talk is about the (almost) resolution of this question, by \sqrt{d} query "path testers". The path to these results is through a beautiful theory of "directed isoperimetry", showing that classic isoperimetric theorems on the Boolean hypercube extend to the directed setting. This fact is surprising, since directed graphs/random walks are often ill-behaved and rarely yield a nice theory. These directed theorems provide an analysis of directed random walks on product domains, which lead to optimal monotonicity testers.
I will present some of the main tools used in these results, and try to provide an intuitive explanation of directed isoperimetric theorems.
-
Using Markov Chains before they mix.
Prasad Raghavendra (ICTS:31835)
Markov chains are among the most popular sampling algorithms, both in theory and in practice. There is a vast theory on understanding the mixing times of Markov chains. But what if the Markov chain does not mix fast? Can we still use such Markov chains in downstream applications of sampling, and what theoretical guarantees can we show about these chains? In this talk, we will define a notion of "locally stationary measure" -- an analogue of a local optimum in convex optimization. We will see some generic methods for analyzing the structure of the distributions that non-mixing Markov chains sample from, along with applications to finding large independent sets in graphs and finding planted cuts in random graphs. Finally, we will conclude the talk with a set of open questions on locally stationary measures. Based on joint work with Kuikui Liu, Prasad Raghavendra, Amit Rajaraman, and David X Wu.
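As a concrete example of a chain one might run short of its mixing time, here is a minimal sketch of Glauber dynamics for the hardcore model on independent sets (an illustrative choice; the talk's specific chains and applications may differ):

```python
import random

def glauber_independent_set(adj, lam, steps, rng=random):
    """Glauber dynamics for the hardcore model: the stationary distribution
    weights an independent set S by lam**len(S). We run for a fixed budget,
    with no guarantee that this is long enough to mix; the talk asks what
    can be said about the resulting 'locally stationary' samples."""
    n = len(adj)
    in_set = [False] * n
    for _ in range(steps):
        v = rng.randrange(n)
        if any(in_set[u] for u in adj[v]):
            in_set[v] = False            # a neighbor is occupied: v must stay out
        else:
            in_set[v] = rng.random() < lam / (1.0 + lam)  # occupy w.p. lam/(1+lam)
    return [v for v in range(n) if in_set[v]]

# Example: a 5-cycle given as adjacency lists.
cycle = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
print(glauber_independent_set(cycle, lam=2.0, steps=1000))
```

Every update preserves independence of the occupied set, so even unmixed runs output valid independent sets; the question is what distribution over them the truncated chain induces.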
-
Using Markov Chains before they mix.
Prasad Raghavendra (ICTS:31829)
Markov chains are among the most popular sampling algorithms, both in theory and in practice. There is a vast theory on understanding the mixing times of Markov chains. But what if the Markov chain does not mix fast? Can we still use such Markov chains in downstream applications of sampling, and what theoretical guarantees can we show about these chains? In this talk, we will define a notion of "locally stationary measure" -- an analogue of a local optimum in convex optimization. We will see some generic methods for analyzing the structure of the distributions that non-mixing Markov chains sample from, along with applications to finding large independent sets in graphs and finding planted cuts in random graphs. Finally, we will conclude the talk with a set of open questions on locally stationary measures. Based on joint work with Kuikui Liu, Prasad Raghavendra, Amit Rajaraman, and David X Wu.