Search results from ICTS-TIFR
-
Tight Results for Online Convex Paging
Amit Kumar (ICTS:31847)

The online convex paging problem models a broad class of cost functions for the classical paging problem. In particular, it naturally captures fairness constraints, e.g., that no specific page (or groups of pages) suffers an "unfairly" high number of evictions, by considering $\ell_p$ norms of eviction vectors for $p>1$. The case of the $\ell_\infty$ norm has also been of special interest, and is called min-max paging.

We give tight upper and lower bounds for the convex paging problem for a broad class of convex functions. Prior to our work, only fractional algorithms were known for this general setting. Moreover, our general result also improves on prior work for special cases of the problem. For example, it implies that the randomized competitive ratio of the min-max paging problem is $\Theta(\log k\log n)$; this improves both the upper bound and the lower bound given in prior work by logarithmic factors. It also shows that the randomized and deterministic competitive ratios for $\ell_p$-norm paging are $\Theta(p\log k)$ and $\Theta(pk)$, respectively.

This is joint work with Anupam Gupta and Debmalya Panigrahi.
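To make the objective concrete, here is a minimal sketch (my illustration, not from the paper): run any eviction policy, record how many times each page is evicted, and compare the resulting eviction vector under different $\ell_p$ norms. LRU and the request sequence below are placeholder choices.

```python
from collections import OrderedDict

def lru_eviction_vector(requests, k):
    """Simulate LRU with cache size k; return the evictions-per-page vector."""
    cache = OrderedDict()              # pages kept in recency order
    evictions = {}
    for p in requests:
        if p in cache:
            cache.move_to_end(p)       # hit: refresh recency
            continue
        if len(cache) == k:            # miss with a full cache: evict LRU page
            victim, _ = cache.popitem(last=False)
            evictions[victim] = evictions.get(victim, 0) + 1
        cache[p] = True
    return evictions

def lp_cost(evictions, p):
    """The l_p norm of the eviction vector; p = inf gives min-max paging."""
    vals = list(evictions.values())
    if p == float("inf"):
        return max(vals, default=0)
    return sum(v ** p for v in vals) ** (1.0 / p)

requests = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5] * 3
ev = lru_eviction_vector(requests, k=3)
print(ev, lp_cost(ev, 1), lp_cost(ev, 2), lp_cost(ev, float("inf")))
```

For $p=1$ this essentially recovers the classical total-miss count, while $p=\infty$ penalizes only the single most-evicted page, which is the fairness notion that min-max paging captures.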
-
Efficient PCPs from HDX
Mitali Bafna (ICTS:31846)

The theory of probabilistically checkable proofs (PCPs) shows how to encode a proof of any theorem into a format where the theorem's correctness can be verified by making only a constant number of queries to the proof. The PCP theorem [ALMSS] is a fundamental result in computer science with far-reaching consequences in hardness of approximation, cryptography, and cloud computing. A PCP has two important parameters: 1) the size of the encoding, and 2) the soundness, i.e., the probability that the verifier accepts an incorrect proof; we wish to minimize both.

In 2005, Dinur gave a surprisingly elementary and purely combinatorial proof of the PCP theorem that relies only on tools such as graph expansion, while also giving the first construction of 2-query PCPs with quasi-linear size and constant soundness (close to 1). Our work improves upon Dinur's PCP and constructs 2-query, quasi-linear-size PCPs with arbitrarily small constant soundness. As a direct consequence, assuming the exponential time hypothesis, we get that no approximation algorithm for 3-SAT can achieve an approximation ratio significantly better than 7/8 in time $2^{n/\mathrm{polylog}\, n}$.

In this talk, I will introduce PCPs and discuss the components that go into our proof. This talk is based on joint work with Dor Minzer and Nikhil Vyas, with an appendix by Zhiwei Yun.
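For orientation, the parameters discussed above fit the standard PCP notation; the following is the textbook statement of the PCP theorem, not a result specific to this work.

```latex
% The PCP theorem [ALMSS]: every NP language has proofs checkable with
% O(log n) randomness and O(1) queries, with soundness bounded away from 1.
\[
  \mathsf{NP} \;=\; \mathsf{PCP}\bigl[O(\log n),\, O(1)\bigr].
\]
% A 2-query PCP of size S(n) and soundness s: the verifier reads two symbols
% of a proof of length S(n) and accepts any false claim with probability <= s.
```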
-
Noise stability and sensitivity in continuum percolation
Yogeshwaran D (ICTS:31843)

We look at the stability and sensitivity of planar percolation models generated by a Poisson point process under re-sampling dynamics. Noise stability refers to whether certain global percolation events remain essentially unchanged when a small fraction of the points is re-sampled, while noise sensitivity means these events become nearly independent even when only an arbitrarily small fraction of the points is re-sampled.

To analyze these properties, one wants to estimate the chaos coefficients in the Wiener-Itô chaos expansion for Poisson functionals, the analogue of the Fourier-Walsh expansion for functionals of random bits (recalled in the display below). Motivated by substantial progress in the case of random bits, we introduce two tools to estimate the chaos coefficients. The first approach is via stopping sets, which serve as a continuum analogue of randomized algorithms; the second uses pivotal and spectral samples, which provide sharper bounds.
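For reference, these are the standard definitions in the random-bit setting that the Poisson-functional theory parallels (textbook material, not specific to these papers):

```latex
% Fourier-Walsh expansion of f : {-1,1}^n -> R and its noise stability at
% correlation rho (each bit is independently re-sampled with probability 1-rho).
\[
  f \;=\; \sum_{S \subseteq [n]} \widehat{f}(S)\,\chi_S,
  \qquad
  \mathrm{Stab}_\rho[f] \;=\; \sum_{S \subseteq [n]} \rho^{|S|}\,\widehat{f}(S)^2 .
\]
% The chaos coefficients in the Wiener-Ito expansion play the role of the
% level weights W_m = \sum_{|S| = m} \widehat{f}(S)^2: stability means the
% weight concentrates on low levels, sensitivity means it escapes to high ones.
```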
We illustrate these methods using two well-known models: Poisson Boolean percolation, where unit balls are placed at Poisson points, and Voronoi percolation, where Voronoi cells based on Poisson points are randomly retained. Our focus will be on sharp noise sensitivity or stability of crossing events, specifically whether a large rectangle is traversed by a connected component of the percolation model.
The talk is based on joint projects with Chinmoy Bhattacharjee, Guenter Last, and Giovanni Peccati.
-
Detection and recovery of latent geometry in random graphs (Online)
Siqi Liu (ICTS:31830)

In recent years, random graph models with latent geometric structure have received increasing attention. These models typically involve sampling random points from a metric space, followed by independently adding edges with probabilities that depend on the distances between the corresponding point pairs. A central computational challenge is to detect the underlying geometry and recover the latent coordinates of the vertices based solely on the observed graph. Unlike classical random graph models, geometric models exhibit richer structural properties, such as correlations between edges. These features make them more realistic representations of real-world networks and data. However, our current understanding of the information-theoretic and computational thresholds for detection in these models remains limited. In this talk, we will survey known algorithmic results and computational-hardness findings for several random geometric graph models. We will also highlight open directions for future research.
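A minimal sketch of the detection problem (my illustration, with placeholder parameters; triangle counting is a standard geometry-detection statistic, not necessarily the one from the talk): sample a random geometric graph on the unit sphere and compare its triangle count with an Erdős–Rényi graph of the same edge density.

```python
import numpy as np

rng = np.random.default_rng(0)

def rgg_sphere(n, d, tau):
    """Connect i ~ j iff <x_i, x_j> >= tau for i.i.d. uniform points on S^{d-1}."""
    x = rng.normal(size=(n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)   # project onto the sphere
    A = (x @ x.T >= tau).astype(int)
    np.fill_diagonal(A, 0)
    return A

def erdos_renyi(n, p):
    """G(n, p) on the same vertex count, for comparison."""
    U = np.triu((rng.random((n, n)) < p).astype(int), 1)
    return U + U.T

def triangle_count(A):
    return int(np.trace(A @ A @ A)) // 6

n, d, tau = 300, 5, 0.2
G = rgg_sphere(n, d, tau)
p_hat = G.sum() / (n * (n - 1))            # match edge densities
H = erdos_renyi(n, p_hat)
print(triangle_count(G), triangle_count(H))  # latent geometry inflates triangles
```

The edge correlations mentioned in the abstract are exactly what this statistic picks up: two edges sharing a vertex are positively correlated in the geometric model, so triangles are over-represented relative to $G(n,p)$.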
-
Sampling, Privacy, and Spectral Geometry: Insights from Low-Rank Approximation
Nisheeth Vishnoi (ICTS:31848)

This talk explores how problems in private optimization, specifically low-rank matrix approximation, give rise to novel tools and results in sampling and statistical physics. I will present two recent advances:

1. Sampling from Harish-Chandra–Itzykson–Zuber (HCIZ) distributions via private optimization: we introduce an efficient algorithm for computing private low-rank approximations and show how its structure enables efficient sampling from HCIZ measures, which are central to mathematical physics and random matrix theory.

2. Spectral sampling and utility of the Gaussian Mechanism: we provide a new analysis of the Gaussian Mechanism for differential privacy through the lens of Dyson Brownian motion, yielding refined spectral sampling guarantees and new bounds on eigenvalue gaps in random matrices.

These results illustrate how sampling tasks arising from privacy constraints can lead to powerful connections between random matrix theory, optimization, sampling, and statistical physics.
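As background for the second item, here is a minimal sketch of the standard Gaussian Mechanism applied to low-rank approximation (my illustration, not the algorithm analyzed in the talk). The Frobenius-norm sensitivity `sens` is an assumed bound on how much neighboring datasets can change the input matrix.

```python
import numpy as np

def private_rank_k(A, k, eps, delta, sens=1.0, seed=0):
    """Gaussian Mechanism for rank-k approximation of a symmetric matrix A.

    `sens` is the assumed Frobenius-norm sensitivity between neighboring
    datasets; sigma uses the classical calibration (valid for eps <= 1)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    sigma = sens * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    E = rng.normal(scale=sigma, size=(n, n))
    E = (E + E.T) / np.sqrt(2.0)             # symmetric (GOE-style) noise
    w, V = np.linalg.eigh(A + E)             # spectrum of the noisy matrix
    top = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return (V[:, top] * w[top]) @ V[:, top].T

A = np.ones((50, 50))                        # toy rank-1 input
A_hat = private_rank_k(A, k=1, eps=1.0, delta=1e-6)
print(np.linalg.norm(A - A_hat))             # utility: error due to the noise
```

The talk's Dyson-Brownian-motion viewpoint refines the utility analysis of exactly this kind of perturbed eigendecomposition, via eigenvalue-gap bounds for the noisy matrix.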
-
The Localization Method for Proving High-Dimensional Inequalities
Santosh Vempala (ICTS:31841)

We review the localization method, pioneered by Lovász and Simonovits (1993) and developed substantially by Eldan (2012), to prove inequalities in high dimension. At its heart, the method uses a sequence of transformations to convert an arbitrary instance into a highly structured one (often even one-dimensional). We will work out some illustrative examples.
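One standard formulation of the core step, stated here for orientation (the Lovász–Simonovits localization lemma, paraphrased; see the 1993 paper for the precise hypotheses):

```latex
% Localization lemma (Lovasz-Simonovits 1993, informal statement): a
% two-function integral inequality on R^n reduces to a one-dimensional
% inequality along a "needle" (a segment carrying a polynomial weight).
\[
  \int_{\mathbb{R}^n} g > 0 \;\text{ and }\; \int_{\mathbb{R}^n} h > 0
  \;\Longrightarrow\;
  \exists\, a, b \in \mathbb{R}^n,\ \ell \text{ linear, } \ell \ge 0 :
\]
\[
  \int_0^1 \ell(t)^{\,n-1}\, g\bigl((1-t)a + tb\bigr)\, dt > 0
  \quad\text{and}\quad
  \int_0^1 \ell(t)^{\,n-1}\, h\bigl((1-t)a + tb\bigr)\, dt > 0,
\]
% for g, h integrable and lower semi-continuous. Proving the contrapositive
% for all needles then yields the n-dimensional inequality.
```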
-
The Localization Method for Proving High-Dimensional Inequalities
Santosh Vempala (ICTS:31840)

We review the localization method, pioneered by Lovász and Simonovits (1993) and developed substantially by Eldan (2012), to prove inequalities in high dimension. At its heart, the method uses a sequence of transformations to convert an arbitrary instance into a highly structured one (often even one-dimensional). We will work out some illustrative examples.
-
Streaming algorithms: a tutorial
Jelani Nelson (ICTS:31842)

Streaming algorithms make one pass over a massive dataset and should answer queries on the data while maintaining a memory footprint sublinear in the data size. We show non-trivial streaming algorithms, and lower bounds, for computing various statistics of data streams (counts, heavy hitters, and more) as well as for graph problems.
-
Streaming algorithms: a tutorial
Jelani Osei Nelson (ICTS:31831)

Streaming algorithms make one pass over a massive dataset and should answer queries on the data while maintaining a memory footprint sublinear in the data size. We show non-trivial streaming algorithms, and lower bounds, for computing various statistics of data streams (counts, heavy hitters, and more) as well as for graph problems.
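A minimal sketch of the flavor of algorithm covered (the classic Misra-Gries summary for heavy hitters; my illustration, not necessarily the tutorial's presentation): with $k-1$ counters, one pass suffices to find every item occurring more than $n/k$ times in a stream of length $n$.

```python
def misra_gries(stream, k):
    """One pass, at most k-1 counters; every item with frequency > n/k survives."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            for key in list(counters):       # decrement all; drop zeros
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters                          # candidates (counts underestimate)

# Example: "a" and "b" occur more than n/4 times, so both must appear.
stream = ["a"] * 50 + ["b"] * 30 + list("cdefgh") * 3
print(misra_gries(stream, k=4))
```

The memory footprint is $O(k)$ counters regardless of the stream length, the hallmark sublinearity of the streaming model.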
-
The Proofs to Algorithms Method in Algorithm Design
Pravesh Kothari (ICTS:31837)

I will present a method, developed roughly over the past decade and a half, that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for a target problem. This technique turns out to be a powerful tool in algorithm design.

In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from the recent renaissance of efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second is drawn from combinatorial optimization: finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).
Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
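To ground the "SDP relaxation + rounding" template mentioned above, here is the classic Goemans-Williamson Max-Cut pipeline (a toy instance of the template, not the sum-of-squares algorithms from the tutorial; assumes numpy and cvxpy are installed):

```python
import numpy as np
import cvxpy as cp

def max_cut_sdp_round(W, seed=0):
    """Goemans-Williamson: solve the Max-Cut SDP, then hyperplane-round."""
    n = W.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.diag(X) == 1]           # X = V V^T, unit rows
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, constraints).solve()
    w, U = np.linalg.eigh(X.value)                    # factor X ~= V V^T
    V = U * np.sqrt(np.clip(w, 0, None))
    g = np.random.default_rng(seed).normal(size=n)    # random hyperplane
    signs = np.where(V @ g >= 0, 1, -1)
    cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
              if signs[i] != signs[j])
    return signs, cut

# A 5-cycle: the maximum cut has value 4.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
print(max_cut_sdp_round(W))
```

The proofs-to-algorithms method generalizes this pattern: a low-degree sum-of-squares proof that near-optimal solutions are (list-)unique dictates both the relaxation and how to round it.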
-
The Proofs to Algorithms Method in Algorithm Design
Pravesh Kothari (ICTS:31836)

I will present a method, developed roughly over the past decade and a half, that reduces efficient algorithm design to finding "low-degree sum-of-squares" certificates -- thus proofs -- of uniqueness (or, more generally, "list uniqueness") of near-optimal solutions in input instances. This is a principled way of designing and analyzing a semidefinite programming relaxation + rounding algorithm for a target problem. This technique turns out to be a powerful tool in algorithm design.

In this tutorial, I will introduce this technique and cover special cases of a couple of recent important applications. The first comes from the recent renaissance of efficient high-dimensional robust statistical estimation, where the proofs-to-algorithms method played a central role in the eventual resolution of the robust Gaussian mixture learning problem (dating back to Pearson in 1894, with a concrete version due to Vempala in 2010). The second is drawn from combinatorial optimization: finding planted cliques in the semirandom model, answering a question dating back to Feige and Kilian (2001) and raised again more recently by Feige (2019) and Steinhardt (2018).
Both applications are glimpses of a rich research area in which young researchers may find interesting directions for further research.
-
The long path to $\sqrt{d}$ monotonicity testers
C. Seshadhri (ICTS:31839)

Since the early days of property testing, monotonicity testing has been a central problem of study. Despite the simplicity of the problem, the question has led to a (still continuing) flurry of papers over the past two decades. A long-standing open problem has been to determine the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids.

This talk is about the (almost complete) resolution of this question by $\sqrt{d}$-query "path testers". The path to these results runs through a beautiful theory of "directed isoperimetry", showing that classic isoperimetric theorems on the Boolean hypercube extend to the directed setting. This fact is surprising, since directed graphs and random walks are often ill-behaved and rarely yield a nice theory. These directed theorems provide an analysis of directed random walks on product domains, which leads to optimal monotonicity testers.

I will present some of the main tools used in these results, and try to provide an intuitive explanation of the directed isoperimetric theorems.
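For intuition, here is the simplest relative of these testers, the classical edge tester on the hypercube (a minimal sketch; the talk's path testers walk further than one step, and the trial count below is a placeholder rather than the optimal query bound):

```python
import random

def edge_tester(f, n, trials=1000, seed=0):
    """Test f : {0,1}^n -> {0,1} for monotonicity via random hypercube edges."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = x.copy(), x.copy()
        lo[i], hi[i] = 0, 1                  # the directed edge lo -> hi
        if f(tuple(lo)) == 1 and f(tuple(hi)) == 0:
            return False                     # a violated edge: f is not monotone
    return True                              # accepted: plausibly close to monotone

n = 9
maj = lambda x: int(sum(x) > n // 2)         # monotone: passes
print(edge_tester(maj, n), edge_tester(lambda x: 1 - maj(x), n))  # second fails
```

Directed isoperimetry is what relates the fraction of violated edges (what this tester finds) to the distance from monotonicity, and the $\sqrt{d}$ testers sharpen this by walking along longer directed paths.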