Search results from ICTS-TIFR
Hypergraph Rainbow Problems, Kikuchi Matrices, and Local Codes
Pravesh Kothari (ICTS:31738)
The classical rainbow cycle conjecture of Keevash, Mubayi, Verstraëte, and Sudakov says that every properly edge-colored graph of average degree $\Omega(\log n)$ contains a "rainbow" cycle, i.e., a cycle whose edges all have distinct colors. Recent advances have almost resolved this conjecture, up to an additional $\log \log n$ factor.
In this talk, I will describe hypergraph variants of the graph rainbow problem and show how they are intimately related to the existence of (linear) locally decodable and locally correctable codes.
I will then describe the "Kikuchi Matrix Method" for solving such problems - a general principle that converts hypergraph rainbow problems into related graph counterparts. As a consequence, we obtain an exponential lower bound for linear 3-query locally correctable codes, a cubic lower bound for 3-query locally decodable codes (LDCs), and more generally a $k^{q/(q-2)}$ lower bound for all $q$-query LDCs. The case of even $q$ has been known since 2004, but the best known bounds for odd $q$ were previously obtained by treating a $q$-query code as a $(q+1)$-query code instead.
For those familiar with work in this area, this will be a combinatorial viewpoint on the spectral methods that yield similar results when the code is restricted to be linear.
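(In symbols, as a hedged restatement of the quantitative claim rather than the abstract's wording, assuming $k$ denotes the message length and $n$ the codeword length: for every $q$-query LDC,

$$ n \;\ge\; \tilde{\Omega}\!\left(k^{\frac{q}{q-2}}\right), $$

up to polylogarithmic factors; for $q=3$ this is the cubic bound $n \ge \tilde{\Omega}(k^3)$ mentioned above.)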
Based on joint works with Omar Alrabiah, Arpon Basu, David Munha-Correia, Venkat Guruswami, Tim Hsieh, Peter Manohar, Sidhanth Mohanty, Andrew Lin, and Benny Sudakov.
-
List Decoding Expander-Based Codes up to Capacity in Near-Linear Time
Shashank Srivastava (ICTS:31736)
We will talk about a new framework, based on graph regularity lemmas, for list decoding and list recovery of codes based on spectral expanders. These constructions often proceed by showing that the distance of local constant-sized codes can be lifted to a distance lower bound for the global code. The regularity framework allows us to similarly lift list-size bounds for the local base codes to list-size bounds for the global code.
This allows us to obtain novel combinatorial bounds for list decoding and list recovery up to optimal radius, for Tanner codes of Sipser and Spielman, and for the distance amplification scheme of Alon, Edmonds, and Luby. Further, using existing algorithms for computing weak-regularity decompositions of sparse graphs in (randomized) near-linear time, these tasks can be performed in near-linear time.
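(As a reminder of the main object involved, this is the standard definition rather than anything stated in the abstract: given a $d$-regular graph $G$ and an inner code $C_0 \subseteq \mathbb{F}_2^d$, the Tanner code of Sipser and Spielman is

$$ T(G, C_0) \;=\; \{\, x \in \mathbb{F}_2^{E(G)} \;:\; x|_{E(v)} \in C_0 \ \text{for every vertex } v \,\}, $$

where $x|_{E(v)}$ is the restriction of $x$ to the $d$ edges incident to $v$, under some fixed ordering. When $G$ is a spectral expander and $C_0$ has good distance, the distance of $C_0$ lifts to a distance bound for $T(G, C_0)$; the framework above lifts list-size bounds in the same local-to-global manner.)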
Based on joint work with Madhur Tulsiani.
DIMACS (Rutgers University) and Institute for Advanced Study
Expander graphs and optimally list-decodable codes
Madhur Tulsiani (ICTS:31732)
We construct a new family of explicit codes that are list decodable to capacity and achieve an optimal list size of $O(1/\epsilon)$. In contrast to existing explicit constructions of codes achieving list decoding capacity, our arguments do not rely on algebraic structure but utilize simple combinatorial properties of expander graphs.
Our construction is based on a celebrated distance amplification procedure due to Alon, Edmonds, and Luby [FOCS'95], which transforms any high-rate code into one with near-optimal rate-distance tradeoff. We generalize it to show that the same procedure can be used to transform any high-rate code into one that achieves list decoding capacity. Our proof can be interpreted as a "local-to-global" phenomenon for (a slight strengthening of) the generalized Singleton bound.
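(For reference, the following is the standard statement of the generalized Singleton bound of Shangguan and Tamo, not a claim taken from the abstract: a code of rate $R$ that is list-decodable with list size $L$ from a $\rho$ fraction of errors must satisfy

$$ \rho \;\le\; \frac{L}{L+1}\,(1 - R), $$

so decoding at radius $\rho = 1 - R - \epsilon$, i.e., at list decoding capacity, forces $L \ge \frac{1-R-\epsilon}{\epsilon} = \Omega(1/\epsilon)$. The $O(1/\epsilon)$ list size above matches this up to constant factors.)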
As a corollary of our result, we also obtain the first explicit construction of LDPC codes achieving list decoding capacity, and in fact arbitrarily close to the generalized Singleton bound.
Based on joint work with Fernando Granha Jeronimo, Tushant Mittal, and Shashank Srivastava.
-
Efficient PCPs from HDX
Mitali Bafna (ICTS:31701)
The theory of probabilistically checkable proofs (PCPs) shows how to encode a proof of any theorem into a format in which the theorem's correctness can be verified by making only a constant number of queries to the proof. The PCP Theorem [ALMSS] is a fundamental result in computer science with far-reaching consequences in hardness of approximation, cryptography, and cloud computing. A PCP has two important parameters: 1) the size of the encoding, and 2) the soundness, i.e., the probability that the verifier accepts an incorrect proof, both of which we wish to minimize.
In 2005, Dinur gave a surprisingly elementary and purely combinatorial proof of the PCP theorem that relies only on tools such as graph expansion, while also giving the first construction of 2-query PCPs with quasi-linear size and constant soundness (close to 1). Our work improves upon Dinur's PCP and constructs 2-query, quasi-linear size PCPs with arbitrarily small constant soundness. As a direct consequence, assuming the exponential time hypothesis, no approximation algorithm for 3-SAT can achieve an approximation ratio significantly better than 7/8 in time 2^{n/polylog n}.
In this talk, I will introduce PCPs and discuss the components that go into our proof. This talk is based on joint work with Dor Minzer and Nikhil Vyas, with an appendix by Zhiwei Yun.
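(In notation, as my paraphrase of the result stated above rather than the authors' formulation: for every constant $\delta > 0$, 3-SAT on instances of size $n$ has a 2-query PCP with

$$ \text{proof length} \;=\; n \cdot \mathrm{polylog}(n), \qquad \text{soundness error} \;\le\; \delta, $$

presumably over an alphabet of constant size depending on $\delta$; combining such a PCP with the exponential time hypothesis yields the stated $7/8$ inapproximability for 3-SAT in time $2^{n/\mathrm{polylog}\, n}$.)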
-
GAP codes
Swastik Kopparty (ICTS:31723)
GAP codes are error-correcting codes based on evaluating degree-$d$, $m$-variate polynomials at some geometrically arranged points in $F_q^m$. For constant $m$ and any $\epsilon > 0$, these codes can achieve rate $1-\epsilon$ and distance $\Omega(1)$. For Reed-Muller codes, where the evaluation points form all of $F_q^m$, the rate is exponentially small in $m$.
GAP codes turn out to have a very simple description from the point of view of HDXs: they are Tanner codes on the complete $m$-dimensional complex with Reed-Solomon codes as the inner code.
I will talk about their construction, how to decode them, and the somewhat surprising fact that despite having far fewer evaluation points than Reed-Muller codes, GAP codes are also locally testable with $O(n^{1/m})$ queries.
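(A back-of-the-envelope comparison, based on my reading of the parameters above and not taken from the abstract: a Reed-Muller code of degree $d$ in $m$ variables over $F_q$ has

$$ \text{rate} \;=\; \frac{\binom{m+d}{m}}{q^m} \;\approx\; \frac{1}{m!}\left(\frac{d}{q}\right)^{m} \quad \text{for } d \gg m, $$

so even with $d$ close to $q$ the rate decays at least as fast as $1/m!$, i.e., exponentially in $m$. Restricting the evaluations to geometrically arranged points is what lets GAP codes keep rate $1-\epsilon$ for constant $m$.)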
Joint work with Harry Sha and Mrinal Kumar.
(Based on part of this paper: https://arxiv.org/abs/2410.13470)
Kneser's theorem for codes and $\ell$-divisible set families
Gilles Zémor (ICTS:31782)
A $k$-wise $m$-divisible set family is a collection $F$ of subsets of $\{1,\ldots,n\}$ such that any intersection of $k$ sets in $F$ has cardinality divisible by $m$. If $k=m=2$, it is well-known that $|F| \le 2^{\lfloor n/2 \rfloor}$.
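(A quick illustrative example of the definition, mine rather than the abstract's: for $k=m=2$, partition $\{1,\ldots,n\}$ into $\lfloor n/2 \rfloor$ disjoint pairs and let $F$ consist of all unions of these pairs. Every member and every intersection of members is again a union of pairs, hence has even size, and

$$ |F| \;=\; 2^{\lfloor n/2 \rfloor}, $$

which is the largest size possible when $k=m=2$ by the classical "Eventown" theorem.)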
-
Concentration on HDX: Derandomization Beyond Chernoff
Max Hopkins (ICTS:31746)
Chernoff's bound states that for any $A \subseteq [N]$, the probability that a random $k$-tuple $s \in {[N] \choose k}$ fails to correctly `sample' $A$ (i.e., that the density of $A$ in $s$ deviates from its mean) decays exponentially in the dimension $k$. In 1987, Ajtai, Komlós, and Szemerédi proved the "Expander-Chernoff Theorem", a powerful derandomization of Chernoff's bound stating that one can replace ${[N] \choose k}$ with the significantly sparser family $X_G(k) \subsetneq {[N] \choose k}$ of length-$k$ paths on an expander graph $G$ while maintaining essentially the same concentration. Their result, which allows amplification without significant blow-up in size or randomness, has since become a mainstay in theoretical computer science, with breakthrough applications in derandomization, coding, pseudorandomness, cryptography, and complexity.
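(In symbols, as one standard formulation rather than a quote from the abstract: write $\mu = |A|/N$ and let $s = (v_1, \ldots, v_k)$ be either a uniformly random $k$-tuple or a random length-$k$ walk on a regular expander $G$ with second eigenvalue $\lambda$. Both Chernoff and Expander-Chernoff then take the shape

$$ \Pr\!\left[\, \Big|\tfrac{1}{k}\textstyle\sum_{i=1}^{k} \mathbf{1}[v_i \in A] - \mu\Big| \ge \epsilon \,\right] \;\le\; 2\,e^{-\Omega(\epsilon^2 (1-\lambda)\, k)}, $$

with $\lambda = 0$ recovering the fully independent case; this exponential-in-$k$ decay is what the results below recover on high dimensional expanders for higher-degree functions.)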
One natural way to view AKS is to say that expander walks are pseudorandom with respect to functions of their vertices, i.e., against "degree-1" functions. In modern complexity, especially in the context of hardness amplification and PCPs, we often need concentration against higher-degree functions, e.g., functions of edges or triples. Unfortunately, due to their inherent low-dimensionality, walks on expanders are not pseudorandom even at degree 2, and the construction of such a derandomized object has remained largely open.
In 2017, Dinur and Kaufman offered a partial resolution to this question via high dimensional expanders, a derandomized family satisfying Chebyshev-type (inverse polynomial) concentration for higher-degree functions. Their work led to breakthrough applications in agreement testing and PCPs (Bafna, Minzer, Vyas, and Yun, STOC 2025), but left an exponential gap with the known bounds for the complete hypergraph ${[N] \choose k}$ needed for further applications. In this talk, we close this gap and prove that (strong enough) HDX indeed have Chernoff-type tails. Time permitting, we will discuss the relation of these bounds to a powerful analytic inequality called reverse hypercontractivity and its applications to agreement tests with optimal soundness.
Based on joint work with Yotam Dikstein.