Theory Lunch

Once a week, the Berkeley theory community gets together, socializes, has lunch, and then listens to an informal board presentation on open questions, new results, old results worth not forgetting, or whatever is fit to entertain a crowd of theoreticians. At the theory lunch, we do not believe in using slides, waiting until the end to ask questions, or stopping the speaker when he or she runs out of time.

Unless otherwise specified, the event occurs Wednesdays 12:30-1:15pm (lunch starts 30 minutes before the talk, from 12:00 to 12:30pm) in the Wozniak Lounge (on the 4th floor of Soda Hall). If you want to join the mailing list, click here.

Fall 2024

  • Dec 4: Binghui Peng on Theoretical Limitations of Multi-Layer Transformers

    Transformers, especially the decoder-only variants, are the backbone of most modern large language models; yet we do not have much understanding of their expressive power except for the simple $1$-layer case. Due to the difficulty of analyzing multi-layer models, all previous work relies on unproven complexity conjectures to show limitations for multi-layer Transformers. In this work, we prove the first unconditional lower bound against multi-layer decoder-only transformers. For any constant $L$, we prove that any $L$-layer decoder-only transformer needs a polynomial model dimension ($n^{\Omega(1)}$) to perform sequential composition of $L$ functions over an input of $n$ tokens.
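
    In symbols, the hard task is iterated composition. One natural formalization (our paraphrase, not necessarily the paper's exact encoding): given functions $f_1, \dots, f_L : [n] \to [n]$ described in the prompt and a start index $x \in [n]$, output

    $$f_L(f_{L-1}(\cdots f_1(x) \cdots)),$$

    a task that, by consequence (1) below, becomes exponentially easier with a single extra layer.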

    As a consequence, our results give:

    • (1) the first (unconditional) depth-size trade-off for multi-layer transformers, exhibiting that the $L$-step composition task is exponentially harder for $L$-layer models compared to $(L+1)$-layer ones;
    • (2) an unconditional separation between encoder and decoder, exhibiting a hard task for decoders that can be solved by an exponentially shallower and smaller encoder;
    • (3) a provable advantage of chain-of-thought, exhibiting a task that becomes exponentially easier with chain-of-thought.

    On the technical side, we propose the multi-party autoregressive communication model that captures the computation of a decoder-only Transformer. We also introduce a new proof technique that finds a certain indistinguishable decomposition of all possible inputs iteratively for proving lower bounds in this model. We believe our new communication model and proof technique will be helpful to further understand the computational power of transformers.

  • Nov 20: Omar Alrabiah on Near-Tight Bounds for 3-Query Locally Correctable Binary Linear Codes via Rainbow Cycles

    In this talk, we will show that a binary linear code of block length $n$ that is locally correctable with 3 queries against a constant fraction of adversarial errors must have dimension at most $O(\log^2 n \log \log n)$. This is almost tight in view of quadratic Reed-Muller codes being a 3-query locally correctable code (LCC) with dimension $\Theta(\log^2 n)$. Our result improves, for the binary field case, the $O(\log^8 n)$ bound obtained in the recent breakthrough of (Kothari and Manohar, 2023) (and the more recent improvement to $O(\log^4 n)$ for linear codes by (Yankovitz, 2024)).

    Previous bounds for 3-query linear LCCs proceed by constructing a 2-query locally decodable code (LDC) from the 3-query linear LCC and applying the strong bounds known for the former. Our approach is more direct and proceeds by bounding the covering radius of the dual code, borrowing inspiration from (Iceland and Samorodnitsky, 2018). That is, we show that if $x \to (v_1 \cdot x, v_2 \cdot x, \ldots, v_n \cdot x)$ is an arbitrary encoding map for the 3-query LCC, then all binary vectors of length $k$ can be written as a $\tilde{O}(\log n)$-sparse linear combination of the $v_i$'s, which immediately implies $k \leq \tilde{O}(\log^2 n)$ via a simple counting argument.
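
    To spell out the final counting step (a standard argument): if every $x \in \{0,1\}^k$ is an $s$-sparse linear combination of $n$ fixed vectors with $s = \tilde{O}(\log n)$, then

    $$2^k \;\le\; \sum_{i=0}^{s} \binom{n}{i} \;\le\; (n+1)^s, \qquad \text{so} \qquad k \;\le\; s \log_2(n+1) \;=\; \tilde{O}(\log^2 n).$$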

    The proof of this fact proceeds by iteratively reducing the size of any arbitrary linear combination of at least $\tilde{\Omega}(\log n)$ of the $v_i$'s. We achieve this using the recent breakthrough result of (Alon, Bucić, Sauermann, Zakharov, and Zamir, 2023) on the existence of rainbow cycles in properly edge-colored graphs, applied to graphs capturing the linear dependencies underlying the local correction property.

    This is based on joint work with my advisor Venkat Guruswami.

  • Nov 13: Abhiram Kothapalli on Zero-Knowledge Folding Schemes

    Zero-knowledge proofs are powerful cryptographic protocols that allow a verifier to interactively check that a prover knows a secret witness for a claimed statement.

    In this talk, I will present my work on introducing a related primitive, zero-knowledge folding schemes. A folding scheme is an interactive protocol between a prover and a verifier that efficiently reduces the task of checking two NP statements into the task of checking a single statement.
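
    As a loose illustration of the folding template (a toy for linear relations over a small prime field, with names of our choosing; not Kothapalli's actual construction), the sketch below folds two claims of the form "I know $w_i$ with $A w_i = b_i$" into a single claim via a random linear combination, so the verifier ends up checking one statement:

        import numpy as np

        p = 8191                            # a small prime modulus (toy choice)
        rng = np.random.default_rng(0)

        def fold(A, b1, w1, b2, w2):
            """Fold two claims 'A @ w_i = b_i (mod p)' into a single claim."""
            r = int(rng.integers(1, p))     # verifier's random challenge
            b = (b1 + r * b2) % p           # folded statement (public)
            w = (w1 + r * w2) % p           # folded witness (prover's side)
            return b, w

        A = rng.integers(0, p, size=(3, 5))
        w1, w2 = rng.integers(0, p, size=5), rng.integers(0, p, size=5)
        b1, b2 = (A @ w1) % p, (A @ w2) % p
        b, w = fold(A, b1, w1, b2, w2)
        assert ((A @ w) % p == b).all()     # one check now vouches for both claims

    Soundness of the toy rests on the fact that a random challenge $r$ exposes a false claim except with probability $O(1/p)$; real folding schemes additionally keep statements committed and compact.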

    My work demonstrates that folding schemes, and more generally interactive reductions, are becoming increasingly important for developing modern zero-knowledge proofs.

  • Nov 6: June Vuong on Efficiently Learning and Sampling Multimodal Distributions with Data-Based Initialization

    We consider the problem of sampling a multimodal distribution with a Markov chain given a small number of samples from the stationary measure. The setting is motivated by learning and generative modeling, where one wants to generate samples from a complex distribution given a small number of data points.

    While it is known that multimodality prevents fast mixing, we show that if the Markov chain has a $k$-th order spectral gap, initialization from a set of $O(k/\epsilon^2)$ samples from the stationary distribution will, with high probability over the samples, efficiently generate a sample whose conditional law is $\epsilon$-close in TV distance to the stationary measure. In particular, this applies to mixtures of $k$ distributions satisfying a Poincaré inequality, with faster convergence when they satisfy a log-Sobolev inequality.
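
    A minimal numerical sketch of the data-based-initialization pipeline (a 1D bimodal toy with an exact score; the constants and step size are illustrative, not from the paper):

        import numpy as np

        rng = np.random.default_rng(1)

        def score(x):
            """d/dx log density of the bimodal mixture 0.5*N(-4,1) + 0.5*N(4,1)."""
            a, b = np.exp(-0.5 * (x + 4) ** 2), np.exp(-0.5 * (x - 4) ** 2)
            return (-(x + 4) * a - (x - 4) * b) / (a + b)

        data = np.concatenate([rng.normal(-4, 1, 50), rng.normal(4, 1, 50)])

        def sample(n_steps=2000, eta=0.01):
            x = rng.choice(data)            # data-based initialization
            for _ in range(n_steps):        # unadjusted Langevin dynamics
                x = x + eta * score(x) + np.sqrt(2 * eta) * rng.normal()
            return x

    Started from a uniform point, the chain would need exponentially long to cross between the modes; started from a data point, it only has to mix within the mode it lands in, which is the intuition the theorem makes precise.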

    Our bounds are stable to perturbations to the Markov chain, and in particular work for Langevin diffusion with score estimation error, as well as Glauber dynamics combined with approximation error from pseudolikelihood estimation. This justifies the success of data-based initialization for score matching methods despite slow mixing for the data distribution, and improves the results of [KV23] to have linear, rather than exponential, dependence on $k$.

    As a consequence of our results, we show for the first time that a natural class of low-complexity Ising measures can be efficiently learned from samples.

    Joint work with Holden Lee and Frederic Koehler.

  • Oct 30: Vatsal Sharan on Memory as a Lens to Understand Efficient Learning and Optimization

    What is the role of memory in learning and optimization? The optimal convergence rates (measured in terms of the number of oracle queries or samples needed) for various optimization problems are achieved by computationally expensive optimization techniques, such as second-order methods and cutting-plane methods.

    We will discuss whether simpler, faster, and memory-limited algorithms, such as gradient descent, can achieve these optimal convergence rates for the prototypical optimization problem of minimizing a convex function with access to a gradient or a stochastic gradient oracle.

    Our results hint at a curious dichotomy—significantly improving the convergence rate of known memory-efficient techniques (linear-memory variants of gradient descent) may not be possible without using substantially more memory (quadratic memory for many of these problems). Therefore, memory could be a useful discerning factor to provide a clear separation between 'efficient' and 'expensive' techniques.

  • Oct 23: William He on Pseudorandomness Properties of Random Reversible Circuits

    Motivated by cryptography, quantum information theory, circuit complexity, and derandomization, there has been significant recent progress in the study of random permutations resembling a completely random permutation of n-bit strings. Of particular interest is the case where the measure of "resemblance" is approximation of the k-th moments of the matrix representations. Such random permutations are known as approximate k-wise independent permutations.

    In this talk, I will discuss some recent results that show that small random reversible circuits compute approximate k-wise independent permutations:

    i) We show that a random reversible circuit with $\tilde{O}(nk)$ gates computes a constant-approximate $k$-wise independent permutation. This result implies a generalization of Shannon's circuit lower bound argument.

    ii) We show that a random reversible circuit with $\tilde{O}(nk^2)$ layers of 1D-local gates arranged in a brickwork architecture computes an $\exp(-nk)$-approximate $k$-wise independent permutation. Using this, we show that a random reversible circuit with $\tilde{O}(\sqrt{n}k^3)$ layers of 1D-local gates arranged in a brickwork architecture computes an $\exp(-n^{1/2} k)$-approximate $k$-wise independent permutation.
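
    To make the model concrete, here is a toy sampler of a random reversible circuit (random 3-bit gates on random wire triples, without the brickwork restriction; a sketch, not the paper's exact ensemble):

        import random

        def random_reversible_circuit(n, num_gates, seed=0):
            """Compose random width-3 gates, each a uniform permutation of {0,1}^3."""
            rng = random.Random(seed)
            gates = []
            for _ in range(num_gates):
                wires = rng.sample(range(n), 3)   # the 3 bit positions this gate touches
                perm = list(range(8))
                rng.shuffle(perm)
                gates.append((wires, perm))

            def apply(x):                         # x: an integer in [0, 2^n)
                for wires, perm in gates:
                    local = sum(((x >> w) & 1) << i for i, w in enumerate(wires))
                    new = perm[local]
                    for i, w in enumerate(wires):
                        x = (x & ~(1 << w)) | (((new >> i) & 1) << w)
                return x

            return apply

        f = random_reversible_circuit(n=8, num_gates=200)
        print(f(0b10110011))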

    Based on joint works with William Gay (CMU), Lucas Gretta (Berkeley), Nicholas Kocurek (CMU), Ryan O'Donnell (CMU), and Angelos Pelecanos (Berkeley).

  • Oct 16: Euiwoong Lee on Recent Progress on Correlation Clustering

    Correlation Clustering is one of the most well-studied graph clustering problems. The input is a complete graph where each edge is labeled either "+" or "-", and the goal is to partition the vertex set into (an arbitrary number of) clusters to minimize (the number of + edges between different clusters) + (the number of - edges within the same cluster). Until recently, the best polynomial-time approximation ratio was 2.06, nearly matching the integrality gap of 2 for the standard LP relaxation.
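
    The objective in code, just to pin down the definition (straightforward; edges absent from `plus_edges` are the "-" edges of the complete graph):

        def cc_cost(labels, plus_edges, n):
            """Correlation clustering cost of a partition of the vertices of K_n."""
            plus = {frozenset(e) for e in plus_edges}
            cost = 0
            for u in range(n):
                for v in range(u + 1, n):
                    same = labels[u] == labels[v]
                    positive = frozenset((u, v)) in plus
                    cost += (positive and not same) or (not positive and same)
            return cost

        # a "+" triangle {0,1,2} with vertex 3 attached by "-" edges: cost 0
        print(cc_cost([0, 0, 0, 1], [(0, 1), (1, 2), (0, 2)], n=4))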

    Since 2022, we have bypassed this barrier and progressively improved the approximation ratio, with the current ratio being 1.44. Based on a new relaxation inspired by the Sherali-Adams hierarchy, the algorithm introduces and combines several tools considered in different contexts, including "local to global correlation rounding" and "combinatorial preclustering". In this talk, I will describe the current algorithm as well as how it has been inspired by various viewpoints.

    Joint work with Nairen Cao, Vincent Cohen-Addad, Shi Li, Alantha Newman, and Lukas Vogl

  • Oct 09: Elyassaf Loyfer on Rate vs. Distance - Milestones and Obstacles Towards Improved Bounds

    The rate vs. distance problem is a major open problem of coding theory. Significant progress has stalled since 1977. The best-known upper bound, established by MRRW, is derived from Delsarte’s reformulation of the combinatorial question as a linear program (LP). This breakthrough sparked a line of research into stronger optimization problems. However, the more powerful formulations have remained difficult to analyze, and no improved bounds have been achieved yet. In this talk, I will present new results on the analysis of a recently developed LP hierarchy. I will discuss the techniques employed, current limitations, and possible strategies to deal with these challenges.
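
    For orientation, the LP at the root of this line of work, in its standard form (Delsarte's bound for binary codes of length $n$ and minimum distance $d$):

    $$A(n,d) \;\le\; \max \sum_{i=0}^{n} a_i \quad \text{s.t.} \quad a_0 = 1,\quad a_i \ge 0,\quad a_i = 0 \;\; (1 \le i < d),\quad \sum_{i=0}^{n} a_i K_j(i) \ge 0 \;\; (1 \le j \le n),$$

    where $K_j$ is the degree-$j$ Krawtchouk polynomial. The MRRW bound comes from an explicit dual-feasible solution, and the hierarchies discussed in the talk strengthen this relaxation.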

    Based on joint work with Leonardo Nagami Coregliano, Fernando Granha Jeronimo, Chris Jones, and Nati Linial

  • Oct 02: Michael Chapman on Subgroup Tests and the Aldous–Lyons conjecture

    A common theme in mathematics is that limits of finite objects are well behaved. This allows one to prove many theorems about finitely approximable objects, while leaving the general case open; examples of this are Gottschalk's conjecture, Kaplansky's direct finiteness conjecture, and many more. When the topology of the space is somewhat coarse, it becomes very hard to decide whether every object is approximable by finite ones, or whether there exist non-approximable objects. Some of the more famous problems in various fields of mathematics can be framed this way; this includes Connes' embedding problem (CEP) in functional analysis, the Aldous–Lyons conjecture in probability theory, the soficity problem of Gromov–Weiss in group theory, and more.

    In 2019, Ji–Natarajan–Vidick–Wright–Yuen resolved CEP in the negative, namely, they showed that there are non-approximable characters of the free group. The amazing thing about their result is that it is deduced from complexity theory, and specifically from undecidability in certain (quantum) interactive proof systems. Inspired by their work, we suggest a novel interactive proof system which is related to the Aldous–Lyons conjecture in the following way: if the Aldous–Lyons conjecture were true, then every language in this interactive proof system would be decidable. A key concept we introduce for this purpose is that of a Subgroup Test, which is our analogue of a Non-local Game. By providing a reduction from the Halting Problem to this new proof system, we refute the Aldous–Lyons conjecture.

    This talk is based on a joint work with Lewis Bowen, Alex Lubotzky, and Thomas Vidick.

    No special mathematical background will be assumed.

  • Sep 25: Adam Block on Understanding the Role of Horizon in Imitation Learning

    In Imitation Learning (IL), a learner attempts to mimic the behavior of some expert by learning a policy that, when rolled out in some environment, produces actions similar to those the expert would have taken. Due to the recent empirical successes of IL in autonomous driving, robotics, and natural language processing, there has been widespread deployment of IL algorithms in a variety of fields. Classical results in IL differentiate between two regimes: an offline regime that reduces the setting to supervised learning and an online setting allowing for interactive querying of the expert. This earlier theory has suggested that the online regime is significantly more powerful than the fully offline setting, especially in the role that the horizon of the problem plays in the sample complexity of the resulting algorithms. In this talk, we revisit these results and discuss a new approach that clarifies the distinction between offline and online IL, and provides horizon-free bounds even in the fully offline setting.

    Based on joint work with Dylan Foster and Dipendra Misra.

  • Sep 18: Ansh Nagda on Approximability of the Permanent of PSD matrices

    The permanent is a well-studied combinatorial quantity that is often viewed as a computationally hard cousin of the determinant. In this talk, we will study the permanent over the (initially strange-seeming) domain of positive semidefinite (PSD) matrices. Over this domain, the permanent is better interpreted as an analytic quantity rather than a combinatorial one. The permanent of PSD matrices arises naturally in quantum physics and probability, and there has been a recent series of work in understanding its computational tractability.

    We will present recent progress on efficient approximability of PSD permanents. First, we show that this problem necessarily incurs an exponential approximation factor, by proving NP-hardness of approximating it better than $\exp(\gamma n)$, where $\gamma$ is the Euler-Mascheroni constant. Second, we give exponentially improved approximation ratios over the best previously known algorithms. Both our results come from understanding the relationship between PSD permanents and a certain matrix norm problem.
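
    Recall the quantity in question (standard definition):

    $$\operatorname{per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i, \sigma(i)},$$

    which is nonnegative whenever $A$ is PSD, so multiplicative approximation is a meaningful goal there.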

    Based on arXiv:2404.10959. Joint work with Farzam Ebrahimnejad and Shayan Oveis Gharan.

  • Sep 10: Orr Paradise on Models that prove their own correctness

    This talk introduces Self-Proving models, a new class of models that formally prove the correctness of their outputs via an Interactive Proof system. After reviewing some related literature, I will formally define Self-Proving models and their per-input (worst-case) guarantees. I will then present algorithms for learning these models and explain how the complexity of the proof system affects the complexity of the learning algorithms. Finally, I will show experiments where Self-Proving models are trained to compute the Greatest Common Divisor of two integers, and to prove the correctness of their results to a simple verifier.
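
    To give a feel for the GCD example, here is a toy one-shot verifier built on the Bezout identity (our illustration; the paper's Interactive Proof is a different, interactive protocol):

        from math import gcd

        def verify_gcd(a, b, g, u, v):
            """Accept iff (u, v) certifies g = gcd(a, b): g | a and g | b make g a
            common divisor, and g = u*a + v*b forces every common divisor to divide g."""
            return a % g == 0 and b % g == 0 and u * a + v * b == g

        a, b = 252, 198
        g = gcd(a, b)    # an honest prover outputs g = 18 ...
        u, v = 4, -5     # ... plus the certificate 4*252 - 5*198 = 18
        assert verify_gcd(a, b, g, u, v)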

    No prior knowledge of autoregressive models or Interactive Proofs will be assumed of the listener. This is a joint work with Noga Amit, Shafi Goldwasser, and Guy Rothblum.

Spring 2022

  • May 12: Kasper Green Larsen on Towards Optimal Lower Bounds for k-means Coresets

    Given a set of points in Euclidean space, the k-means problem consists of finding a set of k points called centers, such that the sum of squared distances of every data point to its closest center is minimized. The k-means problem is at the heart of modern data analysis, and massive data applications have given rise to the notion of a coreset: a small (weighted) subset of the input point set preserving the cost of any solution to the problem up to a multiplicative $(1 + \epsilon)$ factor, hence reducing the input to the problem from large to small scale.
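
    In symbols, a weighted subset $S \subseteq P$ with weights $w_p$ is an $\epsilon$-coreset if, for every set $C$ of $k$ centers,

    $$\Bigl| \sum_{p \in S} w_p \min_{c \in C} \lVert p - c \rVert^2 \;-\; \sum_{p \in P} \min_{c \in C} \lVert p - c \rVert^2 \Bigr| \;\le\; \epsilon \sum_{p \in P} \min_{c \in C} \lVert p - c \rVert^2.$$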

    While there has been an intensive effort to understand what is the best coreset size possible, there is still a significant gap between the state-of-the-art upper and lower bounds. In this talk, we make progress by showing an $\Omega(k/\epsilon^2)$ lower bound. This improves over the $\Omega(k/\sqrt{\epsilon})$ bound of Baker, Braverman, Huang, Jiang, Krauthgamer and Wu from ICML’20 and nearly matches a new upper bound of $O(\min(k^2/\epsilon^2,\, k/\epsilon^3))$.

    The talk is based on joint work with Vincent Cohen-Addad, David Saulpic and Chris Schwiegelshohn and was accepted to STOC’22.

  • May 05: Huy Tuan Pham on A proof of the Kahn-Kalai conjecture

    The threshold of an increasing boolean function (or of an increasing graph property) is the density at which a random set (or a random graph) transitions from unlikely satisfying to likely satisfying the function (or property). Kahn and Kalai conjectured that this threshold is always within a logarithmic factor of the expectation threshold, a quantity often much easier to compute. In probabilistic combinatorics and random graph theory, the Kahn-Kalai conjecture directly implies a number of difficult results, such as Shamir’s problem on hypergraph matchings, or the threshold for containing a bounded degree spanning tree. I will discuss recent joint work with Jinyoung Park that resolves the Kahn-Kalai conjecture. Time permitting, I will discuss our resolution of some conjectures and questions of Talagrand on extreme events of suprema of certain non-Gaussian empirical processes, which was in fact the precursor to the resolution of the Kahn-Kalai conjecture.
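
    For reference, the resolved statement (now the Park–Pham theorem): there is a universal constant $K$ such that every increasing family $\mathcal{F}$ of subsets of a finite ground set satisfies

    $$p_c(\mathcal{F}) \;\le\; K\, q(\mathcal{F}) \log \ell(\mathcal{F}),$$

    where $p_c$ is the threshold, $q$ the expectation threshold, and $\ell(\mathcal{F})$ the size of a largest minimal element of $\mathcal{F}$.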

  • Apr 28: Chinmay Nirkhe on Complexity of quantum search vs decision

    It is a useful fact in classical computer science that many search problems are reducible to decision problems; this has led to decision problems being regarded as the de facto computational task to study in complexity theory. We explore search-to-decision reductions for quantum search problems, wherein a quantum algorithm makes queries to a classical decision oracle to output a desired quantum state. In particular, we focus on search-to-decision reductions for QMA, and show that there exists a quantum polynomial-time algorithm that can generate a witness for a QMA problem up to inverse polynomial precision by making one query to a PP decision oracle. We complement this result by showing that QMA-search does not reduce to QMA-decision in polynomial-time, relative to a quantum oracle.

    We also explore the more general state synthesis problem, in which the goal is to efficiently synthesize a target state by making queries to a classical oracle encoding the state. We prove that there exists a classical oracle with which any quantum state can be synthesized to inverse polynomial precision using only one oracle query and to inverse exponential precision using two oracle queries. This answers an open question of Aaronson from 2016, who presented a state synthesis algorithm that makes O(n) queries to a classical oracle to prepare an n-qubit state, and asked if the query complexity could be made sublinear.

  • Apr 14: Yang P. Liu on Maximum Flow and Minimum-Cost Flow in Almost-Linear Time

    We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with $m$ edges and polynomially bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized $m^{o(1)}$ time using a dynamic data structure.

    Our framework extends to an algorithm running in $m^{1+o(1)}$ time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives an almost-linear time algorithm for several problems including entropy-regularized optimal transport, matrix scaling, $p$-norm flows, and isotonic regression.
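
    Concretely, the two problem families can be written over the edge-vertex incidence matrix $B$ (standard formulations):

    $$\min_{B^\top f = d,\ 0 \le f \le u} c^\top f \qquad \text{and, more generally,} \qquad \min_{B^\top f = d}\ \sum_{e} c_e(f_e) \quad \text{with each } c_e \text{ convex},$$

    the former being min-cost flow and the latter the edge-separable convex objective referred to above.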

    Joint work with Li Chen, Rasmus Kyng, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.

  • Apr 07: Jonathan Ullman on The Limits of Pan Privacy and Shuffle Privacy for Learning and Estimation

    There has been a recent wave of interest in intermediate trust models for differential privacy that eliminate the need for a fully trusted central data collector, but overcome the limitations of local differential privacy. This interest has led to the introduction of the shuffle model (Cheu et al., EUROCRYPT 2019; Erlingsson et al., SODA 2019) and revisiting the pan-private model (Dwork et al., ITCS 2010). The message of this line of work is that, for a variety of low-dimensional problems — such as counts, means, and histograms — these intermediate models offer nearly as much power as central differential privacy. However, there has been considerably less success using these models for high-dimensional learning and estimation problems. In this work, we show that, for a variety of high-dimensional learning and estimation problems, both the shuffle model and the pan-private model inherently incur an exponential price in sample complexity relative to the central model. For example, we show that private agnostic learning of parity functions over $d$ bits requires $\Omega(2^{d/2})$ samples in these models, and privately selecting the most common attribute from a set of $d$ choices requires $\Omega(d^{1/2})$ samples, both of which are exponential separations from the central model. Our work gives the first non-trivial lower bounds for these problems for both the pan-private model and the general multi-message shuffle model.

    Joint work with Albert Cheu

  • Mar 31: Arun Ganesh on Langevin Diffusion: An Almost Universal Algorithm for Differentially Private (Convex) Optimization

    In this talk we revisit the problem of differentially private empirical risk minimization (DP-ERM) and differentially private stochastic convex optimization (DP-SCO). Interestingly, the known algorithms that achieve the optimal bounds for $\epsilon$-DP and $(\epsilon,\delta)$-DP are different, as are the algorithms achieving the optimal bounds for convex and strongly convex losses. A natural question is thus whether a universal algorithm exists for private convex optimization. We show that a well-studied continuous time algorithm from statistical physics, called Langevin diffusion (LD), simultaneously provides optimal privacy/utility trade-offs for both DP-ERM and DP-SCO, under both $\epsilon$-DP and $(\epsilon,\delta)$-DP, for both convex and strongly convex losses. Using the uniform stability properties of LD, we provide the optimal excess population risk guarantee under $\epsilon$-DP, which was an open problem since [BST14]. Our work initiates a systematic study of DP continuous time optimization, which we believe may have ramifications in the design of discrete time DP optimization algorithms, analogous to that in the non-private setting.
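
    The central object is the classical diffusion (standard form): for a loss function $\mathcal{L}$,

    $$dX_t \;=\; -\nabla \mathcal{L}(X_t)\, dt + \sqrt{2}\, dB_t,$$

    whose stationary distribution is $\pi \propto e^{-\mathcal{L}}$; roughly speaking, how long the diffusion is run mediates the privacy/utility trade-off (our gloss of the abstract above).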

    This talk will assume little prior knowledge of DP or stochastic calculus. Based on joint work with Abhradeep Guha Thakurta and Jalaj Upadhyay.

  • Mar 10: Tselil Schramm on Testing thresholds for sparse random geometric graphs

    We study random geometric graphs on the unit sphere in the high-dimensional setting, where the dimension grows to infinity with the number of vertices. Our central question is: given such a graph, when is it possible to identify the underlying geometry? As the dimension grows relative to the number of vertices, the edges in the graph become increasingly independent, and the underlying geometry becomes less apparent. In this talk I will present recent progress on this question: we show that in sparse geometric random graphs, if the dimension is at least polylogarithmic in the number of vertices, then the graphs are statistically indistinguishable from Erdős–Rényi graphs, and the underlying geometry disappears.
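
    A small sampler for the model in question (the threshold is chosen empirically to hit a target average degree; the parameters are illustrative):

        import numpy as np

        def sparse_rgg(n, d, avg_deg, seed=0):
            """Geometric graph on the sphere S^{d-1}: connect pairs with large inner
            product; the threshold tau is tuned to hit the target average degree."""
            rng = np.random.default_rng(seed)
            v = rng.normal(size=(n, d))
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            gram = v @ v.T
            tau = np.quantile(gram[np.triu_indices(n, 1)], 1 - avg_deg / (n - 1))
            adj = gram >= tau
            np.fill_diagonal(adj, False)
            return adj

        # with dimension polylogarithmic in n, the theorem says this should already
        # be statistically indistinguishable from G(n, p) with p = avg_deg/n
        A = sparse_rgg(n=400, d=5000, avg_deg=5)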

    Based on joint work with Siqi Liu, Sidhanth Mohanty, and Elizabeth Yang.

  • Mar 03: Sam Gunn on On certified randomness from quantum advantage experiments

    Certified randomness has a long history in quantum information, with many potential applications. Recently, Aaronson (2018, 2020) proposed a novel certified randomness protocol based on existing random circuit sampling (RCS) experiments, placing this application within the reach of near-term quantum devices. The security of his protocol, however, relies on non-standard complexity-theoretic conjectures which were not previously studied in the literature. In joint work with Roozbeh Bassirian, Adam Bouland, Bill Fefferman, and Avishay Tal, we prove two versions of Aaronson’s conjectures unconditionally in a black-box Fourier sampling setting.

    In this talk I will describe the setting and applications, and then outline the proof of one of our results. I will try to make it as self-contained as possible, without assuming much about background knowledge in quantum computing.

  • Feb 24: Venkat Guruswami on The existence of Reed-Solomon codes list-decodable up to capacity

    We prove the existence of Reed-Solomon (RS) codes of any desired rate $R \in (0,1)$ that are combinatorially list-decodable up to a radius approaching $1-R$, which is the information-theoretic limit. This is established by starting with the full-length Reed-Solomon code of dimension $k$ over a polynomially larger field and "puncturing" it by including $k/R$ randomly chosen codeword positions.
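
    For reference, the code family in question is (standard definition): for distinct evaluation points $\alpha_1, \ldots, \alpha_n \in \mathbb{F}_q$,

    $$\mathrm{RS}[n, k] \;=\; \bigl\{ (f(\alpha_1), \ldots, f(\alpha_n)) : f \in \mathbb{F}_q[X],\ \deg f < k \bigr\}, \qquad R = k/n,$$

    and combinatorial list-decodability up to radius $\rho$ means every Hamming ball of relative radius $\rho$ contains only a small list of codewords.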

    Our puncturing result is more general and yields a derandomization result applicable to random linear codes---it shows that the list-decodability with a specific list-size (and more generally, any "local" property) of a random puncturing of any low-bias code approaches that of fully random linear codes. Thus, all current (and future) list-decodability bounds shown for random linear codes extend automatically to random puncturings of any low-bias code.

    To obtain our result about RS codes, we establish some hashing properties of field trace maps that allow us to reduce the list-decodability of RS codes to its associated trace code, and then apply our puncturing theorem to the latter.

    Joint work with Jonathan Mosheiff (CMU).

  • Feb 17: Jason Li on Gomory-Hu Tree in Subcubic Time

    In 1961, Gomory and Hu showed that the max-flow values of all pairs of vertices in an undirected graph can be computed using only $n-1$ calls to any max-flow algorithm, and succinctly represented them in a tree (later called the Gomory-Hu tree). Even assuming a linear-time max-flow algorithm, this yields a running time of $O(mn)$ for this problem; with current max-flow algorithms, the running time is $\tilde{O}(mn + n^{5/2})$. We break this 60-year-old barrier by giving an $\tilde{O}(n^2)$-time algorithm for the Gomory-Hu tree problem, already with current max-flow algorithms. For unweighted graphs, our techniques show equivalence (up to poly-logarithmic factors in running time) between Gomory-Hu tree (i.e., all-pairs max-flow values) and a single-pair max-flow.
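
    For contrast with the new algorithm, here is the classical $n-1$ max-flow construction in a few lines (Gusfield's contraction-free variant, via networkx; a didactic sketch, not the subcubic algorithm from the talk):

        import networkx as nx

        def gomory_hu(G):
            """Build a Gomory-Hu tree with n-1 max-flow (min-cut) computations."""
            nodes = list(G.nodes)
            D = G.to_directed()               # networkx flow routines expect arcs
            parent = {v: nodes[0] for v in nodes[1:]}
            tree = nx.Graph()
            for i, s in enumerate(nodes[1:], 1):
                t = parent[s]
                cut_value, (S, _) = nx.minimum_cut(D, s, t)
                tree.add_edge(s, t, weight=cut_value)
                for v in nodes[i + 1:]:       # re-hang later vertices on s's side
                    if parent[v] == t and v in S:
                        parent[v] = s
            return tree

        G = nx.Graph()
        G.add_weighted_edges_from([(0, 1, 3), (1, 2, 2), (0, 2, 5), (2, 3, 4)],
                                  weight="capacity")
        T = gomory_hu(G)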

    In this talk, I survey some of the recent algorithmic techniques and strategies that played a central role in obtaining this result: expander graphs and locality, the isolating cuts lemma, and single-source mincuts with a random pivot. The aim is to make the talk as self-contained as possible.

Fall 2020

  • Dec 09: Aaron Sidford on Interior Point Methods for Nearly Linear Time Algorithms

    Linear programming is a foundational continuous optimization problem, and special cases, e.g. maximum cardinality bipartite matching, are among the most fundamental problems in combinatorial optimization. In this talk, I will survey recent advances in solving these problems using interior point methods. I will discuss how new interior point methods can be leveraged to efficiently reduce these problems to data structure problems, which can be solved efficiently with techniques from randomized numerical linear algebra and graph theory. This talk will highlight recent joint work which showed that linear programs with $d$ variables and $n$ constraints can be solved with high probability to high precision in time $\tilde{O}(nd + \mathrm{poly}(d))$, and that a variety of bipartite matching problems, e.g. optimal transport, on $n$-node, $m$-edge graphs can be solved with high probability to high precision in time $\tilde{O}(m + n^{1.5})$. These results constitute the first polynomial time methods for solving these problems to high precision which run in nearly-linear time on sufficiently large and dense instances. No previous experience with interior point methods is required.
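
    For readers new to interior point methods, the backbone object is the log-barrier central path (standard formulation):

    $$\min_{Ax = b,\ x \ge 0} c^\top x \quad \text{is relaxed to} \quad x^\star(\mu) = \arg\min_{Ax = b}\ \Bigl( c^\top x - \mu \sum_{i=1}^{n} \log x_i \Bigr), \qquad \mu \downarrow 0,$$

    and the data structure problem mentioned above is to maintain the Newton-type steps that follow $x^\star(\mu)$ as $\mu$ shrinks.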

    This talk will touch upon joint work with Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang including arXiv:2009.01802 and arXiv:2002.02304.

  • Nov 25: Kuikui Liu on New Tools for Analysis of Markov Chains via High-Dimensional Expansion

    The Markov Chain Monte Carlo paradigm is one of the most widely used methods for sampling from challenging distributions, both practically and theoretically. We discuss a recent line of work that leverages the perspective of high-dimensional expanders and their local-to-global properties to analyze Markov chains in the discrete setting. We will highlight numerous connections with other areas including geometry of polynomials, statistical physics, and more. Specific applications include sampling from bases of matroids, and spin systems in statistical physics. Based on several joint works with Nima Anari, Zongchen Chen, Shayan Oveis Gharan, Eric Vigoda, and Cynthia Vinzant.
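
    The flagship application fits in one step of pseudocode: the "bases-exchange" (down-up) walk on the bases of a matroid, which this theory shows mixes rapidly (a sketch; `is_independent` is an assumed independence oracle):

        import random

        def basis_exchange_step(basis, ground, is_independent, rng=random):
            """One down-up step on matroid bases: drop a uniformly random element,
            then add back a uniformly random element that restores a basis."""
            B = set(basis)
            B.remove(rng.choice(sorted(B)))
            options = [f for f in ground if f not in B and is_independent(B | {f})]
            B.add(rng.choice(options))
            return frozenset(B)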

  • Nov 18: Mark Sellke on Chasing Convex Bodies

    In the chasing convex bodies problem, a player receives a request sequence $K_1, \ldots, K_T$ of convex sets in $\mathbb{R}^d$ and chooses online points $x_t \in K_t$. The player's cost is the length of the path $(x_0, x_1, \ldots, x_T)$, and the goal is to give an online algorithm with bounded competitive ratio against the offline optimal path. This problem was posed in 1993 as a natural metrical task system and also reappeared more recently as a competitive analysis version of online convex optimization.

    In this talk, I will explain how to achieve a near-optimal competitive ratio based on the Steiner point, a classical object in convex geometry. We will see other approaches and related problems along the way. This is based on joint works with Sébastien Bubeck, Bo'az Klartag, Yin Tat Lee, Yuanzhi Li, and Yuval Rabani.
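
    The hero of the story has a one-line definition (one standard form): the Steiner point of a convex body $K \subseteq \mathbb{R}^d$ is

    $$s(K) \;=\; \mathbb{E}_{v \sim \mathrm{Unif}(S^{d-1})}\Bigl[\arg\max_{x \in K} \langle v, x \rangle\Bigr],$$

    a canonical "center" of $K$ that moves Lipschitz-continuously as $K$ varies, which is what makes it suitable for chasing.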

  • Nov 11: Mary Wootters on Sharp Thresholds for Random Subspaces, and Applications to LDPC Codes

    What combinatorial properties are likely to be satisfied by a random subspace over a finite field? For example, is it likely that not too many points lie in any Hamming ball? What about any cube? We show that there is a sharp threshold on the dimension of the subspace at which the answers to these questions change from "extremely likely" to "extremely unlikely," and moreover we give a simple characterization of this threshold for different properties. Our motivation comes from error correcting codes, and we use this characterization to establish the list-decodability of random Low Density Parity-Check (LDPC) codes. This talk is based on joint work with Jonathan Mosheiff, Nicolas Resch, Noga Ron-Zewi, and Shashwat Silas.

  • Oct 28: Aayush Jain on Indistinguishability Obfuscation from Well-Founded Assumptions

    We present a construction of an indistinguishability obfuscation scheme whose security rests on the subexponential hardness of four well-founded assumptions. Concretely, we show the existence of an indistinguishability obfuscation scheme for all circuits assuming sub-exponential security of the following assumptions:

    • The Learning with Errors (LWE) assumption with arbitrarily small subexponential modulus-to-noise ratio,
    • The SXDH assumption with respect to bilinear groups of prime order $p$,
    • The existence of a Boolean Pseudorandom Generator (PRG) in $\mathsf{NC}^0$ with arbitrary polynomial stretch, that is, mapping $n$ bits to $n^{1+\tau}$ bits for any constant $\tau > 0$,
    • The Learning Parity with Noise (LPN) assumption over $\mathbb{Z}_p$ with error rate $\ell^{-\delta}$, where $\ell$ is the dimension of the secret and $\delta > 0$ is an arbitrarily small constant.

    Further, assuming only polynomial security of these assumptions, there exists a compact public-key functional encryption scheme for all circuits.

    The main technical novelty is the introduction and construction of a cryptographic pseudorandom generator that we call a Structured-Seed PRG (sPRG), assuming LPN over $\mathbb{Z}_p$ and PRGs in $\mathsf{NC}^0$.
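
    To illustrate what "a PRG in $\mathsf{NC}^0$" means operationally, here is a toy Goldreich-style candidate in which each output bit reads only 5 seed bits (purely illustrative; this is not the sPRG of the paper, and no security is claimed for these parameters):

        import numpy as np

        def local_prg(seed_bits, m, rng_seed=0):
            """Toy local PRG: each output bit applies the XOR-AND predicate
            x1 ^ x2 ^ x3 ^ (x4 & x5) to 5 fixed, publicly chosen seed positions."""
            rng = np.random.default_rng(rng_seed)
            x = np.asarray(seed_bits)
            idx = rng.integers(0, len(x), size=(m, 5))   # public wiring, fixed once
            a, b, c, d, e = (x[idx[:, j]] for j in range(5))
            return a ^ b ^ c ^ (d & e)

        seed = np.random.default_rng(1).integers(0, 2, size=256)
        out = local_prg(seed, m=512)    # stretch: 256 bits -> 512 bits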

    During the talk, I will discuss how structured seed PRGs have evolved from different notions of novel pseudorandom generators proposed in the past few years, and how an interplay between different areas of theoretical computer science played a major role in providing valuable insights leading to this work. Time permitting, I will go into the details of how to construct sPRGs.

    Joint work with Huijia (Rachel) Lin and Amit Sahai.

  • Oct 21: Andrea Lincoln on New Techniques for Proving Fine-Grained Average-Case Hardness

    In this talk I will cover a new technique for worst-case to average-case reductions. There are two primary concepts introduced in this talk: "factored" problems and a framework for worst-case to average-case fine-grained (WCtoACFG) self reductions. We will define new versions of OV, kSUM and zero-k-clique that are both worst-case and average-case fine-grained hard assuming the core hypotheses of fine-grained complexity. We then use these as a basis for fine-grained hardness and average-case hardness of other problems. Our hard factored problems are also simple enough that we can reduce them to many other problems, e.g. to edit distance, k-LCS and versions of Max-Flow.

    We further consider counting variants of the factored problems and give WCtoACFG reductions for them for a natural distribution. To show hardness for these factored problems we formalize the framework of [Boix-Adsera et al. 2019] that was used to give a WCtoACFG reduction for counting k-cliques. We define an explicit property of problems such that if a problem has that property one can use the framework on the problem to get a WCtoACFG self reduction. In total these factored problems and the framework together give tight fine-grained average-case hardness for various problems including the counting variant of regular expression matching.

    Based on joint work with Mina Dalirrooyfard and Virginia Vassilevska Williams.

  • Oct 14: Nathan Klein on A (Slightly) Improved Approximation Algorithm for Metric TSP

    In this talk, I will describe recent work in which we obtain a $3/2 - \epsilon$ approximation algorithm for metric TSP, for some $\epsilon > 10^{-36}$. This slightly improves over the classical $3/2$ approximation algorithm due to Christofides [1976] and Serdyukov [1978]. The talk will focus on giving an overview of the key ideas involved. This is joint work with Anna Karlin and Shayan Oveis Gharan.

  • Oct 07: Tarun Kathuria on An O(m^{4/3+o(1)}) algorithm for exact unit-capacitated max flow

    In this talk, I’ll present an algorithm for computing $s$-$t$ max flows in $O(m^{4/3+o(1)})$ time in unit capacity graphs. The algorithm is inspired by potential reduction interior point methods for linear programming. Instead of using scaled gradient/Newton steps of a potential function, we consider minimizing the potential function exactly and show how this allows us to directly find a centered point efficiently. Then, using the weighted central path framework of Madry and Liu-Sidford, we show that our steps also benefit from carefully maximizing weights, a subproblem that can be solved efficiently using the work of Kyng et al., yielding the desired runtime.

  • Sep 30: Simons Fellows on Lightning Talks

    A special theory lunch where the fellows at Simons Institute this semester will give lightning talks.
    This is a great opportunity to meet the fellows and learn about what they are working on.

  • Sep 23: Sidhanth Mohanty on Explicit near-Ramanujan graphs of every degree

    For every constant $d > 3$ and $\epsilon > 0$, we give a deterministic poly$(n)$-time algorithm that outputs a $d$-regular graph on $\Theta(n)$ vertices that is $\epsilon$-near-Ramanujan; i.e., its eigenvalues are bounded in magnitude by $2\sqrt{d-1} + \epsilon$ (excluding the single trivial eigenvalue of $d$).
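
    One can see the target spectral bound numerically on a random (rather than explicit) d-regular graph, which is near-Ramanujan with high probability by Friedman's theorem (a quick check, not the construction from the talk):

        import numpy as np
        import networkx as nx

        d, n = 4, 2000
        G = nx.random_regular_graph(d, n, seed=0)
        eig = np.linalg.eigvalsh(nx.adjacency_matrix(G).toarray())
        # drop the trivial top eigenvalue d, then compare to the Ramanujan bound
        print(max(abs(eig[:-1])), "vs", 2 * np.sqrt(d - 1))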

  • Sep 16: Orr Paradise on Rigid Matrices From Rectangular PCPs (or: Hard Claims Have Complex Proofs)

    We introduce a variant of PCPs, that we refer to as rectangular PCPs, wherein proofs are thought of as square matrices, and the random coins used by the verifier can be partitioned into two disjoint sets, one determining the row of each query and the other determining the column. We construct PCPs that are efficient, short, smooth and (almost-)rectangular.

    As a key application, we show that proofs for hard languages in $\mathsf{NTIME}(2^n)$, when viewed as matrices, are rigid infinitely often. This strengthens and simplifies a recent result of Alman and Chen [FOCS, 2019] constructing explicit rigid matrices in $\mathsf{FNP}$. Namely, we prove the following theorem: There is a constant $\delta \in (0,1)$ such that there is an $\mathsf{FNP}$-machine that, for infinitely many $N$, on input $1^N$ outputs $N \times N$ matrices with entries in $\mathbb{F}_2$ that are $\delta N^2$-far (in Hamming distance) from matrices of rank at most $2^{\log N/\Omega(\log \log N)}$.

    Our construction of rectangular PCPs starts with an analysis of how randomness yields queries in the Reed-Muller-based outer PCP of Ben-Sasson, Goldreich, Harsha, Sudan and Vadhan [SICOMP, 2006; CCC, 2005]. We then show how to preserve rectangularity under PCP composition and a smoothness-inducing transformation. This warrants refined and stronger notions of rectangularity, which we prove for the outer PCP and its transforms.

    In this talk, I will focus on introducing and motivating rectangular PCPs, and their application to matrix rigidity. No prior knowledge in PCPs is assumed. Joint work with Amey Bhangale, Prahladh Harsha, and Avishay Tal.

Spring 2020

  • Mar 11: No Lunch (Campus closed)
  • Mar 04: Nima Anari on Geometry of polynomials and distributions with limited correlations

    The analysis of random walks on high-dimensional expanders has recently enabled us to resolve open questions about sampling/counting in combinatorial structures like matroids. The key to this connection was showing that matroids have “spectrally negative” correlations. In this talk, I will show how to go beyond the setting of matroids and analyze random walks in the presence of limited positive correlations. I will show applications to the hardcore model on bounded-degree graphs, the monomer distribution in monomer-dimer systems, and if time permits, other combinatorial distributions arising from polynomials with no roots in a sector of the complex plane.

  • Feb 26: No Lunch
  • Feb 19: Simons Fellows on Lightning Talks

    A special theory lunch where the fellows at Simons Institute this semester will give lightning talks.
    This is a great opportunity to meet the fellows and learn about what they are working on.

  • Feb 12: Mika Göös on Automated Proof Search: The Aftermath

    In a breathtaking breakthrough, Atserias and Muller (FOCS'19, Best Paper) settled the complexity of finding short proofs in Resolution, the most basic propositional proof system. Namely, given an unsatisfiable CNF formula F, they showed it is NP-hard to find a Resolution refutation of F in time polynomial in the length of the shortest such refutation.

    In this talk, we present a simple proof of the Atserias–Muller theorem. The new proof generalises better: We obtain analogous hardness results for Nullstellensatz, Polynomial Calculus, Sherali–Adams, and (with more work) Cutting Planes. An open problem is to include Sum-of-Squares in this list.

    Based on joint works with Susanna de Rezende, Sajin Koroth, Ian Mertz, Jakob Nordström, Toniann Pitassi, Robert Robere, Dmitry Sokolov.

  • Feb 05: Elizabeth Yang on High-Dimensional Expanders from Expanders

    We present an elementary way to transform an expander graph into a simplicial complex where all high order random walks have a constant spectral gap, i.e., they converge rapidly to the stationary distribution. As an upshot, we obtain new constructions, as well as a natural probabilistic model to sample constant degree high-dimensional expanders. In particular, we show that given an expander graph G, adding self loops to G and taking the tensor product of the modified graph with a high-dimensional expander produces a new high-dimensional expander. Our proof of rapid mixing of high order random walks is based on the decomposable Markov chains framework introduced by Jerrum et al. in 2004. We will also present an updated analysis due to a new result from last month.

    This is joint with Siqi Liu and Sidhanth Mohanty.

  • Jan 29: Alexandra Kolla on Statistical physics approaches to Unique Games

    We show how two techniques from statistical physics can be adapted to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture. We exhibit efficient algorithms for a natural generalisation of Unique Games based on approximating a suitable partition function via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion. We also show that a modest improvement to the parameters for which we give results would refute the Unique Games Conjecture. Based on joint work with M. Coulson, E. Davies, V. Patel, and G. Regts.

Fall 2019

  • Dec 04: Barna Saha on How hard is computing All-Pairs Shortest Path?

    All-pairs shortest path (APSP) on a weighted graph can be computed in $O(n^3)$ time, where $n$ is the number of vertices, and to date no truly subcubic algorithm is known for it. Assuming cubic-hardness of APSP, many problems over graphs and matrices have been shown to be cubic-hard as well. But is the APSP conjecture true?

    In this talk, we will revisit the APSP conjecture. Computing APSP and matrix multiplication over the $(\min,+)$ semiring are well known to be equivalent in complexity. We will look at progress on, and challenges of, computing $(\min,+)$ matrix multiplication in subcubic time. We will also discuss related problems (e.g. RNA Folding, Language Edit Distance) for which subcubic algorithms have been designed only recently.
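
    The equivalence the talk starts from is easy to state in code: APSP is $O(\log n)$ applications of the $(\min,+)$ product (here the cubic textbook product; the open question is whether anything truly subcubic exists):

        import numpy as np

        def min_plus(A, B):
            """(min,+) product: C[i, j] = min_k A[i, k] + B[k, j]."""
            return np.min(A[:, :, None] + B[None, :, :], axis=1)

        def apsp(W):
            """All-pairs shortest paths by repeated (min,+) squaring."""
            D, hops = W.copy(), 1
            while hops < len(W) - 1:
                D, hops = min_plus(D, D), 2 * hops
            return D

        INF = float("inf")
        W = np.array([[0, 3, INF], [INF, 0, 1], [2, INF, 0]])
        print(apsp(W))   # e.g. the 0 -> 2 distance is 3 + 1 = 4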

  • Nov 27: No Lunch (Thanksgiving break)
  • Nov 20: Ashish Goel on Beyond Voting: Markets and Platforms for Societal Decision Making

    YouTube competes with Hollywood as an entertainment channel, and also supplements Hollywood by acting as a distribution mechanism. Twitter has a similar relationship to news media, and Coursera to Universities. But there are no online alternatives for making democratic decisions at large scale as a society. In this talk, we will describe algorithmic and market-inspired approaches towards large scale decision making that we are exploring. In particular, we will describe three recent results:

    (1) We will describe Stanford's Participatory Budgeting (PB) platform, used by many cities in North America, along with novel voting methods inspired by PB, including knapsack voting. We will also show how a series of incremental votes can lead to an optimum solution to many budgeting problems. The incremental voting algorithms are inspired by prediction markets, where each subsequent participant provides a small correction to the market.

    (2) We will describe how one can construct a market for public-decision-making inspired by the celebrated work of Foley and others on public good markets.

    (3) We will describe a deliberation mechanism where a group comes to a decision by a series of pairwise negotiations. We will show that this results in provably good decisions on median spaces.

    The above results are in increasing order of interaction among decision makers: in the first, individuals are reacting to an entire decision made by the rest of the society; in the second, individuals are participants in a market that looks very much like a traditional Fisher market; and in the third, participants interact with other participants directly as opposed to via aggregated prices.

    This represents joint work with Tanja Aitamurto, Brandon Fain, Nikhil Garg, Vijay Kamble, Anilesh Krishnaswamy, David Marn, Kamesh Munagala, Benjamin Plaut, and Sukolsak Sakshuwong.

  • Nov 13: Reza Gheissari on Tensor PCA and planted signal recovery via Gradient Flow and Langevin dynamics

    Consider the algorithmic problem of recovering $v \in \mathbb S^{N-1}$ from an i.i.d. Gaussian $k$-tensor spiked by the rank-one tensor $v^{\otimes k}$. Maximum likelihood estimation for this is an example of optimizing on a complex high-dimensional landscape. Borrowing ideas from the dynamics of spin glasses, we seek to understand the mechanisms for, and obstructions to, such planted signal recovery problems in high dimensions, via locally optimizing algorithms. Specifically, we identify recovery thresholds for gradient descent and Langevin dynamics on general landscapes consisting of a planted non-linear signal function perturbed by isotropic Gaussian noise. Based on joint work with G. Ben Arous and A. Jagannath.

  • Nov 06: John Wright on NEEXP in MIP*

    A long-standing puzzle in quantum complexity theory is to understand the power of the class MIP* of multi-prover interactive proofs with shared entanglement. This question is closely related to the study of entanglement through non-local games, which dates back to the pioneering work of Bell. In this work we show that MIP* contains NEEXP (non-deterministic doubly-exponential time), exponentially improving the prior lower bound of NEXP due to Ito and Vidick. Our result shows that shared entanglement exponentially increases the power of these proof systems, as the class MIP of multi-prover interactive proofs without shared entanglement is known to be equal to NEXP.

    This is joint work with Anand Natarajan.

  • Oct 30: No Lunch
  • Oct 23: Mohammad Mahmoody on Computational Concentration of Measure

    Many metric probability spaces are known for being "concentrated". A famous result in this area (also known as the blowing up lemma) states that any product probability space (of high dimension) is concentrated under Hamming distance. In this work, we initiate a natural algorithmic/computational variant of the measure concentration phenomenon, in which the goal is to efficiently find the close points rather than proving their existence. In particular, we are given implicit access to a set S of non-negligible probability through oracle queries (note that S could be too large to write down explicitly). We are also given a fresh sample point x from the probability space, and we are asked to find a "close" neighbor x' (for x) inside S. We prove computational concentration for any product space under Hamming distance. We also show an application of this result to derive (poly-time) targeted poisoning attacks against learning algorithms when there is an initial non-negligible vulnerability.

    Based on joint works with Omid Etesami and Saeed Mahloujifar (SODA 2020 and ALT 2019).

  • Oct 16: Yuval Ishai on Homomorphic Secret Sharing

    A homomorphic secret-sharing (HSS) scheme is a secret-sharing scheme that allows locally mapping shares of a secret x to compact shares of a function of x. The talk will survey the current state of the art on HSS, covering efficient constructions, applications in cryptography and complexity theory, and open questions.
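
    The simplest example to keep in mind (a folklore warm-up, not from the talk): additive secret sharing is already homomorphic for public linear functions, since each party can apply the map locally and the outputs remain additive shares:

        import numpy as np

        p = 2**31 - 1                        # prime modulus (toy)
        rng = np.random.default_rng(0)

        def share(x):
            """x = x0 + x1 (mod p), each share individually uniform."""
            x0 = rng.integers(0, p, size=x.shape)
            return x0, (x - x0) % p

        def eval_linear(share_i, M):
            """Local evaluation: each party applies the public linear map M."""
            return (share_i @ M) % p

        x = rng.integers(0, p, size=4)
        M = rng.integers(0, 3, size=(4, 2))  # small entries keep int64 arithmetic exact
        x0, x1 = share(x)
        y = (eval_linear(x0, M) + eval_linear(x1, M)) % p
        assert (y == (x @ M) % p).all()      # the parties now hold shares of M(x)

    The interesting regime surveyed in the talk is evaluating much richer function classes while keeping the output shares compact.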

  • Oct 09: No Lunch (Weather Emergency)
  • Oct 02: Guy Rothblum on Gentle Measurement of Quantum States and Differential Privacy

    We prove a new connection between gentle measurement (where one wants to measure n quantum states, in a way that damages the states only by a little) and differential privacy (where one wants to query a database about n users, in a way that reveals only a little about any individual user). The connection is bidirectional, though with loss of parameters in going from DP to gentle measurement. Exploiting this connection, we present a new algorithm for approximating the outcomes of many measurements on a collection of quantum states, a task called "shadow tomography". The new algorithm has the advantages of being gentle and online (the measurements can be chosen adaptively).

    Joint work with Scott Aaronson.

  • Sep 25: Omri Ben-Eliezer on Sampling against an adaptive adversary

    Suppose we receive a stream of data, sampling each arriving data element with some predetermined probability. It is well known that if the elements in the stream are chosen in advance, then a constant number of random samples suffice to make the sampled subset “represent” the whole stream with good probability. However, what if the elements are chosen adaptively, by an adversary that knows exactly, at any point along the stream, which elements were sampled previously? Does the same sample size suffice to guarantee “representability”?

    The simplistic answer is negative, and we demonstrate this by describing an efficient adaptive attack. However, this attack is "theoretical only", requiring the universe of elements to be exponential in the stream length. This is not a coincidence: We show how to make the sampling algorithm robust against adaptive adversaries when the universe is smaller. As an application, this yields a sublinear-query adversarially robust randomized streaming algorithm for a wide range of data analysis problems, even in high dimensions.

    Joint work with Eylon Yogev. Full version: https://arxiv.org/abs/1906.11327

  • Sep 18: David Wajc on Online Matching with General Arrivals

    The online matching problem was introduced by Karp, Vazirani and Vazirani nearly three decades ago. In that seminal work, they studied this problem in bipartite graphs with vertices arriving only on one side, and presented optimal deterministic and randomized algorithms for this setting. In comparison, more general arrival models, such as edge arrivals and general vertex arrivals, have proven more challenging and positive results are known only for various relaxations of the problem. In particular, even the basic question of whether randomization allows one to beat the trivially-optimal deterministic competitive ratio of 1/2 for either of these models was open. In this paper, we resolve this question for both these natural arrival models, and show the following.

    1. For edge arrivals, randomization does not help — no randomized algorithm is better than 1/2 competitive.
    2. For general vertex arrivals, randomization helps — there exists a randomized (1/2+Ω(1))-competitive online matching algorithm.

    Based on joint work with Buddhima Gamlath, Michael Kapralov, Andreas Maggiori and Ola Svensson at EPFL, to appear in FOCS 2019. A preprint can be found here: http://www.cs.cmu.edu/~dwajc/pdfs/gamlath19.pdf

  • Sep 11: Tamer Mour on Trapdoor Hash Functions and Their Applications

    Consider a scenario where Alice has a large input x, and Bob asks to learn a small part of x, say of length n bits. In a world where Alice and Bob trust each other, this is easy to achieve using a protocol with optimal download rate, i.e. a protocol where Alice sends exactly n bits. However, if the parties want to keep their inputs private from each other, then known secure solutions require that Alice's message grows multiplicatively in the security parameter, and thus their download rate is far from optimal. In this work we ask: can we construct secure protocols with download rate 1, where the length of Alice's message grows only with n?

    To positively answer this question, we introduce a new primitive called trapdoor hash functions (TDHs), which can be seen as a hybrid of a hash function and a trapdoor function. Using trapdoor hash, we obtain the first secure protocols with optimal download rate for many fundamental and useful functionalities, under various standard assumptions, including DDH, QR and LWE.

    Based on a joint work with Nico Döttling, Sanjam Garg, Yuval Ishai, Giulio Malavolta and Rafail Ostrovsky

  • Sep 04: Ofer Grossman on Error correcting codes for uncompressed messages

    Most types of messages we transmit (e.g., video, audio, images, text) are not fully compressed, since they do not have known efficient and information theoretically optimal compression algorithms. When transmitting such messages, standard error correcting codes fail to take advantage of the fact that messages are not fully compressed.

    In this talk, we will show that when messages cannot be compressed to the information theoretic limit, it is sub-optimal to use standard error correction. We construct a probabilistic encoding procedure for this setting that achieves better tradeoffs between data rates and error-resilience compared to just applying a standard error correcting code.

    Based on a joint work with Justin Holmgren.

Spring 2019

  • May 01: Vijay Vazirani on Matching is as Easy as the Decision Problem, in the NC Model

    We give an NC reduction from search to decision for the problem of finding a minimum weight perfect matching, provided edge weights are polynomially bounded. As a consequence, for settling the long-standing open problem of obtaining an NC perfect matching algorithm, it suffices to obtain an NC algorithm for the decision problem. We believe this new fact has qualitatively changed the nature of this open problem.

    The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, recently Goldwasser and Grossman gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs.

    An equivalent way of stating our main result is: We give an NC algorithm for finding a minimum weight perfect matching in a general graph with polynomially bounded edge weights, provided our algorithm is given access to an oracle for the decision problem. Our result is built on the work of Anari and Vazirani which used planarity of the input graph critically; in fact, in three different ways. The main challenge was to adapt these steps to general graphs by appropriately trading planarity with the use of the decision oracle.

    Based on the following joint paper with Nima Anari: https://arxiv.org/pdf/1901.10387.pdf

  • Apr 24: Yin Tat Lee on Linear Programs and Empirical Risk Minimizations in n^2.38 time via Sketching

    We give an alternative way to solve linear programs in $n^{2.38}$ time. This method is based on sketching instead of sampling. Our new algorithm can be naturally extended to empirical risk minimization.

    Joint work with Zhao Song and Richard Zhang

  • Apr 17: Sidhanth Mohanty on Robust Community Detection in regular graphs

    Given either a random d-regular graph or a random k-colorable d-regular graph, for what values of k is there an efficient algorithm to distinguish between the two distributions? Heuristics based on statistical physics predict that efficient algorithms for this problem exist when k is at most $\sqrt{d-1}$. This talk will present the main ideas behind algorithms that solve the more general distinguishing problem of community detection arbitrarily close to the predicted thresholds, and which also satisfy certain robustness guarantees, i.e. they still correctly solve the distinguishing problem after a small number of adversarial perturbations.

  • Apr 10: Aleksandar Nikolov on Sticky Brownian Rounding and its Applications to Constraint Satisfaction Problems

    Semidefinite programming is a powerful tool in the design and analysis of approximation algorithms for combinatorial optimization problems. In particular, the random hyperplane rounding method of Goemans and Williamson has been extensively studied for more than two decades, resulting in various extensions to the original technique and beautiful algorithms for a wide range of applications. Despite the fact that this approach yields tight approximation guarantees for some problems, like Max Cut, for many others, like Max Sat, Max DiCut, and constraint satisfaction problems with global constraints, the tight approximation ratio is still unknown. One of the main reasons for this is the fact that very few techniques for rounding semidefinite relaxations are known.

    In this work, we present a new general and simple method for rounding semidefinite programs, based on Brownian motion. Our approach is inspired by recent results in algorithmic discrepancy theory. We develop and present tools for analyzing our new rounding algorithms, utilizing mathematical machinery from the theory of Brownian motion, complex analysis, and partial differential equations. Focusing on constraint satisfaction problems, we apply our method to several classical problems, including Max Cut, Max 2-Sat, and Max DiCut, and derive new algorithms that are competitive with the best known results. We further show that our algorithms can be used, together with the Sum of Squares hierarchy, to approximate constraint satisfaction problems subject to multiple global cardinality constraints.
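    A toy discretization of the rounding scheme (an illustrative rendering, not the authors' implementation; the starting point and step size are arbitrary choices): each SDP vector v_i drives a correlated random walk on [-1, 1], a coordinate "sticks" once it hits the boundary, and the final signs define the cut.

```python
import numpy as np

def sticky_brownian_round(V, dt=1e-3, rng=None):
    """V: k x d array of unit vectors from an SDP solution. Returns signs in {-1, +1}."""
    rng = rng or np.random.default_rng()
    k, d = V.shape
    x = np.zeros(k)             # fractional assignments, started at 0
    stuck = np.zeros(k, bool)   # coordinates that have hit the boundary
    while not stuck.all():
        g = rng.standard_normal(d) * np.sqrt(dt)
        x[~stuck] += V[~stuck] @ g       # correlated increments: Cov ~ v_i . v_j
        x = np.clip(x, -1.0, 1.0)
        stuck |= np.abs(x) >= 1.0        # stick at the boundary forever
    return np.sign(x)

# Example: random unit vectors standing in for an SDP solution.
rng = np.random.default_rng(1)
V = rng.standard_normal((10, 5))
V /= np.linalg.norm(V, axis=1, keepdims=True)
print(sticky_brownian_round(V, rng=rng))
```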

    Joint work with Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Roy Schwartz, and Mohit Singh

  • Apr 03: Ben Fisch on Tight Proofs of Space

    The talk will survey proofs-of-space (PoS), providing an overview of what is achieved by existing constructions and the variety of techniques used. A PoS is an interactive protocol in which a prover demonstrates that it is persistently using some minimum amount of space. The prover posts a commitment specifying the size of this space N, and then repeatedly answers challenges from a public-coin verifier with succinct proofs. The prover is only able to produce valid proofs with non-negligible probability if it persistently stores incompressible advice of size Omega(N). However, there is a spectrum of security definitions for what this protocol should guarantee more precisely. Following the overview, I will discuss my recent results on arbitrarily tight proofs of space. Many constructions leave a large gap between the space N that the prover claims to be using and the lower bound on how much space a successful adversarial prover must use. An arbitrarily tight PoS construction can be tuned to make this gap arbitrarily small, while maintaining efficiency.

  • Mar 27: No Lunch (Spring Break)
  • Mar 20: Gautam Kamath on Privately Learning High-Dimensional Distributions

    We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian in R^d and learning a product distribution in {0,1}^d in total variation distance. The sample complexity of our algorithms nearly matches the sample complexity of the optimal non-private learners for these tasks in a wide range of parameters. Thus, our results show that privacy comes essentially for free for these problems, providing a counterpoint to the many negative results showing that privacy is often costly in high dimensions. Our algorithms introduce a novel technical approach to reducing the sensitivity of the estimation procedure that we call recursive private preconditioning, which may find additional applications.
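    As background (this is the textbook Gaussian mechanism, not the recursive private preconditioning technique of the talk): clip samples to bound the sensitivity of the empirical mean, then add calibrated noise. All parameters below are illustrative.

```python
import numpy as np

def dp_mean(samples, clip_radius, eps, delta, rng=None):
    """(eps, delta)-differentially private estimate of the mean of the rows of `samples`.

    Clipping each sample to L2 norm <= clip_radius bounds the L2 sensitivity of
    the empirical mean by 2 * clip_radius / n; Gaussian noise calibrated to that
    sensitivity then gives (eps, delta)-DP.
    """
    rng = rng or np.random.default_rng()
    n, d = samples.shape
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    clipped = samples * np.minimum(1.0, clip_radius / np.maximum(norms, 1e-12))
    sensitivity = 2.0 * clip_radius / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)

rng = np.random.default_rng(2)
X = rng.normal(3.0, 1.0, size=(100000, 5))
print(dp_mean(X, clip_radius=10.0, eps=1.0, delta=1e-6, rng=rng))
```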

    Joint work with Jerry Li, Vikrant Singhal, and Jonathan Ullman. Preprint available here: https://arxiv.org/abs/1805.00216.

  • Mar 13: Prasad Raghavendra on What are Polymorphisms?

    What makes 2-SAT easy, but 3-SAT NP-hard? Why is approximating Max Cut to a 0.878 factor easy, but approximating it to a 0.879 factor possibly hard? Why is submodular minimization easy? "Polymorphisms" is one natural answer. In this talk, I will define polymorphisms and broadly survey the large body of work and future lines of inquiry.
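    A concrete instance of the phenomenon: the ternary majority operation is a polymorphism of 2-SAT, i.e. applying it coordinate-wise to any three satisfying assignments always yields another satisfying assignment. A brute-force check on a small (hypothetical) instance:

```python
from itertools import product

# A small 2-SAT instance: clauses are pairs of literals, where literal +i / -i
# means variable i is true / false (variables are 1-indexed).
clauses = [(1, 2), (-1, 3), (2, 3)]
n = 3

def satisfies(assignment, clauses):
    lit = lambda l: assignment[abs(l) - 1] if l > 0 else not assignment[abs(l) - 1]
    return all(lit(a) or lit(b) for (a, b) in clauses)

def majority(a, b, c):
    # coordinate-wise majority of three 0/1 vectors
    return tuple((x + y + z) >= 2 for x, y, z in zip(a, b, c))

solutions = [s for s in product([0, 1], repeat=n) if satisfies(s, clauses)]

# Majority of any three satisfying assignments is again satisfying.
for a, b, c in product(solutions, repeat=3):
    assert satisfies(majority(a, b, c), clauses)
print(f"checked {len(solutions)**3} triples: majority preserves satisfaction")
```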

  • Mar 06: No Lunch (Visit Days)
  • Feb 27: Leonard J. Schulman on Invitation to Causal Inference

    The scientific method rests upon the careful collection and analysis of empirical data. Much of this data is gathered for the purpose of potential intervention in medicine, public health, environmental regulation, social policy, etc. A decision-maker cannot take advantage of correlations and other structural characterizations of the data without knowing about causal relationships between variables.

    Historically, causality has been teased apart from correlation through controlled experiments. However, we must often make do with passive observation, and then this challenge has no easy answer.

    Remarkably, there are some situations in which statistically defensible causal inference is possible even in the absence of controlled experiments. This is the subject of the theory of structured causal models which has been developed mainly in the last three decades. I plan to briefly describe the framework and mention (a) some results, joint with Piyush Srivastava, about the robustness of this theory; (b) some ongoing work with students relating to inference of mixture models; (c) many open questions.

  • Feb 20: Paris Syminelakis on Algorithms for Kernel Evaluation in High Dimensions

    Kernel evaluation is a basic primitive in machine learning and scientific computing: given a set of n points P and a kernel function k(x,y), for a given query q produce a multiplicative approximation to \sum_{x \in P} k(x,q). This primitive is used in many-body simulations, non-parametric regression, topological data analysis and comparing and testing distributions. Although the problem has been well studied in low dimensions, surprisingly, in high dimensions until recently the best known algorithm (under worst-case assumptions) was based on uniform random sampling. In this talk, we will give an overview of a recent line of work that obtains sub-linear algorithms for this problem for a variety of kernels and connects it to the complexity of well studied problems like Approximate Near Neighbors and Orthogonal Vectors. The main new ideas are novel ways to perform importance sampling in high dimensions through Randomized Space Partitions.
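    The uniform-sampling baseline mentioned above, sketched: average the kernel over a random subset of P and rescale. It degrades when the sum is dominated by a few points, which is what importance sampling via randomized space partitions addresses. The Gaussian kernel and sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kernel(x, q, sigma=1.0):
    return np.exp(-np.linalg.norm(x - q, axis=-1) ** 2 / (2 * sigma**2))

n, d = 100000, 20
P = rng.standard_normal((n, d))
q = rng.standard_normal(d)

exact = gaussian_kernel(P, q).sum()

# Uniform sampling: average the kernel over s random points, then scale by n.
s = 2000
idx = rng.integers(0, n, size=s)
estimate = n * gaussian_kernel(P[idx], q).mean()

print(exact, estimate)  # close when the sum is not too concentrated
```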

  • Feb 13: Chinmay Nirkhe on Good approximate quantum LDPC codes from spacetime Hamiltonians

    We study approximate quantum low-density parity-check (QLDPC) codes, which are approximate quantum error-correcting codes specified as the ground space of a frustration-free local Hamiltonian, whose terms do not necessarily commute. Such codes generalize stabilizer QLDPC codes, which are exact quantum error-correcting codes with sparse, low-weight stabilizer generators (i.e. each stabilizer generator acts on a few qubits, and each qubit participates in a few stabilizer generators). Our investigation is motivated by an important question in Hamiltonian complexity and quantum coding theory: do stabilizer QLDPC codes with constant rate, linear distance, and constant-weight stabilizers exist?

    We show that obtaining such optimal scaling of parameters (modulo polylogarithmic corrections) is possible if we go beyond stabilizer codes: we prove the existence of a family of [[N,k,d,ε]] approximate QLDPC codes that encode k = Ω(N/polylog N) logical qubits into N physical qubits with distance d = Ω(N / polylog N) and approximation infidelity ε = 1/polylog N.

  • Feb 06: Sam Hopkins on How to Estimate the Mean of a Random Vector, Efficiently

    I will discuss a new algorithm for a (very) old statistical problem: given i.i.d. samples x_1, ..., x_n from a d-dimensional random vector X, estimate the mean of X.

    The focus will be on the question: what are the smallest confidence intervals achievable when we wish to make very mild assumptions on X, for instance only that its covariance exists? Are information-theoretically optimal confidence intervals achievable in polynomial time?

    In the course of answering the latter question affirmatively, I will introduce a new SDP-based notion of the median of a set of points in d dimensions.
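    For context, the classical one-dimensional estimator that achieves sub-Gaussian-style confidence intervals assuming only finite variance is median-of-means; the talk's subject is an efficient high-dimensional analogue. A minimal sketch (the heavy-tailed example distribution is illustrative):

```python
import numpy as np

def median_of_means(samples, num_buckets):
    """Split samples into buckets, average each bucket, return the median of averages."""
    rng = np.random.default_rng(4)
    samples = rng.permutation(samples)
    buckets = np.array_split(samples, num_buckets)
    return np.median([b.mean() for b in buckets])

# Heavy-tailed data: this Pareto distribution has finite variance, but the
# empirical mean fluctuates much more than median-of-means does.
rng = np.random.default_rng(5)
x = rng.pareto(3.0, size=10000)
print(x.mean(), median_of_means(x, num_buckets=50))
```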

    Based on the work: https://arxiv.org/abs/1809.07425

  • Jan 30: Nima Anari on Matroids Expand

    It has been conjectured by Mihail and Vazirani that if we take the graph formed by vertices and edges of a polytope whose vertices have 0/1 coordinates, then the edge expansion of this graph is at least 1. The original motivation for this conjecture was to study Markov Chain Monte Carlo methods that sample from particular set families called matroids. I will give a proof of the expansion conjecture for arbitrary matroids.

    The key to the proof lies in a connection between matroids and the multivariate polynomials associated to them on the one hand, and high-dimensional expanders and random walks on them on the other. I will talk about a property of multivariate polynomials which we call complete log-concavity, and how it can be used to analyze natural Markov chains used for sampling from some widely studied discrete distributions.
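    The random walk in question, specialized to the graphic matroid (bases = spanning trees): drop a uniformly random tree edge and add a uniformly random edge reconnecting the two pieces. A small self-contained simulation on a toy graph:

```python
import random

def component_of(tree_edges, removed, n):
    """Vertices reachable from removed[0] in the forest tree_edges - {removed}."""
    adj = {v: [] for v in range(n)}
    for (u, v) in tree_edges:
        if (u, v) != removed:
            adj[u].append(v); adj[v].append(u)
    seen, stack = {removed[0]}, [removed[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return seen

def basis_exchange_step(tree, edges, n, rng):
    e = rng.choice(sorted(tree))            # drop a random tree edge ...
    side = component_of(tree, e, n)
    candidates = [f for f in edges          # ... and reconnect the two pieces
                  if (f[0] in side) != (f[1] in side)]
    f = rng.choice(candidates)              # e itself is always a candidate
    return (tree - {e}) | {f}

# Random walk on spanning trees of the complete graph K_5.
n = 5
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
tree = {(i, i + 1) for i in range(n - 1)}   # start from a path
rng = random.Random(6)
for _ in range(100):
    tree = basis_exchange_step(tree, edges, n, rng)
print(tree)
```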

    Based on joint work with Shayan Oveis Gharan, Kuikui Liu, and Cynthia Vinzant.

  • Jan 23: Dorit Aharonov on Stoquastic PCP vs. Randomness

    We connect the major open question of derandomizing MA to a variant of a seemingly unrelated major problem: the quantum PCP conjecture. Bravyi and Terhal [2008] gave a surprising quantum-oriented problem which is PromiseMA-complete: it is called the "stoquastic local Hamiltonian problem", and the (non-trivial) containment in MA is related to a random walk on n-bit strings. We show that the gapped version of this problem (a quantum analogue of k-SAT with a large promise gap) is in NP. This implies that the MA=NP question is equivalent to the question of the existence of a gap-amplification procedure for stoquastic local Hamiltonians (roughly: "stoquastic PCP"). With an additional (natural?) assumption on the gap amplification procedure, "stoquastic PCP" also implies RP=P.

    Joint work with Alex B. Grilo.

Fall 2018

  • Dec 05: Arun Ganesh on Optimal Sequence Length Requirements for Phylogenetic Tree Reconstruction with Indels

    We consider the phylogenetic tree reconstruction problem, which models the practical problem of reconstructing evolutionary trees given DNA (or other) sequences. In the model, there is a binary tree (which is unknown to us) with n leaves where the root is assigned a uniformly random bitstring, and each other node in the tree inherits a mutated version of its parent's bitstring. Three types of mutations are possible for each bit: substitution (where a bit is flipped), insertion (where a bit inserts a random bit to its right), and deletion (where a bit is deleted). We are given the bitstrings at the leaves, and our goal is to reconstruct the unknown tree with high probability. In particular, we would like to reconstruct the tree using bitstrings that are as short as possible.
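    A quick simulator for the substitution-only special case of the model (no insertions or deletions); the tree shape, string length, and flip probability are illustrative:

```python
import random

def simulate_leaves(depth, length, flip_prob, rng):
    """Complete binary tree: the root gets a random bitstring, and each child
    inherits its parent's string with every bit flipped independently w.p. flip_prob."""
    def mutate(bits):
        return [b ^ (rng.random() < flip_prob) for b in bits]
    def grow(bits, d):
        if d == 0:
            return [bits]
        return grow(mutate(bits), d - 1) + grow(mutate(bits), d - 1)
    root = [rng.randrange(2) for _ in range(length)]
    return grow(root, depth)

rng = random.Random(7)
leaves = simulate_leaves(depth=4, length=30, flip_prob=0.05, rng=rng)

# Hamming distance between leaves correlates with tree distance, which is
# what distance-based reconstruction methods exploit.
ham = lambda a, b: sum(x != y for x, y in zip(a, b))
print(ham(leaves[0], leaves[1]), ham(leaves[0], leaves[-1]))
```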

    In this talk, as a warm-up I'll review some simple algorithmic results and information-theoretic lower bounds for the substitution-only case. I'll then discuss the challenges introduced by insertions and deletions, and give a high-level overview of our recent results, which show how to reconstruct the tree with polylogarithmic sequence lengths when insertion and deletion occur with constant probability on each edge. Prior to our results, the best known results either used polynomial sequence lengths or required the probabilities of insertions and deletions to be at most O(\frac{1}{\log^2 n}) on each edge.

    Based on joint work with Qiuyi (Richard) Zhang.

  • Nov 28: Mohammad Hajiabadi on Trapdoor Functions from the Computational Diffie-Hellman Assumption

    Trapdoor functions (TDFs) are a fundamental primitive in cryptography. Yet, the current set of assumptions known to imply TDFs is surprisingly limited, when compared to public-key encryption. We present a new general approach for constructing TDFs. Specifically, we give a generic construction of TDFs from any Hash Encryption (Döttling and Garg [CRYPTO '17]) satisfying a property which we call recyclability. By showing how to build such hash-encryption schemes based on CDH, we obtain the first construction of TDFs with security proved under the CDH assumption. While TDFs from the Decisional Diffie-Hellman (DDH) assumption were previously known, the possibility of basing them on CDH had remained open since the introduction of the Diffie-Hellman assumption in the 70's.

    Joint work with Sanjam Garg.

  • Nov 21: No Lunch (Thanksgiving)
  • Nov 14: Turing Laureate Lunch - Andrew Yao
  • Nov 07: Turing Laureate Lunch - William Kahan
  • Oct 31: Turing Laureate Lunch - Michael Stonebraker
  • Oct 24: Turing Laureate Lunch - Richard Karp
  • Oct 17: Turing Laureate Lunch - Manuel Blum
  • Oct 10: Turing Laureate Lunch - David Patterson
  • Oct 03: Turing Laureate Lunch - Shafi Goldwasser and Silvio Micali
  • Sep 26: Urmila Mahadev on Classical Homomorphic Encryption for Quantum Circuits

    I'll discuss the question of blind quantum computation on the cloud: can a classical client delegate a desired quantum computation to a remote quantum server while hiding all data from the server? This question has been studied since 2005 in various settings, and I will begin by giving a brief history of previous work. I will then describe a recent result which shows that certain classical homomorphic encryption schemes suffice for this task.

  • Sep 19: Aloni Cohen on Towards Modeling Singling Out

    Data privacy laws---like HIPAA, FERPA, Title 13 in the US, and the GDPR in the EU---govern the use of sensitive personal information. They aim to delineate normative boundaries of appropriate use and impose steep penalties upon rule breakers. Conceptually, these laws are based on notions including personally-identifiable information, linkage, distinguishability, anonymization, and inference. Practically, adherence to these laws is often achieved using a variety of ad hoc privacy enhancing techniques, including $k$-anonymity, bucketing, rounding, pseudonymization, and swapping. It is becoming increasingly clear that these techniques are often inadequate for providing the privacy protection envisioned by these laws. New techniques for data privacy are being developed in industry, government, and the academy. But a significant conceptual gap still exists between legal and technical thinking around data privacy. This has resulted in uncertainty as to which technical offerings are appropriate. We aim to address this uncertainty by translating between the legal and the technical.

    We suggest a formalization for the GDPR's notion of singling out. More specifically, we examine what it means for a data anonymization mechanism to ensure security against singling out in a data release. Ultimately, our goal is to be able to reason about whether certain classes of techniques (e.g., k-anonymity, differential privacy, pseudonymization) effectively prevent singling out attacks and to understand the strengths and weaknesses of the GDPR's protection against singling out more generally. As motivation for this work, I will describe our successful attack as part of the world's first bug bounty challenge for anonymized data re-identification.

    Based on joint work with Kobbi Nissim.

  • Sep 12: Nick Spooner on Multi-prover interactive proofs: Classical, quantum, and no-signaling

    The study of multiplayer games underlies many fundamental questions in both complexity theory and quantum mechanics. In this talk I will give an overview of how multi-prover interactive proofs provide a complexity-theoretic view of the power of quantum entanglement and no-signaling strategies. This reveals a rich and often surprising complexity landscape. In particular I will discuss results due to Ito and Vidick (2012), and Kalai, Raz and Rothblum (2013), which elucidate the power of entangled and no-signaling strategies, respectively.

  • Sep 05: Avi Wigderson on Group actions beget polytopes

    I'll give examples of subsets of Euclidean space that arise in a wide variety of mathematical contexts. Common to all, but surprising in many cases, is that they are convex polytopes. Moreover, they all can be seen as arising from group actions. This viewpoint, and general results of geometric invariant theory, yield their convex and linear structures in a unified way, and motivate naming them moment polytopes.

    I will describe a recent result which provides an efficient weak separation oracle for large classes of moment polytopes. This is particularly interesting as most are only implicitly described, and have exponentially many facets!

    I will assume no special background, but as this talk is short, I will have to be sketchy.

Spring 2018

  • May 02: Vijay Vazirani on Distributive Lattices, Stable Matchings, and Robust Solutions

    Our results are:

    • We introduce the problem of finding stable matchings that are robust to errors in the input.
    • We give an efficient algorithm for the following class of errors: permute arbitrarily the preference list of any one boy or any one girl. This algorithm critically uses the next result.
    • We prove a generalization of the classic theorem of Birkhoff (1937) on finite distributive lattices.
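    For reference, the classical Gale-Shapley deferred-acceptance algorithm, whose set of stable matchings carries the distributive lattice structure that these results build on; a minimal sketch with illustrative preference lists:

```python
def gale_shapley(boys_prefs, girls_prefs):
    """Boy-proposing deferred acceptance. Preferences are lists of names,
    most preferred first. Returns the boy-optimal stable matching."""
    rank = {g: {b: i for i, b in enumerate(p)} for g, p in girls_prefs.items()}
    next_choice = {b: 0 for b in boys_prefs}
    engaged = {}                    # girl -> boy
    free = list(boys_prefs)
    while free:
        b = free.pop()
        g = boys_prefs[b][next_choice[b]]
        next_choice[b] += 1
        if g not in engaged:
            engaged[g] = b
        elif rank[g][b] < rank[g][engaged[g]]:   # g prefers b: swap
            free.append(engaged[g])
            engaged[g] = b
        else:
            free.append(b)                       # g rejects b
    return {b: g for g, b in engaged.items()}

boys = {"x": ["A", "B"], "y": ["B", "A"]}
girls = {"A": ["y", "x"], "B": ["x", "y"]}
print(gale_shapley(boys, girls))  # {'x': 'A', 'y': 'B'}
```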

    Based on two recent joint papers with Tung Mai: https://arxiv.org/pdf/1804.00553.pdf and https://arxiv.org/pdf/1804.05537.pdf

  • Apr 25: Fotis Iliopoulos on A New Perspective on Stochastic Local Search and the Lovasz Local Lemma

    We present a new perspective on the analysis of stochastic local search algorithms, via linear algebra, and use it to establish a new criterion for their convergence. Our criterion captures and unifies the analysis of all currently known LLL-inspired local search algorithms, including all current applications of the entropy compression method. It can be seen as a generalization of the Lovasz Local Lemma that quantifies the interaction strength of bad events, so that weak interactions form correspondingly small obstacles to algorithmic convergence. As a demonstration of its power, we use our criterion to analyze a complex local search algorithm for the classical problem of coloring graphs with sparse neighborhoods. We prove that any improvement over our algorithm would require a major (and unexpected) breakthrough in random graph theory, suggesting that our criterion reaches the edge of tractability for this problem. Finally, we consider questions such as the number of possible distinct final states and the probability that certain portions of the state space are visited by a local search algorithm. Such information is currently available for the Moser-Tardos algorithm and for algorithms satisfying a combinatorial notion of commutativity introduced by Kolmogorov. Our framework provides a very natural and more general notion of commutativity (essentially matrix commutativity) which allows the recovery of all such results with much simpler proofs.
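    The Moser-Tardos algorithm mentioned above, in its simplest k-SAT incarnation: while some clause is violated, resample its variables uniformly at random. A minimal sketch on an under-constrained random instance where the algorithm converges quickly (parameters illustrative):

```python
import random

rng = random.Random(8)

# Random 3-SAT instance: a literal is (var, sign), satisfied iff assignment[var] == sign.
n_vars, n_clauses, k = 60, 40, 3
clauses = [[(v, rng.randrange(2)) for v in rng.sample(range(n_vars), k)]
           for _ in range(n_clauses)]

def violated(assignment, clause):
    return all(assignment[v] != s for (v, s) in clause)

assignment = [rng.randrange(2) for _ in range(n_vars)]
steps = 0
while True:
    bad = [c for c in clauses if violated(assignment, c)]
    if not bad:
        break
    for (v, _) in rng.choice(bad):      # resample the variables of a violated clause
        assignment[v] = rng.randrange(2)
    steps += 1
print(f"satisfying assignment found after {steps} resampling steps")
```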

    Joint work with Dimitris Achlioptas and Alistair Sinclair.

  • Apr 18: Ilya Razenshteyn on Nearest Neighbors for General Distances

    The Approximate Nearest Neighbor Search (ANN) problem is defined as follows. Given n points in \mathbb{R}^d, we would like to compute and store a small amount of auxiliary information and use it to answer queries of the following kind very quickly: given a query point, return one of the n points that is approximately closest to the query. The ANN problem can be defined with respect to any norm, and we would like to understand how the norm affects the complexity of the problem. We are interested in algorithms that have polynomial dependence on the dimension d.

    Traditionally, the ANN problem has been studied for the \ell_1 and \ell_2 norms, with very few exceptions. I will outline a new framework based on nonlinear spectral gaps for the ANN problem over general norms, and discuss its implications. The main ingredient is a new randomized partitioning procedure for a set of points lying in a normed space. This procedure, in turn, builds on a recent geometric result of Naor: any graph embeddable in a d-dimensional normed space such that

    • all edges have length at most one, and
    • most pairs of vertices are \gg \log d apart

    must have a sparse cut.

    Based on work joint with Alexandr Andoni, Assaf Naor, Aleksandar Nikolov, and Erik Waingarten.

  • Apr 11: Aaron Schild on Spectral Subspace Sparsification

    We introduce a new approach to spectral sparsification that approximates the quadratic form of the pseudoinverse of a graph Laplacian restricted to a subspace with dimension d. We show that sparsifiers with at most O(d(\log d)/\epsilon^2) edges exist. Our setting generalizes that of Schur complement sparsifiers. Our approach produces sparsifiers by sampling a uniformly random spanning tree of the input graph and using that tree to guide an edge elimination procedure that contracts, deletes, and reweights edges. In the context of Schur complement sparsifiers, our approach has two benefits over prior work. First, it produces a sparsifier in m^{1+o(1)} time rather than \tilde{O}(m + n\epsilon^{-2}) time, which is faster for small enough values of \epsilon. We directly exploit this to compute \epsilon-approximate effective resistances for a set P of vertex pairs in m^{1+o(1)} + \tilde{O}(|P|\epsilon^{-2}) time, which improves on a \tilde{O}(m + (n + |P|)\epsilon^{-2}) time algorithm from prior work (Durfee-Kyng-Peebles-Rao-Sachdeva '17). Secondly, it yields sparsifiers that are reweighted minors of the input graph. As a result, we give a near-optimal answer to an \ell_2-version of the Steiner point removal problem.

    A key ingredient of our algorithm is a subroutine of independent interest: a near-linear time algorithm that, given a set of vertices S, builds a data structure from which we can query a \delta-multiplicative approximation to the decrease in the effective resistance between two vertices after identifying all vertices in S to a single vertex with inverse polynomial additional additive error in O((\log n)/\delta^2) time.

    Joint work with Huan Li

  • Apr 04: Henry Yuen on How complex can entangled games be?

    An important goal in quantum complexity theory is to understand the power of interactive proofs involving entangled provers. In the early 90s, Babai, Fortnow and Lund showed that MIP, the set of problems decidable by (classical) multiprover interactive proofs, is exactly NEXP. Currently, we are far from an analogous characterization of the class MIP*, the analogue of MIP with entangled provers. All that is known is that MIP* is at least as powerful as NEXP, but no complexity upper bound is known — it may even contain undecidable languages!

    I'll discuss some new evidence that MIP* might be much, much harder than MIP: for any time-constructible function T(n), the class of problems solvable in nondeterministic time T(n) has interactive proofs with entangled provers where the completeness/soundness gap is 1/log T(n).

    As a corollary of our results, we obtain an alternate proof of William Slofstra’s breakthrough result that the problem of determining whether a quantum multiprover interactive protocol accepts with probability 1 is undecidable.

    Joint work with Joseph Fitzsimons, Zhengfeng Ji and Thomas Vidick.

  • Mar 28: No Lunch (Spring Break)
  • Mar 21: Dima Kogan on The Discrete-logarithm Problem with Preprocessing

    We study discrete-log algorithms that use preprocessing. In this model, an adversary may use a very large amount of precomputation to produce an "advice" string about a specific group (e.g., NIST P-256). In a subsequent online phase, the adversary's task is to use the preprocessed advice to quickly compute discrete logarithms in the group. Motivated by surprising recent preprocessing attacks on the discrete-log problem, we study the power and limits of such algorithms.

    In particular, we focus on generic algorithms that operate in every cyclic group. We show a lower bound on the success probability of any generic discrete-log algorithm with preprocessing. Our lower bound, which is tight up to logarithmic factors, uses a synthesis of incompressibility techniques and classic methods for generic-group lower bounds. We apply our techniques to prove related lower bounds for the CDH, DDH, and multiple-discrete-log problems.

    Time permitting, we will also briefly discuss new generic preprocessing attacks for certain decisional-type problems in groups.

    Joint work with Henry Corrigan-Gibbs

  • Mar 14: No Lunch (Visit Days)
  • Mar 07: Sam Hopkins on High-dimensional clustering, mixture models, and sum of squares proofs

    I will describe a new algorithm for some high-dimensional unsupervised learning problems, including clustering samples from mixtures of well-separated Gaussians. This is the first polynomial-time algorithm to tolerate between-cluster separation less than what is tolerated by greedy (single-linkage) clustering, a long-standing benchmark set by Dasgupta and Schulman in 2000. The design of the algorithm is based on a "proofs to algorithms" methodology: we first design a very simple proof of cluster identifiability in the well-separated mixture model, show that this proof is captured in the SoS proof system, and then use it as a dual certificate to analyze a sum-of-squares SDP.

    Based on joint work with Jerry Li, to appear in STOC 2018. (See also independent work by Kothari, Steinhardt, and Steurer, also to appear in STOC 2018.)

  • Feb 28: Manuel Sabin on Fine-Grained Derandomization: From Problem-Centric to Resource-Centric Complexity

    We show that popular hardness conjectures about problems from the field of fine-grained complexity theory imply structural results for resource-based complexity classes. Namely, we show that if either k-Orthogonal Vectors requires n^{k - o(1)} time or k-CLIQUE requires n^{\omega k/3 - o(1)} time (for \omega the matrix multiplication constant) to count on randomized machines for all but finitely many input lengths, then the following hold:

    • BPP can be decided in polynomial time using only n^{\alpha} random bits on average over any efficient input distribution, for any constant \alpha > 0;
    • BPP can be decided in polynomial time with no randomness on average over the uniform distribution;
    • DTIME[t(n)] \nsubseteq BPP, for time-constructible t(n) = n^{\omega(1)}.

    These derandomizations improve over previous ones achieved from worst-case uniform assumptions by succeeding on all but finitely many input lengths, as is wanted in asymptotics. Previously, derandomizations from worst-case uniform assumptions were only known to succeed on infinitely many input lengths. It is specifically the structure and moderate hardness of the k-Orthogonal Vectors and k-CLIQUE problems that makes removing this restriction possible.

    Via this uniform derandomization, we connect the problem-centric and resource-centric views of complexity theory by showing that exact hardness assumptions made about specific problems like k-CLIQUE imply structural results like EXP \neq BPP. Viewed pessimistically, this result poses a barrier to proving some of the main assumptions of fine-grained complexity, lest we also attain major breakthroughs in classical complexity theory. Alternatively, we can hope to make progress on problems like EXP vs. BPP by working on very concrete conjectures about specific problems.
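    For concreteness, the k = 2 case of the Orthogonal Vectors problem underlying these conjectures: given two sets of n Boolean vectors in d dimensions, decide whether some pair is orthogonal. A brute-force O(n^2 d) sketch; the conjecture asserts, roughly, that this cannot be beaten to n^{2 - \epsilon} time for d = polylog n.

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Brute force O(n^2 * d): is there a in A, b in B with <a, b> = 0?"""
    return any(all(x * y == 0 for x, y in zip(a, b))
               for a, b in product(A, B))

A = [(1, 0, 1, 0), (0, 1, 1, 0)]
B = [(1, 1, 0, 0), (0, 0, 0, 1)]
print(has_orthogonal_pair(A, B))  # True: (1, 0, 1, 0) . (0, 0, 0, 1) = 0
```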

  • Feb 21: Thodoris Lykouris on Small-loss bounds for online learning with partial information

    We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems where the learner observes as feedback only losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so-called "small-loss" o(\alpha L^{\star}) regret bounds with high probability, where \alpha is the independence number of the graph, and L^{\star} is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e. utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), as well as learning with slowly changing (shifting) comparators.

    In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of \tilde{O}(\sqrt{dL^{\star}}) for actual regret, where d is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal \tilde{O}(\sqrt{\kappa L^{\star}}) regret guarantee for fixed feedback graphs with clique-partition number at most \kappa.

    Joint work with Karthik Sridharan and Eva Tardos.

  • Feb 14: Ido Nachum on Learners that Use Little Information

    We will talk about learning algorithms that are restricted to using a small amount of information from their input sample. We introduce a category of learning algorithms we term d-bit information learners, which are algorithms whose output conveys at most d bits of information about their input. A central theme in this work is that such algorithms generalize. If time permits, we will talk about lower and upper bounds regarding the information complexity of learning.

  • Feb 07: Emanuele Natale on State-Efficient Plurality Consensus in Population Protocols

    We consider the classical Population Protocol model: given a generic directed strongly-connected graph, a finite-state automaton a_u, called an agent, resides on each node u; at each discrete-time step, an edge (u,v) of the graph is selected, and a_u reads the state of a_v and updates its own state accordingly.

    The only, classical, assumption about how edges are chosen is that the scheduler satisfies the following fairness condition: if a configuration c appears infinitely often in the execution of the protocol, then any configuration that can be reached from c under a suitable sequence of edge-activation choices also appears infinitely often.

    We study the Exact Plurality Consensus Problem: each agent is initially assigned a value, and the goal of the agents is to eventually have a correct guess on which value is initially the most frequent one in the system.

    An open problem is to determine the optimal space complexity of the Exact Plurality Consensus Problem, i.e. the number of states each agent needs in order to solve it. In fact, the best algorithm so far for the general multi-valued case had an exponential number of states.

    In this work, we provide the first exact plurality consensus protocol with \tilde{O}(k^2) states. We further prove that k^2 states are necessary.
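    To get a feel for the model (this illustrates the model only, not the protocol from the talk), here is a simulation of the classical three-state "undecided-state dynamics" for binary majority under a uniformly random scheduler on the complete graph:

```python
import random

rng = random.Random(9)

# States: 'A', 'B', or 'U' (undecided). On an interaction where u reads v:
# a decided agent seeing the opposite opinion becomes undecided, and an
# undecided agent adopts the opinion it reads.
def update(u_state, v_state):
    if u_state == 'U' and v_state in 'AB':
        return v_state
    if u_state in 'AB' and v_state in 'AB' and u_state != v_state:
        return 'U'
    return u_state

n = 1000
agents = ['A'] * 600 + ['B'] * 400   # initial majority: A
steps = 0
while len(set(agents) - {'U'}) > 1:  # run until a single opinion survives
    u, v = rng.sample(range(n), 2)
    agents[u] = update(agents[u], agents[v])
    steps += 1
print(set(agents) - {'U'}, "after", steps, "interactions")
```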

  • Jan 31: Vijay Bhattiprolu on Approximability of matrix p-to-q norms

    We'll discuss some algorithmic and hardness results for the p-to-q matrix norm problem in the different regimes of p and q. In this talk we'll focus more on the algorithmic side, where we'll discuss a generalization of Krivine's algorithm for the Grothendieck problem to the case of p >= 2 >= q, which allows us to improve on known approximation algorithms for this regime.

    Based on two recent joint works with Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee and Madhur Tulsiani.

  • Jan 24: Theory Retreat Presentation

Fall 2017

  • Dec 06: Akshayaram Srinivasan on Two-round Multiparty Secure Computation under Minimal Assumptions

    Secure Multiparty Computation (MPC) allows a set of mutually distrusting parties to compute a joint function of their private input such that nothing else apart from the output of the function is leaked. In this work, we give a round-optimal construction of MPC under minimal assumptions.

    Based on joint work with Sanjam Garg.

  • Nov 29: Aviad Rubinstein on Distributed PCP Theorems for Hardness of Approximation in P

    We present a new model of probabilistically checkable proof (PCP), which we call "Distributed PCP": A satisfying assignment (x in {0,1}^n) to a CNF formula is shared between two parties (Alice knows x_1, ..., x_{n/2}, and Bob knows x_{n/2+1}, ..., x_n). Their goal is to jointly write a PCP for x, while exchanging little or no information.

    Using our new framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P.

    Mostly based on joint work with Amir Abboud and Ryan Williams.

  • Nov 22: No Lunch (Thanksgiving week)
  • Nov 15: Pasin Manurangsi on Parameterized Inapproximability of Dominating Set

    In the Dominating Set problem, we are given a graph G and an integer k, and the goal is to find a dominating set of size k in G. The parameterized regime focuses on the setting where k is small. In this case, the trivial algorithm solves the problem in O(n^{k + 1}) time and it can be sped up to O(n^k) for sufficiently large k. This is also known to be tight: any O(n^{k - \varepsilon})-time algorithm will refute the Strong Exponential Time Hypothesis (SETH). But what about approximation algorithms? E.g. can we find a dominating set of size 2k (given that G has a dominating set of size k) in time O(n^{k - \varepsilon})?
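    The trivial algorithm just mentioned, spelled out: enumerate all size-k subsets and check whether one dominates. A sketch; the example graph is a 6-cycle, whose domination number is 2.

```python
from itertools import combinations

def has_dominating_set(adj, k):
    """Trivial algorithm: try every size-k vertex subset, roughly O(n^{k+1}) work."""
    n = len(adj)
    closed = [set(adj[v]) | {v} for v in range(n)]   # closed neighborhoods
    return any(set().union(*(closed[v] for v in S)) == set(range(n))
               for S in combinations(range(n), k))

adj = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}  # the 6-cycle
print(has_dominating_set(adj, 1), has_dominating_set(adj, 2))  # False True
```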

    We give a negative answer to this question by showing that, assuming SETH, no O(n^{k - \varepsilon})-time algorithm can approximate the problem to within a (\log n)^{1/\text{poly}(k)} factor. Our result is obtained by establishing a connection between communication complexity and parameterized hardness of approximation, generalizing ideas from a recent breakthrough work of Abboud, Rubinstein and Williams (FOCS 2017). The connection is quite flexible and it also allows us to obtain lower bounds under other assumptions (the k-Sum Conjecture, ETH and W[1]-hardness). In this talk, I will outline this simple connection and, if time permits, discuss some related open questions. No prior knowledge of the topic is required.

    Based on joint work with Karthik C.S. and Bundit Laekhanukit

  • Nov 08: Ilias Diakonikolas on Computational Lower Bounds for Statistical Estimation Problems

    A number of techniques have been developed in information theory to establish sample complexity lower bounds for inference tasks. But how do we prove lower bounds on the computational complexity of statistical problems?

    In this talk, I will describe a technique that yields unconditional computational lower bounds for Statistical Query (SQ) algorithms — a powerful but restricted family of algorithms — for various high-dimensional statistical tasks. The prototypical application of our technique will be for the problem of learning mixtures of separated Gaussians. We show a super-polynomial gap between the sample complexity and the computational complexity of any Statistical Query algorithm for this problem.

    The talk will be based on this paper: https://arxiv.org/abs/1611.03473

    Joint work with Daniel Kane (UCSD) and Alistair Stewart (USC).

  • Nov 01: Jingcheng Liu on The Ising Partition Function: Zeros and Deterministic Approximation

    We study the problem of approximating the partition function of the ferromagnetic Ising model in graphs and hypergraphs. Our first result is a deterministic approximation scheme (an FPTAS) for the partition function in bounded degree graphs that is valid over the entire range of parameters β (the interaction) and λ (the external field), except for the case |λ|=1 (the "zero-field" case). A randomized algorithm (FPRAS) for all β,λ has long been known. Unlike most other deterministic approximation algorithms for problems in statistical physics and counting, our algorithm does not rely on the "decay of correlations" property. Rather, we exploit and extend machinery developed recently by Barvinok, and Patel and Regts, based on the location of the complex zeros of the partition function, which can be seen as an algorithmic realization of the classical Lee-Yang approach to phase transitions. Time permitting I'll show a sketch of how to prove Lee-Yang type theorems. No familiarity with the subject is assumed.
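    On tiny graphs the partition function can simply be enumerated, which makes the object in the talk concrete. One common parameterization (conventions for the interaction and field vary):

```python
import math
from itertools import product

def ising_Z(n, edges, beta, h):
    """Brute-force Ising partition function:
    Z = sum over sigma in {-1,+1}^n of
        exp(beta * sum_{(u,v) in E} sigma_u sigma_v + h * sum_v sigma_v)."""
    total = 0.0
    for sigma in product((-1, 1), repeat=n):
        energy = beta * sum(sigma[u] * sigma[v] for u, v in edges)
        energy += h * sum(sigma)
        total += math.exp(energy)
    return total

# Triangle graph, ferromagnetic interaction, small external field.
print(ising_Z(3, [(0, 1), (1, 2), (0, 2)], beta=0.5, h=0.1))
```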

    Based on joint work with Alistair Sinclair and Piyush Srivastava.

  • Oct 25: Fan Wei on Local max-cut in smoothed polynomial time

    In 1988, Johnson, Papadimitriou and Yannakakis wrote that "Practically all the empirical evidence would lead us to conclude that finding locally optimal solutions is much easier than solving NP-hard problems". Since then the empirical evidence has continued to amass but formal proofs of this phenomenon have remained elusive. A canonical (and indeed complete) example is the local max-cut problem for which no polynomial time method is known. In a breakthrough paper, Etscheid and Röglin proved that the smoothed complexity of local max-cut is quasi-polynomial. In this paper we prove smoothed polynomial complexity for local max-cut, thus confirming that finding locally optimal solutions for max-cut is much easier than solving it. In this short talk, I will give a gentle introduction to this problem and some proof sketch.
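    The local search in question is the FLIP dynamics: move a vertex to the other side of the cut whenever that increases the cut weight, and stop at a local optimum; the question is how many flips this takes. A minimal sketch on a random weighted complete graph (sizes and the improving-move rule are illustrative):

```python
import random

rng = random.Random(10)

n = 30
weights = {(i, j): rng.random() for i in range(n) for j in range(i + 1, n)}

def gain(side, v):
    """Change in cut weight if vertex v switches sides."""
    g = 0.0
    for u in range(n):
        if u == v:
            continue
        w = weights[(min(u, v), max(u, v))]
        g += w if side[u] == side[v] else -w
    return g

side = [rng.randrange(2) for _ in range(n)]
moves = 0
while True:
    improving = [v for v in range(n) if gain(side, v) > 0]
    if not improving:
        break                          # local optimum: no single flip helps
    side[rng.choice(improving)] ^= 1   # flip an improving vertex
    moves += 1
print(f"reached a local max cut after {moves} flips")
```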

    Joint work with Omer Angel, Sebastien Bubeck, and Yuval Peres.

  • Oct 18: Igor Shinkar on Testing Linearity against No-Signaling Strategies

    No-signaling strategies are collections of distributions with certain non-local correlations. They have been studied in Physics as a strict generalization of quantum strategies to understand the power and limitations of Nature's apparent non-locality. Recently, they have received attention in Theoretical Computer Science due to their connections to Complexity and Cryptography.

    We initiate the study of Property Testing against no-signaling strategies, focusing on the classical problem of linearity testing. We prove that any no-signaling strategy that passes the linearity test with high probability must be close to a quasi-distribution over linear functions. Quasi-distributions generalize the notion of probability distributions over global objects (such as functions) by allowing negative probabilities, while at the same time requiring that local views follow standard distributions (with non-negative probabilities). Quasi-distributions arise naturally in the study of Quantum Mechanics as a tool to describe various non-local phenomena in Nature.
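    The classical (single-function) linearity test here is the BLR test of Blum, Luby, and Rubinfeld: pick random x and y, and accept iff f(x) + f(y) = f(x + y) over F_2^n. A sketch, with an illustrative linear function and a noisy variant:

```python
import random

rng = random.Random(11)
n = 10

def blr_test(f, trials=1000):
    """Fraction of random (x, y) with f(x) + f(y) = f(x + y) over F_2^n (XOR)."""
    passed = 0
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        passed += (f(x) ^ f(y)) == f(x ^ y)
    return passed / trials

linear = lambda x: bin(x & 0b1011).count("1") % 2   # parity of a fixed mask: linear
noisy = lambda x: linear(x) ^ (x % 97 < 5)          # flipped on ~5% of inputs

print(blr_test(linear), blr_test(noisy))  # 1.0 vs. roughly 0.85
```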

    Our analysis of the linearity test relies on Fourier analytic techniques applied to quasi-distributions. Along the way, we also establish a general equivalence between no-signaling strategies and quasi-distributions, which we believe will provide a useful perspective on the study of no-signaling strategies beyond Property Testing.

    Joint work with Alessandro Chiesa and Peter Manohar.

  • Oct 11: Thatchaphol Saranurak on Expander decomposition

    I will present an almost-linear-time algorithm for decomposing graphs into expanders. Formally, given an n-node m-edge graph G = (V, E), we partition V into {V_1, \dots, V_k} for some k, such that 1) the induced subgraph of each part G[V_i] has high expansion, and 2) the number of edges crossing each part is small. This tool is useful in various contexts (e.g. near-linear-time algorithms for undirected and directed graphs, dynamic graph algorithms, and also approximation algorithms).

    The algorithm is a simple black box reduction to the balanced sparsest cut problem and takes O(m^{1 + o(1)}) time. To the best of my knowledge, previous algorithms either 1) take \Omega(mn) time, or 2) only guarantee that each part V_i contains some subgraph with high expansion or is contained in some subgraph with high expansion. (For ours, exactly G[V_i] has high expansion.)

    Joint work with Danupon Nanongkai and Christian Wulff-Nilsen.

  • Oct 04: Aaron Schild on Localization of electrical flows

    We show that in any graph, the average length of a flow path in an electrical flow between the endpoints of a random edge is O(\log^2 n). This is a consequence of a more general result which shows that the spectral norm of the entrywise absolute value of the transfer impedance matrix of a graph is O(\log^2 n). This result implies a simple oblivious routing scheme based on electrical flows in the case of transitive graphs.
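    The transfer impedance matrix is easy to experiment with numerically: its entries are read off from the Laplacian pseudoinverse. A sketch computing pairwise transfer impedances and effective resistances on a small (arbitrary) example graph:

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

n = 6
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 3)]  # a cycle plus a chord
Lp = np.linalg.pinv(laplacian(n, edges))

# Transfer impedance between edges e=(a,b) and f=(c,d): the potential drop
# across f when one unit of current enters at a and leaves at b.
def transfer(a, b, c, d):
    return Lp[a, c] - Lp[a, d] - Lp[b, c] + Lp[b, d]

# Diagonal entries are the effective resistances (leverage scores) of the edges.
print([round(transfer(u, v, u, v), 3) for u, v in edges])
```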

    Joint work with Satish Rao and Nikhil Srivastava.

  • Sep 27: Igor Pak on Complexity of short generating functions

    Short generating functions (GFs) are power series defined as sums of terms of the form c t^k / ((1-t^a)(1-t^b) \cdots). One can think of each term as the GF of a generalized arithmetic progression. The size of a short GF A(t) is defined as the total bit length of the parameters a, b, c, k, .... We study the problem of whether a given GF has a short GF presentation of polynomial size.
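    For example, a single term expands into a (shifted, possibly multi-dimensional) arithmetic progression:

    $$\frac{c\,t^k}{1-t^a} = c\left(t^k + t^{k+a} + t^{k+2a} + \cdots\right), \qquad \frac{t^k}{(1-t^a)(1-t^b)} = \sum_{i,j \ge 0} t^{k+ia+jb}.$$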

    This turns out to be a hard problem both mathematically and computationally. We resolve it modulo some complexity assumptions. Notably, we show that the truncated theta function \sum_{k<N} t^{k^2} does NOT have a short GF presentation of polynomial size unless #P is in FP/poly.

    In the talk, I will spend much of the time giving a survey and motivating the problem. I will explain the connections to Presburger arithmetic, integer programming, number theory and discrete geometry.

    This is joint work with Danny Nguyen.

  • Sep 20: Raghu Meka on Learning discrete markov random fields with nearly optimal runtime and sample complexity

    We give a simple, multiplicative-weight update algorithm for learning undirected graphical models or Markov random fields (MRFs). The approach is new, and for the well-studied case of Ising models or Boltzmann machines, we obtain an algorithm that uses a nearly optimal number of samples and has quadratic running time (up to logarithmic factors), subsuming and improving on all prior work.

    Our main application is an algorithm for learning the structure of t-wise MRFs with nearly-optimal sample complexity (up to polynomial losses in necessary terms that depend on the weights) and running time that is n^{O(t)}. All prior work runs in time n^{Ω(d)} for graphs of degree d and does not generate a hypothesis close in statistical distance even for t=3. We observe that our runtime has the correct dependence on n and t assuming the hardness of learning sparse parities with noise.

    Our algorithm, the Sparsitron, is easy to implement (it has only one parameter) and works in the online setting. It also gives the first solution to the problem of learning sparse Generalized Linear Models (GLMs).

    This is based on joint work with Adam Klivans.

  • Sep 13: Aleksander Madry on Matrix Scaling and Balancing via Second-order Optimization

    Matrix scaling and balancing are two fundamental linear algebraic primitives that have been studied for decades in the scientific computing literature. In this presentation, I will describe a new continuous optimization based framework that enables us to solve both these problems in a unified and principled manner.

    This framework relies on a certain second-order optimization primitive, called the box-constrained Newton method, to efficiently minimize a broad class of so-called second-order robust functions. In the context of matrix scaling and balancing, one can then leverage and generalize the work on Laplacian system solving to obtain a very fast implementation of this primitive. This, in turn, leads to very efficient algorithms for the two problems of interest.

    Based on joint work with Michael B. Cohen, Dimitris Tsipras, and Adrian Vladu.

  • Sep 06: Nima Anari on Strongly Rayleigh Measures in Counting and Optimization

    Strongly Rayleigh (SR) measures form a subset of discrete measures that exhibit one of the strongest forms of negative dependence. They contain several well-known distributions, such as collections of independent Bernoulli random variables, random spanning trees in graphs, finite determinantal point processes, and more.

    In recent years, SR measures have found many algorithmic applications. I will briefly mention some of these results, drawing parallels to the algorithmic applications of continuous log-concave functions. These applications are roughly concerned with finding the mode of SR measures and their point-wise products, and integration/sampling from SR measures and their point-wise products.

    Time-permitting I will show elements of the proofs and show how properties of SR measures are used to derive these results.

    Based on joint works with Shayan Oveis Gharan, Alireza Rezaei, Amin Saberi, and Mohit Singh.

  • Aug 30: Euiwoong Lee on An Algorithm for k-Vertex Separator and Applications to Vertex Deletion Problems

    Given a graph G = (V, E) and an integer k, we study k-Vertex Separator, where the goal is to remove the minimum number of vertices such that each connected component in the resulting graph has at most k vertices. It has connections to several subareas of TCS (balanced graph partitioning, hypergraph vertex cover (HVC), and fixed parameter tractability (FPT)). Our main result is an O(log k)-approximation algorithm for k-Vertex Separator. Our result improves on the best previous algorithms from graph partitioning and HVC.

    This algorithm can be used as a subroutine to give approximation algorithms for vertex deletion problems. One example is k-Path Transversal, where the goal is to remove the minimum number of vertices such that there is no simple path of length k. With additional ideas developed in FPT algorithms and graph minor theory, we present an O(log k)-approximation algorithm for k-Path Transversal.

    Based on joint work with Michal Wlodarczyk.

Spring 2017

  • May 10: Luca Trevisan on Some simple distributed network processes

    We will describe network processes in which, at each step, each node communicates with its neighbors, or a random subset of neighbors, and it updates its state to be "more like" the state of the neighbors.

    In a discrete setting, where there is a finite set of possible states, each node updates to the state held by a plurality of sampled neighbors. Here we show that, in a complete communication network, there is a quick convergence to a consensus, regardless of the initial configuration and even in the presence of adversarial faults. If the set of possible states is ordered, and nodes update to the median of neighbors, convergence was known to be even faster, but less robust to adversarial tampering.

    In a continuous setting, each node holds a bounded real number, and it updates to the average of sampled neighbors. Here we show that if the graph has a clustered structure, such as the one generated by the stochastic block model, the nodes can identify the cluster they belong to based on the evolution of the local state. This holds even in an asynchronous model in which only two nodes are active at a time, and the study of the latter setting leads to interesting questions about the concentration of the product of iid random matrices.

    (Based on joint work with Luca Becchetti, Andrea Clementi, Pasin Manurangsi, Emanuele Natale, Francesco Pasquale, Riccardo Silvestri and Prasad Raghavendra.)

  • May 03: Tom Gur on Distribution Testing Lower Bounds via Reductions from Communication Complexity

    We present a new methodology for proving distribution testing lower bounds, establishing a connection between distribution testing and the simultaneous message passing (SMP) communication model. Extending the framework of Blais, Brody, and Matulef (CCC 2012), we show a simple way to reduce (private-coin) SMP problems to distribution testing problems. This method allows us to prove several new distribution testing lower bounds, as well as to provide simple proofs of known lower bounds.

    Our main result is concerned with testing identity to a specific distribution p, given as a parameter. In a recent and influential work, Valiant and Valiant (FOCS 14) showed that the sample complexity of the aforementioned problem is closely related to the \ell_{2/3}-quasinorm of p. We obtain alternative bounds on the complexity of this problem in terms of an arguably more intuitive measure and using simpler proofs. More specifically, we prove that the sample complexity is essentially determined by a fundamental operator in the theory of interpolation of Banach spaces, known as Peetre's K-functional. We show that this quantity is closely related to the size of the effective support of p (loosely speaking, the number of supported elements that constitute the vast majority of the mass of p). This result, in turn, stems from an unexpected connection to functional analysis and refined concentration of measure inequalities, which arise naturally in our reduction.

    Joint work with Eric Blais and Clément Canonne.

  • Apr 26: Alexandra Kolla on Signings, expander graphs and perfect 2-matchings

    The spectra of signed matrices have played a fundamental role in social sciences, graph theory, and control theory. In this talk, we investigate some computational problems of identifying symmetric signings of matrices with natural spectral properties. I will talk about a few results on matrix signings:

    1. NP-completeness of certain signing problems and their implications towards constructing Ramanujan expanders via lifts.

    2. A combinatorial characterization of matrices with invertible symmetric signings and an efficient algorithm using this characterization to verify whether a given matrix has an invertible symmetric signing, along with its implications to finding perfect 2-matchings. Time permitting, I will also discuss some other related signing problems.

  • Apr 12: Huacheng Yu on Crossing the Logarithmic Barrier for Dynamic Boolean Data Structure Lower Bounds

    This talk presents the first super-logarithmic lower bounds on the cell probe complexity of dynamic boolean (a.k.a. decision) data structure problems, a long-standing milestone in data structure lower bounds.

    We introduce a new method for proving dynamic cell probe lower bounds and use it to prove a $\tilde{\Omega}(\log^{1.5} n)$ lower bound on the operational time of a wide range of boolean data structure problems, most notably, on the query time of dynamic range counting over $\mathbb{F}_2$. Proving an $\omega(\lg n)$ lower bound for this problem was explicitly posed as one of five important open problems in the late Mihai Patrascu's obituary [Tho13]. This result also implies the first $\omega(\lg n)$ lower bound for the classical 2D range counting problem, one of the most fundamental data structure problems in computational geometry and spatial databases.

    Our technical centerpiece is a new way of "weakly" simulating dynamic data structures using efficient one-way communication protocols with small advantage over random guessing. This simulation involves a surprising excursion to low-degree (Chebyshev) polynomials which may be of independent interest, and offers an entirely new algorithmic angle on the "cell sampling" method of Panigrahy et al. [PTW10].

    Joint work with Kasper Green Larsen and Omri Weinstein.

  • Apr 05: Arnab Bhattacharyya on On lower bounds for locally correctable codes

    A locally correctable code (LCC) is an error correcting code that allows correction of an arbitrary coordinate of a corrupted codeword by querying only a few coordinates. Finding the optimal tradeoffs between the query complexity and length of LCCs has been a long-standing open problem. In this talk, I'll survey two recent results on lower bounds for the length of constant-query LCCs.

    1) We show that any 2-query locally correctable code C: {0,1}^k -> [m]^n that can correct a constant fraction of corrupted symbols must have n > exp(k/(log m)) under the assumption of perfect completeness. Our result is tight up to constant factors in the exponent. The only previous lower bound that held for large m was Omega((k/log m)^2) due to [Katz & Trevisan '00]. Our result implies that LCC's with perfect completeness cannot yield 2-query private information retrieval schemes with sub-polynomial communication. Joint work with Sivakanth Gopi and Avishay Tal.

    2) Affine-invariant codes are codes whose coordinates form a vector space over a finite field and which are invariant under affine transformations of the coordinate space. We give tight lower bounds on the length of affine-invariant LCC's with constant query complexity. If C is a subset of [m]^{K^n} where K is a finite field and m is a constant, describing an affine-invariant q-query LCC, then the number of codewords of C is at most exp(O(n^{q-1})). The dependence on n is tight since it's achieved by the Reed-Muller code of order q-1. A similar lower bound was obtained by [Ben-Sasson & Sudan '11] but only for linear codes. Joint work with Sivakanth Gopi.

  • Mar 29: No Lunch (spring break)
  • Mar 22: Andrej Bogdanov on Threshold Secret Sharing Requires a Linear-Size Alphabet

    Shamir's scheme allows for a secret to be shared among n parties so that any t of them can reconstruct it, but no t - 1 of them can obtain any information about it. The scheme works for all choices of n and t, but even if the secret is a single bit the shares are about log n bits long.
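    For reference, Shamir's scheme itself: the secret is the constant term of a random degree-(t-1) polynomial over a prime field F_P, and the shares are point evaluations, so each share is a field element of about log P bits. A minimal sketch (the prime is an illustrative choice):

```python
import random

P = 2**61 - 1          # a Mersenne prime; each share is ~ log P bits
rng = random.Random(12)

def share(secret, n, t):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    eval_at = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, eval_at(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over F_P."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=42, n=5, t=3)
print(reconstruct(shares[:3]), reconstruct(shares[1:4]))  # 42 42
```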

    Extending previous work of Kilian and Nisan, we show that this is necessary: For all 1 < t < n, any "perfect" scheme for a one-bit secret must use shares of size at least max{ log(t + 1), log(n - t + 2) } ≥ log((n + 3)/2). The proof involves a detour into zero-sum games. (Based on joint work with Siyao Guo and Ilan Komargodski)

  • Mar 15: No Lunch
  • Mar 08: Alireza Rezaei on An MCMC Approach for Sampling Determinantal Point Processes and Strongly Rayleigh Measures

    Strongly Rayleigh distributions are natural generalizations of product distributions and determinantal probability measures that satisfy the strongest form of negative dependence properties. We show that the "natural" Markov Chain Monte Carlo (MCMC) algorithm mixes rapidly in the support of a homogeneous strongly Rayleigh distribution. As a byproduct, our proof implies that Markov chains can be used to efficiently generate approximate samples of a k-determinantal point process.
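    A Metropolis version of such a chain for a k-DPP with kernel L is easy to state: propose swapping one element of the current set for one outside it, and accept with probability min(1, det(L_{S'})/det(L_S)). A sketch with a random PSD kernel; the chain analyzed in the paper may differ in details.

```python
import numpy as np

rng = np.random.default_rng(13)

# Random PSD kernel L; a k-DPP samples S with |S| = k and Pr[S] proportional to det(L_S).
m, k = 12, 4
B = rng.standard_normal((m, m))
L = B @ B.T

def det_sub(S):
    idx = np.array(sorted(S))
    return np.linalg.det(L[np.ix_(idx, idx)])

S = set(range(k))                              # arbitrary initial size-k set
for _ in range(5000):
    i = rng.choice(sorted(S))                  # element to drop
    j = rng.choice(sorted(set(range(m)) - S))  # element to add
    S_new = (S - {i}) | {j}
    if rng.random() < min(1.0, det_sub(S_new) / det_sub(S)):
        S = S_new
print(sorted(S))
```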

    Based on a joint work with Shayan Oveis Gharan and Nima Anari.

  • Mar 01: Anindya De on Better bounds for trace reconstruction

    Let x be an n-bit string. A trace of x is obtained by deleting each bit independently with probability, say, 1/10. Suppose you have access to independent traces of x and want to recover x from these traces. How many traces do you need to reconstruct x? The best known upper bound is 2^{n^{1/3}} and the best known lower bound is n^{1/2}. So, plenty of open questions here! Note that while the best known upper bound is algorithmic, even improving the information-theoretic upper bound is completely open.
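    The deletion channel is easy to simulate, which makes the gap between these bounds concrete; a sketch:

```python
import random

rng = random.Random(14)

def trace(x, deletion_prob=0.1):
    """A trace of x: each bit is deleted independently with deletion_prob."""
    return [b for b in x if rng.random() >= deletion_prob]

x = [rng.randrange(2) for _ in range(40)]
print(x)
print(trace(x))  # shorter, but order-preserving
```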

  • Feb 22: Ben Weitz on On the Bit Complexity of Sum-of-Squares Proofs

    It has often been claimed that degree d Sum-of-Squares proofs can be found efficiently (via the Ellipsoid algorithm), if they exist. Recently, Ryan O'Donnell noted that this is not necessarily true, and gave an example of a polynomial system and a non-negative polynomial that admits degree two proofs of non-negativity, but these proofs necessarily involve numbers with an exponential number of bits, causing the Ellipsoid algorithm to run in exponential time. In this talk I will describe O'Donnell's counterexample, and present some additional results concerning the bit complexity of SoS proofs. I will give a stronger counterexample, negatively answering a question O'Donnell posed in his paper about whether Boolean constraints imply bounded proofs, and give a set of conditions that are sufficient to imply a bound on the coefficients in an SoS proof. Time permitting, I will demonstrate how this condition is applicable for many common use-cases of the SoS algorithm, such as Max-CSP, Balanced Separator, Max-Clique, and Max-Bisection. Based on joint work with Prasad Raghavendra.

  • Feb 15: Avi Wigderson on Some basic problems and results from Invariant Theory (and its interactions with CS)

    Invariant theory deals with properties of symmetries - actions of groups on sets of objects. It has been slower to join its sibling mathematical theories in the computer science party, but we now see it more and more in studies of computational complexity, pseudorandomness, and the analysis of algorithms.

    I will discuss some basic questions and results of this field, and how they lead to fertile interaction with CS (in which symmetries occur perhaps more often than you think).

    If you need more motivation - Invariant Theory is beautiful! Here is Hermann Weyl's poetic quote about its origins: "The theory of invariants came into existence in the middle of the nineteenth century somewhat like Minerva: a grown-up virgin, mailed in the shining armor of algebra, she sprang forth from Cayley's Jovian head."

    No special background will be assumed.

  • Feb 08: Yevgeniy Dodis on Fixing Cracks in the Concrete: Random Oracles with Auxiliary Input, Revisited

    We revisit security proofs for various cryptographic primitives in the random oracle model with auxiliary input (ROM-AI): a (computationally unbounded) attacker A can compute arbitrary S bits of leakage z=z(O) about the random oracle O before attacking the system, and then use additional T oracle queries to O during the attack. This model was explicitly studied by Unruh in 2007 (CRYPTO 2007), but dates back to the seminal paper of Hellman in 1980 about time-space tradeoffs for inverting random functions, and has natural applications in settings where traditional random oracle proofs are not useful: (a) security against non-uniform attackers; (b) space-time tradeoffs; (c) security against preprocessing; (d) resilience to backdoors in hash functions.

    We obtain a number of new results about ROM-AI, but our main message is that ROM-AI is the "new cool kid in town": it nicely connects theory and practice, has a lot of exciting open questions, leads to beautiful math, short definitions, elegant proofs, surprising algorithms, and is still in its infancy. In short, you should work on it!

  • Feb 01: Marc Khoury on Restricted Constrained Delaunay Triangulations

    We introduce the restricted constrained Delaunay triangulation (restricted CDT). The restricted CDT generalizes the restricted Delaunay triangulation, allowing us to define a triangulation of a surface that includes a set of constraining segments. We define the restricted CDT as the dual of a restricted Voronoi diagram defined on a surface that we have extended by topological surgery. We prove various combinatorial properties of restricted CDTs, including sampling conditions under which restricted CDTs exist and are homeomorphic to an underlying surface.

  • Jan 25: Manuel Sabin on Average-Case Fine-Grained Hardness

    We present functions that are hard to compute on average for algorithms running in some fixed polynomial time, assuming widely-conjectured worst-case hardness of certain problems from the study of fine-grained complexity. Examples of unconditional versions of such functions are known from before (Goldmann et al., Inf. Process. Lett. 1994), but these have been canonical functions that have not found further use than as a proof of this concept, while our functions are closely related to well-studied problems and have considerable algebraic structure.

    Based on the average-case hardness and structure of our functions, we outline the construction of a Proof of Work scheme and discuss possible approaches to constructing fine-grained One-Way Functions. We also show how our reductions make conjectures regarding the worst-case hardness of the problems we reduce from (and consequently the Strong Exponential Time Hypothesis) heuristically falsifiable in a sense similar to that of (Naor, CRYPTO 2003).

    We prove our hardness results in each case by showing fine-grained reductions from solving one of three problems - namely, Orthogonal Vectors (OV), 3SUM, and All-Pairs Shortest Paths (APSP) - in the worst case to computing our function correctly on a uniformly random input. The conjectured hardness of OV and 3SUM then gives us functions that take $\Omega(n^{2-o(1)})$ time to compute on average, and that of APSP gives us a function that takes $\Omega(n^{3-o(1)})$ time. Using the same techniques we also obtain a conditional average-case time hierarchy of functions.
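
    For reference, the following is the textbook quadratic-time brute force for OV, the baseline that the OV conjecture asserts cannot be improved to n^{2-eps} time (a minimal sketch; the example vectors and dimension are made up):

    ```python
    def has_orthogonal_pair(A, B):
        """O(n^2 d) check: is some u in A orthogonal to some v in B?"""
        return any(all(a * b == 0 for a, b in zip(u, v)) for u in A for v in B)

    A = [(1, 0, 1), (0, 1, 1)]
    B = [(1, 1, 0), (0, 1, 0)]
    print(has_orthogonal_pair(A, B))  # True: (1, 0, 1) and (0, 1, 0) are orthogonal
    ```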

  • Jan 18: Eylon Yogev on Hardness of Continuous Local Search: Query Complexity and Cryptographic Lower Bounds

    The computational complexity of "Continuous Local Search"-type problems is captured by the complexity class CLS, which is contained in the intersection of PLS and PPAD, two important subclasses of TFNP.

    In this talk, I will show hardness results for CLS, which is the smallest non-trivial class among the currently defined subclasses of TFNP. To this end, I will show hardness for a new total search problem which we call EOML, and show that it is contained in CLS. The hardness results come in two flavors: black-box (where only oracle access to the function is given) and white-box (where the function is represented succinctly by a circuit). In the black-box case, we show instances for which any (unbounded) randomized algorithm must make exponentially many queries in order to find a local optimum. In the white-box case, we show hardness for computationally bounded algorithms under cryptographic assumptions.

    Joint work with Pavel Hubacek.

Fall 2016

  • Dec 14: Shiri Chechik on Fully dynamic all pairs shortest paths with worst case bounds

    In the fully dynamic all-pairs shortest paths problem we are asked to maintain the shortest paths between all pairs in a graph under insertions and deletions of nodes. After each such update in the graph, the algorithm may spend some processing time and then has to output the new all pairs distances. There is a vast literature on dynamic shortest paths spanning over three decades and many papers cover different aspects of this problem.

    In this talk, we introduce the problem and discuss the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. We will consider the worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates).

    Joint work with Ittai Abraham and Sebastian Krinninger.

  • Dec 07: Pasin Manurangsi on Almost-polynomial Ratio ETH-hardness of Densest k-Subgraph

    In the Densest k-Subgraph problem, given an undirected graph G and an integer k, the goal is to find a subgraph of G on k vertices that contains the maximum number of edges. Even though the state-of-the-art algorithm for the problem achieves only an O(n^{1/4}) approximation ratio (Bhaskara et al. 2010), the problem has resisted previous attempts at proving hardness of approximation with a matching ratio; in fact, the best approximation ratio ruled out under any worst-case assumption is only a constant (Raghavendra and Steurer 2010).

    In this talk, I will outline a simple sub-exponential time reduction and a proof which together provide an almost-polynomial ratio (n^{1/polyloglog n}) inapproximability for Densest k-Subgraph under the Exponential Time Hypothesis.

  • Nov 30: Amir Abboud on A Sequence of Hardness Results for Longest Common Subsequence

    The LCS problem asks for the length of the Longest Common Subsequence of two strings of length n. A classic dynamic programming algorithm solves it in O(n^2) time, and it is perhaps the most basic problem on strings for which we don’t have near-linear time algorithms.

    I will discuss several recent papers proving that this innocent looking problem is, in fact, very hard.

  • Nov 23: No Lunch (Thanksgiving week)
  • Nov 16: No Lunch
  • Nov 09: Lin Yang on Streaming Symmetric Norms via Measure Concentration

    We characterize the streaming space complexity of every symmetric norm l (a norm on R^n invariant under sign-flips and coordinate-permutations) by relating this space complexity to the measure-concentration characteristics of l. Specifically, we provide matching upper and lower bounds (up to polylog factors) on the space complexity of approximating the norm of the stream, where both bounds depend on the median and maximum of l(x) when x is drawn uniformly from the l_2 unit sphere. The same quantity governs many phenomena in high-dimensional spaces, such as large-deviation bounds and the critical dimension in Dvoretzky's Theorem. The family of symmetric norms contains several well-studied norms, such as all l_p norms, and indeed we provide a new explanation for the disparity in space complexity between p ≤ 2 and p > 2. In addition, we apply our general results to easily derive bounds for several norms that were not previously studied in the streaming model, including, for example, the top-k norm and the k-support norm, which was recently shown to be effective for machine learning tasks.

    Joint work with: Jarosław Błasiok, Vladimir Braverman, Stephen R. Chestnut, Robert Krauthgamer

    The current version of the paper can be found at: http://arxiv.org/abs/1511.01111

  • Nov 02: Daniel Masny on CCA-secure public key encryption from low-density subset sum

    It is known that subset sum instances with low density are closely related to lattice problems such as the unique shortest vector problem (uSVP). Lyubashevsky, Palacio, and Segev showed that subset sum can be seen as a variant of learning with errors (LWE) with deterministic noise. In this talk, we will use this observation to construct a chosen-ciphertext (CCA) secure public key encryption scheme. Even when an adversary has access to a decryption oracle, a break of the proposed scheme would imply solving low-density subset sum.

    Joint work with Sebastian Faust and Daniele Venturi

  • Oct 26: Alexandros Psomas on On Dynamic Fair Division

    In the single-resource dynamic fair division model there is a homogeneous resource that is shared between agents dynamically arriving and departing over time. When N players are present, the only fair solution is obvious: everyone gets an N-th. The obvious caveat of this approach: too many disruptions. For a new agent to get her fair share, all other agents must give up a small piece.

    When we instead fix the number of disruptions to be the number of agents present, the goal is to maximize fairness. I'll describe the optimal solution to this problem, and tell you all I know about how to divide multiple resources.

    Joint work with Eric Friedman and Shai Vardi.

  • Oct 19: Gil Cohen on Recent advances in randomness extractors and their applications

    We survey recent developments in randomness extractors, in particular, non-malleable extractors and their applications to Ramsey graphs and privacy amplification. We present the two new pseudo-random objects that are at the center of recent progress - correlation breakers and independence-preserving mergers.

  • Oct 12: Tselil Schramm on Strongly refuting random constraint satisfaction problems below the spectral threshold

    Random instances of 3SAT are known to be unsatisfiable with high probability when there are at least 5N clauses. However, given a random 3SAT instance on N variables, the task of refuting the instance, or of certifying that the instance has no satisfying assignments, is hypothesized to be computationally hard if there are O(N) clauses. In fact, the best known efficient algorithms for refutation require instances with at least N^(3/2) clauses, a factor of N^(1/2) beyond the unsatisfiability threshold at 5N.

    In this talk, I will describe a new spectral algorithm that refutes 3SAT instances with fewer clauses, given more time. Our algorithm extends the best polynomial time algorithms at N^(3/2) clauses, interpolating between them and the exponential brute-force search at the unsatisfiability threshold at Theta(N) clauses.

    This result also implies tight upper bounds on the value of sum-of-squares relaxations for random CSPs, as well as the value of sum-of-squares relaxations for random polynomial maximization over the unit sphere (injective tensor norm).

    Based on joint work with Prasad Raghavendra and Satish Rao.

  • Oct 05: Nir Ailon on How Speeding Up an Algorithm Leads to Uncertainty: Fourier Transform Computation

    The Fourier transform is one of the most important linear operators in engineering. Computing it for a given input vector x in n-space can be done in time Theta(n log n) using Cooley and Tukey's FFT (Fast Fourier Transform). A lower bound better than the trivial Omega(n) has been elusive. In this talk I will show the surprising connection between speeding up FFT and uncertainty: If you could (theoretically) speed up FFT without changing the word size on the machine, then you would create uncertainty in the data in the form of numerical overflow or underflow. Alas, increasing the machine word size costs more time per multiplication/addition operation.

    I will explain this principle and present two very recent related bounds together with a main conjecture and some related open problems. The talk requires standard linear algebra and very basic familiarity with Shannon entropy.

  • Sep 28: Aviad Rubinstein on Dilemma Prisoners should upgrade their data plan

    I'll try to convince you that finding an approximate Nash equilibrium is really hard.

    Based on http://arxiv.org/abs/1606.04550 (to appear in FOCS 2016) http://arxiv.org/abs/1608.06580 (joint work w/ Yakov Babichenko)

  • Sep 21: Kira Goldner on The FedEx Problem

    Consider the following setting: a customer has a package and is willing to pay up to some value v to ship it, but needs it to be shipped by some deadline d. The customer is indifferent if the package is shipped early. Given the joint prior distribution from which (v, d) pairs are drawn, we characterize the auction that yields optimal revenue, contributing to the very limited understanding of optimal auctions beyond the single-parameter setting. Our work also demonstrates the importance of 'ironing' (and concavity) in revenue maximization, helping to illustrate why randomization is necessary to achieve optimal revenue. Finally, we strengthen the emerging understanding that duality is useful for both the design and analysis of optimal auctions in multi-parameter settings.

    Joint work with Amos Fiat, Anna Karlin, and Elias Koutsoupias.

  • Sep 14: Igor Shinkar on On Percolation and NP-Hardness

    We study the computational hardness of problems whose inputs are obtained by applying random noise to worst-case instances. For an appropriate notion of noise we show that a number of classical NP-hard problems on graphs remain essentially as hard on the noisy instances as they are in the worst-case.

    Focusing on the Graph Coloring problem, we establish the following result: Given any graph G, let H be a random subgraph of G obtained by deleting the edges of G independently with probability 0.5. Then, if $\chi(G)$ is large, $\chi(H)$ is also large with high probability. This means that the chromatic number of any graph is "robust" to random edge deletions.

    Joint work with Huck Bennett and Daniel Reichman.
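
    A small brute-force experiment matching the statement above: delete each edge with probability 0.5 and compare chromatic numbers. Exhaustive coloring is exponential, so the graph (K_6 here, an illustrative choice) is deliberately tiny.

    ```python
    import itertools, random

    def chromatic_number(n, edges):
        """Smallest k admitting a proper k-coloring (brute force, tiny n only)."""
        for k in range(1, n + 1):
            for col in itertools.product(range(k), repeat=n):
                if all(col[u] != col[v] for u, v in edges):
                    return k

    n = 6
    G = [(u, v) for u in range(n) for v in range(u + 1, n)]  # complete graph K_6
    H = [e for e in G if random.random() < 0.5]              # delete each edge w.p. 0.5
    print(chromatic_number(n, G), chromatic_number(n, H))    # compare chi(G) and chi(H)
    ```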

  • Sep 07: Andrea Lincoln on Deterministic Time-Space Tradeoffs for k-SUM

    Given a set of numbers, the k-SUM problem asks for a subset of k numbers that sums to zero. When the numbers are integers, the time and space complexity of k-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point.

    We present a time and space efficient deterministic self-reduction for the k-SUM problem which holds for both models, and has many interesting consequences. To illustrate:

    3-SUM is in deterministic time O(n^2 lglg(n)/lg(n)) and space O(\sqrt{n lg(n)/lglg(n)}), extending the results of Gold and Sharir. 3-SUM is in deterministic time O(n^2) and space O(\sqrt n), derandomizing an algorithm of Wang.

    A popular conjecture states that 3-SUM requires n^{2-o(1)} time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every O(n^{.51})-space algorithm for 3-SUM requires at least n^{2-o(1)} time on the word-RAM.

    Joint work with Virginia Vassilevska Williams, Josh Wang and Ryan Williams
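
    For context, here is the standard O(n^2)-time hash-set baseline for 3-SUM that these results are measured against (a minimal sketch; note it uses linear space, in contrast to the O(\sqrt n)-space algorithms above):

    ```python
    def three_sum(nums):
        """Return values at three distinct indices summing to 0, or None."""
        for i in range(len(nums)):
            seen = set()
            for j in range(i + 1, len(nums)):
                need = -(nums[i] + nums[j])
                if need in seen:
                    return (nums[i], nums[j], need)
                seen.add(nums[j])
        return None

    print(three_sum([4, -1, 9, -3, 5, -8]))  # (4, -3, -1) sums to zero
    ```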

Spring 2016

  • May 11: Dick Karp on Rearrangement in Restricted Space

    We consider a problem of rearranging objects within a set of sites. Each object has an initial site, a goal site and a size. Each site has a capacity. A rearrangement algorithm executes a sequence of moves; each move takes an object from one site to another. The algorithm succeeds if it reaches a state where every object is located at its goal. At any stage, the load on a site is the sum of the sizes of the objects residing at that site. A move can only be made to a site whose load does not exceed its capacity. The demand for a site is the sum of the sizes of the objects having that site as their goal. We present a distributed algorithm that delivers each object to its goal while moving each object at most once, provided that the capacity of each site is greater than or equal to its demand. We also give an optimal distributed algorithm in the case where some sites have a demand exceeding their capacity, and a dynamic version in which objects enter the system over time. There are motivating examples in which the sites are warehouses, processors, freight cars, library shelves and tracks on a disk.

  • May 04: Ran Gelles on Constant-rate coding for multiparty interactive communication is impossible

    We study coding schemes for multiparty interactive communication over synchronous networks that suffer from stochastic noise, where each bit is independently flipped with probability $\epsilon$. We analyze the minimal overhead that must be added by the coding scheme in order to succeed in performing the computation despite the noise.

    Our main result is a lower bound on the communication of any noise-resilient protocol over a star network with $n$ parties. Specifically, we show a task that can be solved by communicating $T$ bits over the noise-free network, but for which any protocol with success probability of $1-o(1)$ must communicate at least $\Omega(T \log n / \log \log n)$ bits when the channels are noisy. By a 1994 result of Rajagopalan and Schulman, the slowdown we prove is the highest one can obtain on any topology, up to a $\log \log n$ factor.

    We complete our lower bound with a matching coding scheme that achieves the same overhead; thus, the capacity of star networks is $\Theta(\log \log n / \log n)$. Our bounds prove that, despite several previous coding schemes with rate $\Omega(1)$ for certain topologies, no coding scheme with constant rate $\Omega(1)$ exists for arbitrary $n$-party noisy networks.

    Joint work with Mark Braverman, Klim Efremenko, and Bernhard Haeupler

  • Apr 27: Theory Retreat Presentation
  • Apr 20: Pratyay Mukherjee on Round Optimal Multi-party Computation using Fully-homomorphic Encryption

    In multi-party computation (MPC), several parties P_1, ..., P_N holding private inputs x_1, ..., x_N want to compute an arbitrary public function y := F(x_1, ..., x_N). The security requirement is that the parties should not learn anything except y, even if any number of them behave in an arbitrarily malicious way. One of the main objectives in MPC has been to minimize the number of interactions, a.k.a. rounds.

    Earlier work shows that it is impossible to construct an MPC protocol with only one round. However, recent works constructed two-round protocols, albeit only from non-standard assumptions like the existence of indistinguishability obfuscation. In this work we construct the first two-round MPC protocol from the Learning with Errors (LWE) assumption, which reduces to worst-case lattice assumptions.

    Our main tool is fully homomorphic encryption (FHE). Our construction closely follows a basic template put forth by Asharov et al. (Eurocrypt '12), which yields a 3-round protocol with the following structure:

    Round 1: The parties interact to set up a common public key PK for the FHE scheme.

    Round 2: Each party P_i encrypts its input x_i under PK to obtain a ciphertext c_i and broadcasts it. On receiving all the c_j's, each party homomorphically computes the ciphertext c := F(c_1, ..., c_N), which encrypts the output y = F(x_1, ..., x_N).

    Round 3: Finally, the parties jointly run a one-round distributed decryption to obtain the output y.

    We essentially remove the first round by replacing the FHE scheme with a multi-key FHE scheme, which we construct in this work following the ideas of Clear and McGoldrick (Crypto '15). Intuitively, such a scheme guarantees that even when each P_i encrypts its input x_i under a public key pk_i generated independently, it is possible to compute homomorphically on the ciphertexts.

    This is joint work with Daniel Wichs (Northeastern Univ.)

    Please find more details here http://eprint.iacr.org/2015/345.pdf

  • Apr 13: Ariel Gabizon on Low-degree non-boolean dispersers for subspaces

    Consider a vector space F^n over a finite field F, and an integer k < n. We wish to construct an explicit polynomial f(X_1, ..., X_n) with the property that its restriction to any k-dimensional subspace V in F^n is non-constant. This question was motivated by research in randomness extractors - combinatorial objects that can be useful for derandomizing algorithms (and other purposes). In a work with Matt DeVos, we constructed such f of degree n/k, provided the characteristic of F is larger than n/k. I will present this construction, and pose reducing the degree of f as an open problem.

  • Apr 06: Holger Dell on Counting subgraphs modulo two

    In this talk, we will discuss the subgraph problem: given graphs H and G, is H a (not necessarily induced) subgraph of G? We consider the problem with the restriction that H comes from a family F of graphs, while G can be arbitrary. It is open to fully classify those families F for which the problem is in P and for which families it is not in P (under reasonable complexity assumptions). As a first step towards solving this open problem, we investigate the related problem of computing the parity of the number of occurrences of H in G, and we will see some partial results in this talk. Interestingly, we cannot rely on the concept of NP-hardness for the hard cases since F could be a very sparse family of graphs, which leads to NP-incomplete problems; instead, we study the problem in the framework of parameterized complexity, where this issue disappears (work in progress with Curticapean and Husfeldt).

  • Mar 30: Julian Shun on Sequential algorithms that are highly parallel

    Over the past several decades there has been significant research on deriving new parallel algorithms for a variety of problems, with the goal of designing highly parallel (polylogarithmic depth), work-efficient (linear in the sequential running time) algorithms. For some problems, however, one might ask if perhaps a standard sequential algorithm is already highly parallel if we simply execute sub-computations opportunistically when they no longer depend on any other uncompleted sub-computations. We study this question and show that this is indeed the case for a variety of problems, including the standard greedy maximal independent set algorithm and Knuth's textbook random permutation algorithm. Beyond the intellectual curiosity of whether sequential algorithms are inherently parallel, the approach has several important benefits for the design of parallel algorithms, including allowing for simple and fast implementations as well as deterministic parallelism.

    Based on work published in SPAA '12 and SODA '15. Joint work with Guy Blelloch, Jeremy Fineman, Phillip Gibbons and Yan Gu.
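
    To make the two algorithms mentioned above concrete, here are minimal sequential versions (the talk's point is that their natural dependence structures already have polylogarithmic depth; these sketches do not attempt to exploit that):

    ```python
    import random

    def greedy_mis(n, edges):
        """Greedy maximal independent set: scan vertices in a fixed order."""
        neighbors = {v: set() for v in range(n)}
        for u, v in edges:
            neighbors[u].add(v)
            neighbors[v].add(u)
        mis = set()
        for v in range(n):               # v joins iff no earlier neighbor joined
            if not neighbors[v] & mis:
                mis.add(v)
        return mis

    def knuth_shuffle(xs):
        """Knuth's (Fisher-Yates) random permutation algorithm."""
        for i in range(len(xs) - 1, 0, -1):
            j = random.randint(0, i)     # swap position i with a random j <= i
            xs[i], xs[j] = xs[j], xs[i]
        return xs

    print(greedy_mis(4, [(0, 1), (1, 2), (2, 3)]))  # {0, 2}
    print(knuth_shuffle(list(range(10))))
    ```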

  • Mar 23: No Lunch (spring break)
  • Mar 16: Amit Daniely on Complexity theoretic limitations on learning

    We introduce a new method to reduce random CSP problems to learning problems. Under a natural generalization of Feige's assumption about random K-SAT and K-XOR, this method substantially reduces the gaps between known upper and lower bounds. Concretely, it follows that:

    1. Learning DNFs is hard.
    2. Learning an intersection of a super-logarithmic number of halfspaces is hard.
    3. Learning Halfspaces in the presence of a tiny amount of noise is hard.
    4. Several additional learning problems are hard. In particular, virtually all problems that were previously shown intractable (under various, mostly cryptographic, assumptions).

    Based on joint work with Nati Linial and Shai Shalev-Shwartz

  • Mar 09: No Lunch
  • Mar 02: Rishi Gupta on A PAC Approach to Application-Specific Algorithm Selection

    The best algorithm for a computational problem generally depends on the "relevant inputs," a concept that depends on the application domain and often defies formal articulation. While there is a large literature on empirical approaches to selecting the best algorithm for a given application domain, there has been surprisingly little theoretical analysis of the problem. We adapt concepts from statistical and online learning theory to reason about application-specific algorithm selection. Our models capture several state-of-the-art empirical and theoretical approaches to the problem, ranging from self-improving algorithms to empirical performance models, and our results identify conditions under which these approaches are guaranteed to perform well. We present one framework that models algorithm selection as a statistical learning problem, and our work here shows that dimension notions from statistical learning theory, historically used to measure the complexity of classes of binary- and real-valued functions, are relevant in a much broader algorithmic context. We also study the online version of the algorithm selection problem, and give possibility and impossibility results for the existence of no-regret learning algorithms.

    Joint work with Tim Roughgarden.

  • Feb 24: Benjamin Weitz on Symmetric SDP Lower Bounds for the Perfect Matching Problem

    Yannakakis showed that the matching problem does not have a small symmetric linear program. Rothvoß recently proved that any, not necessarily symmetric, linear program also has exponential size. It is natural to ask whether the matching problem can be expressed compactly in a framework such as semidefinite programming (SDP) that is more powerful than linear programming but still allows efficient optimization. We answer this question negatively for symmetric SDPs: any symmetric SDP for the matching problem has exponential size. The two key technical ingredients underlying this result are showing that any small set of symmetric functions on the discrete space of matchings can be represented as low-degree polynomials, and giving an upper bound on the degree needed to derive polynomial identities that hold for every matching from a small set of axioms.

  • Feb 17: Stephen Alstrup on Recent results (STOC/FOCS’15 and SODA’16) on labeling schemes and induced universal graphs

    A graph G is said to be an induced-universal graph for a family F of graphs if for every graph H in F there is an induced subgraph of G that is isomorphic to H. Directly related are adjacency labeling schemes.

    An adjacency labeling scheme for a given family of graphs is a way of assigning labels to the vertices of each graph from the family such that given the labels of two vertices in the graph, and no other information, it is possible to determine whether or not the vertices are adjacent in the graph.

    One recent result shows that there exists a graph G with O(n) nodes such that any forest on n nodes is a node-induced subgraph of G. Equivalently, using labels of size log_2 n + O(1) bits, one can decide adjacency for forests.

    Other labeling schemes address, e.g., distance labeling, and distance oracles often build directly on labeling schemes. A recent result shows that labels of size (n/2) log_2 3 + O(log^2 n) bits suffice to decide the distance between nodes in undirected unweighted graphs.
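
    As a toy illustration of adjacency labeling (see the forest result above), here is the naive scheme for forests with labels of about 2 log_2 n bits - each vertex's label is its own id plus its parent's id - rather than the optimal log_2 n + O(1) bits:

    ```python
    def label_forest(n, parent):
        """parent[v] is v's parent (or v itself for roots); label = (v, parent[v])."""
        return {v: (v, parent[v]) for v in range(n)}

    def adjacent(label_u, label_v):
        """Decide adjacency in the forest from the two labels alone."""
        u, pu = label_u
        v, pv = label_v
        return pu == v or pv == u

    labels = label_forest(4, {0: 0, 1: 0, 2: 1, 3: 1})  # a small tree rooted at 0
    print(adjacent(labels[1], labels[2]))  # True: 1 is 2's parent
    print(adjacent(labels[0], labels[2]))  # False
    ```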

  • Feb 10: Laura Florescu on Spectral thresholds in the bipartite stochastic block model

    We consider a bipartite stochastic block model on vertex sets $V_1$ and $V_2$ of size $n_1$ and $n_2$ respectively, with planted partitions in each, and ask at what densities can spectral algorithms recover the partition of the smaller vertex set. The model was used before to give a unified algorithm for random planted hypergraph partitioning and planted random $k$-SAT. We locate a sharp threshold for detection of the partition, in the sense of the results of Mossel, Neeman, Sly and Massoulié for the stochastic block model. This gives the best known bounds for efficient recovery densities in planted $k$-SAT and hypergraph partitioning as well as showing a barrier to further improvement via the reduction to the bipartite block model. We also show some thresholds at which various spectral algorithms work. Joint work with Will Perkins.
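
    As a small numerical illustration of the kind of spectral algorithm in question (this is not the algorithm from the paper, and all parameters are illustrative): in a toy two-block bipartite model, centering the biadjacency matrix and reading signs off the top left singular vector recovers the partition of the smaller side.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n1, n2, p, q = 200, 2000, 0.10, 0.02
    s1 = rng.choice([-1, 1], size=n1)            # planted partition of V_1
    s2 = rng.choice([-1, 1], size=n2)            # planted partition of V_2
    prob = np.where(np.outer(s1, s2) > 0, p, q)  # aligned blocks are denser
    A = (rng.random((n1, n2)) < prob).astype(float)

    # Center to suppress the all-ones component, then take the top left
    # singular vector; its sign pattern estimates s1.
    U, _, _ = np.linalg.svd(A - A.mean(), full_matrices=False)
    guess = np.sign(U[:, 0])
    acc = max(np.mean(guess == s1), np.mean(guess == -s1))
    print(f"fraction of V_1 recovered: {acc:.2f}")
    ```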

  • Feb 03: Daniel Reichman on Smoothed analysis of connected graphs

    The main paradigm of smoothed analysis on graphs suggests that for any large graph $G$ in a certain class of graphs, perturbing slightly the edge set of $G$ at random (usually adding few random edges to $G$) typically results in a graph having much “nicer” properties. In this work, we study smoothed analysis on trees or, equivalently, on connected graphs. Given an $n$-vertex connected graph $G$, form a random supergraph $G^*$ of $G$ by turning every pair of vertices of $G$ into an edge with probability $\frac{\epsilon}{n}$, where $\epsilon$ is a small positive constant. This perturbation model has been studied previously in several contexts, including smoothed analysis, small world networks, and combinatorics. Connected graphs can be bad expanders, can have a very large diameter, and can possibly contain no long paths. In contrast, we show that if $G$ is an $n$-vertex connected graph, then typically $G^*$ has edge expansion $\Omega(\frac{1}{\log n})$, diameter $O(\log n)$, and vertex expansion $\Omega(\frac{1}{\log n})$ and contains a path of length $\Omega(n)$, where for the last two properties we additionally assume that $G$ has bounded maximum degree. Moreover, we show that if $G$ has bounded degeneracy, then typically the mixing time of the lazy random walk on $G^*$ is $O(\log^2 n)$. All these results are asymptotically tight.

  • Jan 27: Miklos Racz on Braess's paradox for the spectral gap in random graphs and delocalization of eigenvectors

    If you add an edge to a random graph, do its conductance properties almost always get better? Perhaps surprisingly the answer is no, and I will discuss how this is connected to delocalization properties of eigenvectors. This is based on joint work with Ronen Eldan and Tselil Schramm.

  • Jan 20: Alan Roytman on Packing small vectors

    Online d-dimensional vector packing models many settings such as minimizing resources in data centers where jobs have multiple resource requirements (CPU, memory, etc.). However, no online d-dimensional vector packing algorithm can achieve a competitive ratio better than d. Fortunately, in many natural applications, vectors are relatively small, and thus the lower bound does not hold. For sufficiently small vectors, an O(log d)-competitive algorithm was known. We improve this to a constant competitive ratio, arbitrarily close to e (where e is the base of the natural logarithm), given that vectors are sufficiently small.

    We give improved results for the two dimensional case. For arbitrarily small vectors, the First Fit algorithm for two dimensional vector packing is no better than 2-competitive. We present a natural family of First Fit variants, and for optimized parameters get a competitive ratio of approximately 1.48 for sufficiently small vectors.

    We improve upon the 1.48 competitive ratio - not via a First Fit variant - and give a competitive ratio arbitrarily close to 4/3 for packing small, two dimensional vectors. We show that no algorithm can achieve better than a 4/3 competitive ratio for two dimensional vectors, even if one allows the algorithm to split vectors among arbitrarily many bins.

    Joint work with Yossi Azar, Ilan Reuven Cohen, and Amos Fiat.

Fall 2015

  • Dec 09: Yin Tat Lee on Uniform Sampling for Matrix Approximation

    In this talk, I give an iterative row sampling algorithm for matrix approximation that runs in input-sparsity time and preserves row structure and sparsity.

    Random sampling has become a critical tool in solving massive matrix problems. For linear regression, a small, manageable set of data rows can be randomly selected to approximate a tall, skinny data matrix, improving processing time significantly. For theoretical performance guarantees, each row must be sampled with probability proportional to its statistical leverage score. Unfortunately, leverage scores are difficult to compute. A simple alternative is to sample rows uniformly at random. While this often works, uniform sampling will eliminate critical row information for many natural instances.

    In this talk, I explain what information uniform sampling does preserve. Specifically, I show that uniform sampling yields a matrix that, in some sense, well approximates a large fraction of the original. While this weak form of approximation is not enough for solving linear regression directly, I show how to use this to build an iterative method.
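
    For concreteness, here is a minimal numpy sketch of the leverage-score sampling scheme that uniform sampling is being compared against (matrix sizes and the sample count are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((1000, 20))      # a tall, skinny matrix

    # Leverage score of row i = squared norm of row i of an orthonormal
    # basis of A's column space; the scores sum to rank(A).
    Q, _ = np.linalg.qr(A)
    lev = (Q ** 2).sum(axis=1)

    # Sample s rows with probability proportional to leverage and rescale,
    # so that S^T S is an unbiased estimator of A^T A.
    s = 200
    p = lev / lev.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    S = A[idx] / np.sqrt(s * p[idx, None])

    err = np.linalg.norm(S.T @ S - A.T @ A, 2) / np.linalg.norm(A.T @ A, 2)
    print(f"relative spectral error: {err:.3f}")
    ```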

  • Dec 02: Di Wang on Accelerated First-order Method for Packing and Covering LPs

    The linear coupling method was introduced recently by Allen-Zhu and Orecchia for solving convex optimization problems with first order methods, and it provides a conceptually simple way to integrate a gradient descent step and mirror descent step in each iteration. In the setting of standard smooth convex optimization, the method achieves the same convergence rate as that of the accelerated gradient descent method of Nesterov. The high-level approach of the linear coupling method is very flexible, and it has shown initial promise by providing improved algorithms for packing and covering linear programs. We will go over the general framework, and show how to achieve a nearly-linear-time $\epsilon$-approximation of packing and covering LPs with convergence rate $1/\epsilon$.

  • Nov 25: No Lunch (Thanksgiving week)
  • Nov 18: Akshayaram Srinivasan on On the Exact Cryptographic Hardness of Finding a Nash Equilibrium

    The exact hardness of computing a Nash equilibrium is a fundamental open question in algorithmic game theory. This problem is complete for the complexity class PPAD. It is well known that problems in PPAD cannot be NP-complete unless NP = coNP. Therefore, a natural direction is to reduce the hardness of PPAD to the hardness of problems used in cryptography.

    Bitansky, Paneth, and Rosen [FOCS 2015] prove the hardness of PPAD assuming the existence of quasi-polynomially hard indistinguishability obfuscation and sub-exponentially hard one-way functions. This leaves open the possibility of basing PPAD hardness on simpler, polynomially hard, computational assumptions.

    We make further progress in this direction and reduce PPAD hardness directly to polynomially hard assumptions. Our first result proves hardness of PPAD assuming the existence of polynomially hard indistinguishability obfuscation (IO) and one-way permutations. While this improves upon Bitansky et al.'s work, it does not give us a reduction to simpler, polynomially hard computational assumptions, because constructions of IO inherently seem to require assumptions with sub-exponential hardness. In contrast, public key functional encryption is a much simpler primitive and does not suffer from this drawback. Our second result shows that PPAD hardness can be based on polynomially hard public key functional encryption and one-way permutations. Our results further demonstrate the power of polynomially hard public key functional encryption, which is believed to be weaker than indistinguishability obfuscation.

    Joint work with Sanjam Garg and Omkant Pandey.

  • Nov 11: Greg Bodwin on The n^{4/3} Additive Spanner Bound is Tight

    A +k additive spanner is a sparse subgraph that preserves all pairwise distances up to additive error +k. A classical result in this field states that all graphs have +6 spanners on n^{4/3} edges, but no sparser spanners are known for any constant k. Meanwhile, current lower bounds allow for the possibility of constant-error spanners on O(n^{1 + eps}) edges for all eps > 0. It is considered a major open problem to prove the existence/nonexistence of these nearly-linear sized additive spanners.

    We resolve this question in the negative: there are graphs that provably have no spanners on n^{4/3 - eps} edges, unless the additive error is allowed to be polynomial in n. In this talk, we will describe a simple construction technique that produces one of these graphs. We will then discuss some extensions and implications of this new lower bound.

    Joint work with Amir Abboud.
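
    To make the definition concrete, here is a small checker for the additive stretch of a subgraph H of G (plain BFS over all pairs, so tiny examples only; it assumes H is a connected spanning subgraph):

    ```python
    from collections import deque

    def bfs_dist(n, adj, s):
        d = [None] * n
        d[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if d[v] is None:
                    d[v] = d[u] + 1
                    q.append(v)
        return d

    def additive_stretch(n, edges_G, edges_H):
        """Max over pairs (s, t) of dist_H(s, t) - dist_G(s, t)."""
        def adjacency(edges):
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            return adj
        aG, aH = adjacency(edges_G), adjacency(edges_H)
        return max(bfs_dist(n, aH, s)[t] - bfs_dist(n, aG, s)[t]
                   for s in range(n) for t in range(n))

    G = [(0, 1), (1, 2), (2, 3), (3, 0)]  # the 4-cycle
    H = [(0, 1), (1, 2), (2, 3)]          # one edge removed: a +2 spanner
    print(additive_stretch(4, G, H))      # 2
    ```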

  • Nov 04: Mark Zhandry on Anonymous Traitor Tracing: How to Embed Arbitrary Information in a Key

    In a traitor tracing scheme, each user is given a different decryption key. A content distributor can encrypt digital content using a public encryption key and each user in the system can decrypt it using her decryption key. Even if a coalition of users combines their decryption keys and constructs some “pirate decoder” that is capable of decrypting the content, there is a public tracing algorithm that is guaranteed to recover the identity of at least one of the users in the coalition given black-box access to such a decoder. All prior approaches to traitor tracing require an external mechanism to keep track of the users in the system and their identifying information in order to ensure accountability. In this talk, I will explain how to remove this restriction, resulting in an anonymous traitor tracing system, where honest users' identities remain hidden, and only dishonest users’ identities are made public. At the core of our construction is the solution to an interesting algorithmic problem, which I will show how to solve. Joint work with Ryo Nishimaki and Daniel Wichs.

  • Oct 28: Nihar Shah on "Multiplicative" Incentive Mechanisms for Crowdsourcing

    The growing need for labeled training data has made crowdsourcing an important part of machine learning. In this work, we design incentive-compatible payment mechanisms for such crowdsourcing setups. We show that surprisingly, under a mild and natural "no-free-lunch" requirement, our mechanisms are the one and only incentive-compatible class. Moreover, our mechanisms make the smallest possible payment to spammers among all possible incentive-compatible mechanisms. Interestingly, our mechanisms have a "multiplicative" structure where the final payment for a task is a product of the payments for each individual question. Our mechanisms have a simple description, and preliminary experiments involving 900 worker-task pairs on Amazon Mechanical Turk show a two-fold reduction in the error rates under our mechanism. Based on joint work with Denny Zhou and Yuval Peres from Microsoft Research.

  • Oct 21: Yannai A. Gonczarowski on Cascading to Equilibrium: Hydraulic Computation of Equilibria in Resource Selection Games

    Drawing intuition from a (physical) hydraulic system, we present a novel framework, constructively showing the existence of a strong Nash equilibrium in resource selection games (i.e., asymmetric singleton congestion games) with nonatomic players, the coincidence of strong equilibria and Nash equilibria in such games, and the uniqueness of the cost of each given resource across all Nash equilibria. Our proofs allow for explicit calculation of Nash equilibrium and for explicit and direct calculation of the resulting (unique) costs of resources, and do not hinge on any fixed-point theorem, on the Minimax theorem or any equivalent result, on linear programming, or on the existence of a potential (though our analysis does provide powerful insights into the potential, via a natural concrete physical interpretation). A generalization of resource selection games, called resource selection games with I.D.-dependent weighting, is defined, and the results are extended to this family, showing the existence of strong equilibria, and showing that while resource costs are no longer unique across Nash equilibria in games of this family, they are nonetheless unique across all strong Nash equilibria, drawing a novel fundamental connection between group deviation and I.D.-congestion. A natural application of the resulting machinery to a large class of constraint-satisfaction problems is also described. (Joint work with Moshe Tennenholtz.)

  • Oct 14: Alessandro Chiesa on Interactive oracle proofs

    I will present interactive oracle proofs, a proof system model that combines aspects of interactive proofs and probabilistically checkable proofs. I will talk about some initial constructions within this model, as well as cryptographic applications to obtaining succinct non-interactive proofs.

  • Oct 07: Muli Safra on Open questions regarding boolean functions with potential applications

    Following some theorems of Talagrand I will suggest a few extensions regarding parameters of boolean functions. We'll try to discuss potential applications.

  • Sep 30: Gautam Kamath on Optimal Testing for Properties of Distributions

    Given samples from an unknown distribution p, is it possible to distinguish whether p belongs to some class of distributions C versus p being far from every distribution in C? This fundamental question has received tremendous attention in statistics, focusing primarily on asymptotic analysis, and more recently in information theory and theoretical computer science, where the emphasis has been on small sample size and computational complexity. Nevertheless, even for basic properties of distributions such as monotonicity, log-concavity, unimodality, independence, and monotone hazard rate, the optimal sample complexity is unknown. We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our approach is an algorithm which solves the following problem: Given samples from an unknown distribution p, and a known distribution q, are p and q close in χ^2-distance, or far in total variation distance? The optimality of our testers is established by providing matching lower bounds with respect to both n and ε. Finally, a necessary building block for our testers and an important byproduct of our work are the first known computationally efficient proper learners for discrete log-concave and monotone hazard rate distributions.

    Joint work with Jayadev Acharya and Costis Daskalakis. Paper to appear in NIPS 2015, preprint available at http://arxiv.org/abs/1507.05952.

  • Sep 23: Nikhil Srivastava on Ramanujan Graphs from Finite Free Convolutions

    We will show that a random d-regular graph is an optimal (i.e., Ramanujan) expander with nonzero probability. Notably, we will use tools inspired by asymptotic (i.e., large n limit) random matrix theory to prove statements about finite dimensional matrices. The mediating role will be played by the expected characteristic polynomials of the random matrices in question, exploiting in particular their real-rootedness and invariance properties.

    Joint work with Adam Marcus (Princeton) and Dan Spielman (Yale).
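
    A quick empirical companion to this statement (the talk proves existence; this experiment, which assumes networkx is available, is purely illustrative): sample a random d-regular graph and compare its largest nontrivial adjacency eigenvalue to the Ramanujan bound 2*sqrt(d-1).

    ```python
    import networkx as nx
    import numpy as np

    d, n = 4, 500
    G = nx.random_regular_graph(d, n, seed=0)
    eig = np.linalg.eigvalsh(nx.to_numpy_array(G))   # ascending; eig[-1] = d
    lam = max(abs(eig[0]), abs(eig[-2]))             # largest nontrivial magnitude
    print(lam, 2 * np.sqrt(d - 1))                   # Ramanujan iff lam <= 2*sqrt(d-1)
    ```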

  • Sep 16: Tselil Schramm on Overcomplete tensor decomposition: speeding up sum-of-squares algorithms

    Recently, there have been several works using the sum-of-squares hierarchy to solve classical problems in machine learning. However, these sum-of-squares algorithms are very slow, in some cases running in quasi-polynomial time. I will present a fast spectral algorithm for the overcomplete tensor decomposition problem, which uses ideas from the sum-of-squares rounding scheme to achieve similar performance in polynomial time. Based on joint work with Sam Hopkins, Jonathan Shi and David Steurer.

Spring 2015

  • Apr 29: Shay Solomon on Dynamic Approximate Matchings: A Density-Sensitive Approach
  • Apr 22: Yaron Singer on Combinatorial Optimization under Noise
  • Apr 15: Sam Wong on The cutting plane method and submodular minimization
  • Apr 08: Surprise speaker
  • Apr 01: Prasad Raghavendra on Multiplicative weight updates and lower bounds for LP/SDP size
  • Mar 25: No Lunch (Spring Break)
  • Mar 18: No Lunch (Visit Days)
  • Mar 11: Ron Lavi on Competition among asymmetric sellers with fixed supply
  • Mar 04: No Lunch
  • Feb 25: Jingcheng Liu on Approximate counting via correlation decay

    Imagine you want to know roughly how many chocolates there are in a box; how would you go about it? Perhaps a natural way is divide-and-conquer, i.e., splitting the chocolates recursively. Now suppose we have one chocolate for each independent set of a graph; can we apply the same idea to count the number of independent sets? In this talk I'll try to give an overview of various frameworks and sources of approximate counting problems, and also a sketch of how one could formalize this recursive-splitting idea into a simple but general algorithm. The algorithm will be efficient whenever these "chocolates" (the set of solutions) exhibit a property known as correlation decay. Based on joint works with Dr. Pinyan Lu.
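
    A minimal sketch of the recursive-splitting idea for independent sets: branch on whether a chosen vertex is excluded (delete it) or included (delete it and its neighbors). Correlation decay is what justifies truncating this recursion to get an efficient approximation; the sketch below simply runs it exactly on a tiny graph.

    ```python
    def count_is(vertices, adj):
        """Exact count of independent sets via the standard branching recursion."""
        if not vertices:
            return 1                                   # the empty set counts
        v = next(iter(vertices))
        rest = vertices - {v}
        return (count_is(rest, adj)                    # v excluded
                + count_is(rest - adj[v], adj))        # v included: drop N(v)

    adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}       # the path P_4
    print(count_is(frozenset(adj), adj))               # 8 independent sets
    ```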

  • Feb 18: No Lunch
  • Feb 11: Antonio Blanca on Giant components and mixing times

    The random-cluster model (a.k.a. FK-model) on a finite graph G=(V,E) establishes a probability distribution over the subgraphs (V,A) of G which unifies the study of random graphs, spin systems and electrical networks. When G is the complete graph, the model exhibits a phase transition analogous to that in G(n,p) corresponding to the appearance of a "giant" component. By analyzing a particular Markov chain (known as the CM dynamics), we show how the smoothness of this transition has direct implications for the mixing time of the dynamics of the model.

  • Feb 04: Aviad Rubinstein on Nash equilibrium and birthday repetition
  • Jan 28: Jonah Brown-Cohen on Combinatorial optimization via polymorphisms

    An elegant characterization of the complexity of constraint satisfaction problems has emerged in the form of the algebraic dichotomy conjecture. Roughly speaking, the characterization asserts that a CSP A is tractable if and only if there exist certain non-trivial operations known as polymorphisms to combine solutions to A to create new ones. In an entirely separate line of work, the unique games conjecture yields a characterization of approximability of Max-CSPs. Surprisingly, this characterization for Max-CSPs can also be reformulated in the language of polymorphisms.

    We study whether existence of non-trivial polymorphisms implies tractability beyond the realm of constraint satisfaction problems, namely in the value-oracle model which covers such examples as general submodular function minimization. Specifically, given a function f along with an appropriate operation that never increases the value of f, we design algorithms to minimize f.

Fall 2014

  • Dec 10: Sid Barman on An Approximate Version of Caratheodory's Theorem and Its Algorithmic Applications

    In this talk I will present an approximate version of Caratheodory's theorem and its algorithmic applications. In particular, I will show that for every vector in the convex hull of a set of vectors X there exists a nearby (under p-norm distance, for p at least 2) vector that can be expressed as a convex combination of at most b vectors of X, where the bound b is independent of the dimension of the vectors.

    Given the significance of Caratheodory's theorem, this approximate version is interesting in its own right. It also has notable algorithmic applications. Specifically, I will describe how this approximate version of Caratheodory's theorem leads to a new algorithm for computing approximate Nash equilibria in two-player games. This algorithm, in particular, provides a polynomial-time approximation scheme for Nash equilibrium in games where the sum of the payoff matrices is sparse. Moreover, for arbitrary two-player games the running time of the algorithm matches the best-known upper bound, which is obtained in (Lipton, Markakis, and Mehta 2003).
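
    The sampling intuition behind the approximate version is easy to demonstrate: if x = sum_i w_i X_i is a convex combination, then averaging b i.i.d. draws from X (drawn with probabilities w) gives a b-sparse convex combination near x. A minimal numpy sketch for the l_2 case, with illustrative dimensions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d, m, b = 50, 200, 30
    X = rng.standard_normal((m, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # points on the unit sphere
    w = rng.dirichlet(np.ones(m))                  # convex-combination weights
    x = w @ X                                      # target point in conv(X)

    idx = rng.choice(m, size=b, p=w)               # b i.i.d. samples from w
    x_hat = X[idx].mean(axis=0)                    # b-sparse convex combination
    print(np.linalg.norm(x - x_hat))               # roughly O(1/sqrt(b)) in l_2
    ```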

  • Dec 03: No Lunch (Simons Workshop)
  • Nov 26: No Lunch (Thanksgiving)
  • Nov 19: George Piliouras on Games, Dynamics, and Average-case analysis
  • Nov 12: Theory Retreat Presentation
  • Nov 05: Jamie Morgenstern on Simple Mechanisms with Simple Strategies
  • Oct 29: No Lunch (Simons Workshop)
  • Oct 22: Tselil Schramm on A Better Approximation for Correlation Clustering
  • Oct 15: Gary Miller on Generalized Laplacians
  • Oct 08: Sanjam Garg on Indistinguishability Obfuscation: Definition and Amplification from NC^1 to General Circuits
  • Oct 01: James Lee on Expansion Beyond The Spectral Gap: Curvature, Entropy, and Coupling
  • Sep 24: No Lunch (Simons Workshop)
  • Sep 17: Luca Trevisan on The Riemann Hypothesis for Graphs
  • Sep 10: Christos Papadimitriou on Learning on the cheap

Spring 2014

  • Apr 30: Dimitris Achlioptas on The Local Lemma as a Random Walk
  • Apr 23: Mario Szegedy on Impossibility Theorems and the Universal Algebraic Toolkit
  • Apr 16: Theory Retreat Presentation
  • Apr 09: Seung Woo Shin on DWave
  • Apr 02: Justin Thaler on A Crash Course on Fast Interactive Proofs
  • Mar 19: No Lunch (Spring Break)
  • Mar 12: No Lunch (Visit Days)
  • Mar 05: Jonathan Herman on Social Connectivity - Coalescence Process of Connected Components of a Random Acquaintances Graph Determined by Collisions of Random Walks
  • Feb 26: No Lunch
  • Feb 19: Shai Vardi on Local Computation
  • Feb 12: Gregory Valiant on Instance-Optimal Testing
  • Feb 05: Rishi Gupta on Model-Independent Algorithms for Social Networks

Fall 2013

  • Dec 03: Ning Tan on Symmetric LP and SDP relaxation
  • Nov 27: No Lunch (Thanksgiving)
  • Nov 20: Matt Weinberg on An algorithmic proof of Border's theorem
  • Nov 13: Seung Woo Shin on The D-Wave Controversy
  • Nov 06: Dominik Scheder on How to leak the darkest secrets in relative safety
  • Oct 30: No Lunch (FOCS)
  • Oct 23: Guoming Wang on A quantum algorithm for approximating the effective resistances of electrical networks
  • Oct 16: Aviad Rubinstein on Reconstructing a Distribution Over Permutations
  • Oct 09: Nihar Shah on Secret Sharing Across a Network with Low Communication Cost
  • Oct 02: Anindya De on Limit theorems for Gaussian chaoses and applications

    A fundamental fact about gaussians is that they are closed under linear combinations. What about more complicated combinations? What are the conditions under which low-degree polynomials of gaussians are close to being gaussians? In this talk, I will discuss some relevant limit theorems which partially answer this question and their applications.

    Based on works with Ilias Diakonikolas and Rocco Servedio.

  • Sep 25: Paul Christiano on Manipulation-proof reputation systems via online learning

    Consider the problem faced by buyers and sellers on a completely decentralized online marketplace: a merchant may fail to deliver a high quality product, a buyer may harass a seller with the threat of a bad review, market participants may actually be coordinated shills aiming to manipulate the system, there are no available enforcement mechanisms to punish dishonest behavior, random noise may mask subtle malicious behavior by buyers or sellers, etc. We show how to convert online learning guarantees into manipulation-proof reputation systems which replicate the performance of the optimal "walled-garden" up to an additive loss per user of the system (i.e., they perform as well as if they had included only the optimal epsilon fraction of the merchants, with an additive loss that depends on 1 / epsilon). This guarantee holds even when the overwhelming majority of participants in the system coordinate in order to adversarially degrade its performance.

  • Sep 18: Sev Gharibian on Approximation algorithms for quantum constraint satisfaction problems
  • Sep 11: James Lee on Lifts of polytopes, approximation, and linear programming

    In recent work with Chan, Raghavendra, and Steurer, we showed that any polynomial-sized linear program for max-cut has an integrality gap of 1/2, and any such LP for max-3sat has an integrality gap of 7/8. I will discuss the proof, open questions, and also resurgent claims that P=NP.

Spring 2013

  • May 08: Dick Karp and Alistair Sinclair on The Simons Institute

    Dick and Alistair will give a (non-technical) update on what's going on at the Simons Institute, and how it will impact all of our lives next academic year. This is primarily aimed at our students and postdocs, but others are also welcome.

  • May 01: Dick Karp on Algorithms for Finding Good Coins and Significant Disease Associations

    We derive efficient algorithms for two sequential decision problems:

    (1) We are given n coins and the ability to perform independent tosses of each coin. The heads probabilities of the coins are initially unknown. We are told that there exists one distinguished coin whose heads probability exceeds the heads probabilities of all the other coins by at least a given value e. The problem is to minimize the expected number of coin tosses required to identify the distinguished coin.

    (2) We are given n pairs of coins and the ability to perform independent tosses of each coin. The heads probabilities of the coins are initially unknown. In the ith pair, one of the coins has heads probability p(i) + e(i) and the other has heads probability p(i) - e(i). The problem is to minimize the number of coin tosses to determine argmax_i e(i)^2/(p(i)(1 - p(i))).

    Problem 1 is a variant of the pure exploration version of the classical multi-armed bandit problem. The coins might correspond to treatments for a disease and the heads probability to the success probability of the treatment. The goal is to find the treatment with the highest success probability. In another setting, the coins correspond to algorithms for a task, and the heads probability is the probability that the algorithm performs correctly on an input drawn at random from a training set. The problem is to determine the algorithm with the highest probability of correct performance. A manuscript by Karthekeyan Chandrasekaran and the speaker (2012) discusses a version in which the maximum heads probability is given in advance. Here we treat the case where the maximum heads probability is not given.

    In Problem 2 each pair of coins corresponds to a genetic mutation that may be associated with a given disease. The heads probabilities p(i) + e(i) and p(i) - e(i) are the probabilities that the mutation occurs, respectively, in cases (individuals with the disease) and controls (members of the general population). We seek the mutation for which the hypothesis that e(i)>0 has the greatest significance according to the chi-square test.
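
    For Problem 1, one natural strategy (not the algorithm of the talk) is successive elimination: toss every surviving coin once per round and drop any coin whose empirical mean falls a Hoeffding-style confidence radius below the leader's. A minimal sketch with illustrative parameters:

    ```python
    import math, random

    def find_best_coin(probs, delta=0.05):
        """Successive elimination; returns the index of the best coin w.h.p."""
        n = len(probs)
        alive = list(range(n))
        heads = [0] * n
        t = 0
        while len(alive) > 1:
            t += 1
            for i in alive:
                heads[i] += random.random() < probs[i]   # one toss of coin i
            rad = math.sqrt(math.log(4 * n * t * t / delta) / (2 * t))
            leader = max(heads[i] for i in alive) / t
            alive = [i for i in alive if heads[i] / t >= leader - 2 * rad]
        return alive[0]

    random.seed(0)
    coins = [0.5, 0.5, 0.5, 0.7]      # the best coin exceeds the rest by e = 0.2
    print(find_best_coin(coins))      # 3, with high probability
    ```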

  • Apr 24: Anupam Gupta on Sparsest Cut for Bounded Treewidth Graphs
  • Apr 17: Shayan Oveis-Gharan on New Approaches to the Traveling Salesman Problem
  • Apr 10: Yaron Singer on Adaptive Seeding in Social Networks (and other Animals)
  • Apr 03: James Lee on Information theory against an adversary (or for Umesh, Quantum hypothesis testing)
  • Mar 20: No Lunch (Visit Days)
  • Mar 13: Ilan Adler on A direct reduction of PPAD Lemke-verified linear complementarity problems to bimatrix games
  • Mar 06: Nihar Shah on Codes for efficient and reliable distributed storage
  • Feb 27: Christos Papadimitriou on Game Theoretic Modeling of the Web
  • Feb 20: Paul Christiano on A Coordination Game in Networks
  • Feb 13: Siu Man Chan on Pebble Games and Complexity
  • Feb 06: Ravi Kannan on Non-negative Matrix Factorization
  • Jan 30: Anindya De on The Sum-of-Squares Hierarchy and Max Cut

Spring 2010

  • May 12: Lotfi Zadeh on Computing with imprecise probabilities
  • May 05: Fritz Sommer on A review of mathematical problems in theoretical neuroscience
  • Apr 28: Thomas Watson on Barriers to average-case complexity
  • Apr 21: James Lee on A remarkable relationship between cover times of graphs and random projections
  • Apr 14: Dick Karp on Online algorithms for implicit hitting set problems
  • Apr 07: Christos Papadimitriou on Optimal auctions
  • Mar 31: Thomas Vidick on A lower bound on a decision problem

    We prove a communication complexity lower bound for the following problem: Alice gets a unit vector $x$, Bob a unit vector $y$, with the promise that either their inner product satisfies $x \cdot y > \epsilon$ or $x \cdot y < -\epsilon$; their goal is to decide which. Our proof proceeds by round elimination and uses measure concentration techniques. (Joint work with Joshua Brody, Amit Chakrabarti, Oded Regev, and Ronald de Wolf.)
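
    For intuition about the upper-bound side of this promise problem, here is a sketch of the standard shared-randomness protocol (not part of the talk's lower-bound proof): Alice sends the sign of her vector's projection onto each of $k$ shared random Gaussian directions, and Bob counts how often these signs match his own. Two unit vectors' projection signs agree with probability $1 - \theta/\pi$, where $\theta$ is the angle between them, so the agreement rate separates the two promise cases; the constant in $k$ below is an illustrative assumption.

    ```python
    import math
    import random

    def decide_gap_inner_product(x, y, eps, delta=0.05):
        """Shared-randomness sign-agreement protocol (illustrative sketch).

        Promise: x and y are unit vectors with x.y > eps or x.y < -eps.
        Alice's message is the k bits sign(g_j . x) for shared Gaussian
        directions g_j; signs agree with probability 1 - angle(x, y)/pi,
        which is > 1/2 + eps/pi in the first case and < 1/2 - eps/pi in
        the second, so a majority count of agreements decides."""
        n = len(x)
        # Hoeffding: O(log(1/delta) / eps^2) samples suffice to detect
        # a bias of eps/pi away from 1/2.
        k = math.ceil(5 * math.pi ** 2 * math.log(2 / delta) / eps ** 2)
        agree = 0
        for _ in range(k):
            g = [random.gauss(0.0, 1.0) for _ in range(n)]  # shared randomness
            alice_bit = sum(gi * xi for gi, xi in zip(g, x)) >= 0
            bob_bit = sum(gi * yi for gi, yi in zip(g, y)) >= 0
            agree += (alice_bit == bob_bit)
        return "x.y > eps" if agree / k > 0.5 else "x.y < -eps"
    ```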

  • Mar 15: Lorenzo Orecchia on Fast algorithms for balanced cut and local random walks [Location: 410 Hearst Mining Building]
  • Mar 03: Free Lunch (report on theory retreat)
  • Feb 24: Anindya De on Extractors via complexity amplification
  • Feb 17: Ryan Williams on Improving exhaustive search implies superpolynomial lower bounds
  • Feb 10: Dirk Sudholt on Ant Colony Optimization

    Ant Colony Optimization (ACO) is a modern and very popular optimization paradigm inspired by the ability of ant colonies to find shortest paths between their nest and a food source. Despite its popularity, the theory of ACO is still in its infancy and a solid theoretical foundation is needed. I will present bounds on the running time of ACO systems for shortest path problems. This is joint work with Christian Horoba and it partly builds on previous results by Attiratanasunthron and Fakcharoenphol [Information Processing Letters, 105(3):88--92, 2008]. While the previous results were limited to single-destination shortest paths on directed acyclic graphs, our results hold for arbitrary directed graphs and the running time bounds have been improved. Our upper bound is asymptotically tight for large evaporation factors, holds with high probability, and transfers to the all-pairs shortest paths problem. There, a simple mechanism for exchanging information between ants with different destinations yields a significant improvement. A comparison with evolutionary and genetic approaches indicates that ACO is the best known metaheuristic for the all-pairs shortest paths problem.
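
    For readers new to ACO, the sketch below shows the basic mechanism on a single source-target pair: ants take pheromone-weighted random walks, pheromone evaporates everywhere, and the best walk found so far is reinforced, so short paths gradually accumulate pheromone. This is a generic toy, not one of the variants analyzed in the talk (which differ in the pheromone update and path model); all parameters are illustrative assumptions.

    ```python
    import random

    def aco_shortest_path(graph, source, target, ants=100, rounds=200,
                          evaporation=0.1, deposit=1.0):
        """Toy ACO for one source-target pair (illustrative sketch only).

        `graph` maps each node to a dict {neighbor: edge_length}."""
        tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone levels
        best_path, best_len = None, float("inf")
        for _ in range(rounds):
            for _ in range(ants):
                node, path, length, seen = source, [source], 0.0, {source}
                while node != target:
                    choices = [v for v in graph[node] if v not in seen]
                    if not choices:                 # dead end: abandon this ant
                        break
                    nxt = random.choices(
                        choices, weights=[tau[(node, v)] for v in choices])[0]
                    length += graph[node][nxt]
                    path.append(nxt)
                    seen.add(nxt)
                    node = nxt
                else:                               # ant reached the target
                    if length < best_len:
                        best_path, best_len = path, length
            for e in tau:                           # evaporate everywhere
                tau[e] *= 1.0 - evaporation
            if best_path is not None:               # reinforce best-so-far path
                for u, v in zip(best_path, best_path[1:]):
                    tau[(u, v)] += deposit
        return best_path, best_len

    # Example: diamond graph where the route through "a" is shorter.
    g = {"s": {"a": 1.0, "b": 3.0}, "a": {"t": 1.0}, "b": {"t": 1.0}, "t": {}}
    print(aco_shortest_path(g, "s", "t"))
    ```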

  • Feb 03: Christos Papadimitriou on Risk in games
  • Jan 27: Sanjeev Arora on Computational complexity and information asymmetry in financial derivatives

Fall 2009

Spring 2009

Spring 2008

  • May 07: Kunal Talwar on Lower bounds for near neighbor search
  • Apr 30: Jonathan Katz on Complete fairness in secure two-party computation
  • Apr 23: Grant Schoenebeck on Integrality gaps in the Lasserre hierarchy
  • Apr 16: Yaron Singer on The hardness of being truthful
  • Apr 09: Jacob Abernethy on How to bet in a rigged casino using a random walk
  • Apr 02: Venkat Guruswami on Compressed sensing, Euclidean sections, and the JL lemma
  • Mar 26: No Lunch (spring break)
  • Mar 19: Alexander Rakhlin on Algorithms for online bandit problems
  • Mar 17: Greg Valiant on The complexity of Nash equilibria of action-graph games [Location: 410 Hearst Mining Building]
  • Mar 12: Allan Sly on MCMC on exponential random graphs
  • Mar 05: Luca Trevisan on Dense subsets of pseudorandom sets and arithmetic progressions in the primes
  • Feb 27: Madhur Tulsiani on Unique games on expanders
  • Feb 20: James Lee on Spectral bounds without conformal mappings
  • Feb 13: Ravi Kannan on Random matrices and generative models
  • Feb 06: Albert Atserias on Fractional edge covers and relational joins
  • Jan 30: Andrej Bogdanov on Pseudorandom generators for low-degree polynomials [Location: 373 Soda Hall]
  • Jan 23: Prahladh Harsha on Lower bounds for probabilistically checkable proofs of proximity [Location: 410 Hearst Mining Building]

Spring 2007

  • May 02: Nisheeth Vishnoi on Integrality gaps for sparsest cut
  • Apr 25: Sergey Yekhanin on Locally decodable codes
  • Apr 18: Luca Trevisan on Proving unsatisfiability of random k-SAT formulas
  • Apr 11: Jayanth Kannan on New routing protocols
  • Apr 04: Costis Daskalakis on Decoding error-correcting codes via linear programming
  • Mar 28: No Lunch
  • Mar 21: Jacob Abernethy on Experts algorithms, random sampling, and approximating the permanent
  • Mar 14: No Lunch
  • Mar 12: Lorenzo Orecchia on Expert algorithms [Location: 410 Hearst Mining Building]
  • Mar 07: Grant Schoenebeck on Integrality gaps for Lovasz-Schrijver relaxations
  • Feb 28: Konstantin and Yuri Makarychev on Near-optimal algorithms for maximum constraint satisfaction problems
  • Feb 21: Elchanan Mossel on Asymptotics of games
  • Feb 14: Umesh Vazirani on D-wave's "quantum computer"
  • Feb 07: Albert Atserias on Distinguishing SAT from polynomial-size circuits
  • Jan 31: Dick Karp on Noisy binary search
  • Jan 24: Robert Špalek on How negative weights make adversaries stronger

Fall 2006

  • Nov 29: Kamalika Chaudhuri on A clustering problem
  • Nov 22: No Lunch
  • Nov 15: Sandu Popescu on Non-local correlations and communication complexity
  • Nov 08: Alexandra Kolla on Honest-verifier zero knowledge
  • Nov 01: Omid Etesami on Ad auctions and market equilibrium
  • Oct 25: Ravi Kannan on Sampling in large matrices and tensors
  • Oct 18: Iordanis Kerenidis on The one-way communication complexity of the Boolean Hidden Matching problem
  • Oct 11: Nikhil Devanur on The bidirected cut relaxation of Minimum Steiner Tree
  • Oct 04: Mani Narayanan on Tractable graph matching problem
  • Sep 27: Costis Daskalakis on Reconstructing phylogenies from minimum information
  • Sep 20: Henry Lin on Network decompositions and the power of choice in Polya urns
  • Sep 13: Ben Reichardt on Fault tolerance in quantum computation
  • Sep 06: Dick Karp on Sorting partial orders

Spring 2006

Fall 2005

Spring 2005

Fall 2004

Spring 2004

  • May 19: Cynthia Dwork on Privacy in public databases
  • May 12: Hoeteck Wee on Obfuscation [Talk given by Luca Trevisan]
  • May 07: Leonard Schulman on Clustering
  • May 05: Frank McSherry on Spectral filtering and Gaussian mixtures
  • Apr 28: Tim Roughgarden on Correlated equilibria
  • Apr 21: Kamal Jain on Prize-collecting Steiner forest [Location: 400 Cory Hall]
  • Apr 14: Kamalika Chaudhuri on Degree-restricted MST
  • Apr 07: Luca Trevisan on Inapproximability results proved using average-case assumptions
  • Mar 31: Andrej Bogdanov on Pseudorandomness against degree-2 adversaries
  • Mar 24: No Lunch (spring break)
  • Mar 15: Iordanis Kerenidis on Communication complexity [Location: 380 Soda Hall]
  • Mar 10: Ryan O'Donnell on Learning monotone functions
  • Mar 03: Kobbi Nissim on Communication versus computation
  • Feb 25: Nati Linial on Simplicial complexes [Location: 400 Cory Hall]
  • Feb 18: Arnab Nilim on Optimal Markov decision problems [Location: 400 Cory Hall]
  • Feb 11: Kris Hildrum on Peer-to-peer networks
  • Feb 04: Scott Aaronson on The communication complexity of agreement
  • Jan 28: No Lunch
  • Jan 21: Baruch Awerbuch on Adaptive routing

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

  • Nov 22: Free lunch (we will wish each other happy holidays)
  • Dec 06: Iordanis Kerenidis on Collaborative filtering
  • Nov 29: Satish Rao on Disjoint paths
  • Nov 22: No Lunch (Thanksgiving week)
  • Nov 15: Kenji Obata on Lower bound for property testing [Location: 373 Soda Hall]
  • Nov 08: Sivan Toledo on Linear algebraic algorithms
  • Nov 01: David Wagner on Homomorphic signature schemes
  • Oct 25: Christos Papadimitriou on Computational game theory
  • Oct 18: Yuval Rabani on Directed multicuts
  • Oct 11: Amir Shpilka on Lower bounds for matrix product
  • Oct 04: Subhash Khot on A class of two-query PCPs
  • Sep 27: Luca Trevisan on Sub-linear time MST approximation
  • Sep 20: Beini Zhou on Black-box and non-black-box zero knowledge
  • Sep 13: Sanjeev Arora on The nondeterministic hierarchy theorem
  • Sep 11: Venkat Guruswami on Hypergraph coloring