# Workshop on Complexity of Computation, Communication, Descriptions and Proofs

This workshop is organized by the Laboratory of Theoretical Computer Science of the Higher School of Economics in conjunction with CSR 2017. The scope of the workshop covers various areas of Theoretical Computer Science.

Participation in the workshop is free, but registration is required.


### Program of the workshop

Amnon Ta-Shma (Tel-Aviv University)

The question of finding an epsilon-biased set with close to optimal support size, or, equivalently, finding an explicit binary code with distance $\frac{1-\varepsilon}{2}$ and rate close to the Gilbert-Varshamov bound, has attracted a lot of attention in recent decades. In this work we solve the problem almost optimally and show an explicit $\varepsilon$-biased set over $k$ bits with support size $O(\frac{k}{\varepsilon^{2+o(1)}})$. This improves upon all previous explicit constructions, which were of order $\frac{k^2}{\varepsilon^2}$, $\frac{k}{\varepsilon^3}$ or $\frac{k^{5/4}}{\varepsilon^{5/2}}$. The result is close to the Gilbert-Varshamov bound, which is $O(\frac{k}{\varepsilon^2})$, and the lower bound, which is $\Omega(\frac{k}{\varepsilon^2 \log(1/\varepsilon)})$.

The main technical tool we use is bias amplification with the $s$-wide replacement product. The sum of two independent samples from an $\varepsilon$-biased set is $\varepsilon^2$-biased. Rozenman and Wigderson showed how to amplify the bias more economically by choosing the two samples using an expander. Based on that, they suggested a recursive construction that achieves sample size $O(\frac{k}{\varepsilon^4})$. We show that amplification with a long random walk over the $s$-wide replacement product reduces the bias almost optimally.
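The first amplification step can be checked by brute force. In the sketch below (the function `bias`, the toy set `S`, and the parameter `k = 4` are illustrative choices, not part of the construction), the bias of a set $S \subseteq \{0,1\}^k$ with respect to a nonzero $\alpha$ is the averaged $\pm 1$ character sum, and XOR-ing two independent samples squares it:

```python
def bias(samples, k):
    """Max over nonzero alpha of |average of (-1)^<alpha, x> over the samples|."""
    best = 0.0
    for alpha in range(1, 1 << k):
        s = sum((-1) ** bin(alpha & x).count("1") for x in samples)
        best = max(best, abs(s) / len(samples))
    return best

S = [0b0000, 0b0111, 0b1011, 0b1101, 0b1110]   # a toy set over k = 4 bits
eps = bias(S, 4)

# Multiset of XORs of two independent samples: its bias at each alpha is
# the square of the original bias at that alpha, so the max bias squares too.
S2 = [x ^ y for x in S for y in S]
eps2 = bias(S2, 4)
assert abs(eps2 - eps * eps) < 1e-9
```

The catch, which the expander-walk and wide-replacement-product constructions address, is that the naive multiset `S2` squares the bias at the cost of squaring the support size.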

Dmitry Sokolov (St. Petersburg Department of Steklov Institute of Mathematics)

In the early ’90s Krajíček introduced the so-called “interpolation technique” for proving lower bounds on the size of propositional proofs. The crucial idea of this technique is the “extraction” of a small monotone circuit for some hard function from a small proof.

To describe the essence of this technique, consider a monotone function $f : \{0, 1\}^n \to \{0, 1\}$ from the class $\mathcal{NP}$ that cannot be computed by a small monotone circuit. Let $Zero(x, r)$ be a “reasonable encoding”, with additional variables $r$, of the fact that $x \in f^{-1}(0)$, and let the formula $One(x, q)$ be a “reasonable encoding”, with additional variables $q$, of the fact that $x \in f^{-1}(1)$. Krajíček showed, for a large class of proof systems (resolution, LK without the cut rule, etc.), that if there is a proof of size $S$ of the unsatisfiable formula $Zero(x, r) \wedge One(x, q)$, then one can construct a circuit of size $poly(S, n)$ for the function $f$.

Using this general idea, Pudlák gave a lower bound on the size of proofs in the Cutting Planes proof system (and interpolation is still the only known technique for proving lower bounds for CP).

In this talk we consider ways of “extracting” monotone circuits, the limitations of this technique, and recent results in this area.

Bruno Loff (University of Porto)

We will be interested in understanding the communication complexity of a composition problem. Here we have a two-party Boolean function $g(x, y)$ and some function $f(z)$ on $p$ bits, and two players, Alice and Bob, who wish to compute the composition $f \circ g$. That is, Alice is given $x_1, \dots, x_p$ and Bob is given $y_1, \dots, y_p$, and they wish to know $f(g(x_1, y_1), \dots, g(x_p, y_p))$.

It is easy to see that the communication complexity of $f \circ g$ is never more than the query complexity of $f$ times the communication complexity of $g$: both players simulate a decision tree for $f$, and every time it queries a bit of $f$’s input, they communicate to solve the corresponding instance of $g$. We overview several new results showing that, for some choices of $g$, this turns out to be the optimal protocol for ***any*** $f$.
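The trivial upper bound above can be sketched in a few lines. This is a toy simulation only; the names `compose_protocol` and `g_and` and the per-query bit counts are illustrative assumptions, not from the talk:

```python
def compose_protocol(f, xs, ys, g_protocol, p):
    """Simulate a (here: full-depth) decision tree for f; each query to
    bit i is answered by running the protocol for g on (xs[i], ys[i]).
    Total communication <= (tree depth) * (cost of the g-protocol)."""
    total_bits = 0
    vals = []
    for i in range(p):                      # query every bit of f's input
        v, bits = g_protocol(xs[i], ys[i])
        total_bits += bits
        vals.append(v)
    return f(vals), total_bits

def g_and(x, y):
    # Alice sends her bit (1 bit); Bob replies with x AND y (1 bit): 2 bits.
    return x & y, 2

f_maj = lambda v: int(sum(v) >= 2)          # majority of 3 bits
out, bits = compose_protocol(f_maj, [1, 0, 1], [1, 1, 1], g_and, 3)
# out == 1  (majority of 1, 0, 1), bits == 6 == 3 queries * 2 bits
```

The new results say that for some inner functions $g$, no protocol can beat this query-by-query simulation, no matter how cleverly $f$ is exploited.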

Grigory Yaroslavtsev (Indiana University, Bloomington)

I will introduce a model for Massively Parallel Computation (MPC) that has recently emerged to capture aspects of computation in big data processing systems (such as MapReduce and Spark). This model emphasizes depth of the computation as a key metric of performance under certain computational and communication constraints. I will show how to solve some basic computational problems such as sorting and graph connectivity in MPC. We will also see a summary of existing results and main open questions in the area and illustrate connections to other well-studied models used to describe computational and communication complexity in data processing. This talk will be self-contained and won’t assume prior familiarity with massively-parallel computation.
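As a toy illustration of low-depth computation in this model, here is a constant-round, TeraSort-style sorting simulation. Everything here (the function `mpc_sort`, the sampling rate, the number of machines) is an illustrative assumption, not material from the talk:

```python
import random

def mpc_sort(data, num_machines):
    """Toy simulation of O(1)-round MPC sorting.
    Round 1: each machine samples a few records; splitters are chosen globally.
    Round 2: each record is routed to the machine owning its key range.
    Round 3: each machine sorts locally; concatenation is globally sorted."""
    chunks = [data[i::num_machines] for i in range(num_machines)]
    # Round 1: sample and pick (at most num_machines - 1) sorted splitters.
    sample = sorted(x for c in chunks for x in random.sample(c, min(4, len(c))))
    splitters = sample[::max(1, len(sample) // num_machines)][1:num_machines]
    # Round 2: route each record to bucket j = number of splitters <= it.
    buckets = [[] for _ in range(num_machines)]
    for c in chunks:
        for x in c:
            buckets[sum(x >= s for s in splitters)].append(x)
    # Round 3: local sorts; buckets cover disjoint, increasing key ranges.
    return [x for b in buckets for x in sorted(b)]

random.seed(0)
data = random.sample(range(1000), 200)
assert mpc_sort(data, 5) == sorted(data)
```

The point of the model is visible even in this toy: the round (depth) count is constant, while each machine touches only its own small chunk plus the records routed to it.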

Stephen Fenner (The University of South Carolina)

I will describe a cluster of recent results that help to derandomize some parallel algorithms for graphs and matroids. Specifically, it has been shown that the problem of finding a perfect matching in a graph is in the complexity class quasi-NC, that is, it is solvable in polylogarithmic time (that is, $\log^{O(1)} n$ time) using quasipolynomially many processors (that is, $2^{\log^{O(1)} n}$ many processors). This was first shown for bipartite graphs by Fenner, Gurjar, and Thierauf in 2015, and a proof for general graphs was recently announced by Svensson and Tarnawski (April 2017). In the interim, Gurjar and Thierauf (2016) gave a quasi-NC algorithm for the Linear Matroid Intersection problem. All these results use similar techniques that are based in geometry---for example, discovering lower-dimensional faces of the perfect matching polytope of a graph. I will emphasize these geometrical techniques in the talk.
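The randomized ancestor of these algorithms rests on the Isolation Lemma: random edge weights in $[1, 2m]$ make the minimum-weight perfect matching unique with probability at least $1/2$, and the quasi-NC results replace this randomness with explicit weight families. A brute-force empirical check of the lemma on $K_{n,n}$ (an illustration of the lemma only, not of the derandomization; all names and parameters below are assumptions):

```python
import itertools
import random

def min_weight_matchings(n, weight):
    """All perfect matchings of K_{n,n} (as permutations) of minimum weight."""
    best, argmin = None, []
    for perm in itertools.permutations(range(n)):
        w = sum(weight[i][perm[i]] for i in range(n))
        if best is None or w < best:
            best, argmin = w, [perm]
        elif w == best:
            argmin.append(perm)
    return argmin

random.seed(0)
n, trials, unique = 4, 200, 0
m = n * n                                   # number of edges of K_{n,n}
for _ in range(trials):
    # Isolation Lemma: weights uniform in [1, 2m] isolate w.p. >= 1/2.
    W = [[random.randint(1, 2 * m) for _ in range(n)] for _ in range(n)]
    if len(min_weight_matchings(n, W)) == 1:
        unique += 1
assert unique / trials >= 0.5
```

The quasi-NC constructions can be read as producing, deterministically and in small depth, weight assignments with this isolating property, which then pick out lower-dimensional faces of the matching polytope.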