
Conference 'Problems in Theoretical Computer Science 2019'

This workshop is organized by the Laboratory of Theoretical Computer Science of the Higher School of Economics.
The scope of the workshop covers various areas of Theoretical Computer Science.

Day 1
Friday, November 29, 2019
12:00–12:40
Bounded-depth Frege complexity of Tseitin formulas for all graphs

Dmitry M. Itsykson, PDMI RAS (room G410)

We prove that there is a constant $K$ such that \emph{Tseitin} formulas for an undirected graph $G$ require proofs of size $2^{tw(G)^{\Omega(1/d)}}$ in depth-$d$ Frege systems for $d<\frac{K \log n}{\log \log n}$, where $tw(G)$ is the treewidth of $G$. This extends H{\aa}stad's recent lower bound for the grid graph to all graphs. Furthermore, we prove that our bound is tight up to a multiplicative constant in the top exponent. Namely, we show that if a Tseitin formula for a graph $G$ has size $s$, then for all large enough $d$, it has a depth-$d$ Frege proof of size $2^{tw(G)^{O(1/d)}} poly(s)$.
Through this result we settle the question, posed by M. Alekhnovich and A. Razborov, of showing that the class of Tseitin formulas is quasi-automatizable for resolution.
Joint work with Nicola Galesi, Artur Riazanov and Anastasia Sofronova.
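For readers less familiar with the terminology, a brief reminder of the objects involved (a standard definition, not specific to this talk): given a graph $G=(V,E)$ with a Boolean variable $x_e$ for every edge and a charge function $c:V\to\{0,1\}$, the Tseitin formula $T(G,c)$ is the CNF encoding of the parity constraints

$$\bigoplus_{e \ni v} x_e = c(v) \quad \text{for every vertex } v \in V,$$

which is unsatisfiable if and only if some connected component of $G$ has odd total charge. For instance, a triangle with all three charges equal to $1$ gives an unsatisfiable instance, since $1+1+1$ is odd.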

 

12:50–13:30
Semialgebraic vs algebraic proofs: upper bounds

Edward A. Hirsch, PDMI RAS (room G410)

Algebraic and semialgebraic propositional proof systems have been known for decades: the former are based on derivations in an ideal, the latter on constructing a cone.
While exponential lower bounds for algebraic systems have long been known, no such bounds have been proved for semialgebraic proof systems (except for tree-like or very weak systems).
Grochow and Pitassi suggested an algebraic proof system with randomized verification working with circuits (the Ideal Proof System, IPS), which is a very powerful proof system resembling the Extended Frege system.
We introduce a somewhat analogous semialgebraic proof system (the Cone Proof System, CPS) and ask whether CPS can be polynomially simulated by IPS.
In this talk we present a sufficient condition for such a simulation. A conditional lower bound is proved in the subsequent talk.
This is joint work with Y. Alexeev, D. Grigoriev, and I. Tzameret.
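To illustrate the distinction drawn above (a standard textbook-style picture, not the precise definitions of IPS and CPS): an algebraic refutation of polynomial equations $f_1=\dots=f_m=0$ over Boolean points exhibits $1$ in the ideal they generate, whereas a semialgebraic refutation of inequalities $g_1\ge 0,\dots,g_k\ge 0$ exhibits $-1$ in the cone they generate:

$$1 = \sum_{i} p_i f_i + \sum_{j} q_j (x_j^2 - x_j), \qquad -1 = \sigma_0 + \sum_{i} \sigma_i g_i,$$

where the $p_i, q_j$ are arbitrary polynomials and the $\sigma_i$ are sums of squares (or, in weaker systems, nonnegative combinations of products of the axioms).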

13:30–14:10
Semialgebraic vs algebraic proofs: lower bounds

Yaroslav Alexeev, SPbU (room G410)

We consider the two proof systems, IPS and CPS, discussed in the previous talk.
We prove a conditional exponential lower bound on the size of IPS proofs of the so-called Binary Value Principle ($\sum_i 2^{i-1} x_i = -1$), whose refutation is needed in the simulation.
Since the BVP is easily refuted in CPS, we conclude that the existence of short IPS proofs for it is equivalent to the existence of a polynomial simulation of CPS by IPS as an algebraic system.
The condition is the following conjecture of Shub and Smale: it is impossible to compute a number divisible by n! using just polylog(n) arithmetic operations.
This is joint work with Edward A. Hirsch, D. Grigoriev, and I. Tzameret.
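A worked example of why the Binary Value Principle separates the two settings, assuming its usual statement $\sum_{i=1}^{n} 2^{i-1} x_i = -1$ over Boolean variables: the semialgebraic axioms $x_i \ge 0$ immediately give

$$\sum_{i=1}^{n} 2^{i-1} x_i \;\ge\; 0 \;>\; -1,$$

contradicting the equation, so the refutation is short in a semialgebraic system such as CPS; no comparably simple sign argument is available in a purely algebraic system, which is the intuition behind conjecturing that IPS refutations of the BVP must be long.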

15:30–16:10
Searching Runs in Streams

Oleg Merkurev, UrFU (room G410)

Runs, or maximal periodic substrings, capture the whole picture of periodic fragments in a string. Computing all runs of a string in the usual RAM model is a well-studied problem. We approach this problem in the streaming model, where input symbols arrive one by one and the available space is sublinear. We show that no streaming algorithm can identify all squares, and hence all runs, in a string. Our main contribution is a Monte Carlo algorithm which approximates the set of runs in the length-$n$ input stream $S$. Given the error parameter $\varepsilon=\varepsilon(n)\in(0,\frac12]$, the algorithm reports a set of periodic substrings such that for each run of exponent $\beta\ge 2+\varepsilon$ in $S$ a single substring of it is reported, with the same period and exponent at least $\beta-\varepsilon$; for runs of smaller exponent, zero or one substring of exponent at least 2 is reported. The algorithm uses $O(\frac{\log^2 n}{\varepsilon})$ words of memory and spends $O(\log n)$ operations, including dictionary operations, per symbol.
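As a concrete illustration of the objects in question (a naive RAM-model sketch, not the streaming algorithm of the talk; the example string and the code are illustrative only):

def runs(s):
    # Naive O(n^3) enumeration of runs (maximal periodic substrings of
    # exponent >= 2) in s -- for illustration only.
    n, result = len(s), []
    for i in range(n):
        for j in range(i + 2, n + 1):          # candidate substring s[i:j]
            length = j - i
            # smallest period of s[i:j]
            p = next(p for p in range(1, length + 1)
                     if all(s[i + k] == s[i + k + p] for k in range(length - p)))
            if length / p < 2:                 # exponent = length / period
                continue
            # maximality: the period p must not extend to the left or right
            if i > 0 and s[i - 1] == s[i - 1 + p]:
                continue
            if j < n and s[j] == s[j - p]:
                continue
            result.append((i, j, p))
    return result

# runs("mississippi") contains (1, 8, 3): the run "ississi"
# has period 3 and exponent 7/3.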

16:10–16:50
Almost tight lower bounds on regular resolution refutations of Tseitin Formulas for constant-degree graphs

Artur Ryazanov, PDMI RAS (room G410)

We show that the size of any regular resolution refutation of a Tseitin formula $T(G,c)$ based on a graph $G$ is at least $2^{\Omega(tw(G)/\log n)}$, where $n$ is the number of vertices in $G$ and $tw(G)$ is the treewidth of $G$. For constant-degree graphs there is a known upper bound of $2^{O(tw(G))}$ [Alekhnovich, Razborov, FOCS 2002], so our lower bound is tight up to a logarithmic factor in the exponent.
In order to prove this result, we show that any regular resolution proof of a Tseitin formula $T(G,c)$ of size $S$ can be transformed into a read-once branching program of size $S^{\log n}$ computing a satisfiable Tseitin formula $T(G,c')$. We then show that any read-once branching program computing a satisfiable Tseitin formula $T(G,c')$ has size at least $2^{\Omega(tw(G))}$; the latter improves a result of [Glinskih, Itsykson, CSR 2019] and, together with the transformation, implies the main result.
The talk is based on joint work with Dmitry Itsykson, Danil Sagunov and Petr Smirnov.
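To spell out how the two ingredients combine (a routine calculation, included only for clarity): if a regular refutation of size $S$ yields a read-once branching program of size $S^{\log n}$ for a satisfiable Tseitin formula, and every such program has size at least $2^{\Omega(tw(G))}$, then

$$S^{\log n} \ge 2^{\Omega(tw(G))} \quad\Longrightarrow\quad S \ge 2^{\Omega(tw(G)/\log n)}.$$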

17:20–18:00
Resolution over linear equations: lower bounds on Ordering and Dense Linear Ordering principles

Svyatoslav Gryaznov, PDMI RAS (room G410)

We consider the proof system Res(+) introduced by Itsykson and Sokolov, which is an extension of the Resolution proof system and operates with disjunctions of linear equations over $\mathbb{F}_2$. We construct a framework based on Prover-Delayer games for proving lower bounds on tree-like Res(+) refutations of families of unsatisfiable CNF formulas with certain properties. The framework generalizes the ideas used to prove an exponential lower bound for the Pigeonhole principle. We use the method to prove exponential lower bounds on tree-like Res(+) refutations of the Ordering and Dense Linear Ordering principles. Both of these principles have Resolution refutations of polynomial size; therefore, they are natural examples of formulas that are hard for tree-like Res(+) and easy for Resolution.
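For concreteness, a standard example of the syntax (not taken from the talk): a line of Res(+) is a disjunction of linear equations over $\mathbb{F}_2$, such as

$$(x_1 \oplus x_2 \oplus x_3 = 1) \vee (x_2 = 0) \vee (x_4 \oplus x_5 = 0),$$

and an ordinary clause is the special case in which every equation mentions a single variable: the literal $x$ corresponds to $x = 1$ and $\neg x$ to $x = 0$.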

Day 2
Saturday, November 30, 2019
11:00–11:40
On the Complexity of the Quantified Constraint Satisfaction Problem

Dmitriy Zhuk, MSU (room R506)

The Quantified Constraint Satisfaction Problem QCSP$(\Gamma)$ is the problem of evaluating a sentence of the form $\forall x_1 \exists y_1 \dots \forall x_n \exists y_n \ (R_{1}(\dots)\wedge\dots \wedge R_{s}(\dots))$, where $R_1,\dots,R_s$ are relations from the constraint language $\Gamma$. This problem is a generalization of the Constraint Satisfaction Problem, where only existential quantifiers are allowed.
We study the complexity of QCSP$(\Gamma)$ for different constraint languages on finite sets. Unlike the Constraint Satisfaction Problem, where the problem for every constraint language is either tractable or NP-complete, QCSP$(\Gamma)$ can be PSpace-complete. It was conjectured by Hubie Chen that QCSP$(\Gamma)$ is always either tractable, NP-complete, or PSpace-complete. We disproved this conjecture and showed that for some constraint languages $\Gamma$ the problem QCSP$(\Gamma)$ can be coNP-complete, DP-complete, or $\Theta_{2}^{P}$-complete.
Also, we characterized the complexity of the Quantified Constraint Satisfaction Problem for constraint languages on a 3-element domain containing all unary singleton relations (the so-called idempotent case), i.e., we showed that for such languages QCSP$(\Gamma)$ is either tractable, NP-complete, coNP-complete, or PSpace-complete.
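A tiny example of such a sentence, chosen only to illustrate the format and the role of the quantifier prefix: for the constraint language $\Gamma = \{\neq\}$ on the domain $\{0,1\}$,

$$\forall x\, \exists y\; (x \neq y) \ \text{ is true (take } y = 1 - x\text{)}, \qquad \exists y\, \forall x\; (x \neq y) \ \text{ is false},$$

and deciding the truth of such sentences for a fixed $\Gamma$ is exactly QCSP$(\Gamma)$.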

12:10–12:50
Polynomial Identity Testing and Sylvester-Gallai theorem

Alexey Milovanov, HSE (room R506)

Polynomial Identity Testing (PIT) is the problem of efficiently determining whether a multivariate polynomial, given as an arithmetic circuit or an algebraic formula, is identically equal to zero.
There is a simple polynomial-time randomized algorithm for PIT: just test equality to zero at a ``random'' point. However, no deterministic polynomial-time algorithm for PIT is known.
Designing an efficient deterministic algorithm is one of the most challenging open problems in computational complexity (there is a connection between derandomization of PIT and proving circuit lower bounds).
Although a deterministic polynomial-time algorithm for PIT on general algebraic circuits is unknown, for some restricted classes of circuits it is possible to derandomize PIT. One unexpected tool for proving such results is the Sylvester-Gallai theorem: if finitely many distinct points in the real plane are such that the line through any two of them contains a third point, then all the points lie on one line.
The talk will cover the latest achievements in this area, including the speaker's results on depth-4 circuits.
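A minimal sketch of the randomized test mentioned above, in the spirit of the Schwartz-Zippel lemma (the black-box interface, the range of the random points and the number of trials are arbitrary choices made for illustration):

import random

def probably_zero(poly, num_vars, degree, trials=20, range_size=None):
    # Randomized PIT sketch: if poly is a nonzero polynomial of total degree d,
    # a uniformly random point from {0, ..., range_size - 1}^num_vars gives a
    # nonzero value with probability at least 1 - d / range_size (Schwartz-Zippel).
    if range_size is None:
        range_size = 100 * degree              # per-trial error probability <= 1/100
    for _ in range(trials):
        point = [random.randrange(range_size) for _ in range(num_vars)]
        if poly(point) != 0:
            return False                       # definitely not identically zero
    return True                                # identically zero with high probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero:
print(probably_zero(lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2,
                    num_vars=2, degree=2))     # True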

12:50–13:30
A Computational Model for Parsing Expression Grammars

Alexander Rubtsov, HSE (room R506)

Parsing expression grammars (PEGs) are a generalization of the top-down parsing languages from the sixties, which generalized LR-grammars but had been practically useless due to computational limits. Now PEGs are widely implemented in different programming languages, so their language class deserves investigation. Not much is known about PEGs: the class is known to lie between the deterministic context-free languages and the context-sensitive languages, but whether PEGs contain all CFLs is still open. We provide a computational model for PEGs and use it to show that this class contains the regular closure of DCFLs. We also generalize the model, show its connection with two-way pushdown automata, and prove an analogue of Cook's theorem about linear-time parsing for 2-DPDAs. The last result is quite similar to Glück's proof of Cook's theorem.
Joint work with N. Chudinov.
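To illustrate the feature that distinguishes PEGs from context-free grammars (a toy combinator sketch, not the computational model introduced in the talk): in a PEG, choice is ordered, so the first alternative that matches commits.

def literal(c):
    # Match a single character c; return the new position or None on failure.
    return lambda s, i: i + 1 if i < len(s) and s[i] == c else None

def seq(*parsers):
    # Match the parsers one after another.
    def parse(s, i):
        for p in parsers:
            i = p(s, i)
            if i is None:
                return None
        return i
    return parse

def choice(*parsers):
    # Ordered choice: the first alternative that matches wins.
    def parse(s, i):
        for p in parsers:
            j = p(s, i)
            if j is not None:
                return j
        return None
    return parse

# Ordered choice is not symmetric: A <- "a" / "ab" never consumes "ab" entirely,
# because the first alternative already succeeds on "a".
A = choice(literal('a'), seq(literal('a'), literal('b')))
print(A("ab", 0))  # 1, not 2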

15:00–15:40
Randomness tests and generators

Alexander Shen, LIRMM, CNRS (room R506)

Random bit generators are widely used, but what does this mean? What do we expect from "random" bits? Can we find out whether a generator produces "really random" bits? How do people construct generators, and how do they test them? What are the standards and certifications for generators, and what do they really guarantee? What should you do if you really need random bits? This talk provides a survey of the relevant mathematical notions, current practices, their weaknesses, and some possible improvements.
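As one tiny example of the kind of statistical test touched upon in the talk, here is a sketch of the frequency (monobit) test from the NIST SP 800-22 suite; the talk discusses what such tests do and do not guarantee.

import math

def monobit_pvalue(bits):
    # Frequency (monobit) test: under the null hypothesis of i.i.d. fair bits,
    # the normalized sum is approximately standard normal, so a small p-value
    # indicates a suspicious imbalance between zeros and ones.
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)           # map 0/1 to -1/+1 and sum
    return math.erfc(abs(s) / math.sqrt(2 * n))

print(monobit_pvalue([0, 1] * 500))            # perfectly balanced stream: p-value 1.0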

15:40–16:20
Collapsing Superstring Conjecture

Alexander S. Kulikov, PDMI RAS (room R506)

In the Shortest Common Superstring (SCS) problem, one is given a collection of strings and needs to find a shortest string containing each of them as a substring. SCS admits a $2\frac{11}{23}$-approximation in polynomial time (Mucha, SODA'13). While this algorithm and its analysis are technically involved, the 30-year-old Greedy Conjecture claims that the trivial and efficient Greedy Algorithm gives a 2-approximation for SCS.
We develop a graph-theoretic framework for studying approximation algorithms for SCS. The framework is reminiscent of the classical 2-approximation for the Traveling Salesman problem: take two copies of an optimal solution, apply a trivial edge-collapsing procedure, and get an approximate solution. In this framework, we observe two surprising properties of SCS solutions, and we conjecture that they hold for all input instances. The first conjecture, which we call the Collapsing Superstring conjecture, claims that there is an elementary way to transform any solution repeated twice into the same graph $G$. This conjecture would give an elementary 2-approximation algorithm for SCS. The second conjecture claims that not only is the resulting graph $G$ the same for all solutions, but also that $G$ can be computed by an elementary greedy procedure called the Greedy Hierarchical Algorithm.
While the second conjecture clearly implies the first one, perhaps surprisingly we prove their equivalence. We support these equivalent conjectures by giving a proof for the special case where all input strings have length at most 3. We prove that the standard Greedy Conjecture implies the Greedy Hierarchical Conjecture, while the latter is sufficient for an efficient greedy 2-approximation of SCS. Besides its conjectured approximation ratio of 2, the Greedy Hierarchical Algorithm provably finds a 3.5-approximation.
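For reference, the Greedy Algorithm from the Greedy Conjecture is easy to state in code (a sketch assuming no input string is a substring of another; tie-breaking is arbitrary):

def overlap(a, b):
    # Length of the longest suffix of a that is a prefix of b.
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_scs(strings):
    # Repeatedly merge the pair of strings with the largest overlap;
    # the Greedy Conjecture claims this is a 2-approximation of SCS.
    strings = list(strings)
    while len(strings) > 1:
        best = (-1, None, None)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for t, s in enumerate(strings) if t not in (i, j)] + [merged]
    return strings[0]

print(greedy_scs(["abc", "bcd", "cde"]))       # "abcde", which is optimal here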

16:50–17:30
Factor complexity vs power avoidance in infinite words

Arseny Shur, UrFU (room R506)

Two main combinatorial characteristics of infinite sequences (words) over finite alphabets are factor complexity and local exponent. The first one counts the distinct blocks (factors) of each fixed length in a sequence, and the second one describes the maximum order of a repetition in the sequence. There are a lot of papers on each of these characteristics, but surprisingly little was known until recently about their interaction. In this talk we present the results of a systematic study of the possible factor complexity of binary and ternary sequences under certain restrictions on the local exponent.
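For reference, the standard definitions behind these two characteristics (not specific results of the talk): the factor complexity of an infinite word $w$ is $p_w(n)$, the number of distinct factors of $w$ of length $n$, and the exponent of a finite factor $u$ is

$$\exp(u) = \frac{|u|}{\mathrm{per}(u)},$$

where $\mathrm{per}(u)$ is the smallest period of $u$; the local (critical) exponent of $w$ is the supremum of the exponents of its factors. For example, the Thue-Morse word is overlap-free, so its critical exponent equals $2$, while its factor complexity is linear.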

 

Day 3
Sunday, December 1, 2019
11:00–11:40
An improvement of the upper bound for the Gilmer–Koucký–Saks communication game

Ivan Petrenko, MSU (room R504)

The GKS game was formulated by Justin Gilmer, Michal Koucký, and Michael Saks in their research on the sensitivity conjecture. We propose a new strategy for the GKS game that slightly improves the currently known upper bound on the cost of the game, achieving $O(n^{0.4693})$.

12:10–12:50
Lower bounds for succinct data structures

Dmitry Kosolobov, UrFU (room R504)

In this talk we will discuss lower bounds on query time in succinct data structures. In the model we consider, the data structure (typically) occupies only o(D) bits of space on top of the D bits taken by the read-only input (a string or an array, for instance); hence the term "succinct". Unlike most results on lower bounds, the results for this model usually rely on rather elementary proof techniques, and yet, as it seems, the field still remains relatively unexplored. The size of the available memory normally affects the query time in this model, and therefore the lower bounds often take the form of trade-offs: the faster the queries, the more space is required. As an illustrative example, we consider in detail a lower bound recently obtained by the speaker for the following classical problem: the longest common extension (LCE) problem is to preprocess a given string of length n into a data structure that uses S(n) bits on top of the read-only input and answers in T(n) time the queries LCE(i,j) computing the length of the longest string that occurs at both positions i and j in the input. We prove that the trade-off S(n)T(n) = \Omega(n \log n) holds in the non-uniform cell-probe model, provided that each letter occupies a separate memory cell and S(n) = \Omega(n). It is known that this trade-off is tight. We will also briefly discuss several other succinct data structures and known lower bounds for them, and will conclude with a list of open problems.
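To make the query itself concrete, here is the trivial extreme point of the trade-off: with no data structure at all (constant extra space), an LCE query takes linear time in the worst case (a sketch; the example string is arbitrary).

def lce_naive(s, i, j):
    # Naive LCE query: O(1) extra space on top of the read-only input,
    # O(n) worst-case time -- one extreme point of the time-space trade-off.
    k = 0
    while i + k < len(s) and j + k < len(s) and s[i + k] == s[j + k]:
        k += 1
    return k

print(lce_naive("abracadabra", 0, 7))  # 4: "abra" occurs at positions 0 and 7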

12:50–13:30
(Semi)Algebraic Proofs over {-1, +1} Variables

Dmitry Sokolov, KTH Royal Institute of Technology (room R504)

One of the major open problems in proof complexity is to prove lower bounds on $\mathrm{AC}^0[p]$-Frege proof systems. As a step toward this goal, Impagliazzo, Mouli and Pitassi in a recent paper suggested proving size lower bounds for Polynomial Calculus over the $\{\pm 1\}$ basis. In this work we present a technique for proving such lower bounds and, moreover, give size lower bounds for Sum-of-Squares over the $\{\pm 1\}$ basis.
We show lower bounds for random $\Delta$-CNF formulas and for formulas composed with a gadget. As a byproduct, we establish a separation between Polynomial Calculus and Sum-of-Squares over the $\{\pm 1\}$ basis by proving a lower bound for the Pigeonhole principle.
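For readers used to the $\{0,1\}$ setting, the change of basis referred to above is the standard substitution (a routine identity stated here for clarity):

$$y = 1 - 2x \in \{\pm 1\} \ \text{ for } x \in \{0,1\}, \qquad x^2 - x = 0 \iff y^2 - 1 = 0,$$

so over the $\{\pm 1\}$ basis the Boolean axioms become $y_i^2 = 1$ and parities of variables turn into products, which is one reason this basis is considered natural in the context of $\mathrm{AC}^0[p]$-Frege.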

15:00–15:40
Computational Complexity of Synchronization under Regular Constraints

Mikhail Volkov, UrFU (room R504)

Given a DFA, we study the question of whether it admits a synchronizing word that belongs to some fixed regular language L. We assume that L is given by a partial finite automaton called the constraint automaton. We show that this synchronization problem becomes PSPACE-complete even for some constraint automata with two states over a ternary alphabet.
In addition, we characterize the constraint automata for which the constrained synchronization problem is solvable in polynomial time. We completely classify the complexity of the constrained synchronization problem for binary and ternary constraint automata with two states and lift these results to larger classes of finite automata.
This is joint work with Henning Fernau, Vladimir Gusev, Stefan Hoffmann, Markus Holzer, and Petra Wolf.
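As a reminder of the unconstrained notion (a sketch; the Černý automaton and the word below are the classical textbook example and are unrelated to the constraint automata of the talk):

def is_synchronizing(delta, states, word):
    # word synchronizes the DFA given by delta[state][letter] if applying it
    # from every state leads to one and the same state.
    image = set(states)
    for a in word:
        image = {delta[q][a] for q in image}
    return len(image) == 1

# Cerny automaton C_4: 'a' is a cyclic shift, 'b' merges states 0 and 1;
# its shortest synchronizing word is known to have length (4 - 1)^2 = 9.
delta = {0: {'a': 1, 'b': 1},
         1: {'a': 2, 'b': 1},
         2: {'a': 3, 'b': 2},
         3: {'a': 0, 'b': 3}}
print(is_synchronizing(delta, [0, 1, 2, 3], "baaabaaab"))  # True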

15:40–16:20
Universal trees for communication protocols and De Morgan formulas

Alexander V. Smal, PDMI RAS (room R504)

A Boolean function F is universal for a class C if every function H from C is a projection of F, i.e., can be obtained from F by replacing variables with constants and literals. Let L_d be the class of all functions computed by De Morgan formulas of depth d. What is the smallest d' such that L_{d'} contains a universal function for L_d? It is easy to see that d' = 2d is enough. Aaron Potechin showed that there is always a universal function in L_{5d/3}. This talk covers work in progress on improving this bound.
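A small example of a projection (illustrative only): $H(x,y) = x \wedge y$ is a projection of $F(x_1,x_2,x_3,x_4) = (x_1 \vee x_2) \wedge (x_3 \vee x_4)$ under the substitution

$$x_1 \mapsto x, \quad x_2 \mapsto 0, \quad x_3 \mapsto y, \quad x_4 \mapsto 0,$$

and a function universal for $L_d$ must admit such a substitution for every member of the class.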

16:50–17:30
An application of communication complexity, Kolmogorov complexity and extremal combinatorics to parity games

Alexander Kozachinskiy, HSE (room R504)

So-called separation automata are at the core of several recently invented quasi-polynomial-time algorithms for parity games. An explicit q-state separation automaton yields an algorithm for parity games with running time polynomial in q. It is open whether a polynomial-state separation automaton exists. A positive answer would lead to a polynomial-time algorithm for parity games, while a negative answer would at least demonstrate the impossibility of constructing such an algorithm via the separation approach.
In this work we prove an exponential lower bound for a restricted class of separation automata. Our technique combines communication complexity and Kolmogorov complexity. One of our technical contributions belongs to extremal combinatorics: namely, we prove a new upper bound on the product of the sizes of two families of sets with small pairwise intersections.
Joint work with M. Vyalyi.