
Seminar of the laboratory

The main aim of the seminar is to discuss both new and classical results in Theoretical Computer Science.
The information on the talks is given below.
The seminar is held online on Wednesdays from 18:10 to 19:30.

Seminar leaders: Nikolay Vereshchagin, Mikhail Vyalyi, Artem Maksaev

17.01.2024 Svetlana Shirokovskikh "Superport networks and a generalization of Kirchhoff’s matrix-tree theorem"
in Russian

13.12.2023 Dmitry Sluch "One-way communication complexity of partial XOR functions" 
A Boolean function F(x, y) is an XOR function if it can be represented as some function f applied to the XOR of its inputs. XOR functions are relevant in communication complexity because they encompass several other interesting functions, such as functions based on Hamming distance, and because they admit Fourier-analytic techniques. For total XOR functions it is known that the deterministic communication complexity of F is polynomially related to the parity decision tree complexity of f, and the one-way communication complexity of F is exactly equal to the non-adaptive parity decision tree complexity of f. We initiate the study of a similar connection for partial functions. We show that in the case of one-way communication complexity, whether these measures are equal depends on the number of undefined inputs of f. We propose a linear-algebraic framework that allows us to prove both the equality for a small enough number of undefined points and a separation for large enough values. In particular, we provide a function with an exponential gap between these two measures. Our separation results translate to the case of two-way communication complexity as well, in particular showing that the result of Hatami et al. (2018) cannot be generalized to partial functions.
Joint work with Vladimir Podolskii.
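For a concrete instance of the definition above: equality of bit strings is an XOR function, since x = y exactly when x XOR y is the all-zero string. A minimal sketch (the function names are ours, not from the talk):

```python
# Equality of n-bit strings as an XOR function: EQ(x, y) = f(x XOR y),
# where f(z) = 1 iff z is all zeros. Hamming-distance predicates are XOR
# functions for the same reason: they depend only on x XOR y.

def f(z):
    """The 'inner' function applied to the bitwise XOR of the two inputs."""
    return int(all(b == 0 for b in z))

def EQ(x, y):
    """The communication problem F(x, y) = f(x XOR y)."""
    return f([a ^ b for a, b in zip(x, y)])

assert EQ([1, 0, 1], [1, 0, 1]) == 1
assert EQ([1, 0, 1], [1, 1, 1]) == 0
```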

29.11.2023 Mikhail Khrystik "The length of algebras"
in Russian

01.11.2023 Alexander Rubtsov "A Computational Model for Parsing Expression Languages"
In Russian

18.10.2023 Arseny M. Shur "Non-constructive approach to repetition thresholds" 
We analyze a simple algorithm that transforms an input word into a word avoiding certain repetitions, such as fractional powers and undirected powers. This transformation can be made reversible by adding the log of the algorithm's run to the output. We introduce a compression scheme for the logs; its analysis proves that $(1+\frac{1}{d})$-powers and undirected $(1+\frac{1}{d})$-powers can be avoided over $d+O(1)$ letters. These results are closer to the optimum than is usually expected from purely information-theoretic considerations.
The seminar was in Russian.
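The repetitions in question can be checked by brute force on small words. A sketch (ours; it handles ordinary fractional powers only, not the undirected variant):

```python
from fractions import Fraction

def min_period(w):
    """Smallest p with w[i] == w[i + p] for all valid i (p = len(w) always works)."""
    for p in range(1, len(w) + 1):
        if all(w[i] == w[i + p] for i in range(len(w) - p)):
            return p

def max_exponent(w):
    """Largest exponent |u| / period(u) over all non-empty factors u of w.
    In one common convention, a word avoids e-powers iff this value stays below e."""
    return max(Fraction(j - i, min_period(w[i:j]))
               for i in range(len(w)) for j in range(i + 1, len(w) + 1))

assert max_exponent("abab") == 2     # "abab" is a square of "ab"
assert max_exponent("abcacb") < 2    # a square-free word
```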

11.10.2023 Dmitry Zhuk  "On the complexity of the Quantified Constraint Satisfaction Problem"
In Russian

20.09.2023 Thomas Fernique "Local rules for cut and project tilings"
The cut-and-project method was introduced in the 1980s to define non-periodic tilings, as an alternative to the substitution method. The simplest case is that of a straight line drawn on a sheet of grid paper: the sequence of cut horizontal or vertical segments, projected onto the line, forms a tiling of the line which is non-periodic as soon as the slope is irrational. This extends to higher dimensions (the celebrated Penrose tilings, for example, can be obtained from an algebraic plane in Euclidean space of dimension 5). The same question as for tilings defined by substitution then arises: which of these tilings admit local rules, i.e. can be characterized only by their patterns of a given finite size? In this presentation, the cut-and-project method will be presented without assuming any prior knowledge. Then, some remarkable characterizations of (non-)existence of local rules will be presented and illustrated with examples as simple as possible.
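The one-dimensional case described above (a line of irrational slope cutting a grid) is easy to simulate; with slope equal to the inverse golden ratio, the crossing sequence spells out the Fibonacci word. A sketch with our own naming:

```python
import math

def cut_and_project_word(slope, n):
    """Walk along the line y = slope * x (0 < slope < 1) and record whether
    the next grid line crossed is vertical ('a') or horizontal ('b')."""
    word, x, y = [], 0, 0
    for _ in range(n):
        # next vertical line is crossed at x + 1, next horizontal at (y + 1) / slope
        if (y + 1) / slope < x + 1:
            word.append('b'); y += 1
        else:
            word.append('a'); x += 1
    return ''.join(word)

# inverse golden ratio: the crossing sequence is the non-periodic Fibonacci word
assert cut_and_project_word((math.sqrt(5) - 1) / 2, 12) == "abaababaabaa"
```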

28.06.2023 Nikolay Vereshchagin "Goodman-Strauss theorem and its versions"
The Goodman-Strauss theorem states that for any substitution satisfying certain conditions, the family of substitutional tilings is sofic. Those conditions were not formulated explicitly, and the proof is very long (about 70 pages), so it is very hard to verify. Goodman-Strauss also claims that under those conditions the family of hierarchical tilings is sofic. (Every substitutional tiling is hierarchical, but not the other way around.) In an attempt to clarify the statement, Fernique and Ollinger explicitly formulated some requirements on substitutions under which the family of hierarchical tilings is sofic. In the talk I will state the Fernique-Ollinger theorem in full detail and prove it in a specific interesting case. This will be a joint session of the Kolmogorov seminar and the TCS Lab of HSE.


14.06.2023 Mikhail Vyalyi "Satisfiability problem for Sigma-Pi-Sigma formulae over the two-element field"
In Russian

10.05.2023 Bruno Bauwens "Hall-type theorems for fast dynamic matching and applications"

We show that a bipartite graph with a large expansion factor admits poly(log N) time dynamic matching, where N is the total number of nodes. In this result, we consider dynamic matching in which the opponent makes requests for a node to be matched, and the match must remain unchanged until the request is retracted. Coupled with known constructions of lossless expanders, this gives a solution to the main open problem in a classical paper of Feldman, Friedman, and Pippenger (1988).

 Application 1: storing sets.
We construct 1-query bitprobes that store a dynamic subset $S$ of an $N$-element set. A membership query reads a single bit, whose location is computed in time $\poly(\log N, \log (1/\epsilon))$, and is correct with probability $1-\epsilon$. Elements can be inserted and removed efficiently in time $\quasipoly(\log N)$. Previous constructions were static: membership queries have the same parameters, but each update requires recomputation of the whole data structure, which takes time $\poly(\# S \log N)$. Moreover, the size of our scheme is smaller than that of the best known constructions for static sets.

 Application 2: switching networks.
We construct explicit constant depth $N$-connectors of essentially minimum size in which the path-finding algorithm runs in time quasipolynomial in $\log N$. In the non-explicit construction in Feldman, Friedman and Pippenger 1988, and in the explicit construction of Wigderson and Zuckerman, the runtime is exponential in $N$.

20.04.2023 Vladimir Podolskii "Constant-Depth Sorting Networks"
We consider sorting networks that are constructed from comparators of arity k>2. That is, in our setting the arity of the comparators — or, in other words, the number of inputs that can be sorted at the unit cost — is a parameter. We study its relationship with two other parameters — n, the number of inputs, and d, the depth. This model received considerable attention. Partly, its motivation is to better understand the structure of sorting networks. In particular, sorting networks with large arity are related to recursive constructions of ordinary sorting networks. Additionally, studies of this model have natural correspondence with a recent line of work on constructing circuits for majority functions from majority gates of lower fan-in. We obtain the first lower bounds on the arity of constant-depth sorting networks. More precisely, we consider sorting networks of depth d up to 4, and determine the minimal k for which there is such a network with comparators of arity k. For depths d=1, 2 we observe that k=n. For d=3 we show that k=n/2. For d=4 the minimal arity becomes sublinear: k=\Theta(n^{2/3}). This contrasts with the case of majority circuits, in which k=O(n^{2/3}) is achievable already for depth d=3. The talk is based on joint work with Natalia Dobrokhotova-Maikova and Alexander Kozachinskiy: https://drops.dagstuhl.de/opus/volltexte/2023/17546/pdf/LIPIcs-ITCS-2023-43.pdf
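The 0-1 principle makes small instances of this model easy to check by brute force. A sketch (ours) with k-ary comparators; the depth-2 example below fails with arity below n, consistent with the k = n bound for d = 1, 2 stated above:

```python
from itertools import product

def apply_network(network, values):
    """network: list of layers; a layer is a list of disjoint, increasing
    index tuples. Each comparator sorts the values at its indices in place
    (tuple length = arity k)."""
    values = list(values)
    for layer in network:
        for idxs in layer:
            for i, v in zip(idxs, sorted(values[i] for i in idxs)):
                values[i] = v
    return values

def sorts_everything(network, n):
    """0-1 principle: a comparator network sorts all inputs iff it sorts
    every 0-1 input."""
    return all(apply_network(network, x) == sorted(x)
               for x in product((0, 1), repeat=n))

# depth 1 with arity k = n trivially sorts
assert sorts_everything([[(0, 1, 2, 3)]], 4)
# a depth-2 network with arity-3 comparators fails on n = 4 inputs
assert not sorts_everything([[(0, 1, 2)], [(1, 2, 3)]], 4)
```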

15.03.2023, 22.03.2023 Alexander Kozachinskiy "Weisfeiler–Leman algorithm in space" in 2 parts.
The Weisfeiler–Leman algorithm is a family of (incomplete) graph isomorphism tests having interesting connections to logic, fractional isomorphisms, tree-width, etc. We study the power of this algorithm on distance graphs of sets of points in R^d.
Video: Part 1, Part 2
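For reference, the classical 1-dimensional version of the test (colour refinement) fits in a few lines; the example also shows its incompleteness: it cannot separate a 6-cycle from two disjoint triangles. A sketch with our own naming:

```python
def wl_colors(adj):
    """Colour refinement (1-dimensional Weisfeiler-Leman).
    adj: dict vertex -> set of neighbours; returns the stable colouring."""
    colors = {v: 0 for v in adj}
    while True:
        # new signature = own colour + multiset of neighbours' colours
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:
            return colors
        colors = new

def wl_distinguishes(adj1, adj2):
    """Refine the disjoint union so colours are comparable across graphs."""
    union = {(0, v): {(0, u) for u in adj1[v]} for v in adj1}
    union.update({(1, v): {(1, u) for u in adj2[v]} for v in adj2})
    col = wl_colors(union)
    hist = lambda g: sorted(c for (side, _), c in col.items() if side == g)
    return hist(0) != hist(1)

cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
                 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
# every vertex has degree 2, so refinement stabilises immediately:
# the test cannot separate these two non-isomorphic graphs
assert not wl_distinguishes(cycle6, two_triangles)
assert wl_distinguishes({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}},   # path P4
                        {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}})   # star K_{1,3}
```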

21.12.2022 Nikolay Proskurin "Polynomial Threshold Functions for Decision Lists"
In Russian

23.11.2022 Maria Tikhonova "Transformers in language modeling"
In Russian

19.10.2022 Fedor Kuyanov "Feynman checkers: number-theoretic properties"
In Russian

15.09.2022 Ivan Mikhailin "A better-than-3logn depth lower bound for De Morgan formulas with restrictions on top gates"
In Russian

21.04.2022 Yuri Malykhin "Matrix Complexity and Approximation"
In Russian

17.03.2022 Andrey Ignatov "Fitting Protein into H-Core: Structural Analysis of Discrete Hydrophobic Cores" 
Protein folding is the process by which a protein chain acquires its native 3-dimensional structure (called its conformation). Modeling the whole folding process is a challenging problem, as it involves a huge number of degrees of freedom.
One possible approach to relieve this issue is using coarse-grained protein models. They simplify the protein geometry by merging several groups of atoms into one unified atom. The current research is about HP-models, where whole amino acid residues are represented by unified atoms. These atoms can be of two types: H (hydrophobic) or P (polar). Hydrophobic residues are known to be present inside the protein molecule, while polar ones can stay on its surface.
In HP-model, monomers (unified residues) can be present only in nodes of some lattice. This turns the problem of protein folding into a discrete optimization problem where the number of H-H contacts needs to be maximized. This problem can be solved by creating the maximally dense H-Core (hydrophobic core with the maximal number of H-H contacts) and fitting the protein into it.
In the report, several methods of speeding up 2D H-Core construction and the subsequent protein fitting will be presented. Additionally, some recent advances in 3D H-Core construction will be reported.
Development of this theory may help in rapid creation of a first approximation of the protein structure.
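Scoring a candidate conformation in the 2D HP-model reduces to counting H-H lattice contacts that are not chain neighbours. A minimal sketch (ours, not the speaker's code):

```python
def hh_contacts(conformation, sequence):
    """Score of a 2D HP-model conformation: number of H-H pairs that are
    lattice neighbours but not adjacent in the chain.
    conformation: list of (x, y) cells of consecutive monomers;
    sequence: string over {'H', 'P'} of the same length."""
    index = {p: i for i, p in enumerate(conformation)}
    contacts = 0
    for (x, y), i in index.items():
        if sequence[i] != 'H':
            continue
        for q in ((x + 1, y), (x, y + 1)):      # count each pair once
            j = index.get(q)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                contacts += 1
    return contacts

# an H-H-H-H chain folded into a unit square has exactly one non-chain contact
assert hh_contacts([(0, 0), (1, 0), (1, 1), (0, 1)], "HHHH") == 1
```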

24.02.2022 Pavel Zakharov "Evolution of a bipartite random graph"
Evolution here refers to the distribution of component sizes and their number in a random graph. Most of the known facts concern the case when the base graph is a so-called spectral expander (for instance, the well-known G(n, p), where the base graph is K_n). We extend some of these facts to the bipartite random graph model G(n, n, p). The main result is a proof of the central limit theorem for the number of vertices in the giant component for p = c/n, c > 1.
Seminar's video
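The phase transition behind the giant component is easy to observe by simulation. A sketch (ours) that builds G(n, n, p) and measures the largest component:

```python
import random

def giant_component_fraction(n, c, seed=0):
    """Fraction of the 2n vertices of G(n, n, c/n) lying in the largest
    component; each of the n^2 cross edges is present with probability c/n."""
    rng = random.Random(seed)
    p = c / n
    adj = {(side, i): [] for side in 'LR' for i in range(n)}
    for i in range(n):
        for j in range(n):
            if rng.random() < p:
                adj[('L', i)].append(('R', j))
                adj[('R', j)].append(('L', i))
    seen, best = set(), 0
    for v in adj:                      # depth-first search over components
        if v in seen:
            continue
        stack, size = [v], 0
        seen.add(v)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / (2 * n)

# supercritical c > 1: a giant component; subcritical c < 1: only small ones
assert giant_component_fraction(600, 2.0) > 0.5
assert giant_component_fraction(600, 0.5) < 0.1
```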

17.02.2022 Anand Mahendran "On the Generative Power of Semi-Bracketed Contextual Grammars"
Bracketed and fully bracketed contextual grammars were introduced in 1998 to bring the notion of a tree structure to the strings generated by the grammar. The tree structure was defined by choosing the selectors of minimally Dyck covered strings and by associating a pair of parentheses to the adjoining contexts in the derivation. However, these grammars failed to generate the three basic non-context-free languages that are required for representing natural languages. To overcome this failure, a model called Semi-bracketed contextual grammar was introduced, and its generative power has been studied only a little. In this work, we discuss the generative power of Semi-bracketed contextual grammars and relate the results to the Chomsky family of languages and the languages generated by internal contextual grammars under the maximal local, maximal global and leftmost derivation modes. We also discuss various closure properties of the family of languages generated by Semi-bracketed contextual grammars with finite and regular choice.
Seminar's video

10.02.2022 Libor Barto (Charles University) "The number of homomorphisms into finite algebras"
In a recent joint work with William DeMeo and Antoine Mottet, we have characterized finite algebras A such that the number of homomorphisms from any finite algebra X into A is bounded by a polynomial in the size of X. I will talk about this result, discuss its motivation and background, give examples, and mention open problems.
Seminar's video 

03.02.2022 Artyom Maksaev "Scrambling index of non-negative matrices and directed graphs"
in Russian

27.01.2022 Alexander Smal (PDMI RAS) "Formula complexity and Karchmer-Raz-Wigderson conjecture"
in Russian

13.01.2022, 20.01.2022 Alexander Kozachinskiy "Towards a complete description of positional determinacy"
in Russian

19.05.21 A. Storozhenko (UCLA) "An Optimal Separation of Randomized and Quantum Query Complexity"
Understanding the relative power of quantum and classical computing is of basic importance in theoretical computer science. In this talk, I will discuss an optimal separation of quantum and randomized complexities in the query model.
First, we show an upper bound on the sum of the absolute values of the Fourier coefficients of a given order $\ell$ for an arbitrary decision tree. The bound is essentially tight and settles a conjecture due to Tal (arXiv 2019; FOCS 2020). This result is of interest in its own right, independent of its use in obtaining optimal quantum-classical separations.

As an application, we obtain, for any positive integer $k$, a partial Boolean function on $n$ bits that has bounded-error quantum query complexity at most $k/2$ and randomized query complexity $\Omega(n^{1-1/k})$ (up to polylogarithmic factors). This separation of bounded-error quantum versus randomized query complexity is best possible, by the results of Aaronson and Ambainis (STOC 2015). Prior to our work, the best known separation was polynomially weaker: $O(1)$ versus $n^{2/3-\epsilon}$ for any $\epsilon>0$ (Tal, FOCS 2020).
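The Fourier quantity in Tal's conjecture can be computed by brute force for small n. A sketch (ours) for a toy function computed by a depth-2 decision tree:

```python
from itertools import combinations, product

def fourier_coefficient(f, S, n):
    """hat f(S) = E_x[ f(x) * (-1)^(sum of x_i, i in S) ], f: {0,1}^n -> {-1,+1}."""
    return sum(f(x) * (-1) ** sum(x[i] for i in S)
               for x in product((0, 1), repeat=n)) / 2 ** n

def level_l1_norm(f, n, ell):
    """Sum of |hat f(S)| over all |S| = ell: the level-ell Fourier quantity
    bounded, for depth-d decision trees, by the conjecture mentioned above."""
    return sum(abs(fourier_coefficient(f, S, n))
               for S in combinations(range(n), ell))

# parity of the first two bits: a depth-2 decision tree whose Fourier mass
# sits entirely on the single set {0, 1} at level 2
g = lambda x: (-1) ** (x[0] ^ x[1])
assert level_l1_norm(g, 3, 2) == 1.0
assert level_l1_norm(g, 3, 1) == 0.0
```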


Andrey Bychkov "Optimal monomial quadratization for ODE systems"
Consider a system of ODEs with polynomial right-hand side, for example the scalar ODE $x' = x^5$. Introducing a new variable $y = x^4$ leads to a system of two equations: $x' = xy$ and $y' = 4x^3 x' = 4x^4 y = 4y^2$. The right-hand sides of the new system are of degree at most two, and every solution of the original system is the x-component of some solution of the new system. The problem of transforming a given system of ordinary differential equations (ODEs) with polynomial right-hand side into a system with quadratic right-hand side is called the quadratization problem. It has emerged recently in such areas as model order reduction, numerical solution of ODEs, and chemical reaction networks. In 2011, Ch. Gu proved that it is always possible to perform quadratization with the new variables being monomials in the original variables (like $x^4$ in the example above). We call such a quadratization monomial. Later, in 2020, M. Hemery, F. Fages, and S. Soliman showed that the problem of finding an optimal (i.e., of the smallest possible dimension) monomial quadratization is NP-hard. They also designed and implemented an algorithm for finding a monomial quadratization which is practical and yields an optimal monomial quadratization in many cases. In the talk, I will present the results of our recent preprint with G. Pogudin, in which we designed and implemented a practical algorithm that is guaranteed to find an optimal monomial quadratization. We will also discuss encoding the optimal monomial quadratization problem as the APX-hard [2]-sumset cover problem.
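The worked example above ($x' = x^5$, $y = x^4$) can be sanity-checked numerically: integrating both systems with the same Euler scheme should give matching x-components. A sketch (ours):

```python
# Check the worked example numerically: x' = x^5 versus the quadratized
# system x' = x*y, y' = 4*y^2 started at y(0) = x(0)^4, via explicit Euler.

def euler(state, f, h, steps):
    for _ in range(steps):
        state = [s + h * ds for s, ds in zip(state, f(state))]
    return state

x0, h, steps = 0.5, 1e-5, 10000
(x1,) = euler([x0], lambda s: [s[0] ** 5], h, steps)
x2, y2 = euler([x0, x0 ** 4], lambda s: [s[0] * s[1], 4 * s[1] ** 2], h, steps)

assert abs(x1 - x2) < 1e-6       # x-components of the two systems agree
assert abs(y2 - x2 ** 4) < 1e-6  # the invariant y = x^4 is (approximately) preserved
```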

M. Anand "Studies on Insertion-deletion Systems and Contextual Grammars Related to Ambiguity, Measures and Applications"
Gene insertion and deletion are the operations that occur commonly in Deoxyribonucleic acid (DNA) processing and Ribonucleic acid (RNA) editing. Based on these operations, a computing model has been formulated in formal language theory known as insertion-deletion systems. As the biological macromolecules can be viewed as symbols, the gene sequences can be represented as strings.
This suggests that the molecular representation can be theoretically analyzed if a biologically inspired computing model recognizes the various bio-molecular structures that are predominantly found in DNA, RNA and proteins. Contextual grammars were introduced by Solomon Marcus in 1969 (Marcus 1969), based on the fundamental concepts of descriptive linguistics. Internal contextual grammars were introduced by Păun and Nguyen in 1980 as a variant of contextual grammars. Several descriptional complexity measures and various levels of ambiguity have been defined for (internal) contextual grammars. Bracketed and fully bracketed contextual grammars were introduced by Martín-Vide and Păun (1998) to bring the notion of a tree structure to the derived strings. Contextual grammars are the counterpart of insertion-deletion systems, as 'strings' of the latter are 'selectors' of the former.
We introduce a simple grammar model called Matrix insertion-deletion system to encompass various bio-molecular structures that occur at the intramolecular level, like hairpin, stem-and-loop and cloverleaf, at the intermolecular level, such as double strand, nick language and Holliday structure, and basic RNA secondary structures like kissing hairpin, bulge loop and internal loop.
Based on the components used in the derivation, six new levels of ambiguity for insertion-deletion systems are introduced. We show that there are inherently i-ambiguous insertion-deletion languages which are j-unambiguous for the combinations (i, j) ∈ {(5, 4), (4, 3), (4, 2), (3, 1), (2, 1), (1, 0), (0, 1)}. Then, six new descriptional complexity measures for insertion-deletion systems, based on the contexts and strings used, are introduced. Finally, the trade-off between the ambiguity levels and the various descriptional complexity measures is analyzed by introducing a new notion of 'pseudo inherently ambiguous languages'.
In addition to the work carried out in the domain of insertion-deletion systems, we also carry out some work in the counterpart domain of contextual grammars. We analyze the trade-off between ambiguity and descriptional complexity measures of languages generated by internal contextual grammars.

N. Proskurin "On Separation between the Degree of a Boolean Function and the Block Sensitivity"
in Russian

D. Zhuk "On the Complexity of the Surjective Constraint Satisfaction Problem"
The Constraint Satisfaction Problem (CSP) is the problem of deciding whether there exists an assignment to a set of variables subject to some specified constraints. The Surjective Constraint Satisfaction Problem (SCSP) is the variant of the CSP where we require the solution to be surjective, that is, every value from the domain appears in the solution. Unlike the CSP, whose complexity has been classified for all constraint languages, the complexity of the SCSP remains unknown even for very simple constraint languages.
In the talk we will discuss recent results of the author on the complexity of the SCSP. First, I proved NP-Hardness for one of the most popular constraint languages with unknown complexity (so called No-Rainbow Problem). Additionally, I found a counter-example to a widely-believed conjecture describing the complexity of the SCSP for all constraint languages.
Seminar's video
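For small domains the SCSP can be decided by brute force, which makes the surjectivity twist concrete: a constraint language can be satisfiable as a CSP yet have no surjective solution. A sketch (ours):

```python
from itertools import product

def scsp_solvable(n_vars, domain, constraints):
    """Brute-force SCSP: is there an assignment that satisfies every
    constraint and uses every domain value at least once?
    constraints: list of (scope, relation), relation a set of tuples."""
    for assign in product(domain, repeat=n_vars):
        if set(assign) != set(domain):
            continue                      # not surjective
        if all(tuple(assign[v] for v in scope) in rel
               for scope, rel in constraints):
            return True
    return False

# x != y over {0, 1}: solvable, and every solution is automatically surjective
assert scsp_solvable(2, (0, 1), [((0, 1), {(0, 1), (1, 0)})])
# x == y over {0, 1}: satisfiable as a plain CSP, but never surjectively
assert not scsp_solvable(2, (0, 1), [((0, 1), {(0, 0), (1, 1)})])
```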

Bruno Loff "NP-Hardness of Circuit Minimization for Multi-Output Functions"
I will show that it is NP-hard to compute the minimum circuit size of a given multi-output Boolean function f: {0,1}^n -> {0,1}^m. This result appears in a recent paper of ours, which will be presented at CCC 2020: https://eccc.weizmann.ac.il/report/2020/021/. I will also briefly overview the background behind this result and some other results from our paper. This is joint work with Rahul Ilango and Igor Oliveira.
Seminar's video

V. Podolskii "Computation of majority function by monotone low-depth formulas"
in Russian

A. Milovanov "On prediction of an infinite binary sequence by its prefixes"
in Russian

A. Skopenkov (after an announcement by Filakovsky, Wagner, Zhechev) "Realizability of hypergraphs in Euclidean space is not recognizable for codimension greater than 1"
in Russian

Kristoffer Arnsfelt Hansen "Computational Complexity of Computing Nash Equilibrium Refinements"
The motivation for introducing Nash equilibrium refinements is to eliminate undesirable equilibria, e.g., those relying on playing dominated strategies. From a computational perspective, the interesting question is whether imposing such restrictions incurs a significant additional computational cost. We answer this question for a range of different equilibrium refinements.

David Purser "The Big-O Problem for Labelled Markov Chains"
Given two labelled Markov chains, consider the problem of whether one is big-O of the other. More concretely, the problem asks whether the probability of every finite word in the first chain is smaller than some constant multiple of the probability in the second. We show that the problem is, in general, undecidable. Even when it is known one is big-O of the other, the problem of finding or approximating the optimal constant in the big-O is also undecidable. Our main results show the big-O problem is coNP-complete for unlabelled Markov chains (i.e. when the alphabet consists of a single character) and decidable for weighted automata with bounded languages subject to Schanuel’s conjecture.
Joint work with Dmitry Chistikov, Stefan Kiefer and Andrzej Murawski.
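The quantities compared in the big-O problem are word probabilities in a labelled Markov chain, which are easy to compute for a fixed word. A sketch (ours) with a toy two-chain example:

```python
def word_probability(word, init, trans):
    """Probability that a labelled Markov chain emits `word`.
    init: dict state -> probability; trans[(state, letter)] is a list of
    (next_state, probability) pairs (missing keys mean probability 0)."""
    dist = dict(init)
    for a in word:
        nxt = {}
        for s, p in dist.items():
            for t, q in trans.get((s, a), ()):
                nxt[t] = nxt.get(t, 0.0) + p * q
        dist = nxt
    return sum(dist.values())

# chain A always emits 'a'; chain B emits 'a' or 'b' with probability 1/2 each
A = {(0, 'a'): [(1, 1.0)]}
B = {(0, 'a'): [(1, 0.5)], (0, 'b'): [(1, 0.5)]}
assert word_probability('a', {0: 1.0}, A) == 1.0
assert word_probability('a', {0: 1.0}, B) == 0.5
# here A is big-O of B: P_A(w) <= 2 * P_B(w) for every word w
```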

Ilya Vorobyev (Skoltech) "Multistage Group Testing"
Group testing is a well-known search problem that consists in detecting s defective members of a set of t samples by carrying out tests on properly chosen subsets of samples. The test outcome is positive if the tested set contains at least one defective element; otherwise, it is negative. The goal is to find all defective elements by using the minimal possible number of tests in the worst case. Two types of algorithms are usually considered. In adaptive group testing, at each step, the algorithm decides which group to test by observing the responses of the previous tests. In non-adaptive algorithms, all tests are carried out in parallel. Multistage algorithms are a compromise solution to the group testing problem. In p-stage algorithms, all tests are divided into p stages. Tests from the i-th stage may depend on the outcomes of the tests from the previous stages. In this talk, I will present some recent results about multistage group testing.
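For a single defective element, the classical fully adaptive strategy is binary splitting, using about log2 t pooled tests instead of t individual ones. A sketch (ours):

```python
import math

def adaptive_group_test(t, defective):
    """Adaptive binary splitting for one defective item among t.
    A pool test is positive iff the pool contains the defective item;
    returns (found_item, number_of_tests)."""
    lo, hi, tests = 0, t - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        tests += 1
        if lo <= defective <= mid:      # test the pool {lo, ..., mid}
            hi = mid
        else:
            lo = mid + 1
    return lo, tests

item, tests = adaptive_group_test(1000, 377)
assert item == 377
assert tests <= math.ceil(math.log2(1000))   # ~10 pooled tests instead of 1000
```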


Manfred Buchacher "Inhomogeneous Lattice Walks"
A lattice walk is a sequence of points in Z^d; its consecutive differences are called its steps, and the set they are taken from its step set. We consider walks whose step set is not fixed but governed by a finite (deterministic) automaton. Closely related to the problem of counting lattice walks, either exactly or asymptotically, is the problem of determining the nature of the associated generating function, i.e., whether it is rational, algebraic or D-finite. We explain how what is known as the kernel method generalizes and applies to the setting of inhomogeneous lattice walks to answer this question. This is joint work with Manuel Kauers.
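The counting side of the problem is straightforward for small lengths: a walk's admissible steps are read off the automaton state. A sketch (ours) for walks on Z:

```python
from collections import Counter

def count_walks(n, automaton, start_state=0):
    """Count n-step walks on Z from 0 whose steps are governed by a DFA:
    in state s, step d is allowed iff automaton[s][d] is defined, and
    taking it moves the automaton to state automaton[s][d]."""
    layer = Counter({(0, start_state): 1})
    for _ in range(n):
        nxt = Counter()
        for (pos, s), ways in layer.items():
            for d, s2 in automaton[s].items():
                nxt[(pos + d, s2)] += ways
        layer = nxt
    return sum(layer.values())

# homogeneous special case (one state, both steps always allowed): 2^n walks
assert count_walks(10, [{-1: 0, +1: 0}]) == 1024
# inhomogeneous: once a +1 step is taken, only +1 is allowed afterwards
assert count_walks(3, [{-1: 0, +1: 1}, {+1: 1}]) == 4
```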

Jie-Hong Roland Jiang "Evaluation and Certification of Quantified Boolean Satisfiability"
Quantified Boolean Formulas (QBFs) allow compact encoding of decision problems even complete in the PSPACE complexity class. The broad applications of QBF satisfiability have attracted recent efforts to develop effective solvers regardless of its intractability. In this talk, we will address two key aspects, evaluation and certification, of QBF solving to enable practical applications.


Anton Gnatenko "Specification and Verification of Finite State Transducers"
in Russian

Gleb Pogudin "Solving equations in sequences"
Consider any recurrence relation, for example the one for Fibonacci numbers: $f_{n + 2} = f_{n + 1} + f_n$. One can think of this equation as a relation between the sequence itself and its shifts. A natural general framework incorporating equations of this sort is a language that includes not only arithmetic operations but also a shift operator S. In this language, the above recurrence recasts as $S^2(f) = S(f) + f$. Many questions about nonlinear recurrences and discrete-time dynamical systems can be stated as questions about solutions of equations in this extended language (so-called difference equations) in the ring of sequences. In 2007, Hrushovski and Point showed that the first-order theory of sequences in this language is undecidable, that is, there is no algorithm for verifying the correctness of first-order formulas. On the other hand, in recent work with A. Ovchinnikov and T. Scanlon, we have shown that the problem of determining the existence of a solution of a system of difference equations in the complex-valued sequences is decidable. In the talk, I will describe this decidability result as well as undecidability results from our recent preprint with T. Scanlon and M. Wibmer, which show that almost all reasonable extensions of the consistency problem are in fact undecidable, including determining the existence of a solution in real-valued sequences and the membership problem for radical difference ideals.
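The shift-operator reading of the Fibonacci recurrence can be checked directly. A sketch (ours):

```python
def S(seq):
    """The shift operator on sequences viewed as functions N -> Z."""
    return lambda n: seq(n + 1)

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_{n+2} = f_{n+1} + f_n, rewritten as S^2(f) = S(f) + f, checked on a prefix
assert all(S(S(fib))(n) == S(fib)(n) + fib(n) for n in range(20))
```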

Bruno Bauwens "Strongly invertible functions and their use for fast data compression"
We say that a probabilistic function F: X -> Y is (K,epsilon)-invertible if with probability 1-epsilon the value F(x) identifies x inside any subset of K elements of X. More precisely, there exists a deterministic function g that takes a K-element subset of X and an element y as input, such that for all K-element sets S and all x in S: Pr[g(S,F(x)) = x] > 1-epsilon. This property is stronger than what is typically obtained from hashing, because the recovery of x does not require knowledge of the randomly chosen hash function. We show that there exist polynomial time constructions of (2^k,epsilon)-invertible functions F: {0,1}^n -> {0,1}^m for which m-k < log^2(n/epsilon). The functions are obtained from extractors constructed by Guruswami, Umans, and Vadhan (JACM, 2009).

Application 1. We consider compression of a string x towards a target size m (m is smaller than the length of x). In other words, we consider compression functions that take x and m as input and must generate an m-bit description of x. We show that we can speed up any such compressor to a polynomial time one, at the cost of slower decompression and a polylogarithmic increase in the compression length. Moreover, we obtain polynomial time compression down to the Kolmogorov complexity of the string (up to an additive polylog term), provided a good upper bound on this complexity is given. Unfortunately, the result is theoretical, since the decompression time increases by a factor 2^m.

Application 2. In distributed compression, several sources need to compress their own strings in isolation. The compressed strings are sent to a decompressor, which needs to reconstruct all strings. If these strings contain correlated information, not all sources need to send all of the information. The minimal sizes of the messages are given by the Slepian-Wolf theorem for sequences generated by a stationary ergodic process. Our second result generalizes this theorem by removing any assumption on the generating mechanism. Moreover, the invertible functions above provide polynomial time compression to this optimal length. We present simplified constructions with improved parameters compared to the papers:
- B.Bauwens and M.Zimand, Linear list-approximation for short programs (or the power of a few random bits), CCC 2014, http://arxiv.org/abs/1311.7278
- M.Zimand, Kolmogorov complexity version of Slepian-Wolf coding, STOC 2017, Montreal, June 19-23, 2017 http://arxiv.org/abs/1511.03602
- B.Bauwens, Optimal probabilistic polynomial time compression and the Slepian-Wolf theorem: tighter versions and simple proofs, 2018, https://arxiv.org/abs/1802.00750

Vadim Grinberg (TTIC) "Hidden Clique Semi-Random Problem"
in Russian


Matthias Englert "The (Generalized) Reordering Buffer Management Problem"
In the Generalized Reordering Buffer Management Problem (GRBM) a sequence of items located in a metric space arrives online, and has to be processed by a set of k servers moving within the space. In a single step the first b still unprocessed items from the sequence are accessible, and a scheduling strategy has to select an item and a server. Then the chosen item is processed by moving the chosen server to its location. The goal is to process all items while minimizing the total distance traveled by the servers.
This problem was introduced by Chan et al. (Theoretical Computer Science 2012) and has subsequently been studied in an online setting by Azar et al. (STACS 2014). The problem is a natural generalization of two very well-studied problems: the k-server problem for b=1 and the Reordering Buffer Management Problem (RBM) for k=1. In this paper we consider the GRBM problem on a uniform metric in the online version. We give an introduction to this problem and outline an O(log k * (log k + loglog b))-competitive online algorithm. This is a significant improvement in the dependency on b compared to the previous best bound of O(b^0.5 * log k), and is asymptotically optimal for constant k, because Omega(log k + loglog b) is a lower bound for GRBM on uniform metrics. (Based on joint work with Harald Räcke and Richard Stotz.)

Matjaž Krnc "Positive Aging Admits Fast Asynchronous Plurality Consensus"
We study the problem of distributed plurality consensus among nodes, each of which initially holds one of several opinions. The goal is to eventually agree on the initially dominant opinion. We consider an asynchronous communication model in which each node is equipped with a random clock. If the clock of a node ticks, it may open communication channels to a constant number of other nodes, chosen uniformly at random or from a list of constantly many node addresses acquired in some previous steps. The tick rates and the time delays for establishing communication channels (channel delays) follow some probability distribution. Once a channel is established, communication between nodes can be performed atomically.

Previously, asynchronous plurality consensus has mostly been considered in the Poisson clock model (or similar) without any channel delays. We consider distributions for the waiting times between ticks and channel delays that have constant mean and the so-called positive aging property. In this more general setting asynchronous plurality consensus is fast: if the initial bias between the largest and second largest opinion is at least, then after time steps all but a fraction of nodes have the initial majority opinion. Here denotes the initial ratio between the largest and second largest opinion. After additional steps all nodes have the same opinion w.h.p., and this result is tight.
Next, we further improve partial consensus in the case when the distributions mentioned above satisfy an additional property, which is common in many well-known distributions (e.g., exponential, Rayleigh or Weibull with shape parameter at least). In particular, we show that positive aging together with this property leads to consensus time for all but nodes, w.h.p. For a large range of initial configurations, this result implies that partial consensus can be reached significantly faster in this asynchronous communication model than in the synchronous setting with the same limitations on the number of communication partners per time step.
To obtain these results, we first apply our approach in the classical synchronous model. Then, we focus on the asynchronous model: first, we assume the existence of a centralized leader and finally we present two fully distributed algorithms, which do not rely on any predefined leader.

Martin Milanic "Generalizations of simplicial vertices and a new polynomially solvable case of the maximum weight clique problem"
A vertex in a graph is simplicial if its neighborhood forms a clique. In the talk I will discuss three generalizations of the concept of simplicial vertices: avoidable vertices (also known as OCF-vertices), simplicial paths, and their common generalization avoidable paths. A general conjecture on the existence of avoidable paths will be presented, which, if true, would imply results due to Ohtsuki, Cheung, and Fujisawa from 1976 and due to Chvátal, Sritharan, and Rusu from 2002. In turn, both of these results generalize Dirac's classical result on the existence of simplicial vertices in chordal graphs. Very recently (on August 13, 2019) Marthe Bonamy, Oscar Defrain, Meike Hatzel, and Jocelyn Thiebaut announced a proof of the conjecture on arXiv.
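The basic definition above is easy to test directly; a minimal sketch, assuming a hypothetical adjacency-set representation of the graph:

```python
# Check whether a vertex is simplicial: its neighborhood must form a clique.
# `graph` is a dict mapping each vertex to the set of its neighbors
# (a representation chosen for this sketch, not anything from the talk).

def is_simplicial(graph, v):
    neighbors = graph[v]
    # Every pair of distinct neighbors must be adjacent.
    return all(u in graph[w] for u in neighbors for w in neighbors if u != w)

# Dirac's theorem guarantees a simplicial vertex in every chordal graph.
# Example: a "kite" (triangle a-b-c with a pendant vertex d attached to a).
kite = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
print([v for v in sorted(kite) if is_simplicial(kite, v)])  # ['b', 'c', 'd']
```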
An application of the concept of avoidable vertices to the maximum weight clique problem will be presented. A graph is said to be hole-cyclically orientable if it has an orientation such that every induced cycle of length at least four is oriented cyclically. These graphs form a generalization of the class of 1-perfectly orientable graphs and their subclasses chordal graphs and circular-arc graphs. By showing that Lex-BFS computes a bisimplicial elimination ordering on these graphs, which is simultaneously a circular-arc elimination ordering, we derive an efficient algorithm that computes a maximum weight clique of a given vertex-weighted hole-cyclically orientable graph. This is the first known polynomial-time algorithm for the maximum clique problem in the class of 1-perfectly orientable graphs.

The existence of avoidable vertices and edges has interesting consequences for highly symmetric graphs: in a vertex-transitive graph every induced two-edge path closes to an induced cycle, while in an edge-transitive graph every three-edge path closes to a cycle and every induced three-edge path closes to an induced cycle.
Joint work with J. Beisegel, M. Chudnovsky, V. Gurvich, and M. Servatius.

Peter Gacs "Eroders"
This talk is about cellular automata on a finite-dimensional lattice, a certain kind that has special interest for error correction. A state 0 is distinguished, and a cellular automaton is called an "eroder" if it eliminates any finite island of non-0s. We restrict attention to cellular automata whose state set is ordered with 0 being minimal, and whose transition rule is monotonic (for example, majority vote over some neighbors). A classical result of Toom characterizes all 2-state eroders (in any dimension). It also shows that such eroders erode in linear time and resist (low random) noise. (The latter was key to building the simplest known construction of a reliable computation model.)

Here I will report on work (some of it old, by Galperin and Toom, and some of it new, joint with Ilkka Törmä and Mathieu Hilaire) trying to extend these results to more than 2 states. In one dimension, all eroders erode in linear time, but not all resist noise. In 2 dimensions and with 3 states, some eroders need more than linear time, and we do not even know whether the eroder property is decidable. Characterizing the eroders that resist noise (in any dimension) seems easier than characterizing eroders in general (the noise-resistant ones all erode in linear time).

Khaled Elbassioni "Some Applications of the Multiplicative Weights Update Method in (Semi) Infinite (Discrete) Optimization"
Standard (continuous or discrete) optimization problems typically assume that the dimension and the number of constraints in the problem are finite. In many practical situations, either one or both of these parameters may be infinite. Examples include: (i) the art gallery problem from computational geometry, where it is required to guard the infinite set of points of an art gallery with the minimum number of cameras, and (ii) robust optimization problems, in which the coefficients of the constraints and/or the objective function are drawn from some infinite uncertainty sets, and the goal is to solve the optimization problem for the worst-case choice in each uncertainty set. In this talk, we highlight some of these examples and then describe how the well-known multiplicative weights update method can be applied to derive efficient approximation algorithms for problems of this type.

Kazuhisa Makino "The complexity issues on stochastic games"
Stochastic games were introduced in 1953 by Shapley for the discounted case and extended to the undiscounted case in 1957 by Gillette. Each such game is a dynamic game with probabilistic transitions played by two players on a finite set of states. The game is played in an infinite sequence of rounds. In each round, the game is in some state and the players choose actions; they then receive payoffs, and the game moves to a new random state, where the payoffs and the transition probabilities depend on the previous state and the actions chosen by the players. The procedure is then repeated at the new state. Stochastic games generalize parity games, cyclic games, simple stochastic games, and BWR games, which all belong to NP and coNP but are not known to be solvable in polynomial time. In this talk, I briefly survey algorithmic issues for these games, as well as properties of their optimal strategies. I also discuss recent work obtained jointly with Endre Boros, Khaled Elbassioni, and Vladimir Gurvich.

A. Rubtsov  "Automata with additional data structures"
in Russian

V.Podolskii "Lower Bounds on Balancing Sets and Depth-2 Threshold Circuits"
in Russian

A. Romashchenko "How to use undiscovered entropy inequalities"
in Russian

Ranko Lazic «Universal trees grow inside separating automata: Quasi-polynomial lower bounds for parity games»
Several distinct techniques have been proposed to design quasi-polynomial algorithms for solving parity games since the breakthrough result of Calude, Jain, Khoussainov, Li, and Stephan (2017): play summaries, progress measures and register games. We argue that all those techniques can be viewed as instances of the separation approach to solving parity games, a key technical component of which is constructing (explicitly or implicitly) an automaton that separates languages of words encoding plays that are (decisively) won by either of the two players. Our main technical result is a quasi-polynomial lower bound on the size of such separating automata that nearly matches the current best upper bounds. This forms a barrier that all existing approaches must overcome in the ongoing quest for a polynomial-time algorithm for solving parity games. The key and fundamental concept that we introduce and study is a universal ordered tree. The technical highlights are a quasi-polynomial lower bound on the size of universal ordered trees and a proof that every separating safety automaton has a universal tree hidden in its state space.

D. Taletskiy "On the number of maximal and maximum independent sets in some classes of trees"
in Russian

D. Zhuk "The complexity of the constraint satisfaction problem depending on the constraint language"
in Russian

A. Milovanov "Turing machine games and the power of random strings"
in Russian

A. Storozhenko "Lower bounds on tests for dicotry via communication complexity"
in Russian

D. Sirotkin "On the computational complexity of the independent set and vertex k-coloring problems in some classes of graphs"
in Russian

I. Arzhantsev "Theory of divisors and combinatorics in a free semigroup"
in Russian

D. Piontkovski "Algebraic systems and formal languages"
in Russian

M. Vyalyi "Algorithms for computing the permanent and decision trees with linear queries"
in Russian


A. Shen "Constructivization and sets of measure zero (a survey)"
in Russian

V. Gurvich "Sprague-Grundy functions of the NIM game on a hypergraph"
in Russian

A. Chistopolskaya "Depth of decision trees with XOR queries greater than granularity"
in Russian

V. Grinberg "LP-based approximation of Minimum k-Star Cover"
Consider the following problem: we are given two sets in a metric space, facilities and clients, and an integer k. A star is a pair of one facility and a subset of clients assigned to this facility, and the weight of a star is the sum of distances from the facility to the assigned clients. We need to select a subset of at most k facilities and assign every client to exactly one of the selected facilities, creating stars, so that the maximum weight of a star (over all stars created) is minimized. This problem is NP-hard and, despite resembling k-Facility Location, k-Median, and their capacitated versions, it is one of the most poorly studied problems of this kind.

My talk will be devoted to an approximation algorithm based on the standard LP relaxation. I will show that, unless we are allowed to exceed the number k of opened stars, the integrality gap of the standard LP is unbounded, and that if we may open at most (1 + eps)k stars, the gap is Omega(1/eps) for any eps > 0. I will also present a polynomial-time approximation algorithm that opens at most (1 + eps)k stars and returns a solution whose cost is at most O(1/eps^2) times the true (unrelaxed) optimum.

Alexander Kozachinskiy "Data structures for polynomial evaluation".
The talk will be devoted to the polynomial evaluation problem. We are given a univariate polynomial of degree at most n with coefficients in some finite field, and we want to answer queries of the form: given a point of the field, what is the value of our polynomial at this point? I will present a lower bound of Larsen which states that for data structures of size linear in n, the query time must be at least logarithmic in n. This is the strongest known lower bound for static data structures.
I will also give an upper bound of Kedlaya and Umans. They constructed a data structure for polynomial evaluation of size nearly linear in n with polylogarithmic query time.
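For contrast with both bounds, the trivial data structure simply tabulates the polynomial at every field point, giving size |F| (independent of n) and constant query time; a minimal sketch over a small prime field (all names here are illustrative):

```python
# Trivial polynomial-evaluation data structure over GF(p): precompute the
# value at every field element. The table has p entries, and each query
# is a single lookup.

def build_table(coeffs, p):
    """coeffs[i] is the coefficient of x^i; returns the values at all points mod p."""
    table = []
    for x in range(p):
        value = 0
        for c in reversed(coeffs):  # Horner's rule
            value = (value * x + c) % p
        table.append(value)
    return table

p = 7
coeffs = [1, 0, 3]  # the polynomial 1 + 3x^2 over GF(7)
table = build_table(coeffs, p)
print(table[2])  # 1 + 3*4 = 13 = 6 mod 7
```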

V.Podolskii “Complexity of dense linear operators”
Suppose we have a vector x of n Boolean variables and we want to compute the Boolean operator Ax for a Boolean square matrix A. That is, for each row of A we would like to compute the OR of those variables of x whose coordinates hold ones in that row. We start with the variables, and in one step we are allowed to apply the OR operation to two expressions computed at previous steps (we build a circuit consisting of OR gates).
If A is a sparse matrix (with O(n) ones), this can easily be done in O(n) operations. We show that the same is true for very dense matrices (with O(n) zeros).
This result translates to the computation of operators over an arbitrary commutative semiring. We also show that the same cannot be done for semirings with a large enough amount of non-commutativity.
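The operator itself is easy to state in code; below is a direct row-by-row evaluation sketch over the Boolean semiring (not the circuit construction from the talk):

```python
# Evaluate y = Ax over the Boolean semiring: y_i is the OR of the x_j
# for which row i of A has a one. A sparse A (O(n) ones) clearly needs
# only O(n) OR operations this way; the talk shows the same bound holds
# for very dense A (O(n) zeros), which is far less obvious.

def boolean_operator(A, x):
    return [any(a and xj for a, xj in zip(row, x)) for row in A]

A = [[1, 1, 0],
     [0, 0, 1],
     [1, 0, 1]]
x = [False, True, False]
print(boolean_operator(A, x))  # [True, False, False]
```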


Michael Raskin "Clique-width based graph algorithms"
Many graph problems (for example, checking 3-colourability) are known to be NP-complete. But it turns out that many of them are fixed-parameter tractable, i.e. there are notions of graph structural complexity such that many problems become easy even for large graphs of low complexity.
In the talk, I will explain the notion of clique-width of a graph and show how to optimise a term-automata-based colouring algorithm, based on the clique-width representation, for counting colourings with a large number of colours.

D. Dagaev, A. Suzdaltsev "Optimal seeding in knockout tournaments"
in Russian

D. Chistikov "On the complexity of quantified integer programming".
Quantified integer programming is the problem of deciding assertions of the form "for all/there exists x_k ... for all x_2 there exists x_1 such that A x >= c", where vectors of variables x_k, ..., x_1 form the vector x, all variables are interpreted over N (alternatively, over Z), and A and c are a matrix and vector over Z of appropriate sizes.
We show that quantified integer programming with alternation depth k is complete for the k-th level of the polynomial hierarchy.
Joint work with Christoph Haase.


D. Chistikov "Navigating one-counter automata: Directions in the mountains".
One-counter automata (OCA) are finite-state automata with a counter that supports increments, decrements, and tests for zero. They correspond to an intermediate class between regular and context-free languages and are suitable for modeling "counting" phenomena. However, reasoning about OCA is often intractable: for example, language equivalence is undecidable for nondeterministic OCA, and for deterministic OCA it took 40 years to prove the existence of short distinguishing words.
In this talk, I will give a review of OCA and reasoning tasks for them and discuss two tractability results:
(1) shortest paths between configurations of OCA are of at most quadratic length;
(2) the Parikh (commutative) image and sub-/superword closures of the language are accepted by small nondeterministic finite automata.

G.Evstropov "The k-sum metric combinatorial optimization problem"
Suppose we have a set A with a non-negative cost function w(e) on its elements, and some subsets of A are considered feasible. For example, A can be the set of arcs of a directed graph, and we are interested in simple paths from s to t. Well-known problems ask for a feasible subset of minimum total cost (the minsum path can be found by Dijkstra's algorithm) or for a path of minimax length (the bottleneck problem). K-sum optimization generalizes these two by asking for a feasible subset that minimizes the sum of its k most expensive elements. If a subset consists of fewer than k elements, its k-sum cost is simply the sum of all of them.
It turns out that if an O(P(n))-time algorithm exists for finding a minsum subset, the k-sum optimum can be found in O(P(n) * n) time. If time permits, we may also consider the same approach for k-sum approximation and for some continuous problems (such as k-sum mincost flow).
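One standard way to see why a minsum oracle helps (a sketch of a well-known threshold identity; the talk's actual algorithm may differ): for k ≤ |S|, the k-sum cost of a subset S equals the minimum over thresholds t ≥ 0 of k·t + Σ max(w(e) − t, 0), attained at t equal to the k-th largest weight, and for a fixed t the inner sum is a plain minsum objective.

```python
# k-sum cost of a subset: the sum of its k most expensive elements.
def ksum_cost(weights, k):
    return sum(sorted(weights, reverse=True)[:k])

# Threshold identity: for k <= len(weights),
#   ksum_cost(w, k) = min over t >= 0 of  k*t + sum(max(w_i - t, 0)),
# and it suffices to try t among the element weights (and 0). With t fixed,
# the inner sum is a minsum objective with truncated costs max(w_i - t, 0).
def ksum_via_threshold(weights, k):
    candidates = set(weights) | {0}
    return min(k * t + sum(max(w - t, 0) for w in weights) for t in candidates)

w = [5, 1, 7, 3, 2]
print(ksum_cost(w, 2))           # 7 + 5 = 12
print(ksum_via_threshold(w, 2))  # 12 as well
```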


G. Posobin "Random noise increases Kolmogorov complexity"
Consider a binary string x of length n whose Kolmogorov complexity equals \alpha n for some \alpha < 1. We want to increase the complexity of x by changing a small fraction of bits in x. This is always possible: Buhrman, Fortnow, Newman and Vereshchagin showed that the increase can be at least \delta n for large n (where \delta is some positive number that depends on \alpha and on the allowed fraction of changed bits).
We consider a related question: what happens to the complexity of x when we randomly change a small fraction of its bits (changing each bit independently with some probability p)? It turns out that a linear increase in complexity happens with high probability, but the guaranteed increase is smaller than in the case of arbitrary change. We note that the amount of the increase depends on x (strings of the same complexity can behave differently), and we give exact lower and upper bounds for this increase (with o(n) precision).
The proof uses the combinatorial technique that goes back to Ahlswede, Gacs and Korner (1977).


A. Romashchenko "Resource-Bounded Kolmogorov Complexity Provides an Obstacle to Soficness of Multidimensional Shifts"
We propose necessary conditions for soficness of multidimensional shifts, formulated in terms of resource-bounded Kolmogorov complexity.
Using this technique we provide examples of effective and non-sofic shifts on Z^2 with very low block complexity: the number of admissible patterns of size NxN grows only polynomially in N.

V. Grinberg "Sum-of-squares polynomial optimization approach"


Y. Nesterov "Implementable Tensor Methods in Unconstrained Convex Optimization"
In this talk we discuss new tensor methods for unconstrained convex optimization, which solve auxiliary problems of minimizing convex multivariate polynomials. We analyze the simplest scheme, based on minimization of a regularized local model of the objective function, and its accelerated version obtained in the framework of estimating sequences. Their rates of convergence are compared with the worst-case lower complexity bounds for the corresponding problem classes. Finally, for the third-order methods, we suggest an efficient technique for solving the auxiliary problem, based on the recently developed relative smoothness condition. With this elaboration, the third-order methods become implementable and very fast. At the same time, in many important cases the computational cost of iterations in these methods remains at the level typical for second-order schemes.

A. Milovanov "Derandomization of PIT and generalizations of the Sylvester-Gallai theorem"
in Russian

A. Rubtsov "Exile of Kolmogorov Complexity from KC-DCF-Lemma"
We present a new structural lemma for deterministic context-free languages. The lemma is a combinatorial analogue of the KC-DCF Lemma (based on Kolmogorov complexity), presented by M. Li and P. Vitányi in 1995 and corrected by O. Glier in 2003. We show that in the case of proving that a language is not a DCFL (the main application of such lemmata), the two lemmata are equivalent. The structural lemma is much easier to apply than the KC-DCF Lemma. It also provides new insight into the nature of DCFLs: it not only allows one to prove that a language is not a DCFL, but also discloses the structure of the Myhill-Nerode classes of DCFLs.

V. Grinberg  "L0 Sampling and Graph Sketches"
in Russian

V. Podolskii "A game approach to the sensitivity conjecture"
The famous sensitivity conjecture has many reformulations. One of them states that the decision tree complexity of any Boolean function is polynomially related to the sensitivity of this function. In this talk we will discuss a recently introduced communication game that can help to prove the sensitivity conjecture. The talk is based on the paper “A New Approach to the Sensitivity Conjecture” by Justin Gilmer, Michal Koucký and Michael E. Saks (ITCS 2015).

L. Sysoeva  "Some properties of the automatic closure of Boolean functions sets"
in Russian

E. Mineeva "Parameterized computational complexity"
in Russian


Michael Raskin "Complementing unambiguous finite automata (a superpolynomial lower bound for 1-letter alphabet)"
Two main approaches to finite automata are deterministic finite automata and nondeterministic finite automata. Deterministic finite automata can be easily combined to recognize the union, intersection or the complement of the languages recognized by original DFAs; but they might require many more states than nondeterministic finite automata.
Unambiguous finite automata (UFAs) are NFAs that have at most one accepting run for every word. They can be exponentially more succinct than DFAs in some cases, and there was a conjecture that recognizing the complement of the language of a UFA can be done by another UFA with a polynomial increase in the state count (the best lower bound was quadratic).
In the talk, a superpolynomial lower bound for complementing a UFA over the one-letter alphabet will be presented.

Guilhem Gamard  "Quasiperiodicity on one and two-dimensional words. Continuation of previous talk: two-dimensional aspects"
We work on one-dimensional and two-dimensional words, i.e. colorings of ℤ and ℤ² with colors chosen from a finite alphabet. These objects are fundamental in several branches of computer science, so a large amount of research is devoted to understanding how “symmetric” a word can be: for example, we have the notions of periodicity, topological entropy, minimality, unique ergodicity…
Here we focus on quasiperiodicity: a finite pattern q is a quasiperiod of a word w iff w is covered with translated copies of q, which may overlap. We would like to understand the structure of q-quasiperiodic words only through properties of q. One motivation for this comes from the study of quasicrystals in physics, where a 2-dimensional word represents a regular molecular structure and a quasiperiod is a configuration with locally minimal energy.
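The covering definition is easy to test on finite words; a minimal sketch (finite one-dimensional case only, not the infinite setting of the talk):

```python
# Check whether q is a quasiperiod of the finite word w: every position of w
# must lie inside some (possibly overlapping) occurrence of q.

def is_quasiperiod(q, w):
    covered = [False] * len(w)
    for i in range(len(w) - len(q) + 1):
        if w[i:i + len(q)] == q:
            for j in range(i, i + len(q)):
                covered[j] = True
    return all(covered)

print(is_quasiperiod("aba", "abaababa"))  # True: occurrences at 0, 3, 5 cover the word
print(is_quasiperiod("aba", "abaab"))     # False: positions 3 and 4 are uncovered
```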

A. Shen  "What is geometric construction and how to prove its impossibility"
in Russian

Guilhem Gamard  "Quasiperiodicity on one and two-dimensional words" 
We work on one-dimensional and two-dimensional words, i.e. colorings of ℤ and ℤ² with colors chosen from a finite alphabet. These objects are fundamental in several branches of computer science, so a large amount of research is devoted to understanding how “symmetric” a word can be: for example, we have the notions of periodicity, topological entropy, minimality, unique ergodicity…
Here we focus on quasiperiodicity: a finite pattern q is a quasiperiod of a word w iff w is covered with translated copies of q, which may overlap. We would like to understand the structure of q-quasiperiodic words only through properties of q. One motivation for this comes from the study of quasicrystals in physics, where a 2-dimensional word represents a regular molecular structure and a quasiperiod is a configuration with locally minimal energy.

V. Chernyshev "Dynamics of points on metric graphs"
in Russian

Y. Makarychev «Algorithmic and Hardness Results for the Hub Labeling Problem»
There has been significant success in designing highly efficient algorithms for distance and shortest-path queries in recent years; many of the state-of-the-art algorithms use the hub labeling framework. In this paper, we study the approximability of the Hub Labeling problem. We prove a hardness of Ω(log n) for Hub Labeling, matching known approximation guarantees. The hardness result applies to graphs that have multiple shortest paths between some pairs of vertices. No hardness of approximation results were known previously.
Then, we focus on graphs that have a unique shortest path between each pair of vertices. This is a very natural family of graphs, and much research on the Hub Labeling problem has studied such graphs. We give an O(log D) approximation algorithm for graphs of diameter D with unique shortest paths. In particular, we get an O(log log n) approximation for graphs of polylogarithmic diameter, while previously known algorithms gave an O(log n) approximation. Finally, we present a polynomial-time approximation scheme (PTAS) and quasi-polynomial time algorithms for Hub Labeling on trees; additionally, we analyze a simple combinatorial heuristic for Hub Labeling on trees, proposed by Peleg in 2000. We show that this heuristic gives an approximation factor of 2.
The talk is based on a joint paper with Haris Angelidakis and Vsevolod Oparin.

10.10.2017
K. Makarychev "Clustering Billions of Reads for DNA Data Storage"
I will tell the audience how to quickly cluster billions of strings based on their similarity (edit distance). We will discuss what makes the problem hard and then explore known (theoretical/mathematical) techniques like Locality Sensitive Hashing (LSH), metric embeddings, and sketching that can be employed for clustering Big Data. Finally, I will show how we use these techniques along with some new ingredients to cluster billions of DNA strands.
I will also briefly mention how string clustering is used in the Microsoft DNA Storage project – the project that develops technology for storing data on synthesized DNA strands (https://www.microsoft.com/en-us/research/project/dna-storage/).
The talk is based on my joint work with a team of researchers from Microsoft Research and University of Washington (https://www.microsoft.com/en-us/research/project/dna-storage/). This paper will appear at NIPS 2017.

T. Iskhakov «A brief introduction to flow algorithms»
in Russian

S. Kuznetsov «Categorial grammar: logical-mathematical approach to the description of natural language»
in Russian

M. Vyalyi  «Nonnegative and PSD matrix ranks and their connection with classical and quantum communication»
in Russian

A. Ryazanov «An upper bound on mutual information via the rank of the joint distribution matrix»
in Russian

M. Kaledin «Applications of Gröbner basis for integer programming»
in Russian

M. Vyalyi «Fine-grained reductions»
in Russian

A. Knop «Time hierarchy theorems for heuristic computations»
in Russian

G. Evstropov «Optimal algorithms for solving some zero-sum games with ordered sets of coins»
in Russian

D. Kutenin «A polynomial algorithm for primality testing»
in Russian

25.01 – 01.02.2017
M. Vyalyi  «A quasipolynomial algorithm for solving parity games» 
in Russian

Tomislav Petrović "A step towards finding an infinite string which is neither random nor predictable"
We'll discuss how to convert a sequential program that uses small amount of space and large amount of time into a parallel program that uses large amount of space and small amount of time, and vice-versa.
We'll also look at a simple extension of this result to sequential programs that don't necessarily use a small amount of space, but for which we know that they visit some small set of states.

K. Makarychev «Solving Optimization Problems with Diseconomies of Scale»
We present a new framework for solving optimization problems with a diseconomy of scale. In such problems, our goal is to minimize the cost of resources used to perform a certain task. The cost of resources grows superlinearly, as x^q, with the amount x of resources used. We define a novel linear programming relaxation for such problems, and then show that the integrality gap of the relaxation is A_q, where A_q is the q-th moment of the Poisson random variable with parameter 1. Using our framework, we obtain approximation algorithms for several optimization problems with a diseconomy of scale, such as the Minimum Energy Efficient Routing, Minimum Degree Balanced Spanning Tree, Load Balancing on Unrelated Parallel Machines, and Unrelated Parallel Machine Scheduling with Nonlinear Functions of Completion Times problems. In the Minimum Energy Efficient Routing problem, the goal is to route an unsplittable multicommodity flow in a network/graph between terminals so as to minimize the energy consumption on the links/edges.
Our analysis relies on a decoupling inequality for nonnegative random variables. Consider arbitrary nonnegative jointly distributed random variables Y_1, …, Y_n. Let X_1, …, X_n be independent copies of Y_1, …, Y_n such that all X_i are independent and each X_i has the same distribution as Y_i. Then E(X_1 + … + X_n)^q < C_q E(Y_1 + … + Y_n)^q. The inequality was proved by de la Pena in 1990; however, the optimal constant C_q was not known. We show that the optimal constant is C_q = A_q. This is joint work with Maxim Sviridenko, Yahoo Labs.

A. Shen  «On normal sequences»
in Russian

V. Oparin «Bounds on proof complexity in proof systems based on the resolution method»
in Russian

A. Skopenkov «Stability of intersections of graphs in the plane and the van Kampen obstruction»

23.09 – 12.10.2016 
G. Posobin, V. Podolskii, A. Milovanov, M. Vyalyi «Basic concepts of the Fourier analysis of Boolean functions»
in Russian


