125319, Moscow,
3 Kochnovsky Proezd (near metro station 'Aeroport').
Phone: +7 (495) 772-95-90 *12332
Email: computerscience@hse.ru
Dean – Ivan Arzhantsev
First Deputy Dean – Tamara Voznesenskaya
Deputy Dean for Research and International Relations – Sergei Obiedkov
Deputy Dean for Methodical and Educational Work – Ilya Samonenko
Deputy Dean for Development, Finance and Administration – Irina Plisetskaya
Colloquium 10/05/2016 (PDF, 119 Kb)
Colloquium 12/04/2016 (PDF, 265 Kb)
Colloquium 22/03/2016 (PDF, 565 Kb)
Colloquium 15/03/2016 (PDF, 199 Kb)
Panagiotis Karras, Skoltech
Time: 18.10 - 20.00
Syntactic data anonymization strives to (i) ensure that an adversary cannot identify an individual’s record from published record attributes with high probability, and (ii) preserve data utility. These mutually conflicting goals can be expressed as an optimization problem with privacy as the constraint and utility as the objective function. Conventional research on k-anonymity, a popular privacy model, has resorted to generalizing data values in homogeneous groups. However, such grouping is not necessary. Instead, data values can be recast in a heterogeneous manner that allows for higher utility. Nevertheless, previous work in this direction did not define the problem in the most general terms; thus, the utility gains achieved are limited. In this paper, we propose a methodology that achieves the full potential of heterogeneity and gains higher utility. We formulate the problem as a network flow problem and develop an optimal solution for it using Mixed Integer Programming, an O(kn^{2}) greedy algorithm that has no time-complexity disadvantage vis-à-vis previous approaches, an O(kn^{2} log n) enhanced version thereof, and an O(kn^{3}) adaptation of the Hungarian algorithm; these algorithms build a set of k perfect matchings from original to anonymized data. Our techniques can resist adversaries who may know the employed algorithms. Experiments with real-world data verify that our schemes achieve near-optimal utility (with gains of up to 41%), while they can exploit parallelism and data partitioning, gaining an efficiency advantage over simpler methods.
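As a rough illustration of the matching idea (a minimal sketch, not the authors' algorithms; the records and the ring construction below are hypothetical), the k cyclic shifts of a random ordering of the records form k disjoint perfect matchings from originals to anonymized slots, so each anonymized record mixes values from k distinct originals:

```python
import random

def ring_anonymize(records, k):
    """Toy heterogeneous k-anonymization via k perfect matchings.

    The k cyclic shifts of a random permutation of the records are k
    disjoint perfect matchings, so every anonymized slot aggregates
    attribute values from k distinct original records.
    """
    n = len(records)
    order = list(range(n))
    random.shuffle(order)  # hide the construction from a casual observer
    anonymized = []
    for pos in range(n):
        # The k originals matched to this slot by the k shift matchings.
        group = [records[order[(pos + shift) % n]] for shift in range(k)]
        # Each attribute becomes the set of values seen in the group.
        anonymized.append([sorted({rec[a] for rec in group})
                           for a in range(len(records[0]))])
    return anonymized

recs = [[25, "NY"], [30, "LA"], [25, "SF"], [40, "NY"], [30, "SF"], [35, "LA"]]
for row in ring_anonymize(recs, k=3):
    print(row)
```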
Bernhard Ganter, Technische Universität Dresden
Time: 18.10 - 20.00 (the talk is given in English)
We give impressions of a field of research that was started some 30 years ago under the name of “Formal Concept Analysis.” It is a mathematical theory, based on algebraic structures such as complete lattices, with a spectrum of applications in data analysis and knowledge processing.
Although the theory is advanced and substantial, it finds more attention in computer science than in mathematics. Indeed, the field has some potential for supporting recent developments of structured knowledge representations.
In our lecture, we introduce the very basic notions of the field, show the possibility of expressive graphical representations, discuss some applications and, eventually, demonstrate why the value of this development lies in its mathematical nature.
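As a taste of those basic notions (a minimal sketch with a made-up context; `extent` and `intent` are the standard derivation operators of Formal Concept Analysis), all formal concepts of a small object-attribute table can be enumerated by brute force:

```python
from itertools import combinations

# Toy formal context: objects x attributes (a made-up example).
context = {
    "frog":  {"aquatic", "animal"},
    "reed":  {"aquatic", "plant"},
    "dog":   {"animal"},
    "maize": {"plant"},
}
attributes = frozenset().union(*context.values())

def extent(b):
    """All objects that have every attribute in b."""
    return frozenset(g for g, atts in context.items() if b <= atts)

def intent(a):
    """All attributes shared by every object in a."""
    return frozenset(att for att in attributes
                     if all(att in context[g] for g in a))

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A);
# closing every attribute subset enumerates them all (fine for tiny contexts).
concepts = {(extent(set(b)), intent(extent(set(b))))
            for r in range(len(attributes) + 1)
            for b in combinations(sorted(attributes), r)}

for a, b in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(a), "<->", sorted(b))
```

Ordered by inclusion of extents, these concepts form the complete lattice that the theory is built on.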
B. Ganter (PDF, 520 Kb)
Sophie Hautphenne, EPFL Lausanne and University of Melbourne
Time: 16.40 - 18.00 (the talk is given in English)
We present some iterative methods for computing the global and partial extinction probability vectors for branching processes with countably infinitely many types. The probabilistic interpretation of these methods involves truncated branching processes with finite sets of types and modified progeny generating functions. Simple probabilistic arguments and coupling methods are used to prove the convergence of the algorithms. In addition, we discuss extinction criteria for global and partial extinction.
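For a branching process with finitely many types (the truncated setting mentioned above), the extinction probability vector is the minimal fixed point of the progeny generating function and can be computed by iterating from zero. A minimal sketch with a hypothetical two-type offspring distribution:

```python
# The extinction probability vector q solves q = f(q), where f is the
# progeny generating function; iterating q <- f(q) from 0 converges
# monotonically to the minimal fixed point.

def f(q):
    q1, q2 = q
    # Type 1: no children w.p. 0.2, one child of each type w.p. 0.8.
    # Type 2: no children w.p. 0.3, two type-2 children w.p. 0.7.
    return (0.2 + 0.8 * q1 * q2,
            0.3 + 0.7 * q2 * q2)

q = (0.0, 0.0)
for _ in range(1000):
    q = f(q)
print(q)  # the extinction probability vector (both entries < 1 here)
```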
S. Hautphenne (PDF, 336 Kb)
Andrei Raigorodskii, Lomonosov Moscow State University
In 1959, P. Erdős and A. Rényi initiated the study of the binomial model of a random graph G(n, p), in which each of the possible edges on n vertices is drawn independently with probability p. The random graphs of Erdős and Rényi are now among the most important subjects of combinatorics and its applications.
One natural generalization of the Erdős–Rényi model is as follows: given a sequence of graphs H_{n}, we keep each edge of H_{n} with probability p independently of the others (no new edges are drawn). Thus arise the random graphs H_{n,p}. In recent years, such models have been studied very intensively.
We will talk about one sequence of graphs that is important for combinatorial geometry and coding theory. For this sequence, we will consider the above-described version of the Erdős–Rényi model and exhibit various old and new results. In particular, we will concentrate on colorings, cliques and independent sets.
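Both models are easy to sample; a minimal sketch (the host graph below is just a toy cycle):

```python
import random

def gnp(n, p):
    """Erdős–Rényi G(n, p): each of the C(n, 2) possible edges is drawn
    independently with probability p."""
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if random.random() < p]

def random_subgraph(edges, p):
    """H_{n,p}: keep each edge of a host graph H_n independently with
    probability p; no new edges are drawn."""
    return [e for e in edges if random.random() < p]

cycle = [(i, (i + 1) % 10) for i in range(10)]  # a toy host graph H_n
print(random_subgraph(cycle, 0.5))
print(len(gnp(100, 0.1)), "edges, expected about", round(0.1 * 100 * 99 / 2))
```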
Susanne Graf, Grenoble
4.40 p.m. - 6 p.m.
We explore here the problem from the knowledge perspective: a process can decide to execute a local action when it has the knowledge to do so. We discuss typical knowledge atoms, useful for expressing local enabling conditions with respect to different notions of correctness, as well as different means for obtaining knowledge and for representing it locally in an efficient manner.
Our goal is to use such a knowledge-based representation of the distribution problem either for deriving distributed implementations automatically from global specifications on which some constraint is enforced (a difficult problem), or for improving the efficiency of existing protocols by exploiting local knowledge. We also argue that such a knowledge-based presentation helps achieve the necessary correctness proofs.
Konstantin Makarychev, Microsoft Research
6.30 p.m. - 8 p.m.
Many combinatorial optimization problems are much simpler in practice than in the worst case. One of the challenges in the area of approximation algorithms is to explain this phenomenon and to design algorithms that work well in real life. In this talk, we will first discuss different ways of modelling “real-life” instances. Then, I will present a new semi-random semi-adversarial model for graph partitioning problems: a planted model with permutation-invariant random edges (PIE). This model is much more general than the stochastic models considered previously. Finally, I will describe a “constant factor” approximation algorithm for finding the minimum balanced cut in graphs from this model.
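The PIE model and the constant-factor algorithm are the subject of the talk; as a point of reference only, here is the classic spectral-bisection heuristic for balanced cut (not the talk's algorithm), which splits the vertices at the median of the Fiedler vector of the graph Laplacian:

```python
import numpy as np

def spectral_bisection(adj):
    """Classic balanced-cut heuristic: split at the median of the
    Fiedler vector (eigenvector of the 2nd-smallest Laplacian
    eigenvalue); the median split keeps the two sides balanced."""
    lap = np.diag(adj.sum(axis=1)) - adj
    vals, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]
    return fiedler > np.median(fiedler)

# Two dense blocks joined by a single edge: the cut should separate them.
a = np.zeros((6, 6))
for i in range(3):
    for j in range(3):
        if i != j:
            a[i, j] = a[i + 3, j + 3] = 1
a[2, 3] = a[3, 2] = 1
print(spectral_bisection(a))  # one block True, the other False
```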
Kazuhisa Makino, Research Institute for Mathematical Sciences, Kyoto University (Japan)
4.40 p.m. - 6 p.m.
Dualization of a monotone Boolean function represented by a conjunctive normal form (CNF) is a problem which, in different disguises, is ubiquitous in many areas, including computer science, artificial intelligence, data mining, machine learning, and game theory. It is also one of the few problems whose precise tractability status (in terms of polynomial-time solvability) is still unknown, having remained open for more than 30 years. We briefly survey computational results for this problem, including the remarkable result by M.L. Fredman and L.G. Khachiyan that the problem is solvable in quasi-polynomial time (and thus is most likely not coNP-hard), as well as follow-up work.
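To make the problem concrete, a minimal brute-force sketch (nothing like the Fredman–Khachiyan algorithm): the prime implicants of the dual DNF of a monotone CNF are exactly the minimal hitting sets of its clauses.

```python
from itertools import chain, combinations

def dualize(cnf):
    """Brute-force dualization of a monotone CNF given as a list of
    clauses (sets of variables): enumerate all hitting sets of the
    clauses and keep the minimal ones. Exponential, for tiny instances
    only; whether this can be done in polynomial time is the open
    question the talk surveys."""
    variables = sorted(set(chain.from_iterable(cnf)))
    hitting = [set(s) for r in range(1, len(variables) + 1)
               for s in combinations(variables, r)
               if all(set(s) & clause for clause in cnf)]
    # Keep only the inclusion-minimal hitting sets.
    return [h for h in hitting if not any(g < h for g in hitting)]

# (x1 or x2) and (x2 or x3)  -->  dual DNF: x2 or (x1 and x3)
print(dualize([{"x1", "x2"}, {"x2", "x3"}]))
```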
Vladimir Podolskii, Faculty of Computer Science, HSE and Steklov Mathematical Institute of RAS
4.40 pm - 6 pm
The talk is given in Russian
The min-plus semiring, or tropical semiring, is the set of rationals with two operations: “min-plus addition,” which simply corresponds to the minimum of two arguments, and “min-plus multiplication,” which is the usual addition. Polynomials over the min-plus semiring are defined in a similar way to classical polynomials and are essentially piecewise linear functions of their arguments. Roots of these polynomials are defined to be the points where the function is nonsmooth.
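A minimal sketch of these definitions in the univariate case (the coefficients below are arbitrary): a min-plus polynomial with coefficients a_0, ..., a_d is the function x -> min_i (a_i + i*x), and a root is a point where the minimum is attained by at least two monomials, i.e., a kink of the piecewise linear graph.

```python
def minplus_eval(coeffs, x):
    """Evaluate min_i (a_i + i*x), the min-plus analogue of sum_i a_i x^i."""
    return min(a + i * x for i, a in enumerate(coeffs))

def is_root(coeffs, x, eps=1e-9):
    """x is a root iff at least two monomials attain the minimum there."""
    vals = [a + i * x for i, a in enumerate(coeffs)]
    m = min(vals)
    return sum(1 for v in vals if abs(v - m) < eps) >= 2

coeffs = [3, 1, 0]              # "3 + 1*x + 0*x^2" in min-plus notation
print(minplus_eval(coeffs, 2))  # min(3, 1+2, 0+4) = 3
print(is_root(coeffs, 2))       # True: monomials 0 and 1 both attain 3
print(is_root(coeffs, 0))       # False: monomial 2 alone attains 0
```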
In this talk we consider algorithmic problems associated with min-plus polynomials. More specifically, we will be interested in the solvability of systems of min-plus linear polynomials. This problem turns out to be polynomially equivalent to the problem of determining the winner in mean payoff games, which is known to belong to the intersection of the complexity classes NP and coNP.
The second result that we plan to discuss in the talk is the min-plus algebraic analogue of Hilbert's Nullstellensatz.
The talk is based on joint work with Dmitry Grigoriev.
Dines Bjørner, Technical University of Denmark
4.40 pm - 6 pm
Attention: the colloquium will be held on Wednesday in the Euclid room!
This talk is for beginners in the serious study of computer science. Behind its presentation lies the attitude that software, including programs, denotes mathematical objects. Through three examples I provide a glimpse into a universe of domains for which software is developed every day but for which, in most instances, there is no formal, i.e., mathematical, understanding. The three examples are: road transport and platooning systems, oil/gas pipeline systems, and stock exchange sell offer/buy bid matching. The objective of this talk is to show you what it means to develop software using mathematics, albeit a new kind of mathematics: not differential equations, integrals, or statistics, but mathematical logic and abstract algebra, so that software can be shown correct, i.e., has no bugs, no blue screens, and meets customers' expectations!
Rostislav Yavorski, Faculty of Computer Science, HSE
4.40 pm - 6 pm
The talk is given in Russian
Turing machines, Post machines, Minsky machines, Markov algorithms, and Kleene's recursive functions were invented in the first half of the 20th century in the quest for a suitable formalisation of the notion of an algorithm. These mathematical models have been quite successful in the study of decidability and algorithmic complexity, but did not prove very useful for modelling network protocols or operating system components. In the talk we will explain some modern approaches to modelling computation, which have actually been used in the industrial development of complex information systems.
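For example, a two-counter Minsky machine fits in a few lines; a minimal sketch (the instruction encoding below is a hypothetical choice for illustration):

```python
def run(program, counters):
    """Interpret a two-counter Minsky machine. Instructions:
    ("INC", c, nxt)          increment counter c, go to nxt;
    ("DJZ", c, nxt, zero)    if counter c > 0, decrement and go to nxt,
                             otherwise go to zero. Halt on pc out of range."""
    pc = 0
    while 0 <= pc < len(program):
        op, c, *targets = program[pc]
        if op == "INC":
            counters[c] += 1
            pc = targets[0]
        else:  # "DJZ"
            if counters[c] > 0:
                counters[c] -= 1
                pc = targets[0]
            else:
                pc = targets[1]
    return counters

# Move counter 0 into counter 1: while c0 > 0: c0 -= 1; c1 += 1.
move = [("DJZ", 0, 1, 2), ("INC", 1, 0)]
print(run(move, [5, 1]))  # -> [0, 6]
```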
Vladimir Spokoiny, Kharkevich Institute for Information Transmission Problems (Moscow) and Weierstrass Institute for Applied Analysis and Stochastics (Berlin)
4.40 pm - 6 pm
Since its introduction by Efron in 1979, the bootstrap has become one of the most popular and effective methods for various statistical problems such as confidence estimation, hypothesis testing, and model selection. However, the validation of a bootstrap procedure is very involved and requires advanced tools from the theory of empirical processes and Gaussian processes.
Most of these tools meet serious problems and challenges if the parameter dimension becomes large, much larger than the sample size. Recently, Chernozhukov et al. (2013, 2014) offered a new approach to statistical inference in high dimensions which includes powerful results for Gaussian processes and random matrices, Gaussian approximation in high dimensions, and Gaussian comparison. This talk reviews some of these results and tools and explains how they can be applied to subset estimation and sparse estimation problems.
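For reference, the basic procedure whose validation is at stake is short; a minimal sketch of a percentile bootstrap confidence interval (the function names are hypothetical):

```python
import random

def bootstrap_ci(sample, stat, level=0.95, reps=10_000):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and take the empirical quantiles of the replicates."""
    n = len(sample)
    stats = sorted(stat(random.choices(sample, k=n)) for _ in range(reps))
    lo = stats[int((1 - level) / 2 * reps)]
    hi = stats[int((1 + level) / 2 * reps) - 1]
    return lo, hi

data = [random.gauss(10, 2) for _ in range(50)]
mean = lambda xs: sum(xs) / len(xs)
print(bootstrap_ci(data, mean))  # should bracket the true mean 10 most of the time
```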
Ilya Razenshteyn, MIT
4.40 pm - 6 pm
In the Approximate Nearest Neighbor problem (ANN), we are given a set P of n points in a d-dimensional space, and the goal is to build a data structure that, given a query point q, reports any point from P that is approximately closest to q.
Locality-Sensitive Hashing (LSH) is by now a standard technique for solving the ANN problem. In my talk I will define LSH and show several constructions of good hash families. Then I will state some limitations of LSH and describe a recent line of research that provides data structures for ANN that are provably (substantially) better than what LSH could possibly give.
The talk is partially based on joint work with Alexandr Andoni (Simons Institute, previously Microsoft Research Silicon Valley). See the paper at .
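One classic construction of such a hash family (a minimal sketch; the talk covers much more) is Charikar's random-hyperplane hashing for cosine similarity: nearby vectors agree on most sign bits and therefore often share a bucket.

```python
import random

def simhash_family(dim, bits):
    """Random-hyperplane LSH (SimHash): each bit records on which side
    of a random hyperplane the vector lies; collision probability grows
    with the cosine similarity of the vectors."""
    planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def h(v):
        return tuple(int(sum(p_i * v_i for p_i, v_i in zip(p, v)) >= 0)
                     for p in planes)
    return h

h = simhash_family(dim=3, bits=8)
print(h([1.0, 0.0, 0.2]))
print(h([0.9, 0.1, 0.3]))    # similar vector: most bits should agree
print(h([-1.0, 0.5, -0.2]))  # dissimilar vector: many bits should differ
```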
Federico Wadehn, ETH (Zürich)
4 pm - 5.30 pm
Probabilistic graphical models, together with dedicated algorithms such as belief propagation and other message-passing algorithms, allow a unified approach to a multitude of problems arising in signal processing, coding theory, and statistical inference in general. The talk will briefly cover Bayesian networks and Markov random fields before focusing on factor graphs and related algorithms such as sum-product message passing. Various toy examples, as well as an input estimation algorithm and a Bayesian inference algorithm using the factor graph description language, will be shown.
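A minimal sum-product example on a three-variable chain x1 - f12 - x2 - f23 - x3 (the pairwise potentials are made up), checked against brute-force marginalization:

```python
import numpy as np

f12 = np.array([[0.9, 0.1], [0.2, 0.8]])  # potential on (x1, x2)
f23 = np.array([[0.7, 0.3], [0.4, 0.6]])  # potential on (x2, x3)

# Messages into x2 from its two neighboring factors: sum out the far variable.
msg_from_f12 = f12.T @ np.ones(2)  # x1 summed out
msg_from_f23 = f23 @ np.ones(2)    # x3 summed out

# The marginal of x2 is the normalized product of its incoming messages.
marg = msg_from_f12 * msg_from_f23
marg /= marg.sum()

# Brute-force check: sum the full joint over x1 and x3.
joint = np.einsum('ij,jk->ijk', f12, f23)
print(marg, joint.sum(axis=(0, 2)) / joint.sum())  # identical vectors
```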
Max Kanovich, University College London
4.40 pm - 6 pm
The fundamental ideas of resource logics (linear logic, separation logic) are presented in a semi-formal style. For general resource models on one hand, and for concrete heap-like models of practical interest on the other, we get into the formalities, including the semantics of the assertion language and the axioms and inference rules. Surprisingly, even purely propositional separation logic turns out to be undecidable, even for concrete heap-like models of practical interest.
Assaf Schuster, Technion
4 pm - 5 pm
Cloud providers, such as Amazon EC2, would like to have satisfied clients. Who wouldn't? However, in order to maximize their marginal profit, they have to fit the clients on as few machines as possible.
One way providers can maximize clients' satisfaction while making best use of the resources is by renting precious physical memory to those clients who value it the most. But real-world cloud clients are selfish; they will only tell their providers the truth about how much they value memory when it is in the clients' best interest to do so. How, then, can cloud providers allocate memory efficiently to those (selfish) clients?
We present Ginseng, the first market-driven cloud system that solves this problem, allocating memory efficiently precisely to those selfish cloud clients who value it the most. Ginseng, built using the KVM hypervisor and libvirt, achieves a 6.2x-15.8x improvement (83%-100% of the optimum) in aggregate client satisfaction.
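Ginseng's actual mechanism is a more elaborate repeated memory auction; as a minimal sketch of why an auction can elicit truthful valuations from selfish clients, consider a single-item second-price (Vickrey) auction (the bidder names and bids below are made up):

```python
def second_price(bids):
    """Single-item Vickrey auction: the highest bidder wins but pays the
    second-highest bid, which makes truthful bidding a dominant
    strategy. This only illustrates the incentive idea, not Ginseng's
    mechanism."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

print(second_price({"vm-a": 0.30, "vm-b": 0.55, "vm-c": 0.10}))
# -> ('vm-b', 0.3): vm-b gets the memory but pays only vm-a's bid
```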
Dzmitry Kliazovich, University of Luxembourg
5 pm - 6 pm
Wil van der Aalst, Eindhoven University of Technology
4.40 pm - 6 pm
Data science is the profession of the future, because organizations that are unable to use (big) data in a smart way will not survive. It is not sufficient to focus on data storage and data analysis. The data scientist also needs to relate data to process analysis. Process mining bridges the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. Process mining seeks the confrontation between event data (i.e., observed behavior) and process models (hand-made or discovered automatically). This technology has become available only recently, but it can be applied to any type of operational process (in organizations and systems). Example applications include: analyzing treatment processes in hospitals, improving customer service processes in a multinational, understanding the browsing behavior of customers using a booking site, analyzing failures of a baggage handling system, and improving the user interface of an X-ray machine. All of these applications have in common that dynamic behavior needs to be related to process models. Hence, Prof. Wil van der Aalst refers to it in his talk as “data science in action.” Process mining not only provides a bridge between data mining and business process management; it also helps to address the classical divide between “business” and “IT.” Evidence-based business process management based on process mining helps to create a common ground for business process improvement and information systems development.
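As a taste of confronting event data with process models, a minimal sketch (with a made-up event log) of the directly-follows relation that many process discovery algorithms start from:

```python
from collections import Counter

# An event log is a list of cases; each case is a sequence of activities.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject"],
    ["register", "check", "check", "approve", "archive"],
]

# Directly-follows graph: how often activity a is immediately followed by b.
dfg = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))
for (a, b), count in dfg.most_common():
    print(f"{a} -> {b}: {count}")
```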
Geoffrey Gérard Décrouez, Faculty of Computer Science, HSE
4.40 pm - 6 pm
Scale invariance represents a relatively new way to analyse and model real-world data. It finds numerous applications in many diverse fields, including the study of natural phenomena in physics and biology, as well as man-made applications such as signal and image processing. Data exhibiting scale invariance do not have a well-defined time scale playing a dominant role. Instead, their dynamics can be understood from the relationships existing across a whole range of scales. Multifractal theory is particularly useful for analysing this kind of data. In this talk I will present some basic notions of fractal theory, and then discuss the multifractal formalism.
I will illustrate the theory with examples taken from the literature and my research, including multiplicative cascades on trees and random processes in multifractal time.
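A minimal sketch of the simplest such construction, the binomial multiplicative cascade on [0, 1] (the split parameter below is arbitrary): unit mass is repeatedly redistributed, so after a few levels the measure is wildly uneven across scales, the hallmark of multifractality.

```python
def binomial_cascade(levels, m0=0.3):
    """Binomial multiplicative cascade: split each dyadic interval in
    two, sending fractions m0 and 1 - m0 of its mass to the halves."""
    masses = [1.0]
    for _ in range(levels):
        masses = [m * f for m in masses for f in (m0, 1 - m0)]
    return masses  # masses of the 2**levels dyadic intervals

mu = binomial_cascade(10)
print(max(mu), min(mu))  # extremes differ by several orders of magnitude
```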
Stefan Haar, École Normale Supérieure de Cachan
4.40 pm - 6 pm
Some circles, squares and arrows, plus some black dots moving along: that is all it takes to build a Petri net. These nets are a mathematical tool and model for dynamical systems, generally considered to be “at home” in computer science. However, a great deal of the theory of Petri nets, and of the concurrency (describing asynchronous parallel processes) they involve, was developed for and inspired by the understanding of physical processes, building upon principles from chemical reactions and from relativistic and quantum physics.
It is fair to say that Petri nets are not only intuitive, but also fertile for many fields; in this talk, I will illustrate this in the contexts of physics, engineering, and biology, reflecting in some sense the evolution that the field has taken over the past 50 years.
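A minimal token-game sketch (encoding a net as plain dictionaries; the reaction example echoes the chemical origins mentioned above):

```python
# Places hold tokens (the black dots); a transition fires by consuming
# tokens from its input places and producing tokens on its output places.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= n for p, n in inputs.items())

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p, n in inputs.items():
        m[p] -= n
    for p, n in outputs.items():
        m[p] = m.get(p, 0) + n
    return m

# A chemical reaction as a net: 2 H2 + O2 -> 2 H2O.
react = ({"H2": 2, "O2": 1}, {"H2O": 2})
marking = {"H2": 4, "O2": 3, "H2O": 0}
while enabled(marking, react):
    marking = fire(marking, react)
print(marking)  # {'H2': 0, 'O2': 1, 'H2O': 4}
```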
Alexander Shen, Kharkevich Institute for Information Transmission Problems (Moscow) and LIF (Marseille)
4.40 pm - 6 pm
In the mid-20th century, Shannon introduced the concept of entropy, which can be intuitively described as the “average number of information bits in a single value of a random variable.” However, this concept is not applicable to individual objects, such as the text of a novel or a DNA molecule. Indeed, there is no random variable without an ensemble of similar objects of the same kind.
In the mid-1960s several people (Kolmogorov, Solomonoff, Levin, Chaitin, …) realized that the amount of information, or complexity, contained in an individual object can be defined as the minimal length of a computer program that generates this object, under some natural assumptions on the programming language. Thus appeared algorithmic information theory, which proved to be connected to various domains such as the philosophical foundations of probability (when can a statistical hypothesis be rejected?), combinatorics (inequalities between the cardinalities of sets and their projections), and the theory of computation.
I will present basic definitions and some of the fundamental results of this theory, and also, depending on the audience, a glimpse of current work.
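Kolmogorov complexity itself is uncomputable, but any compressor gives a computable upper bound, which is a common way to make the idea tangible (a minimal sketch using zlib):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """A crude, computable proxy for Kolmogorov complexity: compression
    bounds the complexity from above, up to an additive constant."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 5000                                   # highly regular
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))  # incompressible
print(compressed_size(regular), "vs", compressed_size(noisy))
# the repetitive string compresses to a few dozen bytes; the random one barely shrinks
```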