
Colloquium in 2015/2016

2016

May 10, 2016

Causal inference and Kolmogorov complexity


Bruno Bauwens, HSE

It is often stated that “Correlation does not imply causation”: a dependency between observed values of two random variables might not imply that there exists a causal connection between the corresponding processes (and even if one exists, one might not know its direction). In Shannon information theory, this is reflected by the law of “symmetry of information”: I(X;Y) = I(Y;X). The information that a random variable X has about Y equals the information that Y has about X. This law remains valid if Shannon entropy is replaced by Kolmogorov complexity. However, there exists a subtler setting where this law is violated, and one might speculate that this asymmetry can be used to reconstruct causality. In the second part of the talk, we discuss the postulate of independence of conditionals for the inference of causal relations in observed data. This postulate was introduced by D. Janzing and B. Schölkopf in 2010; in the two-variable case it states that, if X causes Y, then the marginal distribution P_X has low information about the conditional distribution P_{Y|X} (as defined with Kolmogorov complexity). We explain how the most popular methods can be justified by this postulate and overview new methods that were inspired by it.
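For reference, the symmetry law can be written in both settings as follows; this standard formulation (our addition, not part of the talk's abstract) uses C for plain Kolmogorov complexity:

% Shannon: mutual information is symmetric by construction.
\[ I(X;Y) \;=\; H(X) - H(X \mid Y) \;=\; H(Y) - H(Y \mid X) \;=\; I(Y;X) \]
% Kolmogorov-Levin symmetry of information: the same identity holds for
% Kolmogorov complexity, but only up to a logarithmic error term.
\[ C(x) - C(x \mid y) \;=\; C(y) - C(y \mid x) \;+\; O(\log C(x,y)) \]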

 Colloquium 10/05/2016 (PDF, 119 Kb)

April 12, 2016

Physics Informed Machine Learning


Michael (Misha) Chertkov, Skoltech (Adjunct Professor, on leave from Los Alamos National Laboratory)

Machine Learning (statistical engineering) capabilities are in a phase of tremendous growth. Underlying these advances is a strong and deep connection to various aspects of statistical physics. There is also a great opportunity in pointing these tools toward physical modeling. In this colloquium I illustrate the two-way flow of ideas between physics and statistical engineering with three examples. First, I review the work on structure learning and statistical estimation in power system distribution (thus physical) networks. Then I describe recent progress in the constructive understanding of graph learning (on the example of the inverse Ising model), illustrating that the generic inverse task (of learning) is computationally easy in spite of the fact that the direct problem (inference or sampling) is difficult. I conclude by speculating how macro-scale models of physics (e.g., large eddy simulations of turbulence) can be learned from micro-scale simulations (e.g., of the Navier–Stokes equations).
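To make the inverse Ising task concrete, here is a minimal sketch of a generic pseudolikelihood estimator (node-wise logistic regression); this is a standard textbook approach, not the speaker's method, and all sizes and couplings are illustrative:

# Sketch: structure learning for an Ising model via node-wise logistic
# regression (pseudolikelihood). Illustrative only; not the talk's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, T = 8, 5000                      # spins, samples (illustrative sizes)

# Ground-truth couplings: a sparse symmetric matrix with zero diagonal.
J = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (4, 5), (6, 7)]:
    J[i, j] = J[j, i] = 0.8

# Gibbs sampling of spin configurations s in {-1,+1}^n.
s = rng.choice([-1, 1], size=n)
samples = []
for t in range(T * n):
    i = t % n
    h = J[i] @ s                    # local field acting on spin i
    p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
    s[i] = 1 if rng.random() < p_up else -1
    if i == n - 1:
        samples.append(s.copy())
X = np.array(samples)

# Inverse task: regress each spin on all others. Since
# P(s_i = 1 | rest) = sigmoid(2 * sum_j J_ij s_j), the logistic weights
# estimate 2*J[i, j], and their support recovers the graph structure.
J_hat = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    clf = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
    clf.fit(X[:, others], X[:, i])
    J_hat[i, others] = clf.coef_[0] / 2.0

print(np.round((J_hat + J_hat.T) / 2.0, 2))  # compare with J above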

 Colloquium 12/04/2016 (PDF, 265 Kb)

 


March 22, 2016

Finding a majority ball with majority answers


Mate Vizer, Rényi Institute, Budapest

Suppose we are given a set of n balls {b₁, …, bₙ}, each colored either red or blue in some way unknown to us. To find out some information about the colors, we can query any triple of balls. As an answer to such a query we obtain (the index of) a majority ball, that is, a ball whose color is the same as the color of another ball from the triple. Our goal is to find a non-minority ball, that is, a ball whose color occurs at least n/2 times among the n balls. We show that the minimum number of queries needed to solve this problem is Θ(n) in the adaptive case and Θ(n³) in the non-adaptive case. We also consider some related problems. This is joint work with D. Gerbner, B. Keszegh, D. Palvolgyi, B. Patkos and G. Wiener.
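The query model can be made concrete in a few lines; the coloring, the oracle, and the sanity check below are our illustration, not the authors' construction:

# Sketch of the majority-query model: a query on a triple returns the
# index of some ball whose color is shared by another ball in the triple.
import random

def majority_query(colors, i, j, k):
    """Return an index from {i, j, k} whose color occurs at least twice
    in the triple (with two colors, such a ball always exists)."""
    for a, b in [(i, j), (i, k), (j, k)]:
        if colors[a] == colors[b]:
            return a
    raise AssertionError("unreachable with two colors")

n = 11
colors = [random.choice("RB") for _ in range(n)]

# Sanity check of the oracle's defining property on random triples.
for _ in range(100):
    i, j, k = random.sample(range(n), 3)
    m = majority_query(colors, i, j, k)
    assert sum(colors[x] == colors[m] for x in (i, j, k)) >= 2
print("oracle property holds on all sampled triples")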

 Colloquium 22/03/2016 (PDF, 565 Kb)

March 15, 2016

Decidability of Conjunctive Queries for Description Logics with Counting, Inverses and Nominals


Sebastian Rudolph, TU Dresden

Description Logics (DLs) are Knowledge Representation formalisms with a great significance for Semantic Web technologies. For instance, the ontology specification languages OWL DL and its successor OWL 2 DL, standardized by the World Wide Web Consortium (W3C), are based on DL languages. Conjunctive queries constitute the standard querying paradigm for databases and have in recent years attracted increasing interest in the area of logic-based knowledge representation. While decidability and computational complexity are largely clarified for standard DL reasoning tasks (such as knowledge base consistency or axiom entailment), decidability of conjunctive query answering in OWL DL and OWL 2 DL is still an open problem. For a comparatively long time, a major obstacle towards the solution of this problem was the intricate interplay of three modeling features: nominal concepts, inverse roles and cardinality constraints. In my talk, I will present results that, for the first time, establish decidability of a DL containing all three constructs. For a slightly restricted class of conjunctive queries (i.e., queries with only simple roles), our result also shows decidability in the logics that underpin OWL DL and OWL 2 DL. This work was carried out in the course of a research stay at Oxford University and has been published in the Journal of Artificial Intelligence Research.
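For readers new to the querying paradigm: a conjunctive query is an existentially quantified conjunction of concept and role atoms. A generic example (ours, not from the talk) using only simple roles:

% Answer variable x; existential variables y, z; concept atoms
% (Professor, Course) and role atoms (teaches, attends).
\[ q(x) \;=\; \exists y\, \exists z\;\; \mathit{Professor}(x) \wedge \mathit{teaches}(x,y) \wedge \mathit{Course}(y) \wedge \mathit{attends}(z,y) \]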

 Colloquium 15/03/2016 (PDF, 199 Kb)





2015

November 17, 2015 

k-Anonymization by Freeform Generalization

Panagiotis Karras, Skoltech

Time: 18.10 - 20.00

Syntactic data anonymization strives to (i) ensure that an adversary cannot identify an individual’s record from published record attributes with high probability, and (ii) preserve data utility. These mutually conflicting goals can be expressed as an optimization problem with privacy as the constraint and utility as the objective function. Conventional research on k-anonymity, a popular privacy model, has resorted to generalizing data values in homogeneous groups. However, such grouping is not necessary. Instead, data values can be recast in a heterogeneous manner that allows for higher utility. Nevertheless, previous work in this direction did not define the problem in the most general terms; thus, the utility gains achieved are limited. In this work, we propose a methodology that achieves the full potential of heterogeneity and gains higher utility. We formulate the problem as a network flow problem and develop an optimal solution therefor using Mixed Integer Programming, an O(kn²) greedy algorithm that has no time-complexity disadvantage vis-à-vis previous approaches, an O(kn² log n) enhanced version thereof, and an O(kn³) adaptation of the Hungarian algorithm; these algorithms build a set of k perfect matchings from original to anonymized data. Our techniques can resist adversaries who may know the employed algorithms. Experiments with real-world data verify that our schemes achieve near-optimal utility (with gains of up to 41%), while they can exploit parallelism and data partitioning, gaining an efficiency advantage over simpler methods.
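The flavor of the matching-based formulation can be sketched as follows: a toy version of one of the k matchings, built as a minimum-cost perfect matching with scipy's Hungarian-algorithm solver. The data and the cost function are illustrative, not the paper's exact scheme:

# Toy sketch: recasting records onto one another via a minimum-cost
# perfect matching (one of the k matchings the algorithms build).
import numpy as np
from scipy.optimize import linear_sum_assignment

records = np.array([[25, 50_000], [27, 52_000], [40, 90_000], [43, 88_000]])

# Cost of mapping record i onto record j: distance between attribute
# vectors (a stand-in for utility loss). Self-mapping gets a huge
# penalty so that every record is recast as some *other* record.
cost = np.linalg.norm(records[:, None] - records[None, :], axis=2)
np.fill_diagonal(cost, 1e9)

rows, cols = linear_sum_assignment(cost)   # Hungarian-style solver
for i, j in zip(rows, cols):
    print(f"record {i} -> generalized together with record {j}")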

September 29, 2015

Formal Concept Analysis: A Useful Example of Modern Mathematics

Bernhard Ganter, Technische Universität Dresden

Time: 18.10 - 20.00 (the talk is in English)

We give impressions of a field of research that was started some 30 years ago under the name of “Formal Concept Analysis.” It is a mathematical theory, based on algebraic structures such as complete lattices, with a spectrum of applications in data analysis and knowledge processing.

Although the theory is advanced and substantial, it finds more attention in computer science than in mathematics. Indeed, the field has some potential for supporting recent developments of structured knowledge representations.

In our lecture, we introduce the very basic notions of the field, show the possibility of expressive graphical representations, discuss some applications and, finally, demonstrate why the value of this development lies in its mathematical nature.
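As a small taste of the machinery (our illustration, not from the lecture): the two derivation operators of a formal context, and a brute-force enumeration of its formal concepts.

# Sketch: formal concepts of a tiny formal context (objects x attributes),
# enumerated brute-force via the two derivation operators. Illustrative.
from itertools import combinations

objects = ["duck", "eagle", "dog"]
attributes = ["flies", "swims", "mammal"]
incidence = {("duck", "flies"), ("duck", "swims"),
             ("eagle", "flies"), ("dog", "swims"), ("dog", "mammal")}

def prime_objects(A):   # attributes common to all objects in A
    return frozenset(m for m in attributes
                     if all((g, m) in incidence for g in A))

def prime_attrs(B):     # objects having all attributes in B
    return frozenset(g for g in objects
                     if all((g, m) in incidence for m in B))

# A formal concept is a pair (A, B) with A' = B and B' = A; every extent
# is the closure A'' of some object set, so we close all subsets.
concepts = set()
for r in range(len(objects) + 1):
    for A in combinations(objects, r):
        B = prime_objects(frozenset(A))
        concepts.add((prime_attrs(B), B))

for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
    print(set(extent) or "{}", set(intent) or "{}")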

 B. Ganter (PDF, 520 Kb)


September 22, 2015

Extinction probabilities of branching processes with infinitely many types

Sophie Hautphenne, EPFL Lausanne and University of Melbourne

Time: 16.40 - 18.00 (the talk is in English)

We present some iterative methods for computing the global and partial extinction probability vectors for branching processes with countably infinitely many types. The probabilistic interpretation of these methods involves truncated branching processes with finite sets of types and modified progeny generating functions. Simple probabilistic arguments and coupling methods are used to prove the convergence of the algorithms. In addition, we discuss extinction criteria for global and partial extinction.
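For the finite-type case that underlies the truncation approach, the classical fixed-point iteration is easy to state; the two-type offspring law below is purely illustrative:

# Sketch: extinction probability of a finite-type branching process as
# the limit of q_{k+1} = f(q_k), where f is the progeny generating
# function, starting from q_0 = 0. Offspring law is illustrative.
import numpy as np

def f(q):
    """Progeny p.g.f. of a 2-type process, evaluated coordinate-wise.
    Type 0: no child w.p. 0.3, one child of each type w.p. 0.7.
    Type 1: no child w.p. 0.2, two type-1 children w.p. 0.8."""
    return np.array([0.3 + 0.7 * q[0] * q[1],
                     0.2 + 0.8 * q[1] ** 2])

q = np.zeros(2)
for _ in range(200):               # monotone iteration to the minimal
    q = f(q)                       # fixed point of f
print("extinction probability vector:", np.round(q, 4))
# Type 1 solves q = 0.2 + 0.8 q^2, giving 0.25; type 0 then gives ~0.3636.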

 S. Hautphenne (PDF, 336 Kb)

September 15, 2015

Random Graphs

Andrei Raigorodskii, Lomonosov Moscow State University

In 1959 P. Erdős and A. Rényi initiated the study of the binomial model of a random graph G(n, p), in which each of the possible edges on n vertices is drawn independently with probability p. Today, the random graphs of Erdős and Rényi are among the most important objects of combinatorics and its applications.

One natural generalization of the Erdős–Rényi model is as follows: given a sequence of graphs H_n, we keep each of the edges of H_n with probability p, independently of the others (no new edges are drawn). Thus appear the random graphs H_{n,p}. In recent years, such models have been studied very intensively.

We will talk about one sequence of graphs, which is important for combinatorial geometry and coding theory. For this sequence of graphs, we will consider the above-described version of the Erdős–Renyi model. We will exhibit various old and new results. In particular, we will concentrate on colorings, cliques and independent sets.
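Both models are easy to sample; the sketch below uses the standard constructions (our illustration):

# Sketch: the binomial random graph G(n, p), and its generalization
# H_{n,p} obtained by keeping each edge of a host graph H_n with
# probability p independently. Illustrative sizes.
import itertools
import random

def gnp(n, p):
    """Erdős–Rényi G(n, p): each of the C(n,2) edges drawn independently."""
    return [e for e in itertools.combinations(range(n), 2)
            if random.random() < p]

def percolate(edges, p):
    """H_{n,p}: keep each edge of a given host graph with probability p."""
    return [e for e in edges if random.random() < p]

# G(n, p) is the special case where the host graph is complete.
host = list(itertools.combinations(range(100), 2))      # K_100
print(len(gnp(100, 0.1)), len(percolate(host, 0.1)))    # both ~495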


May 21, 2015

Knowledge-based Verification and Construction of Distributed and Constrained Systems

Susanne Graf, Grenoble

4.40 p.m. - 6 p.m.

We explore the distribution problem from the knowledge perspective: a process can decide to execute a local action when it has the knowledge to do so. We discuss typical knowledge atoms, useful for expressing local enabling conditions with respect to different notions of correctness, as well as different means for obtaining knowledge and for representing it locally in an efficient manner.

Our goal is to use such a knowledge-based representation of the distribution problem either for deriving distributed implementations automatically from global specifications on which some constraint is enforced — a difficult problem — or for improving the efficiency of existing protocols by exploiting local knowledge. We also argue that such a knowledge-based presentation helps in achieving the necessary correctness proofs.

Beyond Worst Case Analysis of Graph Partitioning Algorithms

Konstantin Makarychev, Microsoft Research

6.30 p.m. - 8 p.m.

Many combinatorial optimization problems are much simpler in practice than in the worst case. One of the challenges in the area of approximation algorithms is to explain this phenomenon and to design algorithms that work well in real life. In this talk, we will first discuss different ways of modelling “real-life” instances. Then, I will present a new semi-random semi-adversarial model for graph partitioning problems, a planted model with permutation-invariant random edges (PIE). This model is much more general than stochastic models considered previously. Finally, I will describe a “constant factor” approximation algorithm for finding the minimum balanced cut in graphs from this model.

May 14, 2015

Computational Aspects of Monotone Dualization

Kazuhisa Makino, Research Institute for Mathematical Sciences, Kyoto University (Japan)

4.40 p.m. - 6 p.m.

Dualization of a monotone Boolean function represented by a conjunctive normal form (CNF) is a problem which, in different disguises, is ubiquitous in many areas including computer science, artificial intelligence, data mining, machine learning, and game theory, to mention some of them. It is also one of the few problems whose precise tractability status (in terms of polynomial-time solvability) is still unknown, having been open for more than 30 years. We briefly survey computational results for this problem, including the remarkable result by M.L. Fredman and L.G. Khachiyan that the problem is solvable in quasi-polynomial time (and thus most likely not coNP-hard), as well as follow-up works.
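On tiny inputs the task is easy to state and solve by brute force; the sketch below (ours, exponential, for fixing the definitions only) dualizes a monotone CNF by enumerating the minimal transversals of its clause hypergraph:

# Sketch: dualize a monotone CNF by brute force. The prime implicants of
# a monotone CNF are exactly the minimal transversals (hitting sets) of
# its clause hypergraph. Illustrative; not a practical algorithm.
from itertools import combinations

cnf = [{1, 2}, {2, 3}, {1, 3}]          # (x1 v x2)(x2 v x3)(x1 v x3)
variables = sorted(set().union(*cnf))

def hits_all(S):
    return all(S & clause for clause in cnf)

transversals = []
for r in range(1, len(variables) + 1):  # increasing size => minimality
    for S in map(set, combinations(variables, r)):
        # keep S only if it hits every clause and no kept subset does
        if hits_all(S) and not any(T <= S for T in transversals):
            transversals.append(S)

print(transversals)   # [{1, 2}, {1, 3}, {2, 3}] -- the dual DNF terms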


April 30, 2015

Min-Plus Polynomials and Mean Payoff Games

Vladimir Podolski, Faculty of Computer Science, HSE, and Steklov Mathematical Institute of RAS

4.40 pm - 6 pm

The talk is given in Russian

The Min-Plus semiring, or tropical semiring, is the set of rationals with two operations: “min-plus addition,” which simply corresponds to the minimum of two arguments, and “min-plus multiplication,” which is the usual addition. Polynomials over the min-plus semiring are defined in a similar way to classical polynomials and are essentially piecewise linear functions of their arguments. Roots of these polynomials are defined to be points where the function is nonsmooth.
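A univariate illustration of these definitions (ours, not from the talk): evaluating a min-plus polynomial and locating its roots as the points where the minimum is attained by at least two terms.

# Sketch: a univariate min-plus polynomial  p(x) = min_i (a_i + i*x)
# is a concave piecewise-linear function; its "roots" are the points
# where the minimizing term changes, i.e. where the min is attained
# by at least two terms. Coefficients are illustrative.

coeffs = {0: 4.0, 1: 1.0, 2: 0.0}       # term i contributes a_i + i*x

def evaluate(x):
    return min(a + i * x for i, a in coeffs.items())

def is_root(x, eps=1e-9):
    values = [a + i * x for i, a in coeffs.items()]
    m = min(values)
    return sum(v <= m + eps for v in values) >= 2

# Here p(x) = min(4, 1 + x, 2x): terms 1 and 2 tie at x = 1, and
# terms 0 and 1 tie at x = 3; both ties realize the minimum.
for x in [0.0, 1.0, 2.0, 3.0, 4.0]:
    print(x, evaluate(x), is_root(x))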

In this talk we consider algorithmic problems associated with min-plus polynomials. More specifically, we will be interested in the solvability of systems of linear min-plus polynomials. This problem turns out to be polynomially equivalent to the problem of determining the winner in mean payoff games, which is known to belong to the intersection of the complexity classes NP and coNP.

The second result that we plan to discuss in the talk is the min-plus algebraic analogue of Hilbert's Nullstellensatz.

The talk is based on joint work with Dmitry Grigoriev.

April 22, 2015

An Informatics View of The World

Dines Bjørner, Technical University of Denmark

4.40 pm - 6 pm

Attention: the colloquium will be held on Wednesday in the Euclid room!

This talk is for beginners in the serious study of computer science. Behind its presentation lies the attitude that software, including programmes, denotes mathematical objects. Through three examples I provide a glimpse into a universe of domains for which software is developed every day but for which, in most instances, there is no formal, i.e., mathematical, understanding. The three examples are: road transport and platooning systems, oil/gas pipeline systems, and stock exchange sell offer/buy bid matching. The objective of this talk is to show you what it means to develop software using mathematics, albeit a new kind of mathematics: not differential equations, integrals, or statistics, but mathematical logic and abstract algebra, so that software can be shown correct, i.e., no bugs, no blue screen, and shown to meet customers’ expectations!

March 5, 2015

Modelling and Analysis of Computation Processes

Rostislav Yavorski, Faculty of Computer Science, HSE

4.40 pm - 6 pm

The talk is given in Russian

Turing machines, Post machines, Minsky machines, Markov algorithms, and Kleene’s recursive functions were invented in the first half of the 20th century in the quest for a suitable formalisation of the notion of algorithm. These mathematical models have been quite successful in the study of decidability and algorithmic complexity, but did not prove very useful for modelling network protocols or operating system components. In the talk we will explain some modern approaches to the modelling of computation, which have actually been used in the industrial development of complex information systems.

February 19, 2015

Gaussian multiplier bootstrap in high dimension with applications to model selection

Vladimir Spokoiny, Kharkevich Institute for Information Transmission Problems (Moscow) and Weierstrass Institute for Applied Analysis and Stochastics (Berlin)

4.40 pm - 6 pm

Since its introduction by Efron in 1979, the bootstrap has become one of the most popular and effective methods for various statistical problems such as confidence estimation, hypothesis testing, model selection, etc. However, validation of a bootstrap procedure is very involved and requires some advanced tools from empirical process theory and the theory of Gaussian processes.

Most of these tools meet serious problems and challenges if the parameter dimension becomes large, much larger than the sample size. Recently Chernozhukov et al. (2013, 2014) offered a new approach to statistical inference in high dimension which includes powerful results for Gaussian processes and random matrices, Gaussian approximation in high dimension, and Gaussian comparison. This talk reviews some of these results and tools and explains how they can be applied to subset estimation and sparse estimation problems.
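The basic procedure can be sketched as follows: a generic Gaussian multiplier bootstrap for the maximum statistic, with illustrative sizes, not the papers' full machinery.

# Sketch: Gaussian multiplier bootstrap for the maximum statistic
# T = max_j |n^{-1/2} sum_i X_{ij}|, with dimension p >> n allowed.
import numpy as np

rng = np.random.default_rng(1)
n, p, B = 100, 1000, 2000
X = rng.standard_normal((n, p))          # centered data, one row per obs.
# (For non-centered data one would first subtract the column means.)

T = np.max(np.abs(X.sum(axis=0)) / np.sqrt(n))

# Multiplier bootstrap: reweight the rows by i.i.d. N(0,1) multipliers
# e_i and recompute the statistic, conditioning on the data.
T_boot = np.empty(B)
for b in range(B):
    e = rng.standard_normal(n)
    T_boot[b] = np.max(np.abs(e @ X) / np.sqrt(n))

crit = np.quantile(T_boot, 0.95)         # bootstrap 95% critical value
print(f"T = {T:.3f}, bootstrap 95% critical value = {crit:.3f}")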

January 29, 2015

Approximate Nearest Neighbor Search: Beyond Locality-Sensitive Hashing

Ilya Razenshteyn, MIT

4.40 pm - 6 pm

In the Approximate Nearest Neighbor problem (ANN), we are given a set P of n points in a d-dimensional space, and the goal is to build a data structure that, given a query point q, reports any point from P that is approximately closest to q.

Locality-Sensitive Hashing (LSH) is by now a standard technique for solving the ANN problem. In my talk I will define LSH and show several constructions of good hash families. Then I will state some limitations of LSH and describe a recent line of research that provides data structures for ANN that are provably (substantially) better than what LSH could possibly give.
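A minimal example of an LSH family (random hyperplanes for cosine similarity; our illustration, not the improved data structures from the talk):

# Sketch: the random-hyperplane LSH family. Close vectors collide in
# more hash bits than far ones, with high probability. Illustrative.
import numpy as np

rng = np.random.default_rng(42)
d, k = 50, 16                      # dimension, number of hash bits

planes = rng.standard_normal((k, d))

def lsh(v):
    """k-bit hash: the sign pattern of v against k random hyperplanes."""
    return tuple((planes @ v > 0).astype(int))

q = rng.standard_normal(d)
near = q + 0.1 * rng.standard_normal(d)    # small perturbation of q
far = rng.standard_normal(d)               # unrelated vector

print(sum(a == b for a, b in zip(lsh(q), lsh(near))), "bits match (near)")
print(sum(a == b for a, b in zip(lsh(q), lsh(far))), "bits match (far)")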

The talk is partially based on a joint work with Alexandr Andoni (Simons Institute, previously Microsoft Research Silicon Valley). See the paper at .


2014

December 30, 2014

Probabilistic graphical models: Factor graphs and more

Federico Wadehn, ETH Zürich

4 pm - 5.30 pm

Probabilistic graphical models, together with dedicated algorithms such as belief propagation and other message passing algorithms, allow a unified approach to a multitude of problems arising in signal processing, coding theory, and statistical inference in general. The talk will briefly cover Bayesian networks and Markov random fields before focussing on factor graphs and related algorithms such as sum-product message passing. Various toy examples will be shown, as well as an input estimation algorithm and a Bayesian inference algorithm expressed in the factor graph description language.
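A tiny worked instance (our addition): sum-product marginalization on the smallest interesting factor graph, checked against brute force.

# Sketch: sum-product message passing on the factor graph of
#   p(x, y) ~ f1(x) * f2(x, y) * f3(y),  x and y binary. Illustrative.
import numpy as np

f1 = np.array([0.4, 0.6])                  # local factor on x
f3 = np.array([0.7, 0.3])                  # local factor on y
f2 = np.array([[0.9, 0.1],                 # pairwise factor f2[x, y]
               [0.2, 0.8]])

# Variable x sends its local factor into f2; factor f2 sums x out and
# passes the resulting message on to variable y.
msg_x_to_f2 = f1
msg_f2_to_y = msg_x_to_f2 @ f2             # sum_x f1(x) f2(x, y)

belief_y = msg_f2_to_y * f3                # combine with y's local factor
belief_y /= belief_y.sum()                 # normalize to a marginal

# Check against brute-force marginalization of the joint distribution.
joint = f1[:, None] * f2 * f3[None, :]
print(belief_y, joint.sum(axis=0) / joint.sum())   # identical vectors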

December 25, 2014

Resource reasoning in program analysis

Max Kanovich, University College London

4.40 pm - 6 pm

The fundamental ideas of the resource logics (linear logic, separation logic) are presented in a semi-formal style. For general resource models on the one hand, and for concrete heap-like models of practical interest on the other, we get into the formalities, including the semantics of the assertion language and the axioms and inference rules. Surprisingly, the assertion language of even purely propositional separation logic turns out to be undecidable, including for concrete heap-like models of practical interest.

December 5, 2014

RaaS and Ginseng: The Resource as a Service Cloud

Assaf Schuster, Technion

4 pm - 5 pm

Cloud providers, such as Amazon EC2, would like to have satisfied clients. Who wouldn't? However, in order to maximize their marginal profit, they have to fit the clients on as few machines as possible.

One way providers can maximize clients' satisfaction while making best use of the resources is by renting precious physical memory to those clients who value it the most. But real-world cloud clients are selfish; they will only tell their providers the truth about how much they value memory when it is in the clients' best interest to do so. How, then, can cloud providers allocate memory efficiently to those (selfish) clients?

We present Ginseng, the first market-driven cloud system that solves this problem, allocating memory efficiently precisely to those selfish cloud clients who value it the most. Ginseng, built using the KVM hypervisor and libvirt, achieves a 6.2x-15.8x improvement (83%-100% of the optimum) in aggregate client satisfaction.

Energy-efficient Cloud Computing

Dzmitry Kliazovich, University of Luxembourg

5 pm - 6 pm

November 27, 2014

Process Mining: Data Science in Action

Wil van der Aalst, Eindhoven University of Technology

4.40 pm - 6 pm

Data science is the profession of the future, because organizations that are unable to use (big) data in a smart way will not survive. It is not sufficient to focus on data storage and data analysis. The data scientist also needs to relate data to process analysis. Process mining bridges the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques such as machine learning and data mining. Process mining seeks the confrontation between event data (i.e., observed behavior) and process models (hand-made or discovered automatically). This technology has become available only recently, but it can be applied to any type of operational processes (organizations and systems). Example applications include: analyzing treatment processes in hospitals, improving customer service processes in a multinational, understanding the browsing behavior of customers using a booking site, analyzing failures of a baggage handling system, and improving the user interface of an X-ray machine. All of these applications have in common that dynamic behavior needs to be related to process models. Hence, Prof. Wil van der Aalst refers to it in his talk as “data science in action.” Process mining provides not only a bridge between data mining and business process management; it also helps to address the classical divide between “business” and “IT.” Evidence-based business process management based on process mining helps to create a common ground for business process improvement and information systems development.

October 9, 2014

On scale invariance, multifractal formalism, and their applications

Geoffrey Gérard Décrouez, Faculty of Computer Science, HSE

4.40 pm - 6 pm

Scale invariance represents a relatively new way to analyse and model real-world data. It finds numerous applications in many diverse fields, from natural phenomena in physics and biology to man-made settings such as signal and image processing. Data exhibiting scale invariance have no well-defined time scale that plays a dominant role; instead, their dynamics can be understood from the relationships existing across a whole range of scales. Multifractal theory is particularly useful for analysing this kind of data. In this talk I will present some basic notions of fractal theory, and then discuss the multifractal formalism.

I will illustrate the theory with examples taken from the literature and my research, including multiplicative cascades on trees and random processes in multifractal time.
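A binary multiplicative cascade, the simplest of the constructions just mentioned, can be simulated in a few lines (our sketch, with illustrative parameters):

# Sketch: a binary multiplicative cascade on [0, 1]. At each level every
# dyadic interval splits in two and its mass is reweighted by a random
# factor; the limit measure is multifractal. Illustrative parameters.
import random

def cascade(levels):
    mass = [1.0]
    for _ in range(levels):
        new = []
        for m in mass:
            w = random.uniform(0.2, 0.8)   # random split of the mass
            new += [m * w, m * (1.0 - w)]
        mass = new
    return mass                            # mass of each dyadic interval

mass = cascade(12)                         # 4096 dyadic intervals
print(sum(mass), max(mass), min(mass))     # total mass 1, very uneven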

September 18, 2014

True concurrency – from C.A. Petri to Telecom and Systems Biology

Stefan Haar, École Normale Supérieure de Cachan

4.40 pm - 6 pm

Some circles, squares and arrows, plus some black dots moving along: that is all it takes to build a Petri net. These nets are a mathematical tool and model for dynamical systems, generally considered to be “at home” in computer science. However, a great deal of the theory of Petri nets, and of the concurrency (describing asynchronous parallel processes) they involve, was developed for and inspired by the understanding of physical processes, building upon principles from chemical reactions and from both relativistic and quantum physics.

It is fair to say that Petri nets are not only intuitive, but also fertile for many fields; in this talk, I will illustrate this in the contexts of physics, engineering, and biology, reflecting in some sense the evolution that the field has taken over the past 50 years.
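The “circles, squares and arrows” translate directly into code; this minimal simulator (ours) fires the enabled transitions of a tiny producer/consumer net:

# Sketch: a minimal Petri net. Places hold tokens (the black dots);
# transitions consume and produce them. Net and marking illustrative.
import random

marking = {"idle": 1, "buffer": 0, "ready": 1}
transitions = {
    "produce": ({"idle": 1}, {"idle": 1, "buffer": 1}),
    "consume": ({"buffer": 1, "ready": 1}, {"ready": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= k for p, k in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, k in pre.items():
        marking[p] -= k
    for p, k in post.items():
        marking[p] += k

for _ in range(10):                  # one interleaved (sequential) run
    choices = [t for t in transitions if enabled(t)]
    if not choices:
        break
    t = random.choice(choices)
    fire(t)
    print(t, marking)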

September 11, 2014

Algorithmic Information Theory and Randomness of Individual Objects

Alexander Shen, Kharkevich Institute for Information Transmission Problems (Moscow) and LIFR (Marseille)

4.40 pm - 6 pm

In the mid-20th century, Shannon introduced the concept of entropy, which can be intuitively described as the “average number of information bits in a single value of a random variable.” However, this concept is not applicable to individual objects, such as the text of a novel or a DNA molecule. Indeed, there is no random variable without an ensemble of similar objects of the same kind.

In the mid-1960s, several people (Kolmogorov, Solomonoff, Levin, Chaitin, …) realized that the amount of information, or complexity, contained in an individual object can be defined as the minimal length of a computer program that generates this object, under some natural assumptions on the programming language. Thus appeared algorithmic information theory, which proved to be connected to various domains such as the philosophical foundations of probability (when can a statistical hypothesis be rejected?), combinatorics (inequalities between cardinalities of sets and their projections), and the theory of computation.
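Kolmogorov complexity itself is uncomputable, but any compressor gives a computable upper bound; a common illustration (ours, not from the talk):

# Sketch: compressed size as an *upper bound* on Kolmogorov complexity.
# A repetitive string has a short description; a random one does not.
import os
import zlib

regular = b"ab" * 5000                  # highly regular: tiny "program"
random_ = os.urandom(10000)             # incompressible with high prob.

for name, s in [("regular", regular), ("random", random_)]:
    print(name, len(s), "->", len(zlib.compress(s, 9)))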

I will present basic definitions and some of the fundamental results of this theory, and also, depending on the audience, a glimpse of current work.