Seminars 2017

 

16.03.2017
International workshop: Formal Concept Analysis for Knowledge Discovery
Official site
Abstract: The International Workshop “Formal Concept Analysis for Knowledge Discovery” was held at the Faculty of Computer Science. The event brought together scientists and data analysis specialists from St. Catharines (Canada), St. Petersburg, Novosibirsk, Tula, Kazan, Perm and other cities. Prof. Ivo Duentsch from Brock University (St. Catharines) gave the keynote talk “Knowledge structures and skill assignments: Structural tools for diagnostic assessment”.
 

06.03.2017
Title: Reactive Systems: A Powerful Paradigm for Modeling and Analysis from Engineering to Biology
Speaker: Thomas A. Henzinger (IST Austria)
Place: Moscow, 3 Kochnovsky Proezd, room 317, 15:10
Abstract: A reactive system is a dynamic system that evolves in time by reacting to external events. Hardware components and software processes are reactive systems that interact with each other and with their physical environment. Computer science has developed powerful models, theories, algorithms, and tools for analyzing and predicting the behavior of reactive systems. These techniques are based on mathematical logic, theory of computation, programming languages, and game theory. They were originally developed to let us build a more dependable computer infrastructure, but their utility transcends computer science. For example, both an aircraft and a living organism are complex reactive systems. Our understanding and the design of such systems can benefit greatly from reactive modeling and analysis techniques such as execution, composition, and abstraction. 
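To make the opening definition concrete, here is a minimal sketch in Python (purely illustrative, not from the talk) of a reactive system as a finite-state machine that evolves only by reacting to external events; the states, events and transition table are hypothetical.

# A toy reactive system: the state evolves only in response to external events.
# States, events and transitions are purely illustrative.
class TrafficLight:
    TRANSITIONS = {
        ("red", "timer"): "green",
        ("green", "timer"): "yellow",
        ("yellow", "timer"): "red",
        ("green", "emergency"): "red",
        ("yellow", "emergency"): "red",
    }

    def __init__(self):
        self.state = "red"

    def react(self, event):
        # Unknown events leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

light = TrafficLight()
for event in ["timer", "timer", "emergency", "timer"]:
    print(event, "->", light.react(event))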


06.03.2017
Title: Vellvm - Verifying the LLVM
Speaker: Steve Zdancewic (University of Pennsylvania)
Place: Moscow, 3 Kochnovsky Proezd, room 317 
Abstract: The Low-Level Virtual Machine (LLVM) compiler provides a modern, industrial-strength SSA-based intermediate representation (IR) along with infrastructure support for many source languages and target platforms. Much of the LLVM compiler is structured as IR to IR translation passes that apply various optimizations and analyses.

In this talk, I will describe the Vellvm project, which seeks to provide a formal framework for developing machine-checkable proofs about LLVM IR programs and translation passes. I'll discuss some of the subtleties of modeling the LLVM IR semantics. I'll also describe some of the proof techniques that we have used for reasoning about LLVM IR transformations and sketch some example applications including verified memory-safety instrumentation and program optimizations.
Vellvm is implemented in the Coq theorem prover and provides facilities for extracting LLVM IR transformation passes and plugging them into the LLVM compiler, thus enabling us to create verified optimization passes for LLVM and evaluate them against their unverified counterparts.
This is joint work with many collaborators at Penn, and Vellvm is part of the NSF Expeditions project: The Science of Deep Specifications.
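As a rough illustration of the SSA-based IR discussed above, the snippet below builds and prints a tiny LLVM IR function. It assumes the llvmlite Python bindings and is only a sketch; it is not part of Vellvm or the talk, which concern the LLVM infrastructure itself and its Coq formalization.

from llvmlite import ir

# Build a module containing a single function: i32 add(i32, i32).
module = ir.Module(name="example")
i32 = ir.IntType(32)
fnty = ir.FunctionType(i32, [i32, i32])
func = ir.Function(module, fnty, name="add")

# One basic block; every value is assigned exactly once (SSA form).
block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
a, b = func.args
result = builder.add(a, b, name="sum")
builder.ret(result)

print(module)  # textual LLVM IR, the form that IR-to-IR passes consume and produce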


21.02.2017
Title: Probably Approximately Correct Computation of the Canonical Basis
Speaker: Daniel Borchmann, Postdoctoral Research Associate, Technische Universität Dresden
Place: Moscow, 3 Kochnovsky Proezd, room 205 
Abstract: To learn knowledge from relational data, extracting functional dependencies is a common approach.  One way to achieve this extraction is to convert the given data into so-called formal contexts and then compute exact implicational bases of them.  A particularly interesting such basis is the so-called canonical basis, which is not only a basis of minimal cardinality, but for which algorithms are also known that perform well in practice.  However, all these algorithms are of high runtime complexity, i.e., they are not output-polynomial, and are thus likely to fail in certain situations.  On the other hand, most data sets stemming from real-world applications are faulty to a certain degree, and an exact representation of their implicational knowledge – as provided by the canonical basis – may not be helpful anyway.  The usual approach of considering association rules instead of implications does not solve this problem satisfactorily, as it still requires computing exact implication bases.
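For readers unfamiliar with the terminology: a formal context is simply an object/attribute incidence relation, and an implication A -> B holds in it if every object having all attributes of A also has all attributes of B. The sketch below (hypothetical toy data, not from the talk) illustrates this in Python.

# A toy formal context: objects mapped to the attributes they have.
context = {
    "duck":    {"can_fly", "lays_eggs", "has_feathers"},
    "ostrich": {"lays_eggs", "has_feathers"},
    "bat":     {"can_fly"},
}

def implication_holds(premise, conclusion, context):
    # A -> B holds if every object containing all of A also contains all of B.
    return all(conclusion <= attrs for attrs in context.values() if premise <= attrs)

print(implication_holds({"has_feathers"}, {"lays_eggs"}, context))  # True
print(implication_holds({"can_fly"}, {"has_feathers"}, context))    # False (the bat)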

This talk investigates an alternative approach of learning approximations of implicational knowledge from data.  For this, we revisit the notion of probably approximately correct implication bases (PAC bases), survey known approaches and results about the feasibility of computing such bases, and discuss first experimental results showing their usefulness.  In particular, we shall show how methods from query learning can be leveraged to obtain an algorithm that computes PAC bases in output-polynomial time. Finally, we shall give an outlook on how attribute exploration, an interactive learning approach based on querying domain experts, can be combined with PAC bases to obtain a probably approximately correct attribute exploration algorithm.
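The "probably approximately correct" idea can be made concrete with a simple sampling test: draw attribute subsets at random and check that the closure computed from a candidate implication set agrees with the closure computed from the data. The sketch below only illustrates this standard (epsilon, delta) reading of PAC and is not the query-learning algorithm of the talk; the toy context and implications are hypothetical.

import math
import random

def close_under_implications(attrs, implications):
    # Smallest superset of attrs closed under all implications (A, B).
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def context_closure(attrs, context, attributes):
    # FCA closure: attributes common to all objects having attrs
    # (the full attribute set if no object has them).
    rows = [row for row in context.values() if attrs <= row]
    return set.intersection(*rows) if rows else set(attributes)

def looks_probably_approximately_correct(implications, context, attributes,
                                         epsilon=0.1, delta=0.05):
    # If the two closures agree on ~ (1/epsilon) * ln(1/delta) random subsets,
    # accept the candidate implication set.
    samples = math.ceil(math.log(1 / delta) / epsilon)
    for _ in range(samples):
        subset = {a for a in attributes if random.random() < 0.5}
        if close_under_implications(subset, implications) != \
           context_closure(subset, context, attributes):
            return False
    return True

attributes = {"can_fly", "lays_eggs", "has_feathers"}
context = {"duck": {"can_fly", "lays_eggs", "has_feathers"},
           "ostrich": {"lays_eggs", "has_feathers"},
           "bat": {"can_fly"}}
implications = [({"lays_eggs"}, {"has_feathers"}),
                ({"has_feathers"}, {"lays_eggs"})]
print(looks_probably_approximately_correct(implications, context, attributes))  # True for this toy data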

15.02.2017 - 22.02.2017
Title: From digital pixels to life
Speaker: Prof. Peter Horvath, Institute for Molecular Medicine Finland (FIMM)
Place: Moscow, 3 Kochnovsky Proezd,  
Abstract: In his course Prof. Peter Horvath focused on high-content screening (HCS), which combines cell biology, automated high-resolution microscopy, informatics and robotics. High-content screening aims to discover small and large molecules (such as drugs and siRNAs) that change the phenotypes of cells in a desired manner. High-content analysis (HCA) refers to the analysis and evaluation of the large data volumes produced in an HCS scenario. Despite recent advances in informatics, HCA still suffers from a lack of solutions to the computational problems that arise and from limited computational capacity. To overcome these, numerous image analysis and machine learning approaches have recently been proposed. The course gave an insight into the most popular methods, including automated microscopy, image processing, and multiparametric analysis of the data. During the course, 10,000–100,000 images were created (virtually, in this case), and participants also developed methods to analyze them using image segmentation and supervised machine learning.
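To give a flavour of the image-analysis step mentioned above, the sketch below segments a synthetic microscopy-like image by Otsu thresholding and counts the labelled objects. It is an illustration only, assuming numpy and scikit-image are available; it is not the course material.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# A synthetic 256x256 frame: noisy background plus three bright blobs
# standing in for stained nuclei.
rng = np.random.default_rng(0)
image = rng.normal(loc=0.1, scale=0.02, size=(256, 256))
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 60), (150, 180), (200, 80)]:
    image += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 15 ** 2))

threshold = threshold_otsu(image)   # global intensity threshold
binary = image > threshold          # foreground mask
labels = label(binary)              # connected-component labelling

print("objects detected:", labels.max())
for region in regionprops(labels):
    print("area:", region.area, "centroid:", region.centroid)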