Colloquium

For a researcher in a diverse and rapidly developing field such as computer science, it is important to maintain a broad perspective and to understand what colleagues in related areas are studying. This requires a venue where specialists can meet and present their latest findings to one another in a common language. The Colloquium of HSE's Faculty of Computer Science is such a venue: a faculty-wide academic seminar for faculty members and research staff, graduate and undergraduate students, and anyone else interested in computer science.

Colloquium meetings are held on Tuesdays in the Faculty of Computer Science building at Kochnovsky Proezd, 3, lecture hall 205, 2nd floor.

NB: a somewhat more detailed web page is available in Russian here.

2021-2022

Date: May 24, 16:20
Place: online
Title: Do process models behave identically? Algorithmics and Decidability of Bisimulation Equivalences
Speaker: Irina Lomazova, HSE University

Abstract: The concept of process equivalence can be formalized in many different ways. One of the most important is bisimulation equivalence, which captures the main features of the observed behavior of a process. Two processes are bisimilar if an external observer cannot distinguish them.
In this talk, we give an overview of the algorithmic and decidability aspects of bisimulation equivalences for Petri nets and some other formal models of process control flow, and present some new results on resource bisimulation equivalences for Petri nets.
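As a toy illustration of the finite-state case (decidability for the talk's Petri-net models is far subtler), here is a minimal sketch of bisimulation checking by naive partition refinement; the two example processes below are invented for demonstration:

```python
# Naive partition refinement on a finite labeled transition system (LTS):
# repeatedly split classes of states until all states in a class make the
# same labeled moves into the current classes. A toy sketch only.

def bisimulation_classes(states, transitions):
    """transitions: set of (source, label, target) triples."""
    partition = [set(states)]  # start with all states in one class
    changed = True
    while changed:
        changed = False

        def signature(s):
            # Pairs (label, index of the class the transition leads to).
            return frozenset(
                (label, next(i for i, c in enumerate(partition) if t in c))
                for (src, label, t) in transitions if src == s
            )

        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            new_partition.extend(groups.values())
            changed |= len(groups) > 1
        partition = new_partition
    return partition

# Process p loops on 'a'; process q alternates between two states on 'a'.
states = {"p", "q1", "q2"}
transitions = {("p", "a", "p"), ("q1", "a", "q2"), ("q2", "a", "q1")}
print(bisimulation_classes(states, transitions))
# -> one class containing p, q1, q2: an observer seeing only 'a'-steps
#    cannot distinguish the two processes, so they are bisimilar.
```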

Poster (PDF, 93 KB)


Date: April 19, 16:20
Place: online
Title: Statistics in Tandem Mass Spectrometry Data Analysis
Speaker: Attila Kertesz-Farkas, HSE University

Abstract: In this colloquium talk I will give a basic but concise introduction to the statistics used in database-search-based tandem mass spectrometry data annotation. Then we will discuss some machine learning-based methods, what they can "see" in mass spectrometry data, and their pitfalls.
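One standard ingredient of these statistics, target-decoy false discovery rate (FDR) estimation, fits in a few lines; the sketch below assumes invented scores and labels rather than any real search engine's output:

```python
# Target-decoy FDR estimation: spectra are searched against real (target)
# and shuffled (decoy) peptide databases; decoy matches above a score
# threshold estimate how many target matches at that threshold are false.

def fdr_threshold(psms, fdr_level=0.01):
    """psms: (score, is_decoy) pairs, higher score = better match.
    Returns the lowest score whose running FDR estimate stays <= fdr_level."""
    targets = decoys = 0
    threshold = None
    for score, is_decoy in sorted(psms, key=lambda p: p[0], reverse=True):
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= fdr_level:
            threshold = score
    return threshold

# Toy peptide-spectrum matches: (search-engine score, matched a decoy?).
psms = [(9.1, False), (8.7, False), (8.2, True), (7.9, False), (7.5, True)]
print(fdr_threshold(psms, fdr_level=0.5))  # -> 7.9
```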

Poster (PDF, 369 KB)


Date: March 1, 16:20
Place: online
Title: Power Indices for Attribution of JSM-hypotheses and Formal Concepts
Speaker: Dmitry Ignatov, HSE University

Abstract: Among the family of rule-based classification models, there are classifiers based on conjunctions of binary attributes. For example, the JSM-method of automatic reasoning (named after John Stuart Mill) was formulated as a classification technique in terms of intents of formal concepts as classification hypotheses. These JSM-hypotheses already represent an interpretable model, since the respective conjunctions of attributes can be easily read by decision makers and thus provide plausible reasons for model predictions. However, from the interpretable machine learning (IML) viewpoint, it is advisable to provide decision makers with the importance (or contribution) of individual attributes to the classification of a particular object, which may facilitate explanations by experts in domains with high-cost errors like medicine or finance. To this end, we use the notion of the Shapley value from cooperative game theory, also popular in IML. In addition to the supervised problem statement, we propose the use of Shapley and Banzhaf values for ranking attributes of closed sets, namely intents of formal concepts (or closed itemsets). The introduced indices are related to extensional concept stability and are based on counting generators, especially those that contain a selected attribute. We provide the listeners with theoretical results, basic examples, and attribution of JSM-hypotheses and formal concepts by means of the Shapley value and some other power indices.
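To fix ideas, here is a minimal sketch of the exact Shapley value computation over attribute coalitions; the characteristic function below is a toy stand-in, not the actual game the talk defines via JSM-hypotheses or concept intents:

```python
# Exact Shapley values: an attribute's value is its marginal contribution
# v(S + {a}) - v(S), averaged over all coalitions S with the usual weights.

from itertools import combinations
from math import factorial

def shapley_values(attributes, v):
    """v maps a frozenset of attributes to a payoff (e.g. classification gain)."""
    n = len(attributes)
    result = {}
    for a in attributes:
        others = [b for b in attributes if b != a]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {a}) - v(s))
        result[a] = total
    return result

# Toy game: positive classification requires both a1 and a2; a3 is idle.
def v(coalition):
    return 1.0 if {"a1", "a2"} <= coalition else 0.0

print(shapley_values(["a1", "a2", "a3"], v))
# -> a1 and a2 each get 0.5, a3 gets 0.0; the values sum to v(N) = 1.
```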

Poster


Date: February 1, 16:20
Place: online
Title: Approximation with neural networks of minimal size: exotic regimes and superexpressive activations
Speaker: Dmitry Yarotsky, Skoltech

Abstract: I will discuss some "exotic" regimes arising in theoretical studies of function approximation by neural networks of minimal size. The classical theory predicts specific power laws relating the model complexity to the approximation accuracy for functions of given smoothness, under the assumption of continuous parameter selection. It turns out that these power laws can break down if we use very deep narrow networks and do not impose the said assumption. This effect is observed for networks with common activation functions, e.g. ReLU. Moreover, there exist some "superexpressive" collections of activation functions that theoretically allow one to approximate any continuous function with arbitrary accuracy using a network with a fixed number of neurons, i.e. only by suitably adjusting the weights, without increasing the number of neurons. This result is closely connected to the Kolmogorov(-Arnold) Superposition Theorem. An example of a superexpressive collection is {sin, arcsin}. At the same time, the commonly used activations are not superexpressive.
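As a toy illustration (not the construction behind the theorem), the sketch below refits one fixed two-neuron sin/arcsin network to two different targets by adjusting weights only; the architecture, the targets, and the random-search fitting are all my own choices for demonstration:

```python
# A fixed architecture x -> sin -> arcsin -> linear, refit to different
# target functions without ever adding neurons. Random search keeps the
# sketch dependency-free; it is not meant to reach high accuracy.

import numpy as np

def net(x, w1, b1, w2, b2, w3, b3):
    h1 = np.sin(w1 * x + b1)
    h2 = np.arcsin(np.clip(w2 * h1 + b2, -1.0, 1.0))  # clip to arcsin's domain
    return w3 * h2 + b3

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
for name, target in (("abs(x)", np.abs(x)), ("exp(x)", np.exp(x))):
    errors = [np.max(np.abs(net(x, *p) - target))
              for p in rng.normal(scale=2.0, size=(20000, 6))]
    print(f"{name}: best max error with the same two neurons: {min(errors):.3f}")
```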

Colloquium (PDF, 125 KB)


Date: December 7, 16:20
Place: online
Title: Highly accurate protein structure prediction with AlphaFold
Speaker: Anna Potapenko, DeepMind

Abstract: Predicting a protein’s structure from its primary sequence has been a grand challenge in biology for the past 50 years, holding the promise to bridge the gap between the pace of genomics discovery and resulting structural characterization. In this talk, we will describe work at DeepMind to develop AlphaFold, a new deep learning-based system for structure prediction that achieves high accuracy across a wide range of targets. We demonstrated our system in the 14th biennial Critical Assessment of Protein Structure Prediction (CASP14) across a wide range of difficult targets, where the assessors judged our predictions to be at an accuracy “competitive with experiment” for approximately two-thirds of proteins. The talk will cover both the underlying machine learning ideas and the implications for biological research.

Colloquium (PDF, 832 KB)


Date: November 23, 16:20
Place: online
Title: Software Release Anomaly Detection in a DevOps Environment
Speaker: Manuel Mazzara, Innopolis University

Abstract: In this talk, I present current research on the use of machine learning to support DevOps automation and continuous releases. Decisions can be machine-assisted, but are ultimately human-made.

Colloquium (PDF, 126 KB)


Date: October 26, 16:20
Place: online
Title: Is the next deep learning disruption in the physical sciences?
Speaker: Max Welling, University of Amsterdam

Abstract: A number of fields, most prominently speech, vision and NLP have been disrupted by deep learning technology. A natural question is: "which application areas will follow next?".  My prediction is that the physical sciences will experience an unprecedented acceleration by combining the tools of simulation on HPC clusters with the tools of deep learning to improve and accelerate this process. Together, they form a virtuous cycle where simulations create data that feeds into deep learning models which in turn improves the simulations. In a way, this is like building a self-learning computational microscope for the physical sciences. In this talk I will illustrate this using two recent pieces of work from my lab: molecular simulation and PDE solving. In molecular simulation we try to predict molecular properties or digitally synthesize molecules with prescribed properties. We have built a number of equivariant graph neural networks to achieve this. Partial differential equations (PDEs) are the most used mathematical model in natural sciences to describe physical processes. Intriguingly, we find that PDE solvers can be learned from data using graph neural networks as well, which has the added benefit that we can learn a solver that can generalize across PDEs and different boundary conditions. Moreover, it may open the door to ab initio learning of PDEs directly from data.
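As a toy version of this virtuous cycle, the sketch below generates 1D heat-equation data with a finite-difference simulator and then learns the solver back from the data; the talk's actual models are (equivariant) graph neural networks, whereas a linear three-point stencil is assumed here purely for brevity:

```python
# Simulation -> data -> learned solver, in miniature: simulate the periodic
# 1D heat equation with explicit Euler, then recover the one-step update
# u_i(t+1) ~ a*u_{i-1} + b*u_i + c*u_{i+1} by least squares.

import numpy as np

def simulate(u, nu=0.1, steps=100):
    frames = [u.copy()]
    for _ in range(steps):
        u = u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
        frames.append(u.copy())
    return np.array(frames)

rng = np.random.default_rng(0)
data = simulate(rng.standard_normal(64))          # (steps+1, 64) trajectory

# Regress each next state on the (left, center, right) neighbor values.
X = np.stack([np.roll(data[:-1], 1, axis=1).ravel(),
              data[:-1].ravel(),
              np.roll(data[:-1], -1, axis=1).ravel()], axis=1)
y = data[1:].ravel()
stencil, *_ = np.linalg.lstsq(X, y, rcond=None)
print(stencil)  # ~[0.1, 0.8, 0.1]: the learned model recovers the true stencil
```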

Colloquium (PDF, 736 KB)


Date: September 28, 18:10
Place: online
Title: Positional Embedding in Transformer-based Models
Speaker: Tatiana Likhomanenko, Apple

Abstract: Transformers have been shown to be highly effective on problems involving sequential modeling, such as in machine translation (MT) and natural language processing (NLP). Following its success on these tasks, the Transformer architecture raised immediate interest in other domains: automatic speech recognition (ASR), music generation, object detection, and finally image recognition and video understanding. Two major components of the Transformer are the attention mechanism and the positional encoding. Without the latter, vanilla attention Transformers are invariant with respect to input tokens permutations (making "cat eats fish" and "fish eats cat" identical to the model). In this talk we will discuss different approaches on how to encode positional information, their pros and cons: absolute and relative, fixed and learnable, 1D and multidimensional, additive and multiplicative, continuous and augmented positional embeddings. We will also focus on how well different positional embeddings generalize to unseen positions for both interpolation and extrapolation tasks.
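For concreteness, here is a minimal sketch of one variant the talk compares, the fixed absolute (sinusoidal) encoding from the original Transformer paper; the dimensions below are arbitrary:

```python
# Sinusoidal absolute positional encoding: each position is mapped to
# sines and cosines of geometrically spaced frequencies, then added to
# the token embeddings, breaking the permutation invariance of attention.

import numpy as np

def sinusoidal_positions(num_positions, dim):
    positions = np.arange(num_positions)[:, None]                   # (P, 1)
    freqs = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * freqs                                      # (P, dim/2)
    emb = np.zeros((num_positions, dim))
    emb[:, 0::2] = np.sin(angles)
    emb[:, 1::2] = np.cos(angles)
    return emb

# With these added to token embeddings, "cat eats fish" and "fish eats cat"
# no longer look identical to the model.
pe = sinusoidal_positions(num_positions=512, dim=64)
print(pe.shape)  # (512, 64)
```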

Colloquium (PDF, 452 KB)