Mini-Workshop of the Centre of Deep Learning and Bayesian Methods
On Thursday, May 24, you are welcome to attend a mini-workshop on topical issues in machine learning, such as solving inverse problems with deep neural networks, estimating uncertainty in neural network predictions, and quantum machine learning.
Researchers from the University of Zurich, the University of Cambridge, and the University of Innsbruck will present the results of their studies and answer your questions.
Please sign up via the link to take part.
To order a pass to the HSE building, or with any other questions, email the Centre's manager.
Location: Faculty of Computer Science, Kochnovsky proezd, 3, room 622.
Time: 17:00-20:00, May 24
Dr Valery Vishnevskiy, Institute for Biomedical Engineering, ETH Zurich and University of Zurich
Solving Inverse Problems Using Deep Neural Networks and Optimization Loop Unrolling
A mathematical model that describes and predicts the behavior of a physical (or, more generally, abstract) system with known state and parameters defines the direct problem. The associated inverse process of estimating the hidden parameters and state of the system is called the inverse problem. Owing to the wide success of deep learning, many researchers have proposed learning the inverse mapping explicitly as an artificial neural network. In this talk, we instead present a different approach, based on an unrolled representation of numerical methods, which allows learning efficient and generalisable inverse mappings.
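To illustrate what "unrolling" a numerical method means (this is a generic sketch, not code from the talk), consider a linear inverse problem y = Ax with a sparse x, solved by a fixed number of ISTA iterations. In a learned (LISTA-style) variant, the step size and threshold below would become trainable per-iteration parameters; here they are hand-set, and the forward model A and test signal are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the L1 norm: elementwise shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, n_iter=10, step=None, theta=0.1):
    """Estimate x from measurements y ~ A @ x by unrolling n_iter
    ISTA iterations (gradient step on the data term, then shrinkage)."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * theta)
    return x

# Toy example: recover a 3-sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = unrolled_ista(y, A, n_iter=200)
```

Because the iteration count is fixed, the whole reconstruction is a finite computation graph, which is what makes learning its parameters end to end possible.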
Andrey Malinin, PhD student and researcher, Department of Engineering, University of Cambridge
Estimating Uncertainty in Predictions of Neural Networks
Estimating uncertainty in predictions is important for improving the safety and reliability of AI systems. This talk provides an overview of uncertainty in the predictions of DNN classification models. The sources of uncertainty (model uncertainty, noise in the data, and mismatch between the distributions of the training and test datasets) are examined. The advantages and limitations of class-posterior and Bayesian ensemble approaches are discussed in the context of modelling each source of uncertainty. A new framework for modelling uncertainty, called Prior Networks, is proposed in order to model the effect of each source of uncertainty within a theoretically consistent and interpretable probabilistic framework. Prior Networks are compared to class-posterior and Bayesian ensemble approaches on the tasks of identifying out-of-distribution (OOD) samples and detecting misclassifications.
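The Bayesian-ensemble view the abstract mentions admits a standard decomposition: the entropy of the mean prediction (total uncertainty) splits into the expected per-member entropy (data uncertainty) plus the mutual information between prediction and model (model uncertainty). A minimal sketch of that decomposition follows; the ensemble probabilities are made-up illustrative numbers, not results from the talk.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(member_probs):
    """Decompose predictive uncertainty for one input.

    member_probs: array of shape (n_members, n_classes), each row the
    class-posterior from one ensemble member.

    Returns (total, data, model) uncertainty, with total = data + model."""
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)                # entropy of the mean prediction
    data = entropy(member_probs).mean()        # mean per-member entropy
    model = total - data                       # mutual information
    return total, data, model

# Members agree on a confident prediction -> low model uncertainty.
agree = np.array([[0.90, 0.05, 0.05],
                  [0.88, 0.07, 0.05]])
# Members confidently disagree -> high model uncertainty (e.g. an OOD input).
disagree = np.array([[0.90, 0.05, 0.05],
                     [0.05, 0.90, 0.05]])
_, _, mi_agree = uncertainty_decomposition(agree)
_, _, mi_disagree = uncertainty_decomposition(disagree)
```

The disagreement case is exactly the behavior used for OOD detection: each member is confident, yet the ensemble as a whole is uncertain.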
Alexey Melnikov, PhD student and researcher, Institute for Theoretical Physics, University of Innsbruck, and Institute of Physics and Technology, Russian Academy of Sciences
Quantum Machine Learning
As the number of qubits in quantum processors increases, we might soon be able to solve complex problems that are intractable on classical supercomputers. What impact will quantum computing have on machine learning? Quantum machine learning, an emerging field in quantum information science, studies this question, as well as applications of machine learning tools in quantum physics. In my talk, I will consider a general agent-environment interaction picture and explain the advantages one can expect when the agent and/or the environment is treated as a quantum system. I will address computational, model, and sample complexities from the quantum machine learning point of view.
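For readers unfamiliar with the agent-environment picture the abstract starts from, here is a purely classical sketch of that interaction loop; the two-armed bandit environment and epsilon-greedy agent are illustrative assumptions, not material from the talk (where the agent and/or environment would be quantum).

```python
import random

class TwoArmedBandit:
    """Minimal environment: two actions with different reward probabilities."""
    def __init__(self, p=(0.3, 0.7), seed=0):
        self.p = p
        self.rng = random.Random(seed)

    def step(self, action):
        """Return a stochastic reward for the chosen action."""
        return 1.0 if self.rng.random() < self.p[action] else 0.0

class EpsilonGreedyAgent:
    """Minimal agent: estimates action values from observed rewards."""
    def __init__(self, n_actions=2, epsilon=0.1, seed=1):
        self.values = [0.0] * n_actions
        self.counts = [0] * n_actions
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def act(self):
        if self.rng.random() < self.epsilon:        # explore
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)),          # exploit
                   key=lambda a: self.values[a])

    def learn(self, action, reward):
        """Incremental running mean of the reward per action."""
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# The agent-environment loop: act, receive reward, update.
env = TwoArmedBandit()
agent = EpsilonGreedyAgent()
for _ in range(2000):
    a = agent.act()
    r = env.step(a)
    agent.learn(a, r)
```

Sample complexity in this picture is the number of such interaction rounds needed to learn a good policy; the talk's question is how quantum agents or environments change that accounting.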