DeepBayes 2018: more Bayesian methods in deep learning
The second Summer School on Deep Learning and Bayesian Methods (DeepBayes 2018) was held in Moscow from August 27 to September 1, 2018. The school was organized by the HSE Centre of Deep Learning and Bayesian Methods and the Samsung AI Center in Moscow. The lectures were taught by researchers from the two organizing centers, Skoltech, and Lomonosov Moscow State University.
This year the summer school was conducted in English, and participants came from 24 countries, including the UK, India, Canada, the USA, and other European countries. The majority of participants study or work in Russia (51 participants), Germany (11), or the Netherlands (6). All participants passed a rigorous selection process, one requirement of which was a confident knowledge of machine learning and deep learning.
Over six days, participants studied Bayesian methods and learned to combine them with neural networks. This synthesis expands the capabilities of deep learning models and improves the quality of solutions to applied problems, such as running neural networks on mobile devices or building robots that perform complex compound actions.
A distinctive feature of the DeepBayes summer school is that the material taught in one lesson becomes the basis for the next. This structure makes it possible to present the material in depth and to discuss in detail the latest methods that combine the Bayesian and deep learning paradigms.
Lecture by Dr. Max Welling
Invited talks were also incorporated into the sequential schedule. Dr. Max Welling, a professor at the University of Amsterdam and one of the world’s leading experts in Bayesian deep learning, drew connections between methods covered in the preceding lectures. In the second part of his talk, Dr. Welling explained several advanced methods of approximate variational inference. Dr. Maurizio Filippone, assistant professor at EURECOM, gave a lecture on deep Gaussian processes. Alessandro Achille from the University of California gave a talk via video link on the Information Bottleneck.
Invited talks were also given by two DeepMind researchers. Sergey Bartunov gave an overview of Bayesian methods in reinforcement learning. Michael Figurnov explained how to apply the now-popular reparametrization trick to almost any continuous distribution. Videos of all the lectures are openly available.

Michael Figurnov participating in the board games evening
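As a rough illustration (a minimal sketch in Python, not code from the talk), the reparametrization trick in its simplest Gaussian form replaces direct sampling from N(mu, sigma²) with a deterministic transform of parameter-free noise, so that gradients with respect to the distribution's parameters can flow through the sampling step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Instead of sampling z ~ N(mu, sigma^2) directly, draw parameter-free
# noise eps ~ N(0, 1) and set z = mu + sigma * eps. The sample is now
# a deterministic, differentiable function of (mu, sigma).
mu, sigma = 2.0, 0.5
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# The transformed samples follow the target distribution
# (empirical mean and std are close to mu and sigma)...
print(z.mean(), z.std())

# ...and the pathwise derivatives dz/dmu = 1 and dz/dsigma = eps are
# available in closed form, which is what allows low-variance
# stochastic gradients of expectations E_z[f(z)] in variational inference.
grad_mu = np.ones_like(eps)   # dz/dmu
grad_sigma = eps              # dz/dsigma
```

Figurnov's talk concerned extending this idea beyond such easily transformable distributions to almost any continuous distribution.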
Another distinguishing feature of the school is its large number of practical classes, in which participants implement the methods they have learned themselves. This helps them understand how each method works and the limits of its applicability. For example, on the first day, the participants implemented the EM algorithm to identify the teacher who, according to legend, had hidden the board games for the team-building evening. On the last day, the seminar was devoted to implementing the Bayesian compression techniques for neural networks developed in 2017. The materials for the practical exercises are available on GitHub.
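To give a flavour of the kind of exercise involved (a toy sketch only; the school's actual assignments are in its GitHub materials), a minimal EM loop for a two-component 1-D Gaussian mixture alternates between computing responsibilities and re-estimating parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two 1-D Gaussian clusters with true means -2 and 3.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Initial guesses for mixture weights, means, and variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * normal_pdf(x[:, None], mu, var)   # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.sort(mu))  # estimated means, near the true cluster centres
```

The school's version dressed the same algorithm up as a whodunit, with the mixture components standing in for candidate teachers.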
The summer school is not only an opportunity to gain new knowledge but also to find new friends, share experiences, and discuss research ideas. Two informal events were organized to facilitate this: a board games evening and a final reception. The board games evening was held in the middle of the summer school so that participants could take a break from endless formulas and gather strength for the remaining two days. The evening culminated in a spontaneous competition for the most original construction built from a giant 1.5-metre Jenga set.
We’ve done a lot of work to organize this summer school at the highest possible level. This applies both to the conditions for participants and to the quality of the lectures and seminars. The large number of foreign participants, together with the invited speakers, made the atmosphere at the summer school similar in some ways to the ambiance of the world's leading conferences, e.g., ICML.
I want to personally thank Dmitry Vetrov and his staff for organizing my trip so well. My students and postdoc were super excited by the workshop and will recommend it to others for next year. One thing they mentioned that could be changed next year is allowing participants to also present their own work to the other attendees.
As I see it, a very successful curriculum was compiled within the framework of the school. It is perfectly suited to building the foundations of probabilistic thinking and applying them to real machine learning problems. So it's very valuable for the students and young researchers who were lucky enough to attend this year. As for me, I would have been happy to visit such a school some years ago, when I was just starting out with all this.
It was exciting to see so many bright students interested in probabilistic deep learning! A tip for the next generation of attendees: revise the basics of statistics before the school; this should make the lectures easier to follow.
But thanks way more for the great summer school of course! I have a lot of studying to do... But the contents were really great, and the organisation splendid (in particular kudos to how you clearly incorporated the daily feedback, this is quite remarkable). Thanks a lot!
Coming from the field of mathematical statistics, I was not completely aware of the most recent trends in ML research. In this sense, DeepBayes 2018 was a great place to quickly gain knowledge of deep Bayesian inference from the perspective of the machine learning community. The talks were precise, clear, and to the point. On the other hand, I could look at things from a slightly different angle and hence had questions or comments after almost every talk, which in turn inspired me to start working on a new article. Coffee breaks and social events were also just fine. I will definitely recommend the next DeepBayes school to my friends and colleagues. I also wish that future editions of DeepBayes would hold a poster session for the participants, so that they could learn even more from each other's research.