The 35th Conference on Neural Information Processing Systems (NeurIPS 2021) is one of the world's largest conferences on machine learning and neural networks. It takes place on December 6-14, 2021.
This year, NeurIPS has accepted ten papers by researchers of the Faculty of Computer Science:
On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay (Ekaterina Lobacheva, Maxim Kodryan, Nadezhda Chirkova, Andrey Malinin, Dmitry Vetrov)
Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces (Kirill Struminsky, Artyom Gadetsky, Denis Rakitin, D. Karpushkin, Dmitry Vetrov)
Distributed Saddle-Point Problems Under Similarity (Aleksandr Beznosikov, G. Scutari, Alexander Rogozin, Alexander Gasnikov)
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks (D. Kovalev, E. Gasanov, P. Richtárik, Alexander Gasnikov)
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices (Max Ryabinin, Eduard Gorbunov, Vsevolod Plokhotnyuk, G. Pekhimenko)
Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize (A. Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, K. Scaman, H. Wai)
Distributed Deep Learning in Open Collaborations (Michael Diskin, A. Bukhtiyarov, Max Ryabinin, L. Saulnier, Q. Lhoest, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, Maxim Kashirin, Alexander Borzunov, A. Villanova del Moral, D. Mazur, I. Kobelev, Y. Jernite, T. Wolf, G. Pekhimenko)
Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets (Max Ryabinin, Andrey Malinin, M. Gales)
Revisiting Deep Learning Models for Tabular Data (Y. Gorishniy, Ivan Rubachev, V. Khrulkov, Artem Babenko)
Good Classification Measures and How to Find Them (M. Gösgens, A. Zhiyanov, A. Tikhonov, Liudmila Prokhorenkova)
We asked some of the authors to tell us about their articles:
Correction
Since the publication of this article, we have learned that two more papers have been accepted: Overlapping Spaces for Compact Graph Representations (K. Shevkunov, Liudmila Prokhorenkova) and Federated-EM with heterogeneity mitigation and variance reduction (A. Dieuleveut, G. Fort, Eric Moulines, G. Robin).
When training modern neural networks, there are many details to consider, both in the structure of the networks themselves and in their training methods. In our article "On the Periodic Behavior of Neural Network Training with Batch Normalization and Weight Decay" we investigate the unusual behaviour of networks trained with both normalization layers and L2 regularization of the weights. These are two very common techniques used in almost all modern neural networks, so understanding their interaction is important for solving practical problems. The main finding is that, when a neural network is trained long enough, the error function stops decreasing monotonically and starts behaving periodically: periods of convergence are interrupted by sharp breakdowns, after which the learning process begins almost anew. In the article, we investigate in detail the reasons for this behaviour and show in which situations it can occur.
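For readers who want to see the kind of configuration the paper studies, here is a minimal sketch, not the authors' experimental setup: a small network with a BatchNorm layer trained for a long time with SGD and weight decay on synthetic data. The architecture, dataset, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch (not the authors' setup): a small network with BatchNorm
# trained with SGD + weight decay on synthetic data, logging the training
# loss so that long-run periodic behaviour can be inspected.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic classification data (assumption: any toy dataset works here).
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # normalization layer
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Weight decay corresponds to L2 regularization of the weights.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-2)
loss_fn = nn.CrossEntropyLoss()

losses = []
for step in range(20000):          # "long enough" training
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    losses.append(loss.item())

# Plotting `losses` over steps shows whether the loss decreases monotonically
# or exhibits the periodic "convergence / sharp breakdown" pattern the paper
# describes.
```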
In this paper, we propose a practical approach to distributed neural network training in which volunteers from all over the world can participate. The problem has become increasingly relevant recently: training the largest models often requires expensive supercomputers that not everyone has access to. Since such models are sometimes not publicly available, the difficulty of reproducing research results slows down scientific progress in machine learning.
The method proposed in the paper, called Distributed Deep Learning in Open Collaborations (DeDLOC), allows the results of distributed training to be aggregated efficiently even with relatively slow connections and unstable node participation: it is unrealistic to expect every participant's computer to have a gigabit connection and to stay online 24/7. Perhaps one of the most interesting results of the work is a successful experiment in which a representation model for Bengali was trained together with 40 real volunteers. The resulting neural network has language-understanding quality comparable to that of the most powerful cross-lingual models trained on clusters of hundreds of GPUs.
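As a rough illustration only (this is not the actual DeDLOC or hivemind code), the sketch below simulates one averaging round in which a changing set of volunteers with heterogeneous hardware contributes gradients until a target global batch size is reached; every name and number in it is an assumption made for the example.

```python
# Toy simulation of one collaborative averaging round: volunteers with
# different throughput contribute gradients, the participant set drifts
# between passes, and the round closes once a target global batch size
# has been accumulated. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dim = 10                       # size of a toy parameter vector
target_batch = 4096            # number of examples that closes one round

def run_round(n_online: int) -> np.ndarray:
    """Accumulate volunteers' gradients until target_batch examples are processed."""
    total_examples = 0
    grad_sum = np.zeros(dim)
    while total_examples < target_batch:
        for _ in range(n_online):
            n_examples = int(rng.integers(16, 256))   # each device has its own throughput
            local_grad = rng.normal(size=dim)         # stand-in for a real gradient
            grad_sum += n_examples * local_grad       # weight by number of examples
            total_examples += n_examples
        # Volunteers may join or drop out between passes; the round still completes.
        n_online = max(1, n_online + int(rng.integers(-1, 2)))
    return grad_sum / total_examples                  # weighted-average gradient

avg_grad = run_round(n_online=5)
print(avg_grad.shape)   # (10,): one averaged gradient, as if computed on one big batch
```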
Many popular machine learning algorithms, such as stochastic gradient descent in a regression problem, are in fact stochastic approximation algorithms. In such algorithms, we are trying to find the zeros of some function whose values we can only access through random variables that are unbiased estimates of those values. In our paper, we considered the case of linear stochastic approximation (LSA), which approximates the solution of a system of linear equations based on random realizations of the system matrix and its right-hand side.
We have succeeded in obtaining bounds on the error of linear stochastic approximation under weaker assumptions than were known before. We have shown for the first time that under such assumptions only polynomial concentration of the error at the n-th step of LSA is possible, with its order depending on the step size of the algorithm. The examples in our paper show that this result cannot be improved without additional assumptions on the behaviour of the observed random matrices. This means that the estimates produced by the linear stochastic approximation procedure are, generally speaking, heavy-tailed. Similar behaviour is also typical for stochastic gradient descent iterates.
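To make the setting concrete, here is a minimal sketch of the fixed-step-size LSA recursion described above under an assumed toy noise model; the matrices, noise levels, and step size are illustrative choices, not taken from the paper. Repeating the run gives an empirical look at the error distribution whose concentration the paper analyses.

```python
# Minimal sketch of linear stochastic approximation (LSA): we want to solve
# A_bar @ x = b_bar, but at each step we only observe noisy unbiased
# realizations (A_n, b_n) of (A_bar, b_bar). Toy noise model, for illustration.
import numpy as np

rng = np.random.default_rng(0)

d = 5
M = np.eye(d) + 0.1 * rng.standard_normal((d, d))
A_bar = M @ M.T + np.eye(d)                 # well-conditioned mean matrix
b_bar = rng.standard_normal(d)
x_star = np.linalg.solve(A_bar, b_bar)      # target solution (unknown to the algorithm)

def lsa(step: float, n_steps: int = 10_000) -> float:
    """Run fixed-step-size LSA and return the error of the final iterate."""
    x = np.zeros(d)
    for _ in range(n_steps):
        A_n = A_bar + 0.5 * rng.standard_normal((d, d))   # unbiased noisy matrix
        b_n = b_bar + 0.5 * rng.standard_normal(d)        # unbiased noisy right-hand side
        x = x - step * (A_n @ x - b_n)                    # LSA update
    return float(np.linalg.norm(x - x_star))

# Repeating the run gives an empirical picture of the error distribution;
# the paper's point is that with a fixed step size this distribution can
# only be shown to concentrate polynomially.
errors = [lsa(step=0.01) for _ in range(20)]
print(np.mean(errors), np.max(errors))
```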
Among the authors of the NeurIPS 2021 papers are Faculty of Computer Science students Vsevolod Plokhotnyuk, Anton Sinitsin, Dmitry Popov, Dmitry Pyrkin, and Maxim Kashirin.