Paper on variational dropout sparsification of DNNs accepted to ICML'17

The paper "Variational Dropout Sparsifies Deep Neural Networks" has been accepted to the International Conference on Machine Learning (ICML) 2017. It is joint work of the laboratory with Yandex, Skoltech, and MIPT. The authors are the laboratory's research assistants Dmitry Molchanov and Arsenii Ashukha and its head Dmitry Vetrov. The research achieves a state-of-the-art result in deep neural network sparsification using the Bayesian deep learning framework.
Neural networks achieve high-quality results in many data analysis tasks, but such models often require a lot of memory and run slowly. Sparsification and acceleration of neural networks is therefore an important and fast-growing area of research: progress here makes it possible to run neural networks on devices with limited computational resources, especially smartphones. The paper accepted to ICML'17 describes a theoretically grounded model based on the Bayesian approach that compresses neural networks by a factor of tens to hundreds. Both Russian and foreign companies are interested in such results.
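To give a rough intuition of the sparsification step, here is a minimal sketch (not the authors' code) of how a trained network can be pruned once each weight has its own learned dropout rate alpha = sigma^2 / w^2, as in variational dropout: weights whose log-dropout-rate grows large carry almost no information and are zeroed out. The function name `sparsify`, the threshold value 3.0, and the random arrays standing in for trained parameters are illustrative assumptions; the variational training procedure that produces `log_sigma2` is not shown.

```python
import numpy as np

def sparsify(weights: np.ndarray, log_sigma2: np.ndarray, threshold: float = 3.0):
    """Zero out weights whose learned dropout rate alpha = sigma^2 / w^2 is large.

    A small constant is added inside the log to avoid division by zero for
    weights that are already exactly zero.
    """
    log_alpha = log_sigma2 - np.log(np.square(weights) + 1e-8)
    keep_mask = log_alpha < threshold          # small alpha -> weight is informative
    sparsity = 1.0 - keep_mask.mean()          # fraction of weights removed
    return weights * keep_mask, sparsity

# Hypothetical usage with random values standing in for trained parameters:
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128))
log_s2 = rng.normal(loc=-1.0, size=(256, 128))
w_sparse, sparsity = sparsify(w, log_s2)
print(f"fraction of weights removed: {sparsity:.2%}")
```

In the actual method the dropout rates are learned jointly with the weights by variational inference, so most of them are pushed to large values and the resulting masks remove the vast majority of parameters without a noticeable loss in accuracy.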