
Anton Rodomanov


PhD student, 
Faculty of Computer Science
National Research University Higher School of Economics, Moscow, Russia
anton.rodomanov@gmail.com



CV

  • Here is my CV.

Publications

  • A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
A. Rodomanov, D. Kropotov
Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. [pdf], [supplementary], [poster], [slides], [code]
  • Primal-Dual Method for Searching Equilibrium in Hierarchical Congestion Population Games
P. Dvurechensky, A. Gasnikov, E. Gasnikova, S. Matsievsky, A. Rodomanov, I. Usik
Proceedings of the 9th International Conference on Discrete Optimization and Operations Research and Scientific School (DOOR), 2016. [pdf]
  • A Newton-type Incremental Method with a Superlinear Convergence Rate
A. Rodomanov, D. Kropotov
NIPS Workshop on Optimization for Machine Learning (Optimization@NIPS), 2015. [pdf], [poster]
  • Putting MRFs on a Tensor Train
A. Novikov, A. Rodomanov, A. Osokin, D. Vetrov
Proceedings of the 31st International Conference on Machine Learning (ICML), 2014. [pdf], [supplementary], [poster], [slides], [code]

Talks

  • Incremental Newton Method for Big Sums of Functions
Seminar on Stochastic Analysis in Problems, IUM, Moscow, Russia, October 2016. [slides (in Russian)], [video (in Russian)]
  • A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
International Conference on Machine Learning (ICML), New York, USA, June 2016. [slides], [video]
  • Optimization Methods for Big Sums of Functions
Deep Machine Intelligence Workshop, Skoltech, Moscow, Russia, June 2016. [slides]
  • Incremental Newton Method for Minimizing Big Sums of Functions
HSE off-site seminar on Machine Learning, Voronovo, Russia, May 2016. [slides]
  • Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning
Seminar on Applied Linear Algebra, HSE, Moscow, Russia, March 2016. [slides]
  • Proximal Incremental Newton Method
Seminar on Bayesian Methods in Machine Learning, MSU, Moscow, Russia, February 2016. [slides]
  • Probabilistic Graphical Models: a Tensorial Perspective
International Conference on Matrix Methods in Mathematics and Applications (MMMA), Skoltech, Moscow, Russia, August 2015. [slides]
  • A Fast Incremental Optimization Method with a Superlinear Rate of Convergence
Summer School on Control, Information and Optimization, Solnechnogorsk, Russia, June 2015. [slides]
  • Markov Chains and Spectral Theory
Seminar on Bayesian Methods in Machine Learning, MSU, Moscow, Russia, October 2014. [slides (in Russian)]
  • Low-Rank Representation of MRF Energy by Means of the TT-Format
SIAM Conference on Imaging Science (SIAM-IS), Hong Kong, China, May 2014. [slides]
  • Fast Gradient Method
Seminar on Bayesian Methods in Machine Learning, MSU, Moscow, Russia, April 2014. [slides (in Russian)]
  • TT-Decomposition for Compact Representation of Tensors
Seminar on Bayesian Methods in Machine Learning, MSU, Moscow, Russia, October 2013. [slides (in Russian)]

Posters

  • A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
International Conference on Machine Learning (ICML), New York, USA, June 2016. [poster]
  • A Newton-type Incremental Method with a Superlinear Convergence Rate
NIPS Workshop on Optimization for Machine Learning (Optimization@NIPS), Montreal, Canada, December 2015. [poster]
  • A Fast Incremental Optimization Method with a Superlinear Rate of Convergence
Microsoft Research PhD Summer School, Cambridge, United Kingdom, July 2015. [poster]
  • Putting MRFs on a Tensor Train
International Conference on Machine Learning (ICML), Beijing, China, June 2014. [poster]

Miscellaneous

  • Linear Coupling of Gradient and Mirror Descent: Version for Composite Functions with Adaptive Estimation of the Lipschitz Constant
My notes on a small enhancement of this paper, 2016. [pdf]
  • Development of a Stochastic Optimization Method for Machine Learning Problems with Big Data
My BSc thesis, MSU, 2015. [pdf (in Russian)]
  • Fast Gradient Method for Machine Learning Problems with L1-Regularization
My 3rd year term paper, MSU, 2014. [pdf (in Russian)]
