Anton Rodomanov

Faculty of Computer Science,
National Research University Higher School of Economics, Moscow, Russia
anton.rodomanov@gmail.com
Education
- 2017-2021: PhD in Computer Science, National Research University Higher School of Economics.
- 2015-2017: MSc in Computer Science, National Research University Higher School of Economics.
- 2011-2015: BSc in Computer Science, Lomonosov Moscow State University.
CV
- Here is my CV.
Publications
- A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
A. Rodomanov, D. Kropotov
Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016. [pdf], [supplementary], [poster], [slides], [code]
- Primal-Dual Method for Searching Equilibrium in Hierarchical Congestion Population Games
P. Dvurechensky, A. Gasnikov, E. Gasnikova, S. Matsievsky, A. Rodomanov, I. Usik
Proceedings of the 9th International Conference on Discrete Optimization and Operations Research and Scientific School (DOOR), 2016. [pdf]
- A Newton-type Incremental Method with a Superlinear Convergence Rate
A. Rodomanov, D. Kropotov
- Putting MRFs on a Tensor Train
A. Novikov, A. Rodomanov, A. Osokin, D. Vetrov
Proceedings of the 31st International Conference on Machine Learning (ICML), 2014. [pdf], [supplementary], [poster], [slides], [code]
Talks
- Incremental Newton Method for Big Sums of Functions
- A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
International Conference on Machine Learning (ICML), New York, USA, June 2016. [slides], [video]
- Optimization Methods for Big Sums of Functions
- Incremental Newton Method for Minimizing Big Sums of Functions
- Introduction to the Tensor Train Decomposition and Its Applications in Machine Learning
- Proximal Incremental Newton Method
- Probabilistic Graphical Models: a Tensorial Perspective
- A Fast Incremental Optimization Method with a Superlinear Rate of Convergence
- Markov Chains and Spectral Theory
- Low-Rank Representation of MRF Energy by Means of the TT-Format
- Fast Gradient Method
- TT-Decomposition for Compact Representation of Tensors
Posters
- A Superlinearly-Convergent Proximal Newton-Type Method for the Optimization of Finite Sums
- A Newton-type Incremental Method with a Superlinear Convergence Rate
- A Fast Incremental Optimization Method with a Superlinear Rate of Convergence
- Putting MRFs on a Tensor Train
Miscellaneous
- Linear Coupling of Gradient and Mirror Descent: Version for Composite Functions with Adaptive Estimation of the Lipschitz Constant
- Development of a Stochastic Optimization Method for Machine Learning Problems with Big Data
- Fast Gradient Method for Machine Learning Problems with L1-Regularization