
Research seminars

The Structural Learning Seminar is the laboratory's main seminar. It brings together researchers, students, and PhD students from HSE University, Skoltech, IITP RAS, and other leading Russian universities and research centers. The seminar covers problems at the intersection of mathematics and computer science: high-dimensional probability and mathematical statistics, machine learning, optimization, numerical methods, and more.

Seminar organizers: Alexey Naumov, Maxim Panov, Vladimir Spokoiny.

Seminar sessions take place on Tuesdays at 17:00 at IITP RAS, room 615 (unless a different venue or time is announced). Seminar news is posted in the laboratory's Telegram channel.
The archive of past sessions is available via the link.

If you would like to give a talk at the seminar, please send a request to anaumov@hse.ru, indicating the topic of the talk, an abstract in English (at most half a page), and acceptable dates for the talk. Speakers should say in advance which format they have chosen: a chalkboard talk or a computer presentation (chalkboard talks are preferred). Talks are 1 hour long (if necessary, a talk can be extended to 1.5-2 hours).

Spring 2019
Date: 29.01.2019 (at CS HSE)
Speaker: Ekaterina Krymova (University of Duisburg-Essen)
Title: On estimation of the noise variance in high dimensional linear regression model
Abstract: We consider the problem of estimating an unknown noise variance in a high-dimensional linear regression model. To estimate the nuisance vector of regression coefficients we use a family of spectral regularisers of the maximum likelihood estimator. The noise variance estimator is based on an adaptive normalisation of the prediction error. We derive an upper bound on the concentration of the proposed method around an ideal estimator (in the case of zero nuisance).

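As a worked sketch of the setting: the model is y = X beta + sigma epsilon with X an n x p design matrix. The ridge estimator below is one illustrative member of a spectral-regularisation family, and the degrees-of-freedom normalisation is the classical choice; the talk's adaptive normalisation may differ.

\[
\hat\beta_\lambda = (X^\top X + \lambda I_p)^{-1} X^\top y, \qquad
H_\lambda = X (X^\top X + \lambda I_p)^{-1} X^\top, \qquad
\hat\sigma^2_\lambda = \frac{\|y - X\hat\beta_\lambda\|^2}{\operatorname{tr}(I_n - H_\lambda)}.
\]
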
Fall 2018
Date: 26.12.2018
Speaker: Quentin Paris (HSE)
Title: Learning functional minimizers on metric spaces
Abstract: In this talk, we will discuss recent developments motivated by statistical problems that arise in optimal transport. In particular, we will discuss new results concerning the estimation of so-called barycenters (or Fréchet means) in the Wasserstein space. Before presenting our results, the talk will review some basic material from geometry and optimal transport. The end of the talk will discuss some open problems.

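For reference, the Wasserstein barycenter of probability measures mu_1, ..., mu_N with weights w_i >= 0, sum_i w_i = 1, is any minimizer

\[
b^\star \in \operatorname*{arg\,min}_{b \in \mathcal P_2(\mathbb R^d)} \sum_{i=1}^N w_i \, W_2^2(b, \mu_i),
\]

where W_2 is the quadratic Wasserstein distance; the general Fréchet mean replaces W_2 with the metric of an arbitrary metric space.
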
Date: 26.12.2018
Speaker: Sergey Samsonov (HSE, Skoltech)
Title: Transportation and concentration of measure inequalities for Markov chains
Abstract: In this talk I am going to survey some concentration of measure inequalities, starting from rather classical ones due to Bakry and Ledoux. Then I will cover some interesting concentration results for Markov chains related to transportation inequalities, following the papers by Djellout, Guillin and Wu (2004) and Joulin and Ollivier (2010), and concentration results for quadratic forms of vectors with dependent entries following Adamczak (2015), with particular applications to Unadjusted Langevin Dynamics.

Date: 13.12.2018
Speaker: Sergey Samsonov (HSE, Skoltech)
Title: Concentration inequalities and their applications
Abstract: In this talk I am going to cover some functional inequalities for dependent sequences of random variables, focusing especially on results for Markov chains based on the work by Djellout, Guillin and Wu. As one of the applications we will consider the Unadjusted Langevin Algorithm, following the paper by Durmus and Moulines, and state some concentration results for it.

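For reference, the Unadjusted Langevin Algorithm mentioned in both talks samples from a target density pi proportional to exp(-U) via the Euler discretization of the Langevin diffusion with step size gamma > 0:

\[
\theta_{k+1} = \theta_k - \gamma \nabla U(\theta_k) + \sqrt{2\gamma}\, \xi_{k+1}, \qquad \xi_{k+1} \sim \mathcal N(0, I_d),
\]

run without a Metropolis correction ("unadjusted"), so the chain's stationary law differs from pi by a discretization bias controlled by gamma.
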
Date: 6.12.2018
Speaker: Maxim Kaledin (HSE, Skoltech)
Title: Dynamic programming in stochastic optimal control problems
Abstract: Stochastic control problems are well known in financial mathematics (option pricing, portfolio management) and in many related areas. Even with a discrete state space and a known transition density, these problems suffer from the curse of dimensionality when solved with deterministic algorithms. Keane & Wolpin (1994) and Rust (1997) were the first to prove that this can be avoided for discrete Markov decision processes with stochastic algorithms. The modern challenge is to figure out what happens with the complexity when the transition density can only be approximated (RL, ML, and finance applications). During my talk I will cover known and new complexity estimates for stochastic control problems as well as several algorithms to solve MDPs.

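A minimal sketch of the dynamic-programming baseline the talk starts from: value iteration for a finite MDP with known transition probabilities (array shapes and names here are illustrative assumptions, not from the talk):

import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    # P: (A, S, S) transition tensor, P[a, s, s'] = probability of s -> s' under action a.
    # R: (A, S) immediate rewards; gamma: discount factor.
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)        # Bellman optimality update: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # optimal values and a greedy policy
        V = V_new
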
Date: 29.11.2018
Speaker: Vladimir Ulyanov (HSE, MSU)
Title: Non-linear expectations: notion, facts, applications
Abstract: We give a short review of recent developments on probability models under uncertainty using the notion of nonlinear expectations and, in particular, sublinear expectations. We also discuss a new law of large numbers (LLN) and central limit theorem (CLT) and provide non-asymptotic results. The classical LLN and CLT have been widely used in probability theory, statistics and data analysis, as well as in many practical situations such as financial pricing and risk management. They provide a strong and convincing way to explain why normal distributions are so widely used in practice. But a serious problem is that, in general, the "i.i.d." condition is difficult to satisfy. In practice, for most real-time processes and data for which the classical trials and samplings become impossible, the uncertainty of probabilities and distributions cannot be neglected. In fact, the abuse of normal distributions in finance and many other industrial or commercial domains has been criticized. The new CLT does not need this strong "i.i.d." assumption. Instead of fixing a probability measure P, one introduces an uncertain subset of probability measures {P_a : a in A} and considers the corresponding sublinear expectation sup_{a in A} E_a[X].

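For reference, Peng's axioms: a functional E on a space of random variables is a sublinear expectation if, for all X and Y,

\[
\text{(i) } X \ge Y \Rightarrow \mathbb E[X] \ge \mathbb E[Y]; \quad
\text{(ii) } \mathbb E[c] = c; \quad
\text{(iii) } \mathbb E[X+Y] \le \mathbb E[X] + \mathbb E[Y]; \quad
\text{(iv) } \mathbb E[\lambda X] = \lambda \mathbb E[X], \ \lambda \ge 0.
\]

The upper expectation sup_{a in A} E_a[X] from the abstract satisfies (i)-(iv).
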
Date: 22.11.2018
Speaker: Nikita Zhivotovskiy (HSE)
Title: Covariance estimation: missing observations and heavy tails
Abstract: In this talk, I will discuss some recent results on covariance estimation together with some work in progress. I will start with the subgaussian case when some of the observations are missing: we observe N i.i.d. vectors and want to estimate the covariance matrix. The problem is that some of the components of each vector are missing, that is, these components are set to zero with some probability. We show how to improve the state-of-the-art results in this scenario. Next, we discuss how to apply dimension-free Bernstein-type inequalities for vectors with heavy tails under mild moment assumptions.

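A sketch of the standard bias correction in this missing-observations model, assuming centered data and entries observed independently with a known probability p (this is the classical correction in the spirit of Lounici (2014), not necessarily the talk's estimator):

import numpy as np

def masked_covariance(Y, p):
    # Y: (N, d) centered sample with unobserved entries already set to zero.
    # p: probability that an entry is observed (independent Bernoulli masking).
    N, d = Y.shape
    S = Y.T @ Y / N                      # second-moment matrix of the zero-imputed data
    # E[S] equals p^2 * Sigma off the diagonal and p * Sigma on the diagonal,
    # so rescaling the two parts separately gives an unbiased estimate of Sigma.
    Sigma_hat = S / p**2
    np.fill_diagonal(Sigma_hat, np.diag(S) / p)
    return Sigma_hat
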
Date: 14.11.2018
Speaker: Eric Moulines (Ecole Polytechnique, HSE)
Title: Low-rank Interaction with Sparse Additive Effects Model for Large Data Frames (NIPS paper)
Abstract: Many applications of machine learning involve the analysis of large data frames, that is, matrices collecting heterogeneous measurements (binary, numerical, counts, etc.) across samples, with missing values. Low-rank models, as studied by Udell et al. (2016), are popular in this framework for tasks such as visualization, clustering and missing value imputation. Yet, available methods with statistical guarantees and efficient optimization do not allow explicit modeling of main additive effects such as row, column, or covariate effects. In this paper, we introduce a low-rank interaction and sparse additive effects (LORIS) model which combines matrix regression on a dictionary and low-rank design to estimate main effects and interactions simultaneously. We provide statistical guarantees in the form of upper bounds on the estimation error of both components. Then, we introduce a mixed coordinate gradient descent (MCGD) method which provably converges sub-linearly to an optimal solution and is computationally efficient for large-scale data sets. We show on simulated and survey data that the method has a clear advantage over current practices.

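In the abstract's terms, the LORIS model decomposes the parameter matrix of the data frame as

\[
\Theta = X(\beta) + L,
\]

where X(beta) is a matrix regression on a dictionary carrying the sparse additive (row, column, covariate) effects and L is the low-rank interaction term. This display is a paraphrase of the abstract, not the paper's exact notation.
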
Date: 14.11.2018
Speaker: Błażej Miasojedow (University of Warsaw)
Title: Statistical inference for Continuous Time Bayesian Networks
Abstract: Continuous Time Bayesian Networks (CTBN) are a class of multivariate Markov Jump Processes (MJP), where the dependence between coordinates is described by a directed graph with possible cycles. CTBNs are a flexible tool for modelling phenomena from different areas such as chemistry, biology, social sciences, etc. In the talk, I will discuss the problems of parameter learning and structure learning for CTBNs, with special attention to computational methods.

Date: 8.11.2018
Speaker: Alexey Naumov (HSE)
Title: Gaussian approximations for maxima of large number of quadratic forms of high-dimensional random vectors
Abstract: Let X_1, …, X_n be i.i.d. random vectors taking values in R^d, d ≥ 1, and let Q_1, …, Q_p, p ≥ 1, be symmetric positive definite matrices. We consider the distribution function of the vector (Q_j S_n, S_n), j = 1, …, p, where S_n = n^{-1/2}(X_1 + … + X_n), and prove a rate of Gaussian approximation with explicit dependence on n, p and d. We also compare this result with the results of Bentkus (2003) and Chernozhukov, Chetverikov, Kato (2016). Applications to change point detection and model selection will be discussed as well. The talk is based on a joint project with F. Goetze, V. Spokoiny and A. Tikhomirov.

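One way to write the object of study (a paraphrase of the abstract, with Z a centered Gaussian vector with the covariance of X_1):

\[
\sup_{x \in \mathbb R^p} \Big| \mathbb P\big( \langle Q_j S_n, S_n \rangle \le x_j,\ j = 1, \dots, p \big) - \mathbb P\big( \langle Q_j Z, Z \rangle \le x_j,\ j = 1, \dots, p \big) \Big|,
\]

and the result bounds this quantity with explicit dependence on n, p and d.
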
Date: 1.11.2018
Speaker: Andzhey Koziuk (WIAS, Berlin)
Title: Exact smoothing of a probability measure
Abstract: In the talk, an exact smoothing of the indicator function is introduced and developed to allow differential calculus to access the internal machinery behind the problem of comparing measures. Specifically, the problem of the accuracy of a non-classical analogue of the Berry-Esseen inequality for random elements in a Hilbert space is addressed with the help of this tool, aided by the classical Lindeberg chained sum construction.

Date: 25.10.2018
Speaker: Nikita Zhivotovskiy (Technion, HSE)
Title: Robust estimation of the mean of a random vector
Abstract: In this talk, we discuss the estimation of the mean of a random vector from an i.i.d. sample of its observations. The latest results show that, using a special procedure, the mean of a random vector for which only the existence of a covariance matrix is known can be estimated as precisely as if the vector had a multidimensional normal distribution with the same covariance structure. The estimation of the covariance matrix with minimal constraints on the underlying distribution will be briefly discussed. The talk is based on the works arXiv:1702.00482 and arXiv:1809.10462.

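The "special procedure" is the estimator of the cited works (arXiv:1702.00482 is Lugosi and Mendelson's sub-Gaussian mean estimator). The sketch below shows only the simpler one-dimensional median-of-means idea that this line of work builds on, not the multivariate estimator itself:

import numpy as np

def median_of_means(x, n_blocks=10):
    # Split the sample into blocks, average within each block,
    # and take the median of the block means. Assuming only a finite
    # variance, this achieves sub-Gaussian deviation bounds in 1-D.
    x = np.random.permutation(np.asarray(x))
    blocks = np.array_split(x, n_blocks)
    return np.median([b.mean() for b in blocks])
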
Date: 18.10.2018
Speaker: Alexey Ustimenko (HSE/Skoltech)
Title: Convergence of Laplacian Spectra from Point Clouds
Abstract: The Laplace-Beltrami operator (LBO) on manifolds and its spectral structure have been widely used in many applications such as manifold learning, spectral clustering, dimensionality reduction and so on. Recently, the Point Integral Method (PIM) was proposed to estimate the LBO by a discrete Laplace operator constructed from a set of sample points. We will discuss the paper "Convergence of Laplacian Spectra from Point Clouds" by Zuoqiang Shi and Jian Sun, and their proof of the convergence of the eigenvalues and eigenvectors obtained by PIM to the eigenvalues and eigenfunctions of the LBO with Neumann boundary conditions.

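Not PIM itself, but a minimal sketch of the generic pipeline it refines: build a heat-kernel graph Laplacian on the point cloud and take its low eigenpairs as approximations of the LBO spectrum (the bandwidth t and all names are illustrative assumptions; the eigenvalues approximate the LBO only up to a bandwidth-dependent scaling):

import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def graph_laplacian_spectrum(X, t=0.1, k=6):
    # X: (n, d) point cloud sampled from the manifold.
    W = np.exp(-cdist(X, X) ** 2 / (4.0 * t))   # heat-kernel edge weights
    L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
    vals, vecs = eigh(L)                        # all eigenpairs, ascending order
    return vals[:k], vecs[:, :k]                # k smallest eigenvalues/eigenvectors
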
Date: 11.10.2018
Speaker: Nikita Puchkin (HSE)
Title: Manifold Learning
Abstract: I will discuss the problem of manifold reconstruction from a noisy point cloud. There are plenty of manifold learning methods (ISOMAP, LLE, etc.), but only a few papers provide a theoretical justification of the proposed algorithms (Genovese et al. (2012), Maggioni et al. (2016), Aamari and Levrard (2017)). An approach based on local polynomial estimation (Aamari and Levrard (2017)) reconstructs distinct parts of the manifold, and the reconstructed manifold is often disconnected. Another approach (Genovese et al. (2012)) tries to estimate a density supported on the manifold. This approach requires a specific distribution of the noise, which is unlikely to hold in practice. A recently proposed method (Osher et al. (2017)), based on an approximation of the Laplace-Beltrami operator, is aimed at a global reconstruction of the manifold and does not have the mentioned drawbacks. Unfortunately, the statistical properties of this method have not been studied so far. In my talk, I will discuss some details concerning a theoretical justification of the procedure.

Date: 4.10.2018
Speaker: Valentina Shumovskaya (HSE, Skoltech)
Title: Towards Hypothesis Testing for Random Graphs with Community Structure
Abstract: The analysis of random graphs and networks has recently become an active area of research. We will discuss the problem of testing between two populations of inhomogeneous random graphs defined on the same set of vertices, where the null hypothesis is that the underlying edge probability matrices coincide. We propose a new approach to random graph testing based on a community detection procedure: we introduce a structural assumption, namely that our graphs have community structure with k communities.

Date: 27.09.2018
Speaker: Alexander Goldenshluger (University of Haifa, HSE)
Title: Nonparametric estimation in non-regular deconvolution problems
Abstract: I will discuss problems of nonparametric estimation in non-regular deconvolution models. In such settings, standard estimation methods based on the Fourier transform are not applicable. I will review the existing literature on the topic, discuss general ideas for constructing optimal estimators, and present some preliminary results. Based on joint ongoing work with D. Belomestny.

Date: 20.09.2018
Speaker: Alexander Timofeev (MIPT)
Title: Density-sensitive semisupervised inference
Abstract: In the talk, we will review the article by Martin Azizyan, Aarti Singh and Larry Wasserman. We will consider semisupervised inference for the regression problem based on a density-sensitive metric and show that the proposed semisupervised method outperforms any supervised one for certain sets. At the end of the talk, we will demonstrate how to calculate the density-sensitive metric.

Date: 13.09.2018
Speaker: Katya Krymova (University of Duisburg-Essen)
Title: On a sparse modification of the projection approximation subspace tracking method
Abstract: In this talk we revisit the well-known constrained projection approximation subspace tracking algorithm (CPAST) and derive non-asymptotic bounds on its error. Furthermore, we introduce a novel sparse modification of CPAST which is able to exploit the sparsity of the underlying covariance structure. We present a non-asymptotic analysis of the proposed algorithm and study its empirical performance on simulated and real data.

Date: 10.09.2018
Speaker: Steve Oudot (Ecole Polytechnique)
Title: Statistics and Learning with topological descriptors
Abstract: In this survey talk I will review some of the recent advances in Topological Data Analysis: defining stable descriptors for data, using these descriptors for learning tasks, and doing statistics on them. Among other topics, I will address finding suitable metrics, defining and computing means and barycenters, deriving confidence intervals, laws of large numbers, central limit theorems, and kernel mean embeddings.