Bayesian Methods in Machine Learning
The course covers Bayesian methods for a range of machine learning problems (classification, regression, clustering, etc.). The Bayesian approach to probability theory makes it possible to incorporate the user's preferences into the construction of a decision rule, and it offers an efficient framework for model selection: in particular, one can perform automatic feature selection, choose the number of clusters, estimate the dimension of a latent subspace, and set regularization coefficients in a principled way. In the Bayesian framework, probability is interpreted as a measure of ignorance rather than of objective randomness. Simple rules for manipulating probabilities, such as the law of total probability and Bayes' rule, allow one to make inferences under uncertainty; in this sense the Bayesian framework can be regarded as a generalization of Boolean logic. The course is taught at HSE, the Yandex School of Data Analysis, and MSU.
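The two rules mentioned above can be shown in a short numeric sketch. All probabilities below are hypothetical, chosen only for illustration:

```python
# Toy Bayesian update for a hypothesis H given observed data D.
# All numbers are made up for illustration.
prior = 0.3        # P(H): prior belief in hypothesis H
like = 0.8         # P(D | H): probability of the data if H holds
like_alt = 0.2     # P(D | not H): probability of the data otherwise

# Law of total probability gives the evidence P(D)
evidence = prior * like + (1 - prior) * like_alt

# Bayes' rule: P(H | D) = P(D | H) * P(H) / P(D)
posterior = like * prior / evidence
print(posterior)  # ~0.632: the data raised belief in H from 0.3
```

Observing data that is four times more likely under H than under its alternative roughly doubles the prior belief in H.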
- Bayesian approach to probability theory. Examples of Bayesian reasoning.
- Bayesian approach in the game "Akinator".
- Model selection problem. Examples. Basic methods for model selection: VC theory, MDL, information criteria.
- Bayesian model selection. Model evidence. Full Bayesian inference.
- Matrix calculations. Normal distribution and its properties.
- Regularized linear regression and the relevance vector regression.
- EM-algorithm. Logistic regression and the relevance vector machine.
- Approximate Bayesian inference: Variational bounds.
- Approximate Bayesian inference: Markov Chain Monte Carlo.
- Approximate Bayesian inference: Expectation Propagation.
- Bayesian PCA.
- Bayesian Gaussian mixture.
- Topic models and latent Dirichlet allocation (LDA).
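As a toy illustration of the model evidence used for Bayesian model selection (one of the topics above), one can compare two models of a coin by their marginal likelihoods. The data (9 heads in 10 flips) and both models are assumptions made for this sketch:

```python
from math import factorial

def evidence_fair(n, k):
    # P(data | fair coin): each particular sequence of n flips
    # has probability 0.5**n regardless of the number of heads k.
    return 0.5 ** n

def evidence_uniform(n, k):
    # P(data | uniform prior on bias theta): integrate the likelihood
    # over theta ~ Uniform(0, 1),
    #   integral_0^1 theta**k * (1 - theta)**(n - k) dtheta
    #     = k! * (n - k)! / (n + 1)!
    return factorial(k) * factorial(n - k) / factorial(n + 1)

n, k = 10, 9  # hypothetical data: 9 heads in 10 flips
bayes_factor = evidence_uniform(n, k) / evidence_fair(n, k)
print(bayes_factor)  # ~9.3: the evidence favors the biased-coin model
```

Note that the evidence automatically penalizes the more flexible model: with balanced data (say 5 heads in 10 flips) the fair-coin model would win, even though the flexible model contains it as a special case.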