
This course provides an introduction to machine learning theory, including its mathematical foundations. It covers the following areas:
- Is statistical or machine learning possible? (outlook: Vapnik–Chervonenkis theory, VC dimension and its limits)
- Statistical/machine learning: supervised, unsupervised, and reinforcement learning; classification and regression
- Probabilistic and non-probabilistic methods
- Different models:
  - Linear models
  - Kernel methods
  - Support vector machines
  - Bayesian methods
  - Gaussian processes
  - MCMC and particle filters
  - Neural networks
  - Genetic algorithms
- Model selection
- Overfitting and regularization
- Bias-Variance tradeoff, error and noise
- Training, testing, validation
- Curse of dimensionality
- Vapnik–Chervonenkis theory, Hoeffding's lemma, VC dimension and the VC inequality
- Structural risk minimization (SRM), Occam's razor
- Overview of selected application fields
- Consequences of statistical learning theory
- Philosophical implications (are data more important than theories? where are the limits of statistical learning theory?)
- Ethical implications for society and principal problems of algorithmic decision-making (information privacy, individual freedom, checks and balances between citizens, corporations, states, etc.)
- Lectures are accompanied by practical exercises on the theories and concepts covered, supported by the use of appropriate software
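As a flavor of the generalization bounds the theory topics above build toward (this illustration is not part of the original course description), Hoeffding's inequality bounds how far an in-sample frequency $\nu$ can stray from its true mean $\mu$ over $N$ independent samples, and the VC inequality extends this to a whole hypothesis set $\mathcal{H}$ via its growth function $m_{\mathcal{H}}$:

```latex
% Hoeffding's inequality for a single fixed hypothesis:
\mathbb{P}\left[\,|\nu - \mu| > \varepsilon\,\right] \le 2\, e^{-2\varepsilon^2 N}

% VC inequality for the hypothesis g selected from a hypothesis set H,
% with in-sample error E_in, out-of-sample error E_out,
% and growth function m_H (bounded polynomially when the VC dimension is finite):
\mathbb{P}\left[\,|E_{\mathrm{in}}(g) - E_{\mathrm{out}}(g)| > \varepsilon\,\right]
  \le 4\, m_{\mathcal{H}}(2N)\, e^{-\frac{1}{8}\varepsilon^2 N}
```

A finite VC dimension makes $m_{\mathcal{H}}(2N)$ polynomial in $N$, so the exponential term dominates and learning is possible; this is the sense in which the course's opening question has a positive answer.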
- Lecturer: Anahita Farhang Ghahfarokhi
- Lecturer: Jörg Schäfer