I have published the lecture notes for my class “Introduction to Online Learning,” which I taught at Boston University during Fall 2019. The class covers the basics of online learning in the adversarial setting, i.e., regret minimization. It also covers so-called parameter-free online algorithms, i.e., algorithms without learning rates.
I tried to present the latest results in the field, keeping the proofs as simple as possible.
I strongly suggest going over the lectures in order: the required mathematical tools are introduced gradually across the lectures, rather than all at once in a single lecture.
I have also compiled these posts into a cleaner and expanded PDF that is periodically updated.
- Introduction to Online Learning
- Online Gradient Descent
- Subgradients and Online-to-Batch Conversion
- More Online-to-Batch Examples and Strong Convexity
- Adaptive Algorithms: L* bounds and AdaGrad
- Lower Bounds for Online Linear Optimization
- Online Mirror Descent I: Bregman version
- Online Mirror Descent II: Regret and Mirror Version
- Online Mirror Descent III: Examples and Learning with Expert Advice
- Follow-The-Regularized-Leader I: Regret Equality
- Follow-The-Regularized-Leader II: Applications
- Follow-The-Regularized-Leader III: More Logarithmic Bounds
- Online Linear Classification
- Parameter-free Online Learning I: Coin-Betting and 1-d OCO
- Parameter-free Online Learning II: Reductions
- Parameter-free Online Learning III: from Coins to Experts
- Optimistic Follow-The-Regularized-Leader
- Multi-Armed Bandit I
- Multi-Armed Bandit II
- Multi-Armed Bandit III: Stochastic Bandits
- Multi-Armed Bandit IV: UCB
- From Online Learning to Statistical Learning
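To give a taste of the material, here is a minimal sketch of projected Online Gradient Descent, the topic of the second lecture. The setup is my own illustrative assumption, not taken from the notes: one-dimensional quadratic losses $\ell_t(x) = (x - z_t)^2$ on the domain $[-1, 1]$, with the standard $\eta_t = D/(G\sqrt{t})$ step size.

```python
import math

def ogd(zs, D=2.0, G=4.0):
    """Projected OGD on the losses (x - z_t)^2 over the domain [-1, 1].

    D is the diameter of the domain and G an upper bound on the
    subgradient norms; both are hypothetical values chosen for this
    illustrative setup. Returns the sequence of iterates x_1, ..., x_T.
    """
    x = 0.0
    xs = []
    for t, z in enumerate(zs, start=1):
        xs.append(x)
        grad = 2.0 * (x - z)          # subgradient of the current loss at x
        eta = D / (G * math.sqrt(t))  # O(1/sqrt(t)) learning rate
        x = x - eta * grad            # gradient step
        x = max(-1.0, min(1.0, x))    # Euclidean projection onto [-1, 1]
    return xs
```

With these step sizes the regret against any fixed comparator in the domain grows only like $O(DG\sqrt{T})$, which is sublinear in $T$, so the average regret vanishes; the lower-bound lecture shows this rate is unimprovable for online linear optimization.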