This post is part of the lecture notes of my class “Introduction to Online Learning” at Boston University, Fall 2019.
You can find all the lectures I published here.
In the last lecture, we introduced the Explore-Then-Commit (ETC) algorithm that solves the stochastic bandit problem, but requires the knowledge of the gaps. This time we will introduce a parameter-free strategy that achieves the same optimal regret guarantee.
1. Upper Confidence Bound Algorithm
The ETC algorithm has the disadvantage of requiring the knowledge of the gaps to tune the length of the exploration phase. Moreover, it solves the exploration vs. exploitation trade-off in a clunky way. It would be better to have an algorithm that smoothly transitions from one phase to the other in a data-dependent way. So, we now describe an optimal and adaptive strategy called the Upper Confidence Bound (UCB) algorithm. It employs the principle of optimism in the face of uncertainty to select in each round the arm that has the potential to be the best one.
UCB works by keeping an estimate of the expected loss of each arm and also a confidence interval around it at a certain probability. Roughly speaking, denoting by $\hat{\mu}_{t,i}$ the empirical mean of the losses received from arm $i$ and by $S_{t,i}$ the number of times arm $i$ has been pulled in the first $t$ rounds, we have that with probability at least $1-\delta$
$$\mu_i \in \left[\hat{\mu}_{t,i} - \sqrt{\frac{2\ln\frac{2}{\delta}}{S_{t,i}}},\ \hat{\mu}_{t,i} + \sqrt{\frac{2\ln\frac{2}{\delta}}{S_{t,i}}}\right],$$
where the “roughly” comes from the fact that $S_{t,i}$ is a random variable itself. Then, UCB will query the arm with the smallest lower bound, that is, the one that could potentially have the smallest expected loss.
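For completeness, here is a quick derivation of the width of this interval; it is only a sketch, assuming the 1-subgaussian model used in these notes and treating the number of pulls as a fixed number $s$. If the losses of arm $i$ minus their expectation are 1-subgaussian and $\bar{\mu}_{s,i}$ is the average of $s$ i.i.d. samples from arm $i$, then for any $\epsilon > 0$
$$\Pr\left[\left|\bar{\mu}_{s,i} - \mu_i\right| \ge \epsilon\right] \le 2\exp\left(-\frac{s\,\epsilon^2}{2}\right).$$
Setting the right-hand side equal to $\delta$ and solving for $\epsilon$ gives exactly the width $\sqrt{\frac{2\ln\frac{2}{\delta}}{s}}$ used above.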
Remark 1. The name Upper Confidence Bound comes from the fact that traditionally stochastic bandits are defined over rewards, rather than losses. So, in our case we actually use the lower confidence bound in the algorithm. However, to avoid confusion with the literature, we still call it Upper Confidence Bound algorithm.
The key points in the proof are how to choose the right confidence level and how to get around the dependency issues, due to the fact that the number of times an arm is pulled is itself a random variable.
The algorithm is summarized in Algorithm 1 and we can prove the following regret bound.
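As a complement to Algorithm 1, here is a minimal Python sketch of the strategy just described: each arm is pulled once, and then the algorithm pulls the arm minimizing the lower confidence bound $\hat{\mu}_{t-1,i} - \sqrt{\frac{2\alpha\ln t}{S_{t-1,i}}}$ with $\alpha>2$, which corresponds to a per-round failure probability of roughly $t^{-\alpha}$. The function `pull_arm` is a placeholder for the environment generating the stochastic losses.

```python
import math


def ucb(pull_arm, n_arms, T, alpha=3.0):
    """Minimal UCB on losses: returns the pull counts and the empirical means.

    pull_arm(i) is a placeholder returning a stochastic loss for arm i;
    alpha > 2 is the exploration parameter used in the analysis below.
    """
    counts = [0] * n_arms   # S_{t,i}: number of times each arm has been pulled
    means = [0.0] * n_arms  # hat{mu}_{t,i}: empirical mean of the losses of each arm

    for t in range(1, T + 1):
        if t <= n_arms:
            # Pull each arm once, so that every confidence interval is defined.
            arm = t - 1
        else:
            # Lower confidence bound: estimate minus confidence width.
            lcb = [means[i] - math.sqrt(2 * alpha * math.log(t) / counts[i])
                   for i in range(n_arms)]
            arm = min(range(n_arms), key=lambda i: lcb[i])

        loss = pull_arm(arm)
        counts[arm] += 1
        means[arm] += (loss - means[arm]) / counts[arm]  # incremental average

    return counts, means
```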
Theorem 1. Assume that the losses of the arms minus their expectations are 1-subgaussian and let $\alpha > 2$. Then, UCB guarantees a regret of
$$\text{Regret}_T \le \sum_{i : \Delta_i > 0} \left(\frac{8\alpha \ln T}{\Delta_i} + \left(3 + \frac{2}{\alpha-2}\right)\Delta_i\right).$$
Proof: We analyze one arm at a time. Also, without loss of generality, assume that the optimal arm is the first one. For any arm $i$ with $\Delta_i > 0$, we want to prove that
$$\mathbb{E}[S_{T,i}] \le \frac{8\alpha \ln T}{\Delta_i^2} + 3 + \frac{2}{\alpha-2}.$$
The proof is based on the fact that once I have sampled an arm enough times, the probability of selecting it when it is suboptimal is small.
Let $\bar{t}$ be the biggest time index such that $S_{\bar{t}-1,i} \le \frac{8\alpha \ln \bar{t}}{\Delta_i^2}$. If $\bar{t} = T$, then the statement above is true. Hence, we can safely assume $\bar{t} < T$. Now, for $t$ bigger than $\bar{t}$ we have
$$S_{t-1,i} > \frac{8\alpha \ln t}{\Delta_i^2}, \quad \text{that is} \quad \sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} < \frac{\Delta_i}{2}.$$
Consider a time $t$ and the arm $i$ such that $t > \bar{t}$ and $A_t = i$; then, we claim that at least one of the two following inequalities must be true:
$$\hat{\mu}_{t-1,1} - \sqrt{\frac{2\alpha \ln t}{S_{t-1,1}}} > \mu_1,$$
$$\hat{\mu}_{t-1,i} + \sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} < \mu_i.$$
If the first one is true, the confidence interval around our estimate of the expectation of the optimal arm does not contain $\mu_1$. On the other hand, if the second one is true, the confidence interval around our estimate of the expectation of arm $i$ does not contain $\mu_i$. So, we claim that if $t > \bar{t}$ and we selected the suboptimal arm $i$, then at least one of these two bad events happened.
Let’s prove the claim: if both the inequalities above are false and $t > \bar{t}$, so that $\sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} < \frac{\Delta_i}{2}$, we have
$$\hat{\mu}_{t-1,i} - \sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} \ge \mu_i - 2\sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} > \mu_i - \Delta_i = \mu_1 \ge \hat{\mu}_{t-1,1} - \sqrt{\frac{2\alpha \ln t}{S_{t-1,1}}},$$
that, by the selection strategy of the algorithm, would imply $A_t \ne i$.
Note that, on any realization of the losses, $S_{T,i} = S_{\bar{t},i} + \sum_{t=\bar{t}+1}^{T} \boldsymbol{1}[A_t = i]$ and $S_{\bar{t},i} \le S_{\bar{t}-1,i} + 1 \le \frac{8\alpha \ln T}{\Delta_i^2} + 1$. Hence, using the claim and extending the sum over the bad events to all the rounds, we have
$$\mathbb{E}[S_{T,i}] \le \frac{8\alpha \ln T}{\Delta_i^2} + 1 + \sum_{t=1}^{T}\left(\Pr\left[\hat{\mu}_{t-1,1} - \sqrt{\frac{2\alpha \ln t}{S_{t-1,1}}} > \mu_1\right] + \Pr\left[\hat{\mu}_{t-1,i} + \sqrt{\frac{2\alpha \ln t}{S_{t-1,i}}} < \mu_i\right]\right).$$
Now, we upper bound the probabilities in the sum. Given that the losses on the arms are i.i.d., the estimate $\hat{\mu}_{t-1,1}$ coincides with the average of the first $S_{t-1,1}$ losses observed from arm 1; hence, using a union bound over the possible values of $S_{t-1,1}$ and the 1-subgaussian assumption, we have
$$\Pr\left[\hat{\mu}_{t-1,1} - \sqrt{\frac{2\alpha \ln t}{S_{t-1,1}}} > \mu_1\right] \le \sum_{s=1}^{t-1} \exp\left(-\frac{s}{2}\cdot\frac{2\alpha \ln t}{s}\right) = (t-1)\, t^{-\alpha} \le t^{1-\alpha}.$$
Given that the same bound holds for the probability of the other bad event, we have
$$\mathbb{E}[S_{T,i}] \le \frac{8\alpha \ln T}{\Delta_i^2} + 1 + 2\sum_{t=1}^{T} t^{1-\alpha} \le \frac{8\alpha \ln T}{\Delta_i^2} + 1 + 2\left(1 + \frac{1}{\alpha-2}\right) = \frac{8\alpha \ln T}{\Delta_i^2} + 3 + \frac{2}{\alpha-2}.$$
Using the decomposition of the regret we proved last time, $\text{Regret}_T = \sum_{i:\Delta_i>0} \Delta_i\, \mathbb{E}[S_{T,i}]$, we have the stated bound.
It is instructive to observe an actual run of the algorithm. I have considered 5 arms and Gaussian losses. In the left plot of the figure below, I have plotted how the estimates and confidence intervals of UCB vary over time (in blue), compared to the actual true means (in black). On the right side, you can see the number of times each arm was pulled by the algorithm.

It is interesting to note that the logarithmic factor in the confidence term makes the confidence intervals of the arms that are not pulled grow over time. In turn, this assures that the algorithm does not miss the optimal arm, even if at some point the estimates were off. Also, the algorithm will keep pulling the two arms whose means are close together, to be sure about which one of the two is the best.
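A run similar to the one in the figure can be reproduced with the `ucb` sketch above; the means of the 5 Gaussian arms below are made-up values chosen only for illustration and are not the ones used in the actual plots.

```python
import random

random.seed(0)
mu = [0.20, 0.25, 0.50, 0.70, 0.90]  # hypothetical expected losses, two arms close together
T = 10000

counts, means = ucb(lambda i: random.gauss(mu[i], 1.0), n_arms=len(mu), T=T)

# The clearly suboptimal arms should be pulled only O(ln T / Delta_i^2) times,
# while most pulls go to the best arm and to its close competitor.
for i in range(len(mu)):
    print(f"arm {i}: true mean {mu[i]:.2f}, estimate {means[i]:.2f}, pulls {counts[i]}")
```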
The bound above can become meaningless if the gaps are too small. So, here we prove another bound that does not depend on the inverse of the gaps.
Theorem 2. Assume that the losses of the arms minus their expectations are 1-subgaussian and let $\alpha > 2$. Then, UCB guarantees a regret of
$$\text{Regret}_T \le 4\sqrt{2\alpha\, d\, T \ln T} + \left(3 + \frac{2}{\alpha-2}\right)\sum_{i:\Delta_i>0} \Delta_i,$$
where $d$ is the number of arms.
Proof: Let $\Delta > 0$ be some value to be tuned subsequently and recall from the proof of Theorem 1 that for each suboptimal arm $i$ we can bound
$$\mathbb{E}[S_{T,i}] \le \frac{8\alpha \ln T}{\Delta_i^2} + 3 + \frac{2}{\alpha-2}.$$
Hence, dividing the arms into the ones with gap smaller than $\Delta$ and the ones with gap at least $\Delta$, and using the regret decomposition we proved last time, we have
$$\text{Regret}_T = \sum_{i:\Delta_i>0} \Delta_i\, \mathbb{E}[S_{T,i}] \le \Delta \sum_{i: 0 < \Delta_i < \Delta} \mathbb{E}[S_{T,i}] + \sum_{i: \Delta_i \ge \Delta} \left(\frac{8\alpha \ln T}{\Delta_i} + \left(3+\frac{2}{\alpha-2}\right)\Delta_i\right) \le \Delta T + \frac{8\alpha\, d \ln T}{\Delta} + \left(3+\frac{2}{\alpha-2}\right)\sum_{i:\Delta_i>0}\Delta_i,$$
where we used the fact that the total number of pulls is at most $T$ and that there are at most $d$ arms with $\Delta_i \ge \Delta$. Choosing $\Delta = \sqrt{\frac{8\alpha\, d \ln T}{T}}$, we have the stated bound.
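To get a feeling for when the gap-free bound is preferable, here is a small numerical comparison of the leading terms of the two bounds above; the values of $\alpha$, $d$, $T$, and of the gaps are arbitrary choices, used only for illustration.

```python
import math

alpha, d, T = 3.0, 5, 100000

# Leading term of Theorem 2, independent of the gaps.
gap_free = 4 * math.sqrt(2 * alpha * d * T * math.log(T))

for delta in [0.5, 0.1, 0.01, 0.001]:
    # Leading term of Theorem 1 when all d-1 suboptimal arms have gap delta.
    gap_dependent = (d - 1) * 8 * alpha * math.log(T) / delta
    print(f"gap {delta}: gap-dependent ~ {gap_dependent:.0f}, gap-free ~ {gap_free:.0f}")
```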
Remark 2. Note that while the UCB algorithm is considered parameter-free, we still have to know the subgaussian constant of the losses of the arms. While this can be easily upper bounded for stochastic arms with bounded support (by Hoeffding's lemma, losses taking values in an interval of length 1 are subgaussian with variance factor at most 1/4), it is unclear how to do it without any prior knowledge on the distribution of the arms.
It is possible to prove that the UCB algorithm is asymptotically optimal, in the sense of the following Theorem.
Theorem 3 (Bubeck, S. and Cesa-Bianchi, N., 2012, Theorem 2.2). Consider a strategy that satisfies $\mathbb{E}[S_{T,i}] = o(T^a)$ for any set of Bernoulli reward distributions, any arm $i$ with $\Delta_i > 0$, and any $a > 0$. Then, for any set of Bernoulli reward distributions, the following holds:
$$\liminf_{T\rightarrow\infty} \frac{\text{Regret}_T}{\ln T} \ge \sum_{i:\Delta_i>0} \frac{\Delta_i}{\text{kl}(\mu_i,\mu_1)},$$
where $\text{kl}(p,q) := p\ln\frac{p}{q} + (1-p)\ln\frac{1-p}{1-q}$ denotes the KL divergence between two Bernoulli distributions with parameters $p$ and $q$.
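To compare this lower bound with the upper bound of Theorem 1, one can use the standard upper bound on the Bernoulli KL divergence by the chi-square divergence: whenever $\mu_1 \in (0,1)$,
$$\text{kl}(\mu_i, \mu_1) \le \frac{(\mu_i - \mu_1)^2}{\mu_1(1-\mu_1)} = \frac{\Delta_i^2}{\mu_1(1-\mu_1)}.$$
Hence, the right-hand side of the lower bound is at least $\sum_{i:\Delta_i>0} \frac{\mu_1(1-\mu_1)}{\Delta_i}$, so any strategy in the class considered by the theorem must pay a regret of order $\sum_{i:\Delta_i>0}\frac{\ln T}{\Delta_i}$, roughly matching the dependence on $T$ and on the gaps in Theorem 1 up to constant factors.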
2. History Bits
The use of confidence bounds and the idea of optimism first appeared in the work by (T. L. Lai and H. Robbins, 1985). The first version of UCB is by (T. L. Lai, 1987). The version of UCB I presented is by (P. Auer and N. Cesa-Bianchi and P. Fischer, 2002) under the name UCB1. Note that, rather than considering 1-subgaussian environments, (P. Auer and N. Cesa-Bianchi and P. Fischer, 2002) considers bandits where the rewards are confined to the $[0,1]$ interval. The proof of Theorem 1 is a minor variation of the one of Theorem 2.1 in (Bubeck, S. and Cesa-Bianchi, N., 2012), which also popularized the subgaussian setup. Theorem 2 is from (Bubeck, S. and Cesa-Bianchi, N., 2012).
3. Exercises
Exercise 1. Prove a similar regret bound to the one in Theorem 2 for an optimally tuned Explore-Then-Commit algorithm.
EDIT: Fixed a small bug in the proof of UCB found by Daniel Hsu