1 Introduction

The theoretical framework of the multi-armed bandit problem formalizes the fundamental exploration/exploitation dilemma that appears in decision-making problems under partial information. At a high level, a set of K arms is available to a player. At each turn, she has to choose one arm and receives a reward corresponding to the played arm, without knowing what reward she would have received had she played another arm. The player faces the dilemma of exploring, that is, playing an arm whose mean reward is loosely estimated in order to build a better estimate, or exploiting, that is, playing a seemingly best arm based on the current mean estimates in order to maximize her cumulative reward. The accuracy of the player's policy at time horizon T is typically measured in terms of sample complexity or of regret. The sample complexity is the number of plays required to find an approximation of the best arm with high probability; in that case, the player can stop playing after identifying this arm. The regret is the difference between the cumulative reward of the player and the cumulative reward that would have been acquired by a policy assumed to be optimal.

The stochastic multi-armed bandit problem assumes the rewards to be generated independently from a stochastic distribution associated with each arm. Stochastic algorithms usually assume these distributions to be constant over time, as with Thompson Sampling (TS) [17], UCB [2] or Successive Elimination (SE) [6]. Under this assumption of stationarity, TS and UCB achieve optimal upper bounds on the cumulative regret with logarithmic dependencies on T. The SE algorithm achieves a near-optimal sample complexity.

In the adversarial multi-armed bandit problem, rewards are chosen by an adversary. This formulation can model any form of non-stationarity. The EXP3 algorithm [3, 14] achieves an optimal regret of \(O(\sqrt{T})\) against an oblivious opponent that chooses rewards before the beginning of the game, with respect to the best policy that pulls the same arm over the whole game. This weakness is partially overcome by EXP3.S [3], a variant of EXP3 that forgets the past by adding at each time-step a proportion of the mean gain, and achieves a controlled regret with respect to policies that allow arm switches during the run.

The switching bandit problem introduces non-stationarity within the stochastic bandit problem by allowing means to change at some time-steps. As mean rewards stay stationary between those changes, this setting is also qualified as piecewise-stationary. Discounted UCB [13] and sliding-window UCB [8] are adaptations of UCB to the switching bandit problem and achieve a regret bound of \(O(\sqrt{M T \log T})\), where \(M-1\) is the number of distribution changes. It is also worth citing Meta-Eve [10], which associates UCB with a mean change detector, resetting the algorithm when a change is detected. While no analysis is provided, it has demonstrated strong empirical performance.

Stochastic and adversarial Several variants combining stochastic and adversarial rewards have been proposed by Seldin and Slivkins [15] or Bubeck and Slivkins [5]. For instance, in the setting with contaminated rewards, rewards are mainly drawn from stationary distributions except for a minority of mean rewards chosen in advance by an adversary. In order to guarantee that their proposed algorithm EXP3++ [15] achieves logarithmic guarantees, the adversary is constrained in the sense that it cannot lower the gap between arms by more than a factor of 1/2. They also proposed another variant called adversarial with gap [15], which assumes the existence of a round after which one arm remains the best. These works are motivated by the desire to create generic algorithms able to perform bandit tasks with various reward types: stationary, adversarial or mainly stationary. However, despite achieving good performance on a wide range of problems, each one needs a specific parameterization (i.e., an instance of EXP3++ parameterized for stationary rewards may not perform well if rewards are chosen by an adversary).

Our contribution We consider a generalization of the stationary stochastic, piecewise-stationary and adversarial bandit problems. In this formulation, rewards are drawn from stochastic distributions of arbitrary means defined before the beginning of the game. Our first contribution is for the unique best arm setting. We introduce a deceptively simple variant of the Successive Elimination (SE) algorithm, called Successive Elimination with Randomized Round-Robin (SER3), and we show that the seemingly minor modification—a randomized round-robin procedure—leads to a dramatic improvement of the performance over the original SE algorithm. We identify a notion of gap that generalizes the gap from stochastic bandits to the non-stationary case and derive gap-dependent (also known as problem-dependent) sample complexity and regret bounds, instead of the more classical and less informative problem-free bounds. We show for instance in Theorem 1 and Corollary 1 that SER3 achieves a non-trivial problem-dependent sample complexity scaling with \({\varDelta }^{-2}\) and a cumulative regret in \(O(K\log (TK/{\varDelta })/{\varDelta })\) after T steps, in situations where SE may even suffer from a linear regret, as supported by numerical experiments (see Sect. 5). This result positions, under some assumptions, SER3 as an alternative to EXP3 when the rewards are non-stationary.

Our second contribution is to manage best arm switches during the game. First, we extend the definition of the sample complexity in order to analyze best arm identification algorithms when the best arm switches during the game. SER4 takes advantage of the low regret of SER3 by resetting the reward estimators randomly during the game and then starting a new phase of optimization. Against an optimal policy with \(N-1\) switches of the optimal arm (but arbitrarily many distribution switches), this new algorithm achieves an expected sample complexity of \(O({\varDelta }^{-2}\sqrt{NK\delta ^{-1} \log (K \delta ^{-1})})\), with probability \(1-\delta \), and an expected cumulative regret of \(O({\varDelta }^{-1}{\sqrt{NTK \log (TK)}})\) after T time-steps. Second, algorithm EXP3.R is an active alternative to the passive approach used in SER4 (the random resets): it takes advantage of the exploration factor of EXP3 to compute unbiased estimates of the mean rewards. Combined with a drift detector, this active approach resets the weights of EXP3 when a change of best arm is detected. We finally show that EXP3.R also obtains competitive problem-dependent regret minimization guarantees in \(O\left( 3NCK\sqrt{TK \log T}\right) \), where C depends on \({\varDelta }\).

2 Setting

We consider a generalization of the stationary stochastic, piecewise-stationary and adversarial bandit problems where the adversary chooses before the beginning of the game a sequence of distributions instead of directly choosing a sequence of rewards. This formulation generalizes the adversarial setting since arbitrarily choosing a reward \(y_k(t)\) is equivalent to drawing this reward from a distribution of mean \(y_k(t)\) and zero variance. The stationary stochastic formulation of the bandit problem is a particular case, where the distributions do not change.

2.1 The problem

Let \([K] = \{1,\ldots ,K\}\) be a set of K arms. The reward \(y_{k_t}(t) \in [0,1]\) obtained by the player after playing the arm \(k_t\) is drawn from a distribution of mean \(\mu _{k_t}(t) \in [0,1]\). The instantaneous gap between arms k and \(k^\prime \) at time t is:

$$\begin{aligned} {\varDelta }_{k,k'}(t) \mathop {=}\limits ^{\text {def}}\mu _k(t) -\mu _{k^\prime }(t). \end{aligned}$$
(1)

The player competes against a policy assumed to be optimal (for example, always playing the arm with the highest mean reward). Let \(k^*(t)\) be the arm played by the optimal policy at time t.
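To make the interaction protocol concrete, the sketch below simulates the game under this setting. It is a hypothetical illustration: the `Player` interface (`choose`/`update`), the Bernoulli reward model and all variable names are our own assumptions, not taken from the paper.

```python
import numpy as np

def play_game(mean_matrix, player, rng=np.random.default_rng(0)):
    """Simulate the game: mean_matrix[t, k] is the mean mu_k(t) chosen in
    advance by the adversary; the player only observes the reward of the
    arm it pulls (bandit feedback)."""
    T, K = mean_matrix.shape
    pseudo_regret = 0.0
    for t in range(T):
        k = player.choose()                          # arm k_t
        reward = rng.binomial(1, mean_matrix[t, k])  # y_{k_t}(t) in [0, 1]
        player.update(k, reward)
        # pseudocumulative regret (Definition 2) against the per-step best mean
        pseudo_regret += mean_matrix[t].max() - mean_matrix[t, k]
    return pseudo_regret
```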

2.2 The notion of sample complexity

In the literature [12], the sample complexity of an algorithm is the number of samples needed by this algorithm to find a policy achieving a specific level of performance with high probability. We denote by \(\delta \in (0,1]\) the probability of failure. For instance, for best arm identification in the stationary stochastic bandit (that is, when \(\forall k \forall t, \mu _k(t)=\mu _k(t+1)\) and \(k^*(t)=k^*(t+1)\)), the sample complexity is the number of samples needed to find, with a probability of at least \(1-\delta \), the arm \(k^*\) with the maximum mean reward. Sample complexity analysis is useful in situations where the knowledge of the optimal arm is needed to make one impactful decision, for example to choose which one of several possible products to manufacture, or to build hierarchical models of contextual bandits in a greedy way [7], reducing the exploration space.

Definition 1

(Sample complexity) Let A be an algorithm. An arm k is \(\epsilon \)-optimal if \(\mu _k \ge \mu ^* - \epsilon \), with \(\epsilon \in [0,1]\). The sample complexity of A performing a best arm identification task is the number of observations needed to find an \(\epsilon \)-optimal arm with a probability of at least \(1-\delta \).

The usual notion of sample complexity (the minimal number of observations required to find a near-optimal arm with high probability) is well adapted to the case where there exists a unique best arm throughout the game, but makes little sense in the general scenario where the best arm can change. Indeed, after a best arm change, a learning algorithm requires some time-steps before recovering. Thus, we provide in Sect. 4 a meaningful extension of the sample complexity definition to the switching best arm scenario. This extended notion of sample complexity takes into account not only the number of time-steps required by the algorithm to identify a near-optimal arm, but more generally the number of time-steps required to recover a near-optimal arm after each change.

2.3 The notion of regret

When the decision process does not lead to one final decision, minimizing the sample complexity may not be an appropriate goal. Instead, we may want to maximize the cumulative gain obtained through the game, which is equivalent to minimizing the difference between the choices of an optimal policy and those of the player. We call this difference the regret. We define the pseudocumulative regret as the difference of mean rewards between the arms chosen by the optimal policy and those chosen by the player.

Definition 2

(Pseudocumulative regret)

$$\begin{aligned} \sum _{t=1}^{T} \mu _{k^*(t)}(t) - \mu _{k_t}(t). \end{aligned}$$
(2)

Usually, in the stochastic bandit setting, the distributions of rewards are stationary and the instantaneous gap \({\varDelta }_{k,k^\prime }(t) = \mu _k(t) -\mu _{k^\prime }(t)\) is the same at all time-steps.

There exists a non-reciprocal relation between the minimization of the sample complexity and the minimization of the pseudocumulative regret. For instance, the UCB algorithm has an order-optimal regret, but it does not minimize the sample complexity: UCB keeps playing suboptimal arms, albeit with a decreasing frequency as the number of plays increases. However, an algorithm with an optimal sample complexity, like Median Elimination [6], will also have an optimal pseudocumulative regret (up to constant factors). More details on the relation between both lower bounds can be found in [4, 9].

Therefore, the algorithms presented in this paper slightly differ according to the quantity to minimize, the regret or the sample complexity. For instance, when the target is regret minimization, after identifying the best arm the algorithms continue to sample it, whereas in the case of sample complexity minimization the algorithms stop the sampling process once the best arm is identified. When best arm switches are considered, algorithms minimizing the sample complexity enter a waiting state after identifying the current best arm and do not sample the sequence for exploitation purposes (sampling the optimal arm still increases the sample complexity). However, they still have to parsimoniously collect samples for each action in order to detect best arm changes, and thus face a new trade-off between the rate of sampling and the time needed to find the new best arm after a switch.

3 Non-stationary stochastic multi-armed bandit with unique best arm

In this section, we present algorithm Successive Elimination with Randomized Round-Robin (SER3, see Algorithm 1), a randomized version of Successive Elimination which tackles the best arm identification problem when rewards are non-stationary.

3.1 A modified successive elimination algorithm

We elaborate on several notions required to understand the behavior of the algorithm and to relax the constraint of stationarity.

3.1.1 The elimination mechanism

The elimination mechanism was introduced by Successive Elimination [6]. Estimators of the rewards are built by sequentially sampling the arms. After \(\tau _{\min }\) rounds of round-robin, the elimination mechanism starts to operate. A lower bound on the reward of the best empirical arm is computed and compared with an upper bound on the reward of every other arm. If the lower bound is higher than one of the upper bounds, then the associated arm is eliminated and stops being considered by the algorithm. Sampling and elimination are repeated until all arms but one have been eliminated.

Algorithm 1 Successive Elimination with Randomized Round-Robin (SER3)
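A minimal sketch of the SER3 loop is given below. It is an illustration under assumptions: `pull(k)` is a hypothetical reward oracle, and since the paper's elimination rule (inequality (3)) is not reproduced here, we mirror the budgeted condition of Sect. 3.3 with B = 0.

```python
import math, random

def ser3(pull, K, delta=0.05, epsilon=0.0, horizon=10**6):
    """Sketch of SER3: randomized round-robin plus successive elimination.
    pull(k) returns a reward in [0, 1] for arm k (bandit feedback)."""
    arms = list(range(K))
    sums = [0.0] * K                       # cumulative reward per arm
    tau, tau_min, t = 0, math.ceil(math.log(K / delta)), 0
    while len(arms) > 1 and t < horizon:
        random.shuffle(arms)               # randomized round-robin cycle
        for k in arms:
            sums[k] += pull(k)
            t += 1
        tau += 1
        if tau < tau_min:
            continue
        # Hoeffding confidence radius after tau samples of each active arm
        radius = math.sqrt(math.log(4 * K * tau * tau / delta) / (2 * tau))
        best = max(arms, key=lambda k: sums[k] / tau)
        # keep arm k while its upper bound still reaches the best lower bound
        arms = [k for k in arms
                if sums[best] / tau - sums[k] / tau + epsilon < 2 * radius]
    return arms[0] if len(arms) == 1 else arms
```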

3.1.2 Hoeffding’s inequality

Successive Elimination assumes that the rewards are drawn from stochastic distributions that are identical over time (rewards are identically distributed). However, the Hoeffding inequality used by this algorithm does not require stationarity, only independence. We recall Hoeffding's inequality:

Lemma 1

(Hoeffding’s inequality [11]) If \(X_1,X_2,\ldots ,X_\tau \) are \(\tau \) independent random variables and \(0 \le X_i \le 1\) for all \((i=1,2,\ldots ,\tau )\), then for \(\epsilon _\tau > 0\)

$$\begin{aligned} P\left( \left| \sum ^\tau _{i=1} \frac{X_i}{\tau } - {\mathbb {E}}\left[ \sum ^\tau _{i=1} \frac{X_i}{\tau }\right] \right| \ge \epsilon _\tau \right) \le 2 \exp \left( -2 \epsilon _\tau ^2 \tau \right) . \end{aligned}$$

Thus, we can use this inequality to compute confidence bounds on empirical means built from rewards drawn from non-identical distributions.

3.1.3 Randomization of the round-robin

We illustrate the need for randomization with an example tricking a deterministic algorithm (see Table 1).

Table 1 A sequence of mean rewards tricking a deterministic bandit algorithm

The best arm seems to be \(k=1\) as \(\mu _1(t)\) is greater than \(\mu _2(t)\) at every time-step t. However, by sampling the arms with a deterministic policy playing sequentially \(k=1\) and then \(k=2\), after \(t=6\) the algorithm has only sampled rewards from a distribution of mean 0.6 for \(k=1\) and of mean 0.8 for \(k=2\). After enough time following this pattern, an elimination algorithm will eliminate the first arm. Our algorithm SER3 adds to Successive Elimination a shuffling of the arm set after each round-robin cycle and avoids this behavior, as the simulation sketched below illustrates.
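The following toy simulation reproduces this trick. The alternating mean values are hypothetical (chosen by us to be consistent with the description above, not taken from Table 1), and means are used directly instead of noisy rewards for clarity.

```python
import random

# Hypothetical alternating means: mu_1(t) > mu_2(t) at every t, yet a
# deterministic round-robin (arm 1 at odd steps, arm 2 at even steps)
# only ever observes mean 0.6 for arm 1 and mean 0.8 for arm 2.
def mu(k, t):
    if k == 1:
        return 0.6 if t % 2 == 1 else 0.9
    return 0.5 if t % 2 == 1 else 0.8

T = 10_000
det = {1: [], 2: []}
for t in range(1, T + 1):
    k = 1 if t % 2 == 1 else 2            # deterministic round-robin
    det[k].append(mu(k, t))

rnd = {1: [], 2: []}
for t in range(1, T + 1, 2):              # shuffled round-robin cycles
    order = [1, 2]
    random.shuffle(order)
    for i, k in enumerate(order):
        rnd[k].append(mu(k, t + i))

avg = lambda xs: sum(xs) / len(xs)
print(avg(det[1]), avg(det[2]))   # ~0.60 vs ~0.80: arm 1 looks worse
print(avg(rnd[1]), avg(rnd[2]))   # ~0.75 vs ~0.65: true ordering preserved
```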

3.1.4 Uniqueness of the best arm

The best arm identification task assumes a criterion identifying the best arm without ambiguity. We define the optimal arm as:

$$\begin{aligned} k^* = \arg \max _{k \in [K]} \sum ^T_{t=1} \mu _k(t). \end{aligned}$$
(4)
Fig. 1 Two examples of sequences of mean rewards. a Assumption 1 is satisfied as the mean gap remains positive. b Assumption 1 is not satisfied: this sequence involves a best arm switch as the mean gap becomes non-positive

As an efficient algorithm will find the best arm before the end of the run, we use Assumption 1 to ensure its uniqueness at every time-step. First, we define some notations. A run of SER3 is a succession of round-robin cycles. The set \([\tau ] = \{(t_1,|S_1|),\ldots ,(t_\tau , |S_\tau |)\}\) is a realization of SER3, where \(t_i\) is the time-step at which the \(i\)th round-robin cycle, of size \(|S_i|\), starts (\(t_i = 1 + \sum _{j=1}^{i-1} |S_j|\)). As arms are only eliminated, \(|S_i| \ge |S_{i+1}|\). We denote by \({\mathbb {T}}(\tau )\) the set containing all possible realizations of \(\tau \) round-robin steps. We can now introduce Assumption 1, which ensures that the best arm is the same at every time-step.

Assumption 1

(Positive mean gap) For any \(k \in [K]-\{k^*\}\) and any \([\tau ] \in {\mathbb {T}}(\tau )\) with \(\tau \ge \tau _{\min }\), we have:

$$\begin{aligned} {\varDelta }^*_k\left( [\tau ]\right) =\frac{1}{\tau } \sum ^\tau _{i=1} \sum _{j = t_i }^{t_{i} + |S_i|- 1} \frac{{\varDelta }_{k^*,k}(j)}{|S_i|} > 0. \end{aligned}$$
(5)

Assumption 1 is trivially satisfied when distributions are stationary, is quite weak (see, e.g., Fig. 1a) and can tolerate a large noise when \(\tau \) is high. As the optimal arm must distinguish itself from the others, instantaneous gaps are more constrained at the beginning of the game. It is quite similar to the assumption used by Seldin and Slivkins [15] to achieve logarithmic expected regret on moderately contaminated rewards, i.e., the adversary does not lower the averaged gap too much. Another analogy can be drawn with the adversarial with gap setting [15], \(\tau _{\min }\) representing the time needed for the optimal arm to accumulate enough rewards and distinguish itself from the suboptimal arms.

Fig. 2 Cumulative regret of SER3, SE, UCB and EXP3 on Problem 1

Fig. 3 Cumulative regret of SER3, UCB and EXP3 on Problem 2

Figure 1a illustrates Assumption 1. In this example the mean of the optimal arm \(k^*\) is lower than that of the second arm at time-steps \(t \in \{5,6,7\}\). Thus, even if the instantaneous gap is negative during these time-steps, the mean gap \({\varDelta }^*_k\left( [\tau ]\right) \) stays positive. The parameter \(\tau _{\min }\) protects the algorithm from local noise at initialization. In order to ease the reading of the results in the next sections, we assume here that \(\tau _{\min } = \log \frac{K}{\delta }\).
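The quantity of Eq. (5) can be computed directly from a sequence of means and a realization. The helper below is an illustrative transcription; the function name, the callable `mu(t, j)` and the indexing convention are our own choices.

```python
def average_gap(mu, realization, k_star, k):
    """Delta*_k([tau]) from Eq. (5): for each round-robin cycle i of a
    realization [(t_1, |S_1|), ..., (t_tau, |S_tau|)], average the
    instantaneous gaps Delta_{k*,k}(t) over the cycle, then average the
    per-cycle values. mu(t, j) returns the mean reward mu_j(t)."""
    tau = len(realization)
    per_cycle = [
        sum(mu(t, k_star) - mu(t, k) for t in range(t_i, t_i + size_i)) / size_i
        for (t_i, size_i) in realization
    ]
    return sum(per_cycle) / tau
```

Assumption 1 requires this value to be strictly positive for every suboptimal arm k and every realization with \(\tau \ge \tau _{\min }\).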

Assumption 1 can be seen as a sanity-check assumption ensuring that the best arm identification problem indeed makes sense. In Sect. 4, we consider the more general switching bandit problem. In this case, Assumption 1 may not be verified (see Fig. 1b) and is naturally extended by dividing the game into segments wherein Assumption 1 is satisfied.

3.2 Analysis

All theoretical results are provided for \(\epsilon = 0\) and therefore accept only \(k^*\) as the optimal arm.

Theorem 1

(Sample complexity of SER3) For \(K\ge 2\), \(\delta \in (0,0.5]\), and \(\tau _{\min } = \log \frac{K}{\delta }\), the sample complexity of SER3 is upper bounded by:

$$\begin{aligned} O\left( \frac{K}{ {\varDelta }^2} \log \left( \frac{K}{ \delta {\varDelta }}\right) \right) \end{aligned}$$

where \({\varDelta }= \min _{[\tau ],k} \frac{1}{\tau } \sum ^\tau _{i=1} \sum _{t = t_i }^{t_{i} + |S_i|- 1} \frac{{\varDelta }_{k^*,k}(t)}{|S_i|}\).

The proof is given in “Proof of Theorem 1 and Theorem 2” of Appendix 2.

The guarantee on the sample complexity can be transposed into a guarantee on the pseudocumulative regret. In that case, when only one arm remains in the set, the player continues to play this last arm until the end of the game.

Corollary 1

(Expected pseudocumulative regret of SER3) For \(K\ge 2\), \(\delta = 1/T\) and \(\tau _{\min } = \log (KT)\), the expected pseudocumulative regret of SER3 is upper bounded by:

$$\begin{aligned} \min \left( O\left( \frac{K-1}{{\varDelta }} \log \left( \frac{K T}{ {\varDelta }}\right) \right) , O\left( \sqrt{TK \log \frac{T}{K}} \right) \right) \end{aligned}$$
Fig. 4 Cumulative regret of SER4, SWUCB, EXP3.S, EXP3.R and Meta-Eve on Problem 3

The proof is given in “Proof of Corollary 1” of Appendix 2.

These guarantees are the same as those of the original Successive Elimination performed with a deterministic round-robin on arms with stationary rewards. Indeed, when reward distributions are stationary, we have for all t and all \([\tau ]\):

$$\begin{aligned} \frac{1}{\tau } \sum ^\tau _{i=1} \sum _{t = t_i }^{t_{i} + |S_i|- 1} \frac{{\varDelta }_{k^*,k}(t)}{|S_i|} = {\varDelta }_{k^*,k}(t) = {\varDelta }_{k^*,k}(t+1). \end{aligned}$$
(6)

However, in a non-stationary environment satisfying Assumption 1, Successive Elimination will eliminate the optimal arm if the adversary knows the order of its round-robin before the beginning of the run and exploits this knowledge against the learner, resulting in a linear cumulative regret. Our modification of the SE algorithm allows SER3 to perform on near-adversarial sequences of rewards while achieving a gap-dependent logarithmic pseudocumulative regret.

Remark 1

These logarithmic guarantees result from Assumption 1, which allows the exploration of eliminated arms to be stopped. They do not contradict the \({\varOmega }(\sqrt{T})\) lower bound for non-stationary bandits [8], which is due to the cost of the constant exploration required when the best arm can change.

3.3 Non-stationary stochastic multi-armed bandit with budget

We study the case when the sequence from which the rewards are drawn does not satisfy Assumption 1.

The sequence of mean rewards is built by the adversary in two steps. First, the adversary chooses the mean rewards \(\mu _k(1),\ldots ,\mu _k(T)\) associated with each arm in such a way that Assumption 1 is satisfied. The adversary can then apply a malus \(b_k(t) \in [0, \mu _k(t)]\) to each mean reward to obtain the final sequence. The mean reward of the arm k at time t is \(\mu _k(t) - b_k(t)\). The budget spent by the adversary on the arm k is \(B_k = \sum _{t=1}^{T} b_k(t)\). We denote by \(B \ge \max _k B_k\) an upper bound on the budget of the adversary.

Algorithm SER3 can be modified to perform a best arm identification task when Assumption 1 is not satisfied but B is known. To achieve that, the elimination condition (Inequality (3) in Algorithm 1) is replaced by the following:

$$\begin{aligned} {\hat{\mu }}_{\max }(\tau ) - {\hat{\mu }}_k(\tau ) +\epsilon \ge \frac{B}{\tau } + 2 \sqrt{\frac{1}{2\tau } \log \left( \frac{4 K \tau ^2}{\delta } \right) } \end{aligned}$$

This new algorithm is called Successive Elimination with Randomized Round-Robin and Budget (SER3.B).
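The budgeted elimination test is a direct transcription of the inequality above; the helper below is a sketch (the function and argument names are ours).

```python
import math

def eliminate_with_budget(mu_hat_max, mu_hat_k, tau, K, B, delta, epsilon=0.0):
    """SER3.B elimination test: remove arm k when its empirical mean falls
    below the best empirical mean by more than the confidence radius
    inflated by the per-round budget B / tau."""
    radius = 2.0 * math.sqrt(math.log(4 * K * tau * tau / delta) / (2 * tau))
    return mu_hat_max - mu_hat_k + epsilon >= B / tau + radius
```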

Theorem 2

For \(K\ge 2\), \(\delta \in (0,0.5]\), and \(\tau _{\min } = \log \frac{K}{\delta }\), the sample complexity of SER3.B is upper bounded by:

$$\begin{aligned} O\left( \frac{K}{ {\varDelta }^2} \left( \log \frac{K}{ \delta {\varDelta }} + B \right) \right) \end{aligned}$$

where \({\varDelta }= \min _{[\tau ],k} \frac{1}{\tau } \sum ^\tau _{i=1} \sum _{t = t_i }^{t_{i} + |S_i|- 1} \frac{{\varDelta }_{k^*,k}(t)}{|S_i|}\).

The proof is given in “Proof of Theorem 1 and Theorem 2” of Appendix 2.

4 Non-stationary stochastic multi-armed bandit with best arm switches

The switching bandit problem was proposed by Garivier et al. [8] and assumes means to be stationary between switches. In particular, the SWUCB algorithm is built on this assumption and is a modification of UCB using only the rewards obtained inside a sliding window. In our setting, we allow mean rewards to change at every time-step and consider that a best arm switch occurs when the arm with the highest mean changes. This setting provides an alternative to the adversarial bandit with budget when B is very high or unknown.

The optimal policy is the sequence of couples (optimal arm, time when the switch occurred):

$$\begin{aligned} \{ (k^*_1,1),\ldots ,(k^*_N,T_N) \}, \end{aligned}$$
(7)

with \(k^*_n \ne k^*_{n+1}\) and \({\varDelta }_{k^*_n,k}(t) > 0\) for any \(k \in [K] - \{k^*_n\}\) and any \(t\in [T_n,T_{n+1})\). The optimal policy starts playing the arm \(k^*_n\) at the time-step \(T_n\). Time-steps \(T_n\) when switches occur are unknown to the player.

4.1 Successive Elimination with Randomized Round-Robin and Resets (SER4)

Definition 1 of the sample complexity is not adapted to the switching bandit problem. Indeed, this definition is used to measure the number of observations needed by an algorithm to find one unique best arm. When the best arm changes during the game, this definition is too limiting. In Sect. 4.1.1 we introduce a generalization of the sample complexity for the case of switching policies.

4.1.1 The sample complexity of the best arm identification problem with switches

A cost is added to the usual sample complexity. This cost is equal to the number of time-steps after a switch during which the player does not sample and yet does not know the optimal arm.

Definition 3

(Sample complexity with switches) Let A be an algorithm. The sample complexity of A performing a best arm identification task for a segmentation \(\{T_n\}_{n=1..N}\) of [1 : T], with \(T_1=1< T_2< \dots< T_N<T\), is:

$$\begin{aligned} \sum _{n=1}^N \sum _{t=T_n}^{T_{n+1}-1} \max (s(t) ,1_{\llbracket k_t \ne k^*_n \rrbracket }), \end{aligned}$$
(8)

where s(t) is a binary variable equal to 1 if and only if the time-step t is used by the sampling process of A, \(k_t\) is the arm identified as optimal by A at time t, \(k^*_n\) is the optimal arm over the segment n and \(T_{N+1}=T+1\).

In order to clarify Definition 3, we detail the different states achievable by a best arm identification algorithm and their impact on the sample complexity. Two states are achievable during a sample complexity minimization task:

  • \(s(t)=1\) if the algorithm is sampling an arm during the time-step t. In the case of SER4, \(s(t)=1\) when \(|S_\tau | \ne 1\) and the sample complexity increases by one.

  • \(s(t)=0\) if the algorithm submits an arm as the optimal one during the time-step t. In the case of SER4, \(s(t)=0\) when \(|S_\tau | = 1\). The sample complexity increases by one if \(k_t\ne k^*(t)\).

In the context of SER4, the sample complexity is the number of time-steps during which the arm set does not contain only the optimal arm.
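Definition 3 can be written out directly. The sketch below assumes 1-indexed sequences (index 0 unused) for the sampling indicator s, the submitted arms and the per-segment optimal arms; these conventions are ours.

```python
def sample_complexity_with_switches(s, k_hat, k_star, switch_times, T):
    """Definition 3: s[t] = 1 iff time-step t is used for sampling,
    k_hat[t] is the arm submitted as optimal at t, k_star[n] is the
    optimal arm on segment n, switch_times = [T_1 = 1, T_2, ..., T_N],
    and the implicit T_{N+1} is T + 1."""
    bounds = list(switch_times) + [T + 1]
    total = 0
    for n in range(len(switch_times)):
        for t in range(bounds[n], bounds[n + 1]):
            total += max(s[t], int(k_hat[t] != k_star[n]))
    return total
```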

4.1.2 Algorithm

In order to allow the algorithm to choose another arm when a switch occurs, at each turn the estimators of SER3 are reset with probability \(\varphi \in [0,1]\) and a new best arm identification task is started. We name this algorithm Successive Elimination with Randomized Round-Robin and Resets (SER4).

Algorithm 2 Successive Elimination with Randomized Round-Robin and Resets (SER4)
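A minimal sketch of SER4 follows, reusing the hypothetical `pull(k)` oracle and the elimination rule sketched in Sect. 3; for simplicity the reset coin is flipped once per round-robin cycle rather than at every individual pull.

```python
import math, random

def ser4(pull, K, T, phi, delta=0.05, rng=random.Random(0)):
    """Sketch of SER4: a SER3-style elimination phase restarted with
    probability phi (Corollary 3 suggests phi = sqrt(N / (T K log(KT)))
    when N - 1 best arm switches are expected). Once a single arm
    survives, it is played until the next reset."""
    arms, sums, tau = list(range(K)), [0.0] * K, 0
    tau_min = math.ceil(math.log(K * T))
    t = 0
    while t < T:
        if rng.random() < phi:                       # random reset
            arms, sums, tau = list(range(K)), [0.0] * K, 0
        if len(arms) == 1:                           # exploitation step
            pull(arms[0]); t += 1
            continue
        rng.shuffle(arms)                            # randomized round-robin
        for k in arms:
            sums[k] += pull(k); t += 1
        tau += 1
        if tau < tau_min:
            continue
        radius = math.sqrt(math.log(4 * K * tau * tau / delta) / (2 * tau))
        best = max(arms, key=lambda k: sums[k] / tau)
        arms = [k for k in arms
                if sums[best] / tau - sums[k] / tau < 2 * radius]
    return arms
```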

4.1.3 Analysis

We now provide the performance guarantees of the SER4 algorithm, both in terms of sample complexity and of pseudocumulative regret.

The following results are given in expectation and in high probability. The expectations are taken with respect to the randomization of the resets. The sample complexity and pseudocumulative regret achieved by the algorithm between resets (given by the analysis of SER3) still hold with high probability.

Theorem 3

(Expected sample complexity of SER4) For \(K\ge 2\), \(\delta = 1/T\), \(\tau _{\min } = \log \frac{K}{\delta }\) and \(\varphi \in (0,1]\), the expected sample complexity of SER4 w.r.t. the randomization of resets is upper bounded by:

$$\begin{aligned} O\left( \frac{\varphi K}{\delta {\varDelta }^2} \log \left( \frac{K}{ \delta {\varDelta }}\right) + \frac{N}{\varphi } \right) \end{aligned}$$

with a probability of at least \(1- \delta \).

The proof is given in “Proof of Theorem 3” of Appendix 2.

We tune \(\varphi \) in order to minimize the sample complexity.

Corollary 2

For \(K\ge 2\), \(\delta = 1/T\), \(\tau _{\min }= \log \frac{K}{\delta }\), \({\varDelta }\ge \frac{1}{KT}\) and \(\varphi = \sqrt{\frac{ N \delta }{K \log (\frac{K}{\delta })}} \), the expected sample complexity of SER4 w.r.t. the randomization of resets is upper bounded by:

$$\begin{aligned} O\left( \frac{1}{{\varDelta }^2} \sqrt{\frac{NK \log (\frac{K}{\delta })}{\delta }} \right) . \end{aligned}$$

Remark 2

Transposing Theorem 3 to the case where \(\epsilon \in [\frac{1}{KT},1]\) is straightforward. This allows tuning the bound by setting \(\varphi = \epsilon \sqrt{( N \delta )/(K \log (TK))}\).

This result can also be transposed into a bound on the expected cumulative regret. We consider that the algorithm continues to play the last arm of the set until a reset occurs.

Corollary 3

(Expected cumulative regret of SER4) For \(K\ge 2\), \(\delta = 1/T\), \(\tau _{\min } = \log (KT)\), \({\varDelta }\ge \frac{1}{KT}\) and \(\varphi = \sqrt{\frac{N}{TK\log (KT)} }\), the expected cumulative regret of SER4 w.r.t. the randomization of resets is upper bounded by:

$$\begin{aligned} \min \left( O\left( \frac{\sqrt{NTK \log (KT)}}{{\varDelta }} \right) , O\left( T^{2/3}\sqrt{NK \log \frac{T}{K}} \right) \right) . \end{aligned}$$
(9)

The proof is given in “Proof of Corollary 3” of Appendix 2.

Remark 3

A similar dependency in \(\sqrt{T}{\varDelta }^{-1}\) also appears in SWUCB (see Theorem 1 in [8]) and is standard for this type of result.

4.2 EXP3 with resets

SER4 and other algorithms from the state of the art [3, 8, 13] use a passive approach, forgetting the past. In this subsection, we propose an active strategy that consists in resetting the reward estimations when a change of the best arm is detected. An anticipated advantage of this approach is to let the algorithm converge over a longer time period, as it is reset only when a switch is detected, and thus build a more accurate estimate of the reward distributions. First, we describe the adversarial bandit algorithm EXP3 [3], which is used by the proposed algorithm EXP3.R between detections. We then describe the drift detector used to detect changes of the best arm. Finally, we combine both to obtain the EXP3.R algorithm.

Algorithm 3 EXP3

4.2.1 EXP3 algorithm

The EXP3 algorithm (see Algorithm 3) minimizes the regret against the best arm using an unbiased estimation of the cumulative reward at time t to compute the choice probabilities of each action. While this policy can be viewed as optimal in a truly adversarial setting, in many practical cases the non-stationarity within a time period exists but is weak and is only noticeable between different periods. If an arm performs well over a long time period but is extremely bad on the next period, the EXP3 algorithm can need a number of trials equal to the first period's length to switch its most played arm.
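For reference, here is a compact sketch of standard EXP3 [3] as recalled above; it is not the paper's exact pseudocode, and numerical safeguards against weight overflow on long runs are omitted.

```python
import math, random

class EXP3:
    """Exponential weights over importance-weighted reward estimates,
    mixed with uniform exploration of rate gamma."""
    def __init__(self, K, gamma):
        self.K, self.gamma = K, gamma
        self.weights = [1.0] * K

    def probabilities(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.K
                for w in self.weights]

    def choose(self, rng=random):
        return rng.choices(range(self.K), weights=self.probabilities())[0]

    def update(self, k, reward):
        p_k = self.probabilities()[k]
        x_hat = reward / p_k                       # unbiased estimate
        self.weights[k] *= math.exp(self.gamma * x_hat / self.K)
```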

4.2.2 The detection test

The detection test (see Algorithm 4) uses confidence intervals to estimate the expected reward in the previous time period. The action distribution in EXP3 is a mixture of uniform and Gibbs distributions. We call \(\gamma \)-observation an observation selected through the uniform distribution. Parameters \(\gamma \), H and \(\delta \) define the minimal number of \(\gamma \)-observations per arm needed to call a test of accuracy \(\epsilon \) with a probability \(1-\delta \). They are fixed in the analysis (see Corollary 4), and the correctness of the test is proven in Lemma 2. We denote by \({\hat{\mu }}^k(I)\) the empirical mean of the rewards acquired from the arm k on the interval I using only \(\gamma \)-observations, and by \({\varGamma }_{\text {min}}(I)\) the smallest number of \(\gamma \)-observations among the actions on the interval I. The detector is called only when \({\varGamma }_{\text {min}}(I) \ge \frac{\gamma H}{K}\). It raises an alert when the action \(k_{\max }\) with the highest empirical mean \({\hat{\mu }}^{k_{\max }}(I-1)\) on the interval \(I-1\) is eliminated by another action on the current interval.

Algorithm 4 Detection test
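A sketch of the test, using the \(2\epsilon \) threshold described in Sect. 4.2.3 with \(\epsilon \) from Eq. (13); the function name and the passing of per-interval empirical means are our own conventions.

```python
import math

def drift_detected(means_prev, means_curr, gamma, H, K, delta):
    """Raise an alert if the arm that looked best on the previous interval
    (gamma-observations only) is beaten by 2*epsilon on the current one."""
    eps = math.sqrt(K * math.log(1.0 / delta) / (4.0 * gamma * H))
    k_max = max(range(K), key=lambda k: means_prev[k])
    return any(means_curr[k] - means_curr[k_max] >= 2 * eps
               for k in range(K) if k != k_max)
```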

4.2.3 EXP3.R algorithm

Coupled with a detection test, the EXP3 algorithm has several advantages. First, in a non-stationary environment, a constant exploration is needed to detect changes where a suboptimal arm becomes optimal, and this exploration is naturally provided by the algorithm. Second, the number of breakpoints is higher than the number of best arm changes (\(M\ge N\)); this means that the number of resets needed by EXP3 is lower than the one needed by a stochastic bandit algorithm such as UCB. Third, EXP3 is robust to test failures (non-detections) and to local non-stationarity. We call EXP3.R the algorithm obtained by combining EXP3 and the drift detector. First, one instance of EXP3 is initialized and used to select actions. When the count of \( \frac{\gamma H}{K}\) \(\gamma \)-observations per arm is reached, the detection test is called. If, on the corresponding interval, the empirical mean of an arm exceeds by \(2\epsilon \) the empirical mean of the current best arm, then a drift detection is raised. In this case, the weights of EXP3 are reset. Then a new collection interval begins, preparing the next test. These steps are repeated until the run ends (see Algorithm 5).

Algorithm 5 EXP3 with Resets (EXP3.R)
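Combining the two previous sketches gives a schematic EXP3.R loop. The interval bookkeeping and the probabilistic relabelling of \(\gamma \)-observations are illustrative choices, not the paper's exact pseudocode; `pull(k)` is again a hypothetical reward oracle.

```python
import random

def exp3r(pull, K, T, gamma, H, delta, rng=random.Random(0)):
    """Sketch of EXP3.R: run EXP3, collect per-arm gamma-observation means
    over intervals of gamma * H / K gamma-observations per arm, and reset
    the EXP3 weights whenever the detection test fires."""
    learner = EXP3(K, gamma)
    prev_means, counts, sums = None, [0] * K, [0.0] * K
    for _ in range(T):
        probs = learner.probabilities()
        k = rng.choices(range(K), weights=probs)[0]
        reward = pull(k)
        learner.update(k, reward)
        # the pull came from the uniform component w.p. (gamma/K) / p_k
        if rng.random() < (gamma / K) / probs[k]:
            counts[k] += 1; sums[k] += reward
        if min(counts) >= gamma * H / K:            # interval complete
            curr_means = [sums[k] / counts[k] for k in range(K)]
            if prev_means is not None and drift_detected(
                    prev_means, curr_means, gamma, H, K, delta):
                learner = EXP3(K, gamma)            # reset weights
            prev_means, counts, sums = curr_means, [0] * K, [0.0] * K
    return learner
```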

4.2.4 Analysis

In this section we analyze the drift detector and then bound the expected regret of the EXP3.R algorithm.

Assumption 2

(Accuracy of the drift detector) During each of the segments S where \(k^*_S\) is the optimal arm, the gap between \(k^*_S\) and any other arm is at least \(4\epsilon \), with

$$\begin{aligned} \epsilon = \sqrt{\frac{K \log (\frac{1}{\delta })}{4\gamma H}}. \end{aligned}$$
(13)

Lemma 2 guarantees that when Assumption 2 holds and the interval I is included in the interval S, then, with high probability, the test raises a detection if and only if the optimal action \(k^*_S\) eliminates a suboptimal action.

Lemma 2

(Arm switches are detected) When Assumption 2 holds and \(I \subseteq S\), then, with a probability \(1-2\delta \), for any \(k\ne k^*_S\):

$$\begin{aligned} {\hat{\mu }}^{k^*_S}(I) - {\hat{\mu }}^{k}(I) \ge 2\sqrt{\frac{K\log (\frac{1}{\delta })}{2\gamma H}} \Leftrightarrow \mu ^{k^*_S}(I) \ge \mu ^k(I). \end{aligned}$$
(14)

The proof is given in “Proof of Lemma 2” of Appendix 2.

Theorem 4 bounds the expected cumulative regret of EXP3.R.

Theorem 4

(Expected cumulative regret of EXP3.R) For any \(K>0\), \(0 < \gamma \le 1\), \(0 \le \delta < \frac{1}{2}\), \(H\ge K\) and \(N\ge 1\) when Assumption 2 holds, the expected cumulative regret of EXP3.R is

$$\begin{aligned}&G^* - \mathbf{E }[G_{\text {EXP3.R}}] \le (e-1) \gamma T \nonumber \\&\quad +\,\frac{\left( N - 1+ \frac{K \delta T}{H} + K\delta \right) K \log (K)}{\gamma }\nonumber \\&\quad +\,(N-1)HK \left( \frac{1}{1-2\delta } + 1 \right) . \end{aligned}$$
(15)

The proof is given in “Proof of Theorem 4” of Appendix 2.

In Corollary 4 we optimize parameters of the bound obtained in Theorem 4.

Corollary 4

(Expected cumulative regret of EXP3.R) For any \(K\ge 1\), \(T\ge 10\), \(N\ge 1\) and \(C\ge 1\) when Assumption 2 holds, the expected cumulative regret of EXP3.R run with input parameters

$$\begin{aligned} \delta =\sqrt{\frac{\log T}{KT}},\quad \gamma = \sqrt{\frac{K \log K \log T}{T }} \quad \text {and}\quad H=C\sqrt{T \log T} \end{aligned}$$
(16)

is

$$\begin{aligned} G^* - \mathbf{E }[G_{\text {EXP3.R}}]\le & {} (e-1) \sqrt{TK \log K \log T} \nonumber \\&+\,N\sqrt{TK\log K} \nonumber \\&+\,(C+1)K\sqrt{T \log K} \nonumber \\&+\,3NCK\sqrt{T \log T}. \end{aligned}$$
(17)

The proof is given in “Proof of Corollary 4” of Appendix 2.

As a function of C, the precision \(\epsilon \) is:

$$\begin{aligned} \epsilon = \sqrt{\frac{1}{2C}} \sqrt{\frac{\log {\sqrt{\frac{KT}{\log T}}}}{\log T} \sqrt{\frac{K}{\log K}}}. \end{aligned}$$
(18)

Notice that, when T increases, \(\sqrt{\frac{\log {\sqrt{\frac{KT}{\log T}}}}{\log T} \sqrt{\frac{K}{\log K}}}\) tends toward a constant.

5 Numerical experiments

We compare our algorithms with the state of the art. For each problem, \(K=20\) and \(T=10^7\). The instantaneous gap between the optimal arm and the others is constant, \({\varDelta }= 0.05\), i.e., the mean of the optimal arm is \(\mu ^*(t) = \mu (t)+ {\varDelta }\). In all experiments, the probabilities of failure of Successive Elimination (SE), SER3 and SER4 are set to \(\delta = 0.05\). The constant exploration of all algorithms of the EXP3 family is set to \(\gamma = 0.05\). Results are averaged over 50 runs. On problems 1 and 2 (Figs. 2 and 3), variances are low (in the order of \(10^3\)) and thus not shown. On problem 3 (Fig. 4), variances are plotted as the gray areas under the curves.

5.1 Problem 1: sinusoidal means

The index of the optimal arm \(k^*\) is drawn before the game and does not change. The mean of all suboptimal arms is \(\mu (t) = \cos (2\pi t / K)/5 + 0.5\).
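For concreteness, the mean sequence of Problem 1 can be generated as follows; this is a hypothetical NumPy setup reproducing the stated parameters, with the seed and variable names chosen by us.

```python
import numpy as np

K, T, gap = 20, 10**7, 0.05        # T is the horizon used in the experiments
rng = np.random.default_rng(42)
k_star = rng.integers(K)           # optimal arm drawn before the game

def means(t):
    """Mean rewards at time-step t: suboptimal arms share the sinusoidal
    mean, the optimal arm is shifted up by the constant gap."""
    mu = np.full(K, np.cos(2 * np.pi * t / K) / 5 + 0.5)
    mu[k_star] += gap
    return mu
```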

This problem challenges SER3 against SE, UCB and EXP3. SER3 achieves a low cumulative regret, successfully eliminating suboptimal arms at the beginning of the run. In contrast, SE is tricked by the periodicity of the sinusoidal means and eliminates the optimal arm. The deterministic policy of UCB is not adapted to the non-stationarity of rewards, and thus it suffers from a high regret. The unbiased estimators of EXP3 enable the algorithm to quickly converge on the best arm. However, EXP3 suffers from a linear regret due to its constant exploration until the end of the game.

5.2 Problem 2: decreasing means with positive gap

The optimal arm \(k^*\) does not change during the game. The mean of all suboptimal arms is \(\mu (t) = 0.95 - \min (0.45 , 10^{-7} t )\).

On this problem, SER3 is challenged against SE, UCB and EXP3. SER3 achieves a low cumulative regret, successfully eliminating suboptimal arms at the beginning of the run. Contrary to problem 1, mean rewards evolve slowly and Successive Elimination (SE) achieves the same level of performance as SER3. As in problem 1, UCB achieves a high cumulative regret. The cumulative regret of EXP3 is low at the end of the game but would still increase linearly with time.

5.3 Problem 3: decreasing means with arm switches

At every turn, the optimal arm \(k^*(t)\) changes with a probability of \(10^{-6}\). In expectation, there are 10 switches per run. The mean of all suboptimal arms is \(\mu (t) = 0.95 - \min (0.45 , 10^{-7} (t \bmod 10^6))\).

On problem 3, SER4 is challenged against SWUCB, EXP3.S, EXP3.R and Meta-Eve. The probability of reset of SER4 is \(\varphi = 5^{-5}\). The window size of SWUCB is \(10^5\). The history length considered by EXP3.R is \(H=4\,\times \,10^5\), and the regularization parameter of EXP3.S is \(\alpha = 10^{-5}\).

SER4 obtains the lowest cumulative regret, confirming that the random reset approach overcomes switches of the best arm. SWUCB suffers from the same issues as UCB in the previous problems and obtains a very high regret. Constant changes of the means cause Meta-Eve to reset very frequently and to obtain a lower regret than SWUCB. EXP3.S and EXP3.R both achieve low regrets, but EXP3.R suffers from the large history needed to detect switches with a gap of \({\varDelta }\). We can notice that the randomization of resets in SER4, while allowing the best performance on this problem, involves a higher variance. Indeed, on some runs a reset may occur late after a best arm switch, whereas the use of windows or regularization parameters is more consistent.

6 Conclusion

We proposed a new formulation of the multi-armed bandit problem that generalizes the stationary stochastic, piecewise-stationary and adversarial bandit problems. This formulation allows managing difficult cases, where the mean rewards and/or the best arm may change at each turn of the game. We studied the benefit of random shuffling in the design of sequential elimination bandit algorithms. We showed that the use of random shuffling extends their range of application to a new class of best arm identification problems involving non-stationary distributions, while achieving the same level of guarantees as SE with stationary distributions. We introduced SER3 and extended it to the switching bandit problem with SER4 by adding a probability of restarting the best arm identification task. We extended the definition of the sample complexity to include switching policies. To the best of our knowledge, we proved the first sample complexity-based upper bound for the best arm identification problem with arm switches. The upper bound on the cumulative regret of SER4 depends only on the number \(N-1\) of arm switches, as opposed to the number of distribution changes \(M-1\) in SWUCB (\(M\ge N\) can be of order T in our setting). Algorithm EXP3.R also achieves a competitive regret bound. The adversarial nature of EXP3 makes it robust to non-stationarity, and the detection test accelerates the switch when the optimal arm changes, while allowing convergence of the bandit algorithm during periods where the best arm does not change.