Algorithm IMED

Algorithm for the multi-armed bandit problem

In multi-armed bandit problems, IMED (Indexed Minimum Empirical Divergence) is an algorithm developed in 2015 by Junya Honda and Akimichi Takemura. It is the first algorithm proved to be asymptotically optimal with respect to the problem-dependent Lai–Robbins lower bound[1] for distributions with semi-bounded support.[2]

Figure: a run of IMED with 3 Beta arms; arm 1 is the optimal arm.

Multi-armed bandit problem

The multi-armed bandit problem is a sequential game in which a player chooses, at each turn, one of K actions (arms). Behind every arm a there is an unknown distribution ν_a that lies in a set 𝒟 known by the player (for example, 𝒟 can be the set of Gaussian distributions or of Bernoulli distributions).

At each turn t, the player chooses (pulls) an arm a_t and then gets an observation (a reward) drawn from the distribution ν_{a_t}.

Regret minimization

The goal is to minimize the regret at time T, defined as

R(T) = Σ_{a=1}^{K} (μ* − μ_a) E[N_a(T)]

where

  • μ_a is the mean of arm a
  • μ* = max_a μ_a is the highest mean
  • N_a(T) is the number of pulls of arm a up to turn T

The player has to find an algorithm that chooses at each turn which arm to pull, based on the previous actions and observations, in order to minimize the regret R(T).

This is a trade-off between exploration, to find the best arm (the arm with the highest mean), and exploitation, to play as often as possible the arm currently believed to be the best.[3]
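The regret definition above can be sketched numerically. The arm means and pull counts below are illustrative values, not taken from the article:

```python
# Sketch: computing the regret from its definition,
# R(T) = sum over arms of (mu* - mu_a) * N_a(T).
mus = [0.8, 0.5, 0.3]    # mean mu_a of each arm (illustrative)
pulls = [900, 70, 30]    # number of pulls N_a(T) of each arm up to T
T = sum(pulls)           # horizon

mu_star = max(mus)       # highest mean mu*
regret = sum((mu_star - mu) * n for mu, n in zip(mus, pulls))
print(T, regret)         # 1000 0.3*70 + 0.5*30 = 36.0
```

Only the pulls of suboptimal arms contribute to the regret, which is why a good algorithm must keep their pull counts small.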

Applications

Multi-armed bandit algorithms are used in a variety of fields; for example, they have applications in clinical trials, recommender systems, telecommunications[4], and agriculture.[5]

Algorithm IMED

At each turn t, the algorithm computes an index I_a(t) for each arm a, then pulls the arm with the smallest index.[2]

The index is the sum of two terms. The first is the cost of transporting the empirical distribution of the arm to a distribution under which the arm would be optimal. The second is the cost of the arm having been pulled too much.

Formally, the index of an arm a at turn t is defined as follows:

I_a(t) = N_a(t) K_inf(ν̂_a(t), μ̂*(t)) + ln N_a(t)

where

  • KL is the Kullback–Leibler divergence and K_inf(ν, μ) = inf_{ν′ ∈ 𝒟_μ} KL(ν, ν′)
  • 𝒟_μ is the set of distributions in 𝒟 with mean larger than μ
  • ν̂_a(t) is the empirical distribution of arm a at turn t
  • μ̂*(t) = max_a μ̂_a(t) is the highest empirical mean at turn t

Remark: for arms that verify μ̂_a(t) = μ̂*(t) we have K_inf(ν̂_a(t), μ̂*(t)) = 0. Their index is then equal to ln N_a(t).
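For Bernoulli arms the index is easy to compute, since K_inf(ν, μ) reduces to the binary Kullback–Leibler divergence kl(p, μ) between the empirical mean p and the target mean μ. A minimal sketch, with function names chosen here for illustration:

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence kl(p, q); equals K_inf(p, q) for Bernoulli arms with p < q."""
    eps = 1e-12  # clamp away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def imed_index(n_pulls, emp_mean, best_emp_mean):
    """IMED index: N_a * K_inf + ln N_a. Empirically best arms get index ln N_a."""
    return n_pulls * kl_bernoulli(emp_mean, best_emp_mean) + math.log(n_pulls)

# An empirically optimal arm has K_inf = 0, so its index is just ln(n):
print(imed_index(10, 0.5, 0.5))   # ln(10) ≈ 2.3026
# A suboptimal-looking arm pays the transportation cost on top of ln(n):
print(imed_index(50, 0.4, 0.7))
```

The larger the gap between the empirical mean and the best empirical mean, the larger the first term, so a suboptimal arm quickly stops being selected.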

Pseudocode

for each arm i do:
    n[i] ← 0; nu[i] ← None; mu[i] ← None
for t from 1 to K do:
    select arm t
    observe reward r
    n[t] ← n[t] + 1
    nu[t] ← update empirical distribution
    mu[t] ← update empirical mean
for t from K+1 to T do:
    mu* ← highest mu
    for each arm i do:
        scoreK[i] ← n[i] K_inf(nu[i],mu*)
        scoreN[i] ← ln(n[i])
        index[i] ← scoreK[i] + scoreN[i]
    select arm a with smallest index[a]
    observe reward r
    n[a] ← n[a] + 1
    nu[a] ← update empirical distribution
    mu[a] ← update empirical mean
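The pseudocode above can be sketched as a runnable simulation for Bernoulli arms, where K_inf is the binary KL divergence (the arm means and horizon are illustrative choices, not from the article):

```python
import math
import random

def kl_bernoulli(p, q):
    """Binary KL divergence; this is K_inf for the Bernoulli model."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def imed(means, horizon, rng):
    """Run IMED on Bernoulli arms with the given means; return pull counts."""
    K = len(means)
    n = [0] * K       # n[i]: number of pulls of arm i
    s = [0.0] * K     # s[i]: sum of rewards of arm i
    for t in range(horizon):
        if t < K:
            a = t     # initialization phase: pull each arm once
        else:
            mu = [s[i] / n[i] for i in range(K)]
            mu_star = max(mu)  # highest empirical mean
            # index[i] = n[i] * K_inf(nu[i], mu*) + ln(n[i]); pull the smallest
            index = [n[i] * kl_bernoulli(mu[i], mu_star) + math.log(n[i])
                     for i in range(K)]
            a = min(range(K), key=index.__getitem__)
        r = 1.0 if rng.random() < means[a] else 0.0  # observe reward
        n[a] += 1
        s[a] += r
    return n

counts = imed([0.7, 0.5, 0.3], horizon=2000, rng=random.Random(0))
print(counts)  # the optimal arm (index 0) should receive most of the pulls
```

Suboptimal arms accumulate a large K_inf term, so their pull counts grow only logarithmically while the optimal arm is played almost every turn.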

Theoretical results

In the multi-armed bandit problem we have the asymptotic, problem-dependent Lai–Robbins lower bound on the regret.[1] IMED is the first algorithm that matches this lower bound to first order for distributions with semi-bounded support. If the distributions are also bounded, it matches the second order as well; it is the first algorithm to match the second order of this lower bound.[2]

Lai–Robbins lower bound

In 1985, Lai and Robbins proved an asymptotic, problem-dependent lower bound on the regret.[1] In 2018, Aurélien Garivier, Pierre Ménard and Gilles Stoltz proved a refined lower bound that also gives the second-order term.[6]

It states that for every consistent algorithm on the set 𝒟 — that is, an algorithm for which, for every bandit problem in 𝒟, the regret is subpolynomial (i.e. R(T) = o(T^α) for all α > 0) — we have:

liminf_{T→∞} R(T) / ln T ≥ Σ_{a : μ_a < μ*} (μ* − μ_a) / K_inf(ν_a, μ*)

This bound is asymptotic (as T → ∞) and gives a first-order lower bound of order ln T with the optimal constant in front of it; the refined version also identifies the second-order term.
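The constant in front of ln T can be computed explicitly for a given instance. A sketch for an illustrative Bernoulli instance, where K_inf is the binary KL divergence:

```python
import math

def kl_bernoulli(p, q):
    """Binary KL divergence; this is K_inf for the Bernoulli model."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

means = [0.7, 0.5, 0.3]   # illustrative Bernoulli instance
mu_star = max(means)
# Lai-Robbins constant: sum over suboptimal arms of (mu* - mu_a) / K_inf(nu_a, mu*)
c = sum((mu_star - mu) / kl_bernoulli(mu, mu_star)
        for mu in means if mu < mu_star)
print(c)   # any consistent algorithm has regret at least (c + o(1)) * ln T
```

Arms whose mean is close to μ* are hard to distinguish from the best arm (small K_inf), so they contribute more to the lower bound despite their small gap.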

Regret bound for IMED

If the distribution of every arm lies in 𝒟 (i.e. has semi-bounded support), then the regret of the algorithm IMED verifies

R(T) ≤ Σ_{a : μ_a < μ*} (μ* − μ_a) (ln T) / K_inf(ν_a, μ*) + o(ln T)

matching the Lai–Robbins lower bound to first order.[2]

If all the distributions are also bounded, then there exists a constant C such that, for T large enough, the regret of IMED is upper bounded by the first-order term above plus a second-order term controlled by C, matching the refined lower bound.[2]

Computation time

The algorithm only requires computing K_inf for the empirically suboptimal arms, which are pulled O(ln T) times, which makes it much faster than KL-UCB. An even faster version of IMED was developed in 2023, using a first-order Taylor expansion of K_inf.[7]

References
