AWESOME

AWESOME: a general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents. Two minimal requirements for a satisfactory multiagent learning algorithm are that it (1) learns to play optimally against stationary opponents and (2) converges to a Nash equilibrium in self-play. The previous algorithm that came closest, WoLF-IGA, has been proven to have these two properties in 2-player 2-action (repeated) games, assuming that the opponent's mixed strategy is observable. Another algorithm, ReDVaLeR (which was introduced after the algorithm described in this paper), achieves the two properties in games with arbitrary numbers of actions and players, but still requires that the opponents' mixed strategies are observable. In this paper we present AWESOME, the first algorithm that is guaranteed to have the two properties in games with arbitrary numbers of actions and players. It is still the only algorithm that does so while relying only on observing the other players' actual actions (not their mixed strategies). It also learns to play optimally against opponents that eventually become stationary. The basic idea behind AWESOME (adapt when everybody is stationary, otherwise move to equilibrium) is to try to adapt to the others' strategies when they appear stationary, but otherwise to retreat to a precomputed equilibrium strategy. We provide experimental results that suggest that AWESOME converges fast in practice. The techniques used to prove the properties of AWESOME are fundamentally different from those used for previous algorithms, and may help in analyzing future multiagent learning algorithms as well.
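The basic idea described above — best-respond when the opponents' play looks stationary, otherwise fall back to a precomputed equilibrium — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the names (`appears_stationary`, `choose_action`), the window-based stationarity check, and the `epsilon` tolerance are assumptions for illustration, whereas the real AWESOME algorithm uses carefully chosen epoch lengths and decreasing tolerance schedules to obtain its guarantees.

```python
from collections import Counter

def empirical_dist(actions):
    """Empirical distribution over an opponent's observed actions."""
    counts = Counter(actions)
    total = len(actions)
    return {a: c / total for a, c in counts.items()}

def appears_stationary(old_window, new_window, epsilon):
    """Heuristic stationarity test (an assumption of this sketch):
    the empirical distributions of two successive observation windows
    must agree to within epsilon on every action."""
    d_old, d_new = empirical_dist(old_window), empirical_dist(new_window)
    actions = set(d_old) | set(d_new)
    return all(abs(d_old.get(a, 0.0) - d_new.get(a, 0.0)) <= epsilon
               for a in actions)

def choose_action(old_window, new_window, equilibrium_action, best_response,
                  epsilon=0.05):
    """AWESOME's basic idea in one step: adapt (best-respond to the
    empirical distribution) when the opponent appears stationary;
    otherwise retreat to the precomputed equilibrium action."""
    if appears_stationary(old_window, new_window, epsilon):
        return best_response(empirical_dist(new_window))
    return equilibrium_action
```

For example, in matching pennies, if an opponent's recent windows both show roughly 90% Heads, the test passes and the learner best-responds; if the opponent's frequencies shift sharply between windows, the learner plays its precomputed equilibrium action instead.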


References in zbMATH (referenced in 14 articles, 1 standard article)


  1. Bachrach, Yoram; Everett, Richard; Hughes, Edward; Lazaridou, Angeliki; Leibo, Joel Z.; Lanctot, Marc; Johanson, Michael; Czarnecki, Wojciech M.; Graepel, Thore: Negotiating team formation using deep reinforcement learning (2020)
  2. Parras, Juan; Zazo, Santiago: A distributed algorithm to obtain repeated games equilibria with discounting (2020)
  3. Albrecht, Stefano V.; Stone, Peter: Autonomous agents modelling other agents: a comprehensive survey and open problems (2018)
  4. Barrett, Samuel; Rosenfeld, Avi; Kraus, Sarit; Stone, Peter: Making friends on the fly: cooperating with new teammates (2017)
  5. Albrecht, Stefano V.; Crandall, Jacob W.; Ramamoorthy, Subramanian: Belief and truth in hypothesised behaviours (2016)
  6. Brandt, Felix; Fischer, Felix; Harrenstein, Paul: On the rate of convergence of fictitious play (2013)
  7. Crandall, Jacob W.; Goodrich, Michael A.: Learning to compete, coordinate, and cooperate in repeated games using reinforcement learning (2011)
  8. Brandt, Felix; Fischer, Felix; Harrenstein, Paul: On the rate of convergence of fictitious play (2010)
  9. Banerjee, Bikramjit; Peng, Jing: Generalized multiagent learning with performance bound (2007)
  10. Banerjee, Dipyaman; Sen, Sandip: Reaching Pareto-optimality in prisoner's dilemma using conditional joint action learning (2007)
  11. Conitzer, Vincent; Sandholm, Tuomas: AWESOME: a general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents (2007)
  12. Powers, Rob; Shoham, Yoav; Vu, Thuc: A general criterion and an algorithmic framework for learning in multi-agent systems (2007)
  13. Sandholm, Tuomas: Perspectives on multiagent learning (2007)
  14. Conitzer, Vincent; Sandholm, Tuomas: AWESOME: a general multiagent learning algorithm that converges in self-play and learns a best response against stationary opponents (2003)