Citing this article

A standard form of citation of this article is:

Airiau, Stéphane, Saha, Sabyasachi and Sen, Sandip (2007). 'Evolutionary Tournament-Based Comparison of Learning and Non-Learning Algorithms for Iterated Games'. Journal of Artificial Societies and Social Simulation 10(3)7 <http://jasss.soc.surrey.ac.uk/10/3/7.html>.

The following can be copied and pasted into a BibTeX bibliography file, for use with the LaTeX typesetting system:

@article{airiau2007,
title = {Evolutionary Tournament-Based Comparison of Learning and Non-Learning Algorithms for Iterated Games},
author = {Airiau, St\'{e}phane and Saha, Sabyasachi and Sen, Sandip},
journal = {Journal of Artificial Societies and Social Simulation},
ISSN = {1460-7425},
volume = {10},
number = {3},
pages = {7},
year = {2007},
URL = {http://jasss.soc.surrey.ac.uk/10/3/7.html},
keywords = {Repeated Games, Evolution, Simulation},
abstract = {Evolutionary tournaments have been used effectively as a tool for comparing game-playing algorithms. For instance, in the late 1970s, Axelrod organized tournaments to compare algorithms for playing the iterated prisoner's dilemma (PD) game. These tournaments capture the dynamics in a population of agents that periodically adopt relatively successful algorithms in the environment. While these tournaments have provided us with a better understanding of the relative merits of algorithms for iterated PD, our understanding is less clear about algorithms for playing iterated versions of arbitrary single-stage games in an environment of heterogeneous agents. While the Nash equilibrium solution concept has been used to recommend using Nash equilibrium strategies for rational players playing general-sum games, learning algorithms like fictitious play may be preferred for playing against sub-rational players. In this paper, we study the relative performance of learning and non-learning algorithms in an evolutionary tournament where agents periodically adopt relatively successful algorithms in the population. The tournament is played over a testbed composed of all possible structurally distinct 2$\times$2 conflicted games with ordinal payoffs: a baseline, neutral testbed for comparing algorithms. Before analyzing results from the evolutionary tournament, we discuss the testbed, our choice of representative learning and non-learning algorithms, and relative rankings of these algorithms in a round-robin competition. The results from the tournament highlight the advantage of learning algorithms over players using static equilibrium strategies for repeated plays of arbitrary single-stage games. These results are likely to be of more benefit than static analyses of equilibrium strategies when choosing decision procedures for open, adapting agent societies consisting of a variety of competitors.},
}
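
For example, assuming the entry above is saved in a file named refs.bib (the filename is an illustrative choice, not part of the citation record), a minimal LaTeX document citing it might look like:

```latex
\documentclass{article}
\begin{document}

% Cite the entry by its BibTeX key
As shown by \cite{airiau2007}, learning algorithms can outperform
static equilibrium strategies in evolutionary tournaments.

% Choose a bibliography style and point BibTeX at refs.bib
\bibliographystyle{plain}
\bibliography{refs}

\end{document}
```

Running latex, then bibtex, then latex twice more resolves the citation and produces the reference list.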

The following can be copied and pasted into a text file, which can then be imported into reference-management software that supports the RIS format, such as Reference Manager or EndNote.

TY - JOUR
TI - Evolutionary Tournament-Based Comparison of Learning and Non-Learning Algorithms for Iterated Games
AU - Airiau, Stéphane
AU - Saha, Sabyasachi
AU - Sen, Sandip
Y1 - 2007/06/30
JO - Journal of Artificial Societies and Social Simulation
SN - 1460-7425
VL - 10
IS - 3
SP - 7
UR - http://jasss.soc.surrey.ac.uk/10/3/7.html
KW - Repeated Games
KW - Evolution
KW - Simulation
N2 - Evolutionary tournaments have been used effectively as a tool for comparing game-playing algorithms. For instance, in the late 1970s, Axelrod organized tournaments to compare algorithms for playing the iterated prisoner's dilemma (PD) game. These tournaments capture the dynamics in a population of agents that periodically adopt relatively successful algorithms in the environment. While these tournaments have provided us with a better understanding of the relative merits of algorithms for iterated PD, our understanding is less clear about algorithms for playing iterated versions of arbitrary single-stage games in an environment of heterogeneous agents. While the Nash equilibrium solution concept has been used to recommend using Nash equilibrium strategies for rational players playing general-sum games, learning algorithms like fictitious play may be preferred for playing against sub-rational players. In this paper, we study the relative performance of learning and non-learning algorithms in an evolutionary tournament where agents periodically adopt relatively successful algorithms in the population. The tournament is played over a testbed composed of all possible structurally distinct 2×2 conflicted games with ordinal payoffs: a baseline, neutral testbed for comparing algorithms. Before analyzing results from the evolutionary tournament, we discuss the testbed, our choice of representative learning and non-learning algorithms, and relative rankings of these algorithms in a round-robin competition. The results from the tournament highlight the advantage of learning algorithms over players using static equilibrium strategies for repeated plays of arbitrary single-stage games. These results are likely to be of more benefit than static analyses of equilibrium strategies when choosing decision procedures for open, adapting agent societies consisting of a variety of competitors.
ER -