Agent-Based Evolutionary Dynamics in 1 population

Abed-1pop is a modeling framework designed to simulate the evolution of a population of agents who play a symmetric 2-player game and, from time to time, are given the opportunity to revise their strategy.

How to install Abed-1pop


To use Abed-1pop, you will have to install NetLogo 6.0.1 or higher (free and open source) and download this zip file. The zip file contains the model itself (abed-1pop.nlogo) and also this very webpage (which contains a description of the model). To run the model, you just have to open the file abed-1pop.nlogo with NetLogo. The figure below shows Abed-1pop's interface.

[Figure: Abed-1pop's interface]

Overview


We use bold green fonts to denote parameters (i.e. variables that can be set by the user) and brown fonts to denote different values that a parameter might take.

Abed-1pop is a modeling framework designed to simulate the evolution of a population of n-of-agents agents who recurrently play a symmetric 2-player game defined by a payoff-matrix. Agents use pure strategies only, and they occasionally revise their strategy.

The simulation runs in discrete time-steps called ticks. Within each tick:

  1. The set of revising agents is chosen
  2. The game is played by the revising agents and all the agents involved in their revision
  3. The revising agents simultaneously update their strategies

This sequence of events, which is explained below in detail, is repeated iteratively.


Selection of revising agents

All agents are equally likely to revise their strategy in every tick. There are two ways in which the set of revising agents can be chosen:

  • By setting parameter prob-revision, which is the probability that each individual agent revises his strategy in every tick.
  • By setting parameter n-of-revisions-per-tick, which is the number of randomly selected agents that will revise their strategy in every tick.
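
Both modes can be illustrated with a minimal sketch (plain Python, not Abed-1pop's actual NetLogo code; the argument names merely mirror the parameters above):

```python
import random

def select_revisers(agents, use_prob_revision,
                    prob_revision=0.1, n_of_revisions_per_tick=1):
    """Return the agents that revise their strategy this tick."""
    if use_prob_revision:
        # Each agent revises independently with probability prob-revision.
        return [a for a in agents if random.random() < prob_revision]
    # Otherwise, exactly n-of-revisions-per-tick agents are drawn at random.
    return random.sample(agents, n_of_revisions_per_tick)
```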

Computation of payoffs

The payoff that each agent obtains in each tick is computed as the average payoff over n-of-trials matches (in which the agent uses his own pure strategy and his randomly chosen counterparts use theirs).

Note that two agents using the same strategy may get different payoffs because they may encounter different counterparts in their trials.

Specifying how payoffs are computed

There are parameters to specify:

  • the number of matches over which payoffs are computed ( n-of-trials ),
  • whether these are conducted with or without replacement ( trials-with-replacement? ), and
  • whether agents may play with themselves or not ( self-matching? ).
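
As an illustration, here is a minimal sketch of this payoff computation (plain Python rather than Abed-1pop's actual NetLogo code; the data structures are assumptions made for the example):

```python
import random

def average_payoff(agent_id, strategy, population, payoff_matrix,
                   n_of_trials, trials_with_replacement, self_matching):
    """Average payoff of an agent playing `strategy` over n-of-trials matches.

    `population` is a list of (agent_id, strategy) pairs and
    payoff_matrix[i][j] is the payoff of strategy i against strategy j."""
    # Pool of possible counterparts; the agent itself is excluded if self-matching? is Off.
    pool = [s for (a, s) in population if self_matching or a != agent_id]
    if trials_with_replacement:
        counterparts = [random.choice(pool) for _ in range(n_of_trials)]
    else:
        counterparts = random.sample(pool, n_of_trials)
    return sum(payoff_matrix[strategy][c] for c in counterparts) / n_of_trials
```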

Strategy revision

Agents are occasionally given the opportunity to revise their strategy. The specific protocol used by revising agents is determined by two parameters:

  • candidate-selection, which indicates whether the protocol is imitative or direct (PGED, p. 9), and
  • decision-method, which can be best, logit, positive-proportional or pairwise-diff-proportional.

We briefly explain the meaning of each of these terms below.

Imitative Protocols

Under imitative protocols, agents observe what other agents are doing and their payoff, and then consider imitating one of them. How many agents the revising agent observes is determined by parameter n-to-consider-imitating. The specific selection of potential imitatees is conducted randomly, so popular strategies in the population are more likely to be observed than less popular ones. After having observed some agents, the revising agent will use one of various possible decision-methods (see below) to decide whether to keep on using his strategy or, alternatively, to imitate one of the agents in the set of potential imitatees.

Imitative protocols: Specifying how to compile the set of potential imitatees

There are parameters to specify:

  • the number of agents that revising agents will consider imitating ( n-to-consider-imitating ) ,
  • whether these are selected with or without replacement ( imitatees-with-replacement? ), and
  • whether agents may imitate themselves or not ( consider-imitating-self? ).
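
A minimal sketch of how this set of potential imitatees might be drawn (plain Python, not the model's NetLogo code; the reviser is simply removed from the pool when consider-imitating-self? is Off):

```python
import random

def potential_imitatees(reviser, population, n_to_consider_imitating,
                        imitatees_with_replacement, consider_imitating_self):
    """Agents a reviser considers imitating under an imitative protocol."""
    pool = [a for a in population if consider_imitating_self or a != reviser]
    if imitatees_with_replacement:
        return [random.choice(pool) for _ in range(n_to_consider_imitating)]
    return random.sample(pool, n_to_consider_imitating)
```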

Direct Protocols

In contrast, under direct protocols, revising agents choose candidate strategies directly, so a strategy's popularity does not directly influence the probability with which it is considered. Specifically, the revising agent considers a set of n-in-test-set strategies (including his own), tests them, and selects one of them according to one of various possible decision-methods (see below).

Direct protocols: Specifying how to test strategies

There are parameters to specify:

  • the number of strategies that revising agents consider, including their current strategy ( n-in-test-set ),
  • whether the strategies in the test-set are compared according to a payoff computed using one single sample of agents (i.e. the same set of agents for every strategy), or whether the payoff of each strategy in the test-set is computed using a freshly drawn random sample of agents ( single-sample? ).
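
The compilation of the test set can be sketched as follows (plain Python, with strategies represented as indices; this is an illustration, not Abed-1pop's actual code):

```python
import random

def test_set(own_strategy, n_of_strategies, n_in_test_set):
    """Strategies a reviser tests under a direct protocol: his own strategy
    plus (n-in-test-set - 1) other strategies chosen at random."""
    others = [s for s in range(n_of_strategies) if s != own_strategy]
    return [own_strategy] + random.sample(others, n_in_test_set - 1)
```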

Decision methods

A decision-method is a procedure that singles out:

  • one particular agent out of a set of candidates, whose strategy will be adopted by the revising agent (in imitative protocols), or
  • one particular strategy out of a set of candidate strategies, which will be adopted by the revising agent (in direct protocols).

The selection is based on the (average) payoff obtained by each of the candidates (agents or strategies) in the input set. In imitative protocols the set of candidates is formed by the set of n-to-consider-imitating potential imitatees previously compiled plus the revising agent himself. In direct protocols the set of candidates is the set of n-in-test-set strategies previously compiled, which always includes the revising agent's strategy. Abed-1pop implements four different decision-methods:

  • best: The candidate with the highest payoff is selected. Ties are resolved according to the procedure indicated with parameter tie-breaker.
  • logit: A random weighted choice is conducted among the candidates. The weight for each candidate is e^(payoff / eta).
  • positive-proportional: A candidate is randomly selected with probabilities proportional to payoffs. Thus, payoffs should be non-negative if this decision-method is used.
  • pairwise-diff-proportional: In this method the set of candidates contains two elements only: the one corresponding to the revising agent's own strategy (or to himself in imitative protocols) and another one. Then, the revising agent adopts the other candidate's strategy only if the other candidate's payoff is higher than his own, and he does so with probability proportional to the payoff difference.
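
The four decision-methods can be summarized in a single sketch (plain Python, not Abed-1pop's NetLogo code; only a uniform tie-breaker is shown for best, and max_payoff_difference stands for the maximum possible payoff difference in the game, used to normalize pairwise-diff-proportional):

```python
import math
import random

def choose(candidates, payoffs, decision_method,
           eta=0.1, own_index=0, max_payoff_difference=1.0):
    """Pick one candidate (an agent in imitative protocols, a strategy in
    direct protocols) based on the average payoff of each candidate."""
    if decision_method == "best":
        best = max(payoffs)
        ties = [i for i, p in enumerate(payoffs) if p == best]
        return candidates[random.choice(ties)]
    if decision_method == "logit":
        weights = [math.exp(p / eta) for p in payoffs]
        return random.choices(candidates, weights=weights)[0]
    if decision_method == "positive-proportional":
        # Payoffs must be non-negative for this method.
        return random.choices(candidates, weights=payoffs)[0]
    if decision_method == "pairwise-diff-proportional":
        # Exactly two candidates: the reviser's own (own_index) and one other.
        other = 1 - own_index
        diff = payoffs[other] - payoffs[own_index]
        if diff > 0 and random.random() < diff / max_payoff_difference:
            return candidates[other]
        return candidates[own_index]
    raise ValueError("unknown decision-method: " + decision_method)
```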

Parameters


Parameters to set up the population and its initial strategy distribution

[Figure: population setup parameters]
  • If random-initial-condition? is On, the population will consist of n-of-agents agents, each of them with a random initial strategy.
  • If random-initial-condition? is Off, the population will be created from the list given as initial-condition. Let this list be [x1 x2 ... xn]; then the initial population will consist of x1 agents playing strategy 1, x2 agents playing strategy 2, ... , and xn agents playing strategy n. The value of n-of-agents is then set to the total number of agents in the population.
The value of n-of-agents can be changed at runtime with immediate effect on the model. If n-of-agents is reduced, the necessary number of (randomly selected) agents are killed. If n-of-agents is increased, the necessary number of (randomly selected) agents are cloned.
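
For illustration, expanding the initial-condition list into a population could look like this (a Python sketch under the naming conventions above, not the model's actual code):

```python
def initial_population(initial_condition):
    """Expand an initial-condition list such as [20 10 70] into one strategy
    per agent: 20 agents playing strategy 1, 10 playing strategy 2, and
    70 playing strategy 3."""
    strategies = []
    for strategy, count in enumerate(initial_condition, start=1):
        strategies.extend([strategy] * count)
    return strategies   # its length becomes the value of n-of-agents

# Example: len(initial_population([20, 10, 70])) == 100
```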

Parameters that determine when agents revise their strategy

[Figure: revision parameters]
  • If use-prob-revision? is On, each individual agent revises his strategy with probability prob-revision in every tick.
  • If use-prob-revision? is Off, then n-of-revisions-per-tick agents are randomly selected to revise their strategy in every tick.

The value of these three parameters can be changed at runtime with immediate effect on the model.

Parameters that determine how payoffs are calculated

This flowchart is a summary of all the parameters that determine how payoffs are calculated.

The idea is to start at the top of the flowchart, give a value to every parameter you encounter on your way and, at the decision nodes (i.e. the red diamonds), select the exit arrow whose label matches the value you have just chosen. Note that, depending on your choices, it may not be necessary to set the value of every parameter.

Every parameter is explained below.

[Figure: flowchart of the parameters that determine how payoffs are calculated]
payoff-matrix

Payoff matrix for the symmetric game. Entry Aij in the matrix denotes the payoff obtained by the row player when he chooses strategy i and his counterpart uses strategy j. The number of strategies is assumed to be the number of rows (and columns) of this square matrix.

self-matching
  • If self-matching? is On, agents can be matched with themselves.
  • If self-matching? is Off, agents cannot be matched with themselves.
complete-matching

If complete-matching? is On, payoffs are calculated assuming that everyone plays with:

  • Everyone, including themselves, if self-matching? is On.
  • Everyone else, if self-matching? is Off.

If complete-matching? is Off, then parameters n-of-trials, trials-with-replacement? and single-sample? become relevant.
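
Under complete matching, an agent's payoff is simply a population-weighted average of his strategy's payoff against every strategy present. A minimal sketch (plain Python, assuming strategy_counts[j] holds the number of agents currently using strategy j; not the model's actual code):

```python
def complete_matching_payoff(own_strategy, strategy_counts, payoff_matrix,
                             self_matching):
    """Payoff of an agent under complete matching."""
    counts = list(strategy_counts)
    if not self_matching:
        counts[own_strategy] -= 1        # the agent does not play against himself
    total = sum(counts)
    return sum(counts[j] * payoff_matrix[own_strategy][j]
               for j in range(len(counts))) / total
```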

n-of-trials

Parameter n-of-trials indicates the number of matches over which payoffs are computed. The user can set the value of this parameter only if complete-matching? is Off.

trials-with-replacement

Parameter trials-with-replacement? indicates whether the sample of counterparts used to compute payoffs is taken with replacement ( On ) or without replacement ( Off ). The user can set the value of this parameter only if complete-matching? is Off.

We assume clever payoff evaluation (see PGED, section 11.4.2, pp. 419-421), i.e. when a revising agent tests a strategy s different from his current strategy, he assumes that he switches to this strategy s (so the population state changes), and then he computes the payoff he would get in this new state (i.e. he computes the actual payoff he would get if only he changed his strategy).
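
The following sketch illustrates clever payoff evaluation in the simplest setting, complete matching (plain Python, same conventions as the sketch above; an illustration of the idea, not the model's code):

```python
def clever_payoff(reviser_strategy, tested_strategy, strategy_counts,
                  payoff_matrix, self_matching):
    """Payoff the reviser would get if only he switched to tested_strategy."""
    counts = list(strategy_counts)
    counts[reviser_strategy] -= 1   # the reviser abandons his current strategy...
    counts[tested_strategy] += 1    # ...and is counted as using the tested one
    if not self_matching:
        counts[tested_strategy] -= 1   # he does not play against himself
    total = sum(counts)
    return sum(counts[j] * payoff_matrix[tested_strategy][j]
               for j in range(len(counts))) / total
```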

Parameters that determine how agents revise their strategy

With probability prob-mutation the revising agent will select a random strategy. With probability (1 - prob-mutation) he will revise his strategy following an algorithm which is mainly determined by the parameters shown in the flowchart below.

The idea is to start at the top of the flowchart, give a value to every parameter you encounter on your way and, at the decision nodes (i.e. the red diamonds), select the exit arrow whose label matches the value you have just chosen. Every parameter is explained below.

[Figure: flowchart of the parameters that determine how agents revise their strategy]

Parameter candidate-selection indicates whether the protocol is imitative or direct (PGED, p. 9). Revising agents compile a set of candidate agents they may imitate (in imitative protocols) or a set of candidate strategies they will consider adopting (in direct protocols). The following describes how the set of candidates is formed.

The set of candidate agents always contains the revising agent. The following diagram shows the different parameters that determine how the set of candidate agents is compiled. We use Sample(n, p, R) to denote a sample of n = n-to-consider-imitating elements from population p with replacement. Similarly, Sample(n, p, NR) denotes the same sampling procedure, but without replacement. Finally, let P denote the whole population of agents.

The white boxes in the figure below show the sets of candidates for each parameter setting.

[Figure: sets of candidate agents under imitative protocols]

In direct protocols, the set of candidate strategies contains the revising agent's own strategy plus ( n-in-test-set - 1 ) other strategies, selected at random.

single-sample

Parameter single-sample? is only relevant in direct protocols (i.e. if candidate-selection is direct). It indicates whether strategies being compared are tested against one single sample of agents (i.e. the same set of agents for every strategy), or whether the payoff of each strategy in the test-set is computed using a freshly drawn random sample of agents.

The user can set the value of this parameter only if complete-matching? is Off.

Note that when an agent tests a strategy, he assumes he adopts it, so if the agent himself is part of the sample, he will be playing with himself using the strategy under test.

Parameter decision-method determines which strategy the revising agent will adopt.

Specifically, the chosen decision-method (i.e. best, logit, positive-proportional or pairwise-diff-proportional) selects one particular instance from the set of candidates previously compiled, based on the payoff obtained by each of the candidates. The revising agent will then adopt the strategy of the selected candidate.

Under best, the candidate with the highest payoff is selected. Ties are resolved using the selected tie-breaker. Possible values of tie-breaker are:

  • stick-uniform: If the current strategy of the revising agent is in the tie, the revising agent sticks to it. Otherwise, a random choice is made using the uniform distribution.
  • stick-min: If the current strategy of the revising agent is in the tie, the revising agent sticks to it. Otherwise, the strategy with the minimum number is selected.
  • uniform: A random choice is made using the uniform distribution.
  • min: The strategy with the minimum number is selected.
  • random-walk: This tie-breaker makes use of an auxiliary random walk that is running in the background. Let N be the number of agents in Abed-1pop and let n be the number of strategies. In the auxiliary random walk, there are N rw-agents, plus a set of n so-called committed rw-agents, one for each of the n strategies in Abed-1pop. The committed rw-agents never change strategy. In each iteration of this random walk, one uncommitted rw-agent is selected at random to imitate another (committed or uncommitted) rw-agent, also chosen at random. We run floor(N × random-walk-speed) iterations of this process per tick. When there is a tie in Abed-1pop, the relative frequency of each strategy in the auxiliary random walk is used as a weight to make a random weighted selection among the tied candidates.

The random-walk tie-breaker is inspired by section 11.4.3 in PGED (pp. 421-423).
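
A sketch of the auxiliary random walk and of how ties are broken with it (plain Python, not the model's code; as a simplification, the imitating rw-agent may draw itself from the pool of rw-agents to imitate):

```python
import math
import random

def run_auxiliary_random_walk(rw_strategies, n_of_strategies, random_walk_speed):
    """One tick of the background random walk.  rw_strategies holds the strategy
    of each uncommitted rw-agent; the committed rw-agents (one per strategy,
    never changing) are represented by range(n_of_strategies)."""
    N = len(rw_strategies)
    for _ in range(math.floor(N * random_walk_speed)):
        imitator = random.randrange(N)                       # an uncommitted rw-agent
        pool = rw_strategies + list(range(n_of_strategies))  # uncommitted + committed
        rw_strategies[imitator] = random.choice(pool)
    return rw_strategies

def break_tie(tied_strategies, rw_strategies, n_of_strategies):
    """Weighted choice among tied strategies, using each strategy's relative
    frequency in the auxiliary random walk as its weight."""
    all_rw = rw_strategies + list(range(n_of_strategies))
    weights = [all_rw.count(s) for s in tied_strategies]
    return random.choices(tied_strategies, weights=weights)[0]
```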

Under logit, a random weighted choice is conducted among the candidates. The weight for each candidate is e^(payoff / eta). Recall that the payoff for each candidate is the average payoff over the n-of-trials matches.

Parameter eta is called the noise level. If eta is large, choices under logit are nearly uniform. If eta is close to zero, choices resemble those made by best, at least when the difference between the best and the second-best payoff is not too small (PGED, p. 128 & pp. 191-6).

Under positive-proportional, a candidate is randomly selected with probabilities proportional to payoffs. Because of this, payoffs should not be negative when using positive-proportional. Recall that the payoff for each candidate is the average payoff over the n-of-trials matches.

In pairwise-diff-proportional, the agent always considers exactly two candidates (one corresponding to his own strategy or to himself, and another one), i.e.:

  • If candidate-selection = imitative, the value of n-to-consider-imitating is automatically set to 1.
  • If candidate-selection = direct, the value of n-in-test-set is automatically set to 2.

Let candidate denote the other agent or strategy. Then, the revising agent will consider adopting the candidate's strategy only if the candidate's payoff is higher than his own, and he will actually adopt it with probability proportional to the payoff difference. In order to turn the difference in payoffs into a meaningful probability, we divide the payoff difference by the maximum possible payoff difference that could occur in the game.
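
As an illustration, the adoption probability could be computed as follows (a plain Python sketch; it assumes the maximum possible payoff difference is the gap between the largest and smallest entries of payoff-matrix):

```python
def adoption_probability(candidate_payoff, own_payoff, payoff_matrix):
    """Probability of adopting the other candidate's strategy under
    pairwise-diff-proportional."""
    entries = [x for row in payoff_matrix for x in row]
    max_diff = max(entries) - min(entries)
    return max(0.0, candidate_payoff - own_payoff) / max_diff
```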

If complete-matching? is On, the mean dynamics for large populations of the imitative pairwise-diff-proportional protocol (sometimes called pairwise proportional imitation) is the replicator dynamics up to a speed factor (see PGED, pp. 126-7).

How to save and load parameter files

Clicking on the corresponding button brings up a window that allows the user to save all parameter values in a .csv file.

Clicking on the corresponding button brings up a window that allows the user to load a parameter file in the same format as it is saved by Abed-1pop.

How to use it



Running the model

Once the model is parameterized (see Parameters section), click on the setup button to initialize everything. Then, click on the go button to run the model indefinitely (until you click on it again). A third button runs the model one tick only.

The value of all parameters can be changed at runtime with immediate effect on the model

Plots and monitors

All plots in Abed-1pop show units of time on the horizontal axis. These are called seconds and milliseconds for simplicity, but they could be given any other name. One second is defined as the number of ticks over which the expected number of revisions equals the total number of agents. This number of ticks per second is shown in a monitor. The other monitor in Abed-1pop shows the number of ticks that have been executed. Plots are updated every plot-every-?-secs seconds.
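
In other words, one second corresponds to the ticks in which, on average, every agent has had one revision opportunity. A minimal Python sketch of this definition, covering the two revision modes described above:

```python
def ticks_per_second(n_of_agents, use_prob_revision,
                     prob_revision=0.0, n_of_revisions_per_tick=0):
    """Number of ticks that make up one second of plotted time: the number of
    ticks over which the expected number of revisions equals n-of-agents."""
    expected_revisions_per_tick = (n_of_agents * prob_revision
                                   if use_prob_revision
                                   else n_of_revisions_per_tick)
    return n_of_agents / expected_revisions_per_tick
```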

There are two plots to show the strategy distribution in the population of agents as ticks go by, like the one below:

[Figure: strategy distribution plot]

and two other plots to show the expected payoff of each strategy as ticks go by, like the one below:

[Figure: expected payoff plot]

Each of the two types of plots described above comes in two different flavors: one that shows the data from the beginning of the simulation (labeled complete history and active if show-complete-history? is On) and one that shows only the duration-of-recent last seconds in the simulation (labeled recent history and active if show-recent-history? is On).

License


Abed-1pop (Agent-Based Evolutionary Dynamics in 1 population) is a modeling framework designed to simulate the evolution of a population of agents who play a symmetric 2-player game and, from time to time, are given the opportunity to revise their strategy.

Copyright (C) 2017 Luis R. Izquierdo, Segismundo S. Izquierdo & Bill Sandholm

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You can download a copy of the GNU General Public License from http://www.gnu.org/licenses/; you can also get a printed copy by writing to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

Contact information:
Luis R. Izquierdo
University of Burgos, Spain.
e-mail: lrizquierdo@ubu.es

Extending the model


You are most welcome to extend the model as you please, either privately or by creating a branch in our GitHub repository. If you'd like to see any specific feature implemented but you don't know how to go about it, do feel free to email us and we'll do our best to make it happen.

References


Sandholm, W. H. (2010). Population Games and Evolutionary Dynamics. MIT Press, Cambridge.


© Luis R. Izquierdo, Segismundo S. Izquierdo & Bill Sandholm 2017