The Strategies

Nine strategies compete in the tournament. Each follows a simple rule for when to cooperate and when to defect.

The Petri Dish

Watch strategies compete and evolve in real time. Inject defectors. Adjust the speed. See what survives.


Observations

    I. The Dilemma

    Two players face a choice: cooperate or defect. If both cooperate, both win. If one defects while the other cooperates, the defector takes everything. If both defect, both lose.

    This is the Prisoner's Dilemma — the most studied game in all of game theory. The rational choice is to defect. But if both are rational, both lose.
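    The payoff structure can be pinned down in a few lines. The numbers below are the conventional Axelrod values (T=5, R=3, P=1, S=0); the text above does not fix them, so treat them as illustrative.

```python
# One round of the Prisoner's Dilemma. Payoffs are the conventional
# Axelrod values (T=5, R=3, P=1, S=0) -- illustrative, not fixed by the text.
PAYOFFS = {
    ("C", "C"): (3, 3),  # both cooperate: both win the reward R
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),  # the defector takes everything
    ("D", "D"): (1, 1),  # both defect: both get the punishment P
}

def play_round(a, b):
    """Return (payoff_a, payoff_b) for one simultaneous choice."""
    return PAYOFFS[(a, b)]
```

    Note the ordering T > R > P > S, and 2R > T + S: mutual cooperation beats taking turns exploiting each other.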

    This single match — two players, one decision each, a payoff — is the 1v1 game. It's stateless: no memory, no history, just a simultaneous choice. But the game changes when it's iterated. Play the same opponent again and again, and suddenly reputation, retaliation, and forgiveness all come into play.
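    Iteration changes the type of a strategy: instead of a one-shot choice, it becomes a function of the opponent's history. A minimal sketch, assuming the conventional payoff values:

```python
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opp_history):
    return "C"

def always_defect(opp_history):
    return "D"

def iterated_match(strat_a, strat_b, rounds):
    """Play repeated rounds; each strategy sees the opponent's full history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b
```

    With history in hand, a strategy can retaliate, build a reputation, or forgive; the memoryless strategies above cannot.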

    II. The Tournament

    In 1980, Robert Axelrod invited game theorists to submit strategies for a computer tournament. Each strategy would play every other strategy in a round-robin of iterated prisoner's dilemma matches.

    This is a different system from the 1v1 game. The tournament wraps many 1v1 matches into a population-level competition — it adds state (scores, rankings) and structure (round-robin pairing) on top of the single match.

    The simplest strategy won: Tit for Tat. Cooperate first, then copy whatever the opponent did last time. Nice, retaliatory, forgiving, clear.
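    Tit for Tat is two lines of code, and a round-robin over iterated matches is not much more. A sketch under the same illustrative payoffs (Axelrod's actual tournament had more entries and also paired each strategy with itself, which this omits):

```python
from itertools import combinations

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history):
    # cooperate first, then copy whatever the opponent did last time
    return opp_history[-1] if opp_history else "C"

def match(strat_a, strat_b, rounds):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
    return score_a, score_b

def round_robin(strategies, rounds=200):
    """Every strategy plays every other; total scores decide the ranking."""
    scores = dict.fromkeys(strategies, 0)
    for (na, sa), (nb, sb) in combinations(strategies.items(), 2):
        pa, pb = match(sa, sb, rounds)
        scores[na] += pa
        scores[nb] += pb
    return scores
```

    Tit for Tat never beats any single opponent head-to-head; it wins tournaments by scoring well against everyone.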

    III. Evolution

    Axelrod then asked: what if strategies could reproduce? This adds a second layer to the tournament: evolutionary selection. After each round-robin, successful strategies grow in population. Failed strategies die out. The tournament becomes a dynamical system with state that changes over time.

    Watch the population evolve. Defectors thrive early by exploiting cooperators. But as cooperators die out, defectors have no one left to exploit. Reciprocal strategies like Tit for Tat inherit the earth.
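    The selection step can be sketched as discrete replicator dynamics. The per-round payoffs below are illustrative averages for long (200-round) iterated matches, where a lone first-round exploitation mostly washes out; they are assumptions, not numbers from the text.

```python
# Average per-round payoff of the row strategy against the column strategy,
# approximated for long (200-round) iterated matches. Illustrative values.
AVG_PAYOFF = {
    ("TfT",  "TfT"):  3.0,  ("TfT",  "AllD"): 0.995, ("TfT",  "AllC"): 3.0,
    ("AllD", "TfT"):  1.02, ("AllD", "AllD"): 1.0,   ("AllD", "AllC"): 5.0,
    ("AllC", "TfT"):  3.0,  ("AllC", "AllD"): 0.0,   ("AllC", "AllC"): 3.0,
}

def step(pop):
    """One generation: each strategy's share grows in proportion to fitness."""
    fitness = {s: sum(pop[o] * AVG_PAYOFF[(s, o)] for o in pop) for s in pop}
    mean = sum(pop[s] * fitness[s] for s in pop)
    return {s: pop[s] * fitness[s] / mean for s in pop}

pop = {"TfT": 0.1, "AllD": 0.3, "AllC": 0.6}
for _ in range(300):
    pop = step(pop)
# Defectors surge while unconditional cooperators last, then starve;
# Tit for Tat ends up dominant.
```

    The crossover happens because AllD's fitness depends on AllC's share: remove the prey and the predator's advantage disappears.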

    IV. Noise & The Shadow

    The real world is noisy. Signals get corrupted. Actions get misinterpreted. What happens to Tit for Tat when noise enters?

    A single accidental defection between two Tit-for-Tat players locks them into an endless echo of retaliation. Enter Generous Tit for Tat: identical to TfT, except that it occasionally forgives a defection. In noisy environments, forgiveness wins.
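    A sketch of the effect, with a hypothetical 5% transmission-error rate and a 10% forgiveness rate (neither number comes from the text):

```python
import random

PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp_history, rng):
    return opp_history[-1] if opp_history else "C"

def generous_tft(opp_history, rng, forgiveness=0.1):
    # like Tit for Tat, but forgive a defection 10% of the time
    if opp_history and opp_history[-1] == "D" and rng.random() >= forgiveness:
        return "D"
    return "C"

def noisy_match(strat_a, strat_b, rounds=2000, noise=0.05, seed=1):
    """Return the combined score of both players under noisy transmission."""
    rng = random.Random(seed)
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        a = strat_a(hist_b, rng)
        b = strat_b(hist_a, rng)
        # with probability `noise`, an intended move is transmitted wrongly
        if rng.random() < noise:
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        hist_a.append(a)
        hist_b.append(b)
        pa, pb = PAYOFFS[(a, b)]
        total += pa + pb
    return total
```

    Two strict TfT players spend long stretches echoing each other's mistakes; the generous pair keeps breaking the cycle and earns more in total.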

    But cooperation also requires a future. If you know this is the last round, there is no retaliation left to fear, so defect; by backward induction the same logic unravels every earlier round of a known-length match. Extend the shadow — make the game indefinite — and cooperation becomes rational.

    The longer the match, the more cooperation pays. Short matches favour exploitation. Long matches favour reciprocity. Together, noise and the shadow of the future shape the evolutionary landscape.
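    The shadow of the future can be made precise with a continuation probability δ, the chance the match lasts another round. Standard folk-theorem arithmetic, using the same illustrative payoff values: cooperating forever is worth R / (1 − δ), while a one-time defection against a permanently retaliating opponent yields T now and P thereafter, so cooperation is stable exactly when δ ≥ (T − R) / (T − P).

```python
# Conventional payoff values (not fixed by the text): T=5, R=3, P=1, S=0.
# Cooperate forever:      R / (1 - delta)
# Defect once (vs. permanent retaliation): T + delta * P / (1 - delta)
# Stability condition:    delta >= (T - R) / (T - P)
T, R, P, S = 5, 3, 1, 0

def cooperation_stable(delta):
    """Is cooperation rational given continuation probability `delta`?"""
    return delta >= (T - R) / (T - P)
```

    With these payoffs the threshold is 0.5: a coin-flip chance of another round is already enough of a shadow.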

    V. The Simulation Matrix

    What happens when we sweep the parameter space? Each cell below is a full evolutionary simulation under different conditions — varying noise (miscommunication rate) and rounds per match (the shadow of the future). Press Play to watch populations shift.

    VI. Under the Hood

    Everything above is built on two formal specifications — one for the 1v1 game and one for the tournament that wraps it.

    The 1v1 Game (OGS Pattern)

    The single match is specified as an OGS pattern — a compositional game structure. Alice decides, Bob decides, the environment computes payoffs. It's stateless: h = g, pure policy with no mechanism or state update.
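    Read concretely (names are illustrative, not the spec's actual identifiers), the stateless claim is just that the whole system map is one pure function:

```python
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def g(alice, bob):
    """Pure policy: two simultaneous decisions in, one payoff pair out."""
    return PAYOFFS[(alice, bob)]

h = g  # stateless: no mechanism f, no state to update, so h = g
```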

    The Tournament (GDSSpec)

    The tournament wraps the 1v1 game into a stateful dynamical system. It adds population state, round-robin pairing, evolutionary selection, and tunable parameters (noise, rounds per match). Its canonical form is hθ = fθ ∘ gθ — policy and mechanism, with parameters.
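    A minimal sketch of the hθ = fθ ∘ gθ shape (the state representation, names, and per-round payoffs are illustrative assumptions): the policy gθ reads the population and scores the abstracted round-robin; the mechanism fθ applies selection to produce the next state.

```python
# Illustrative average per-round payoffs (row vs. column) for two strategies.
AVG_PAYOFF = {("TfT", "TfT"): 3.0, ("TfT", "AllD"): 1.0,
              ("AllD", "TfT"): 1.0, ("AllD", "AllD"): 1.0}

def g_theta(pop):
    """Policy: abstracted round-robin -- fitness of each strategy vs. the mix."""
    return {s: sum(pop[o] * AVG_PAYOFF[(s, o)] for o in pop) for s in pop}

def f_theta(pop, fitness):
    """Mechanism: selection -- each share grows in proportion to its fitness."""
    mean = sum(pop[s] * fitness[s] for s in pop)
    return {s: pop[s] * fitness[s] / mean for s in pop}

def h_theta(pop):
    # the canonical form: state update = mechanism after policy
    return f_theta(pop, g_theta(pop))
```

    The tunable parameters (noise, rounds per match) would enter through the payoff table, which is why they are the θ in both fθ and gθ: changing them changes the fitness landscape and therefore the dynamics.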