
Interoperability: From Specification to Computation

GDS specifications are not just documentation — they are structured representations that project cleanly onto domain-specific computation. This guide demonstrates two concrete projections: Nash equilibrium computation (game theory) and iterated tournament simulation (evolutionary dynamics), both built on the same OGS game structure without modifying the framework.


The Thesis

A GDS specification captures the structural skeleton of a system: blocks, roles, interfaces, composition, and wiring. By design, the framework provides no execution semantics — blocks describe what a system is, not how it runs.

This is a feature, not a limitation. It means the same specification can be consumed by multiple independent tools:

                            ┌─ Nash equilibrium solver (Nashpy)
OGS Pattern ─┬─ PatternIR ──┤
             │              └─ Mermaid visualization (ogs.viz)
             └─ get_payoff() ─┬─ Iterated match simulator
                              ├─ Round-robin tournament
                              └─ Evolutionary dynamics engine

The specification is the single source of truth. The computations are thin projections that extract what they need and add domain-specific logic on top.


Case Study: Prisoner's Dilemma

The Prisoner's Dilemma is formalized as an OGS composition:

(Alice Decision | Bob Decision) >> Payoff Computation
    .feedback([payoff -> decisions])

This encodes:

  • Two players as DecisionGame blocks with (X, Y, R, S) port signatures
  • Payoff computation as a CovariantFunction that maps action pairs to payoffs
  • Feedback carrying payoffs back to decision nodes for iterated play
  • Terminal conditions declaring all four action profiles and their payoffs
  • Action spaces enumerating each player's available moves

The specification exists in two concrete instantiations:

| Variant | Payoffs (R, T, S, P) | Purpose |
|---|---|---|
| prisoners_dilemma_nash | (3, 5, 0, 1) | Standard PD — Nash equilibrium analysis |
| evolution_of_trust | (2, 3, -1, 0) | Nicky Case variant — iterated simulation |

Both share the identical OGS composition tree. Only the payoff parameters differ.
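
Because only the parameters differ, the relationship between the two variants can be stated in a few lines. A hedged sketch (the names below are illustrative, not the model's actual constants) that also checks the standard PD ordering both variants satisfy:

```python
# Illustrative sketch (hypothetical names, not the model's constants):
# the two instantiations differ only in their (R, T, S, P) parameters.
PD_VARIANTS = {
    "prisoners_dilemma_nash": {"R": 3, "T": 5, "S": 0, "P": 1},
    "evolution_of_trust": {"R": 2, "T": 3, "S": -1, "P": 0},
}

def is_prisoners_dilemma(p):
    """Standard PD ordering: T > R > P > S, and 2R > T + S so that
    alternating exploitation cannot beat steady mutual cooperation."""
    return p["T"] > p["R"] > p["P"] > p["S"] and 2 * p["R"] > p["T"] + p["S"]

for name, params in PD_VARIANTS.items():
    assert is_prisoners_dilemma(params), name
```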


Projection 1: Nash Equilibrium Computation

Source: packages/gds-examples/games/prisoners_dilemma_nash/model.py

The Nash equilibrium solver extracts payoff matrices from PatternIR metadata and delegates to Nashpy for equilibrium computation.

What the specification provides

The OGS Pattern declares action_spaces and terminal_conditions as structured metadata:

action_spaces=[
    ActionSpace(game="Alice Decision", actions=["Cooperate", "Defect"]),
    ActionSpace(game="Bob Decision", actions=["Cooperate", "Defect"]),
]
terminal_conditions=[
    TerminalCondition(
        name="Mutual Cooperation",
        actions={"Alice Decision": "Cooperate", "Bob Decision": "Cooperate"},
        payoff_description="R=3 each",
    ),
    # ... 3 more conditions
]

What the projection adds

A thin extraction layer (build_payoff_matrices) parses terminal conditions into numpy arrays, then Nashpy computes equilibria via support enumeration:

def build_payoff_matrices(ir: PatternIR):
    # Extract from PatternIR metadata → numpy arrays
    ...

def compute_nash_equilibria(ir: PatternIR):
    alice_payoffs, bob_payoffs = build_payoff_matrices(ir)
    game = nashpy.Game(alice_payoffs, bob_payoffs)
    return list(game.support_enumeration())
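
The same result can be cross-checked without Nashpy. A minimal, dependency-free sketch (not the project's code) that finds pure-strategy equilibria of the standard PD by best-response enumeration:

```python
# Independent cross-check (illustrative, not the project's code): find
# pure-strategy Nash equilibria of a 2x2 bimatrix game by checking
# mutual best responses. Index 0 = Cooperate, 1 = Defect.
R, T, S, P = 3, 5, 0, 1       # standard PD parameters
alice = [[R, S], [T, P]]      # Alice's payoff, indexed [alice][bob]
bob = [[R, T], [S, P]]        # Bob's payoff, indexed [alice][bob]

def pure_nash(A, B):
    """(i, j) is a pure NE if i best-responds to j and j to i."""
    return [
        (i, j)
        for i in range(2)
        for j in range(2)
        if A[i][j] == max(A[k][j] for k in range(2))    # Alice can't improve
        and B[i][j] == max(B[i][k] for k in range(2))   # Bob can't improve
    ]

actions = ["Cooperate", "Defect"]
print([(actions[i], actions[j]) for i, j in pure_nash(alice, bob)])
# → [('Defect', 'Defect')]
```

Nashpy's support enumeration also covers mixed equilibria; for this game the pure-strategy check already finds the unique equilibrium.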

What the projection verifies

The projection cross-references computed equilibria against hand-annotated terminal conditions — the specification declares which outcomes are Nash equilibria, and the solver confirms or refutes them:

verification = verify_terminal_conditions(ir, equilibria)
# → matches: [Mutual Defection (confirmed)]
# → mismatches: [] (none — declared NE matches computed NE)

Additional analyses extracted from the same specification:

  • Dominant strategies — Defect strictly dominates for both players
  • Pareto optimality — Mutual Cooperation is Pareto optimal but not a NE
  • The dilemma — the unique NE is not Pareto optimal
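
These analyses reduce to simple payoff comparisons. A self-contained sketch of the Pareto step (illustrative only, not the actual analyze_game implementation):

```python
# Illustrative sketch of the Pareto-optimality check (not the project's
# analyze_game). Outcomes map action profiles to (Alice, Bob) payoffs.
R, T, S, P = 3, 5, 0, 1
outcomes = {
    ("Cooperate", "Cooperate"): (R, R),
    ("Cooperate", "Defect"): (S, T),
    ("Defect", "Cooperate"): (T, S),
    ("Defect", "Defect"): (P, P),
}

def pareto_optimal(outs):
    """Keep outcomes not Pareto-dominated by a different payoff pair."""
    def dominates(v, u):
        # v dominates u: at least as good for both, strictly better for one
        return v[0] >= u[0] and v[1] >= u[1] and v != u
    return [
        profile for profile, u in outs.items()
        if not any(dominates(v, u) for v in outs.values())
    ]

print(pareto_optimal(outcomes))
# Mutual Defection (P, P) is the only outcome that is not Pareto
# optimal: both players would prefer (R, R).
```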

No framework changes required

The Nash solver is 100 lines of pure Python + Nashpy. It reads from PatternIR metadata using the existing API. No modifications to gds-framework, gds-games, or any IR layer were needed.


Projection 2: Iterated Tournament Simulation

Source: packages/gds-examples/games/evolution_of_trust/

The tournament simulator uses the same game structure but projects it differently — instead of computing equilibria, it plays the game repeatedly with concrete strategies.

What the specification provides

The payoff matrix parameters (R=2, T=3, S=-1, P=0) and the game structure define the rules. A direct lookup function is derived from the specification:

def get_payoff(action_a: str, action_b: str) -> tuple[int, int]:
    """Direct payoff lookup from action pair."""
    return {
        ("Cooperate", "Cooperate"): (R, R),
        ("Cooperate", "Defect"): (S, T),
        ("Defect", "Cooperate"): (T, S),
        ("Defect", "Defect"): (P, P),
    }[(action_a, action_b)]

What the projection adds

Three layers of simulation logic, with get_payoff() as the sole interface to the specification:

Layer 1 — Strategies. Eight strategy implementations conforming to a shared Strategy protocol: choose(history, round_num) → action. These are pure Python — no GDS dependency:

| Strategy | Logic | Character |
|---|---|---|
| Always Cooperate | Always C | Naive cooperator |
| Always Defect | Always D | Pure exploiter |
| Tit for Tat | Copy opponent's last move | Retaliatory but forgiving |
| Grim Trigger | C until betrayed, then D forever | Unforgiving |
| Detective | Probe C,D,C,C then exploit or TfT | Strategic prober |
| Tit for Two Tats | C unless opponent D'd twice in a row | Extra forgiving |
| Pavlov | Win-stay, lose-shift | Adaptive |
| Random | 50/50 coin flip | Baseline noise |
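
The protocol itself is small. A minimal sketch of its shape, assuming the choose(history, round_num) signature described above (the real implementations live in evolution_of_trust/strategies.py):

```python
# Minimal sketch of the Strategy protocol shape (assumed from the text);
# the actual implementations live in evolution_of_trust/strategies.py.
from typing import Protocol

History = list[tuple[str, str]]  # (own_action, opponent_action) per round

class Strategy(Protocol):
    def choose(self, history: History, round_num: int) -> str: ...

class TitForTat:
    """Cooperate first, then copy the opponent's last move."""
    def choose(self, history: History, round_num: int) -> str:
        return "Cooperate" if not history else history[-1][1]

class GrimTrigger:
    """Cooperate until betrayed once, then defect forever."""
    def choose(self, history: History, round_num: int) -> str:
        if any(opponent == "Defect" for _, opponent in history):
            return "Defect"
        return "Cooperate"
```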

Layer 2 — Tournament. play_match() runs iterated rounds between two strategies. play_round_robin() runs all-pairs competition. Both use get_payoff() as the sole interface to the game specification.

Layer 3 — Evolutionary dynamics. run_evolution() runs generational selection: each generation plays a tournament, the worst performer loses a member, the best gains one. Population dynamics emerge from repeated tournament play.
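
The layers compose through plain function calls. A hedged, self-contained sketch of the Layer 2 match loop (the real play_match lives in evolution_of_trust/tournament.py; the payoff table is inlined here for self-containment):

```python
# Hedged sketch of an iterated match loop, not the project's play_match.
# The payoff table uses the Evolution of Trust parameters.
R, T, S, P = 2, 3, -1, 0
PAYOFFS = {
    ("Cooperate", "Cooperate"): (R, R),
    ("Cooperate", "Defect"): (S, T),
    ("Defect", "Cooperate"): (T, S),
    ("Defect", "Defect"): (P, P),
}

class AlwaysCooperate:
    def choose(self, history, round_num):
        return "Cooperate"

class AlwaysDefect:
    def choose(self, history, round_num):
        return "Defect"

def play_match(strategy_a, strategy_b, rounds=10):
    """Iterated match; each strategy sees its own (self, opponent) history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for n in range(rounds):
        a = strategy_a.choose(hist_a, n)
        b = strategy_b.choose(hist_b, n)
        pay_a, pay_b = PAYOFFS[(a, b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play_match(AlwaysDefect(), AlwaysCooperate(), rounds=5))
# → (15, -5): the exploiter takes T=3 every round
```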

The simulation stack

┌──────────────────────────────────────────────┐
│            Evolutionary dynamics              │
│  run_evolution(populations, generations, ...) │
├──────────────────────────────────────────────┤
│          Round-robin tournament               │
│  play_round_robin(strategies, rounds, noise)  │
├──────────────────────────────────────────────┤
│            Iterated match                     │
│  play_match(strategy_a, strategy_b, rounds)   │
├──────────────────────────────────────────────┤
│          Payoff lookup                        │
│  get_payoff(action_a, action_b) → (int, int)  │
├──────────────────────────────────────────────┤
│   OGS specification (R=2, T=3, S=-1, P=0)    │
│   build_game() → build_pattern() → build_ir() │
└──────────────────────────────────────────────┘

Each layer is independently testable. The simulation code knows nothing about OGS composition trees, PatternIR, or GDS blocks — it only knows payoff values.

Thin runner, not a general simulator

This is not a GDS execution engine. It is a domain-specific simulation that uses the GDS specification as its source of truth for game rules. The strategies, match logic, and evolutionary dynamics are all hand-written Python specific to the iterated PD. A general gds-sim would require solving the much harder problem of executing arbitrary GDS specifications — see Research Boundaries.


The Pattern: Specification as Interoperability Layer

Both projections follow the same architectural pattern:

1. Build OGS specification     →  Pattern + PatternIR
2. Extract domain-relevant     →  Payoff matrices (Nash)
   data from the specification    or payoff lookup (simulation)
3. Add domain-specific logic   →  Nashpy solver / Strategy protocol
4. Produce domain-specific     →  Equilibria / Tournament results /
   results                        Evolutionary trajectories

The specification serves as the interoperability contract between different analytical tools. Each tool consumes the subset it needs:

| Consumer | What it reads from the specification | What it adds |
|---|---|---|
| Nash solver | Action spaces, terminal conditions, payoff descriptions | Support enumeration, dominance analysis, Pareto optimality |
| Tournament | Payoff parameters (R, T, S, P) | Strategy implementations, match replay, noise model |
| Evolutionary engine | Payoff parameters | Population dynamics, generational selection |
| Mermaid visualizer | Game tree structure, flows, feedback | Diagram rendering |
| OGS reports | Full PatternIR (games, flows, metadata) | Jinja2 text reports |

No consumer modifies the specification. No consumer needs to understand the full OGS type system. Each extracts a projection and operates in its own domain vocabulary.


Why This Matters

For game theorists

GDS provides a compositional specification language for games that separates structure from analysis. The same game structure supports both closed-form equilibrium computation and agent-based simulation without duplication. New analytical tools (e.g., correlated equilibria, mechanism design verifiers) can be added as additional projections without modifying the game definition.

For simulation engineers

GDS specifications serve as machine-readable game rules that simulation engines can consume. The specification defines the action spaces, payoff structure, and composition topology. The simulator provides strategies, scheduling, and dynamics. The boundary is clean: the specification says what the game is, the simulator says how it plays out.

For software teams

The OGS composition tree is a formal architecture diagram that happens to be executable by analytical tools. The same (Alice | Bob) >> Payoff .feedback(...) description generates Mermaid diagrams for documentation, payoff matrices for game theorists, and payoff lookup functions for simulators. One source, multiple views.

For the GDS ecosystem

This validates GDS as an interoperability substrate, not just a modeling framework. The canonical form h = f ∘ g with varying dimensionality of X absorbs game-theoretic (stateless), control-theoretic (stateful), and stock-flow (state-dominant) formalisms. Each domain projects what it needs from the shared representation without architectural changes.


Running the Examples

Nash Equilibrium Analysis

Source code for packages/gds-examples/notebooks/nash_equilibrium.py

Tip: paste this code into an empty cell, and the marimo editor will create cells for you

"""Nash Equilibrium in the Prisoner's Dilemma — Interactive Marimo Notebook.

Demonstrates the full pipeline: OGS game structure -> payoff matrices ->
Nash equilibrium computation -> dominance and Pareto analysis.

Run interactively:
    uv run marimo edit notebooks/nash_equilibrium.py

Run as read-only app:
    uv run marimo run notebooks/nash_equilibrium.py
"""
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "gds-examples",
#     "nashpy>=0.0.41",
#     "marimo>=0.20.0",
# ]
# ///

import marimo

__generated_with = "0.20.2"
app = marimo.App(width="medium", app_title="Nash Equilibrium: Prisoner's Dilemma")


# ── Imports ──────────────────────────────────────────────────


@app.cell
def imports():
    import marimo as mo

    return (mo,)


# ── Model Imports & Path Setup ───────────────────────────────


@app.cell
def model_setup():
    import sys
    from pathlib import Path

    _examples_root = Path(__file__).resolve().parent.parent
    _games_path = str(_examples_root / "games")
    if _games_path not in sys.path:
        sys.path.insert(0, _games_path)

    from prisoners_dilemma_nash.model import (
        P,
        R,
        S,
        T,
        analyze_game,
        build_ir,
        build_payoff_matrices,
        compute_nash_equilibria,
        verify_terminal_conditions,
    )

    from ogs.viz import (
        architecture_by_domain_to_mermaid,
        structural_to_mermaid,
        terminal_conditions_to_mermaid,
    )

    ir = build_ir()

    return (
        R,
        S,
        T,
        P,
        analyze_game,
        architecture_by_domain_to_mermaid,
        build_payoff_matrices,
        compute_nash_equilibria,
        ir,
        structural_to_mermaid,
        terminal_conditions_to_mermaid,
        verify_terminal_conditions,
    )


# ── Header ───────────────────────────────────────────────────


@app.cell
def header(mo):
    mo.md(
        """
        # Nash Equilibrium: Prisoner's Dilemma

        The **Prisoner's Dilemma** is the canonical example of a game where
        individually rational decisions lead to a collectively suboptimal outcome.
        Two players simultaneously choose to **Cooperate** or **Defect**, and the
        payoff structure creates a tension between self-interest and mutual benefit.

        This notebook walks through the full analysis pipeline:

        1. **Game Structure** — the OGS composition tree and metadata
        2. **Payoff Matrices** — extracted from PatternIR terminal conditions
        3. **Nash Equilibria** — computed via Nashpy support enumeration
        4. **Game Analysis** — dominance, Pareto optimality, and the dilemma itself
        """
    )
    return ()


# ── Game Structure ───────────────────────────────────────────


@app.cell
def game_structure(
    mo,
    ir,
    structural_to_mermaid,
    terminal_conditions_to_mermaid,
    architecture_by_domain_to_mermaid,
):
    _tabs = mo.ui.tabs(
        {
            "Structural": mo.vstack(
                [
                    mo.md(
                        "Full game topology: Alice and Bob make simultaneous "
                        "decisions, feeding into a payoff computation with "
                        "feedback loops carrying payoffs back to each player."
                    ),
                    mo.mermaid(structural_to_mermaid(ir)),
                ]
            ),
            "Terminal Conditions": mo.vstack(
                [
                    mo.md(
                        "State diagram of all possible outcomes. Each terminal "
                        "state is an action profile with associated payoffs."
                    ),
                    mo.mermaid(terminal_conditions_to_mermaid(ir)),
                ]
            ),
            "By Domain": mo.vstack(
                [
                    mo.md(
                        "Architecture grouped by domain: **Alice**, **Bob**, and "
                        "**Environment**. Shows the symmetric structure of the game."
                    ),
                    mo.mermaid(architecture_by_domain_to_mermaid(ir)),
                ]
            ),
        }
    )

    mo.vstack(
        [
            mo.md(
                """\
---

## Game Structure

The Prisoner's Dilemma is built from OGS primitives:
two `DecisionGame` blocks (Alice, Bob) composed in parallel,
sequenced into a `CovariantFunction` (payoff computation),
with feedback loops carrying payoffs back to the decision nodes.

```
(Alice | Bob) >> Payoff .feedback([payoff -> decisions])
```
"""
            ),
            _tabs,
        ]
    )
    return ()


# ── Payoff Matrices ──────────────────────────────────────────


@app.cell
def payoff_matrices(mo, ir, R, T, S, P, build_payoff_matrices):
    _alice_payoffs, _bob_payoffs = build_payoff_matrices(ir)

    mo.vstack(
        [
            mo.md(
                f"""\
---

## Payoff Matrices

Extracted from PatternIR terminal conditions. The standard PD
parameters satisfy **T > R > P > S** and **2R > T + S**:

| Parameter | Value | Meaning |
|-----------|-------|---------|
| R (Reward) | {R} | Mutual cooperation |
| T (Temptation) | {T} | Defect while other cooperates |
| S (Sucker) | {S} | Cooperate while other defects |
| P (Punishment) | {P} | Mutual defection |
"""
            ),
            mo.md(
                "**Alice's Payoffs:**\n\n"
                "| | Bob: Coop | Bob: Defect |\n"
                "|---|---|---|\n"
                f"| **Cooperate** | {_alice_payoffs[0, 0]:.0f} (R) "
                f"| {_alice_payoffs[0, 1]:.0f} (S) |\n"
                f"| **Defect** | {_alice_payoffs[1, 0]:.0f} (T) "
                f"| {_alice_payoffs[1, 1]:.0f} (P) |\n\n"
                "**Bob's Payoffs:**\n\n"
                "| | Bob: Coop | Bob: Defect |\n"
                "|---|---|---|\n"
                f"| **Cooperate** | {_bob_payoffs[0, 0]:.0f} (R) "
                f"| {_bob_payoffs[0, 1]:.0f} (T) |\n"
                f"| **Defect** | {_bob_payoffs[1, 0]:.0f} (S) "
                f"| {_bob_payoffs[1, 1]:.0f} (P) |"
            ),
        ]
    )
    return ()


# ── Nash Equilibria ──────────────────────────────────────────


@app.cell
def nash_equilibria(mo, ir, compute_nash_equilibria, verify_terminal_conditions):
    import numpy as _np

    equilibria = compute_nash_equilibria(ir)
    verification = verify_terminal_conditions(ir, equilibria)

    _actions = ["Cooperate", "Defect"]
    _eq_lines = []
    for _i, (_alice_strat, _bob_strat) in enumerate(equilibria):
        _alice_action = _actions[int(_np.argmax(_alice_strat))]
        _bob_action = _actions[int(_np.argmax(_bob_strat))]
        _eq_lines.append(
            f"- **NE {_i + 1}:** Alice = {_alice_action}, Bob = {_bob_action}"
        )

    _match_lines = []
    for _m in verification["matches"]:
        _match_lines.append(f"- **{_m.name}**: {_m.outcome}")
    _mismatch_lines = []
    for _mm in verification["mismatches"]:
        _mismatch_lines.append(f"- **{_mm.name}**: {_mm.outcome}")

    _match_text = "\n".join(_match_lines) if _match_lines else "- None"
    _mismatch_text = "\n".join(_mismatch_lines) if _mismatch_lines else "- None"

    mo.vstack(
        [
            mo.md(
                f"""\
---

## Nash Equilibria

Computed via **Nashpy** support enumeration on the extracted
payoff matrices.

### Computed Equilibria ({len(equilibria)} found)

{"\n".join(_eq_lines)}

### Verification Against Declared Terminal Conditions

Cross-referencing computed equilibria against the hand-annotated
terminal conditions in the OGS Pattern:

**Matches** (declared NE confirmed by computation):

{_match_text}

**Mismatches** (declared NE not confirmed):

{_mismatch_text}
"""
            ),
        ]
    )
    return (equilibria,)


# ── Game Analysis ────────────────────────────────────────────


@app.cell
def game_analysis(mo, ir, analyze_game):
    analysis = analyze_game(ir)

    _alice_dom = analysis["alice_dominant_strategy"]
    _bob_dom = analysis["bob_dominant_strategy"]
    _pareto = analysis["pareto_optimal"]

    _pareto_rows = []
    for _o in _pareto:
        _pareto_rows.append(
            f"| {_o['alice_action']} | {_o['bob_action']} | "
            f"{_o['alice_payoff']:.0f} | {_o['bob_payoff']:.0f} |"
        )
    _pareto_table = "\n".join(_pareto_rows)

    mo.vstack(
        [
            mo.md(
                f"""\
---

## Game Analysis

### Dominant Strategies

A strategy is **strictly dominant** if it yields a higher payoff
regardless of the opponent's choice.

| Player | Dominant Strategy |
|--------|-------------------|
| Alice | **{_alice_dom or "None"}** |
| Bob | **{_bob_dom or "None"}** |

**Defect** strictly dominates for both players: no matter
what the opponent does, defecting always yields a higher
payoff (T > R and P > S).

### Pareto Optimal Outcomes ({len(_pareto)} of 4)

An outcome is **Pareto optimal** if no other outcome makes one player
better off without making the other worse off.

| Alice | Bob | Alice Payoff | Bob Payoff |
|-------|-----|-------------|------------|
{_pareto_table}

The Nash equilibrium (Defect, Defect) with payoffs (P, P) = (1, 1)
is **not** Pareto optimal — both players could do better with
(Cooperate, Cooperate) yielding (R, R) = (3, 3).
"""
            ),
        ]
    )
    return ()


# ── The Dilemma ──────────────────────────────────────────────


@app.cell
def the_dilemma(mo):
    mo.md(
        """
        ---

        ## The Dilemma

        The Prisoner's Dilemma is defined by this tension:

        > **The unique Nash equilibrium is not Pareto optimal.**

        Each player's dominant strategy (Defect) leads to a collectively
        worse outcome than mutual cooperation. This is the fundamental
        problem of non-cooperative game theory: **individual rationality
        does not imply collective rationality.**

        | Property | Outcome |
        |----------|---------|
        | Nash Equilibrium | (Defect, Defect) — payoff (1, 1) |
        | Pareto Optimum | (Cooperate, Cooperate) — payoff (3, 3) |
        | Dominant Strategy | Defect (for both players) |

        The OGS formalization makes this structure explicit: the game is
        **stateless** (h = g, no mechanism layer), all computation lives
        in the policy layer, and the feedback loop carries payoff
        information — not state updates.
        """
    )
    return ()


if __name__ == "__main__":
    app.run()

To run locally:

uv sync --all-packages --extra nash
uv run marimo run packages/gds-examples/notebooks/nash_equilibrium.py
# Run tests (22 tests)
uv run --package gds-examples pytest \
    packages/gds-examples/games/prisoners_dilemma_nash/ -v

Evolution of Trust Simulation

Source code for packages/gds-examples/notebooks/evolution_of_trust.py

Tip: paste this code into an empty cell, and the marimo editor will create cells for you

"""The Evolution of Trust — Iterated Prisoner's Dilemma Interactive Notebook.

Inspired by Nicky Case's "The Evolution of Trust" (https://ncase.me/trust/).
Demonstrates 8 strategies, round-robin tournaments, and evolutionary dynamics
built on an OGS game structure.

Run interactively:
    uv run marimo edit notebooks/evolution_of_trust.py

Run as read-only app:
    uv run marimo run notebooks/evolution_of_trust.py
"""
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "gds-examples",
#     "plotly>=5.0",
#     "marimo>=0.20.0",
# ]
# ///

import marimo

__generated_with = "0.20.2"
app = marimo.App(width="medium", app_title="The Evolution of Trust")


# ── Imports ──────────────────────────────────────────────────


@app.cell
def imports():
    import marimo as mo

    return (mo,)


# ── Model Imports & Path Setup ───────────────────────────────


@app.cell
def model_setup():
    import sys
    from pathlib import Path

    _examples_root = Path(__file__).resolve().parent.parent
    _games_path = str(_examples_root / "games")
    if _games_path not in sys.path:
        sys.path.insert(0, _games_path)

    from evolution_of_trust.model import (
        P,
        R,
        S,
        T,
        build_ir,
        get_payoff,
    )
    from evolution_of_trust.strategies import (
        ALL_STRATEGIES,
        AlwaysCooperate,
        AlwaysDefect,
        Detective,
        GrimTrigger,
        Pavlov,
        RandomStrategy,
        TitForTat,
        TitForTwoTats,
    )
    from evolution_of_trust.tournament import (
        head_to_head,
        play_round_robin,
        run_evolution,
    )

    from ogs.viz import (
        architecture_by_domain_to_mermaid,
        structural_to_mermaid,
        terminal_conditions_to_mermaid,
    )

    ir = build_ir()

    strategy_map = {
        "Always Cooperate": AlwaysCooperate,
        "Always Defect": AlwaysDefect,
        "Tit for Tat": TitForTat,
        "Grim Trigger": GrimTrigger,
        "Detective": Detective,
        "Tit for Two Tats": TitForTwoTats,
        "Pavlov": Pavlov,
        "Random": RandomStrategy,
    }

    # Nicky Case color palette
    COLORS = {
        "Always Cooperate": "#FF75FF",
        "Always Defect": "#52537F",
        "Tit for Tat": "#4089DD",
        "Grim Trigger": "#EFC701",
        "Detective": "#F6B24C",
        "Tit for Two Tats": "#88A8CE",
        "Pavlov": "#86C448",
        "Random": "#FF5E5E",
    }

    return (
        R,
        S,
        T,
        P,
        ir,
        get_payoff,
        ALL_STRATEGIES,
        AlwaysCooperate,
        AlwaysDefect,
        Detective,
        GrimTrigger,
        Pavlov,
        RandomStrategy,
        TitForTat,
        TitForTwoTats,
        head_to_head,
        play_round_robin,
        run_evolution,
        architecture_by_domain_to_mermaid,
        structural_to_mermaid,
        terminal_conditions_to_mermaid,
        strategy_map,
        COLORS,
    )


# ── Header ───────────────────────────────────────────────────


@app.cell
def header(mo):
    mo.md(
        """
        # The Evolution of Trust

        *Inspired by [Nicky Case's interactive guide](
        https://ncase.me/trust/)
        to game theory and the evolution of cooperation.*

        This notebook explores the **iterated Prisoner's Dilemma**:

        1. **Head-to-Head** — watch two strategies
           face off round by round
        2. **Tournament** — round-robin competition
           among all 8 strategies
        3. **Evolution** — populations compete and
           evolve over generations
        """
    )
    return ()


# ── Game Structure (accordion) ───────────────────────────────


@app.cell
def game_structure(
    mo,
    ir,
    R,
    T,
    S,
    P,
    structural_to_mermaid,
    terminal_conditions_to_mermaid,
    architecture_by_domain_to_mermaid,
):
    _struct_tab = mo.ui.tabs(
        {
            "Structural": mo.vstack(
                [
                    mo.md(
                        "Full game topology: simultaneous "
                        "decisions feeding into payoff "
                        "computation with feedback."
                    ),
                    mo.mermaid(structural_to_mermaid(ir)),
                ]
            ),
            "Terminal Conditions": mo.vstack(
                [
                    mo.md("All four action profiles with their payoffs."),
                    mo.mermaid(terminal_conditions_to_mermaid(ir)),
                ]
            ),
            "By Domain": mo.vstack(
                [
                    mo.md(
                        "Architecture grouped by domain: "
                        "**Alice**, **Bob**, "
                        "and **Environment**."
                    ),
                    mo.mermaid(architecture_by_domain_to_mermaid(ir)),
                ]
            ),
        }
    )

    _payoff_detail = mo.md(
        f"""
**Payoff Matrix** — Nicky Case's non-zero-sum variant
where mutual defection yields zero:

|  | Cooperate | Defect |
|---|---|---|
| **Cooperate** | ({R}, {R}) | ({S}, {T}) |
| **Defect** | ({T}, {S}) | ({P}, {P}) |

| Parameter | Value | Meaning |
|---|---|---|
| R (Reward) | {R} | Mutual cooperation |
| T (Temptation) | {T} | Defect while other cooperates |
| S (Sucker) | {S} | Cooperate while other defects |
| P (Punishment) | {P} | Mutual defection |

S = {S} (negative!) means being exploited actually
*costs* you, making the stakes higher than standard PD.
"""
    )

    mo.accordion(
        {
            "Under the Hood: OGS Game Structure": _struct_tab,
            "Under the Hood: Payoff Matrix": _payoff_detail,
        }
    )
    return ()


# ── Strategy Catalog (accordion) ─────────────────────────────


@app.cell
def strategy_catalog(mo, COLORS):
    def _badge(name, ncase_name, logic):
        _c = COLORS[name]
        return mo.md(
            f'<span style="display:inline-block;'
            f"width:12px;height:12px;"
            f"border-radius:50%;background:{_c};"
            f'margin-right:6px;vertical-align:middle">'
            f"</span>"
            f"**{name}** ({ncase_name}) — {logic}"
        )

    _cards = mo.vstack(
        [
            _badge(
                "Always Cooperate",
                "Always Cooperate",
                "Always C",
            ),
            _badge(
                "Always Defect",
                "Always Cheat",
                "Always D",
            ),
            _badge(
                "Tit for Tat",
                "Copycat",
                "C first, then copy opponent's last",
            ),
            _badge(
                "Grim Trigger",
                "Grudger",
                "C until opponent D's, then D forever",
            ),
            _badge(
                "Detective",
                "Detective",
                "Probe C,D,C,C; exploit or TfT",
            ),
            _badge(
                "Tit for Two Tats",
                "Copykitten",
                "C unless opponent D'd last 2 rounds",
            ),
            _badge(
                "Pavlov",
                "Simpleton",
                "Win-stay, lose-shift",
            ),
            _badge(
                "Random",
                "Random",
                "50/50 coin flip",
            ),
        ],
        gap=0.25,
    )

    mo.accordion({"The 8 Strategies": _cards})
    return ()


# ── Head-to-Head Controls ────────────────────────────────────


@app.cell
def head_to_head_controls(mo, strategy_map):
    _names = list(strategy_map.keys())
    dropdown_a = mo.ui.dropdown(
        options=_names,
        value="Tit for Tat",
        label="Strategy A",
    )
    dropdown_b = mo.ui.dropdown(
        options=_names,
        value="Always Defect",
        label="Strategy B",
    )
    slider_rounds = mo.ui.slider(start=5, stop=50, step=5, value=10, label="Rounds")

    mo.vstack(
        [
            mo.md("---\n\n## Head-to-Head Match"),
            mo.hstack(
                [dropdown_a, dropdown_b, slider_rounds],
                gap=1,
            ),
        ]
    )

    return (dropdown_a, dropdown_b, slider_rounds)


# ── Head-to-Head Result ──────────────────────────────────────


@app.cell
def head_to_head_result(
    mo,
    dropdown_a,
    dropdown_b,
    slider_rounds,
    strategy_map,
    head_to_head,
    COLORS,
):
    import plotly.graph_objects as _go

    _cls_a = strategy_map[dropdown_a.value]
    _cls_b = strategy_map[dropdown_b.value]
    _h2h = head_to_head(_cls_a(), _cls_b(), rounds=slider_rounds.value)
    _details = _h2h["round_details"]
    _result = _h2h["result"]
    _name_a = _result.strategy_a
    _name_b = _result.strategy_b
    _color_a = COLORS.get(_name_a, "#888")
    _color_b = COLORS.get(_name_b, "#888")

    # Scoreboard stats
    _winner = (
        _name_a
        if _result.score_a > _result.score_b
        else (_name_b if _result.score_b > _result.score_a else "Tie")
    )
    _stats = mo.hstack(
        [
            mo.stat(
                value=str(_result.score_a),
                label=_name_a,
                bordered=True,
            ),
            mo.stat(
                value=_winner,
                label="Winner",
                bordered=True,
            ),
            mo.stat(
                value=str(_result.score_b),
                label=_name_b,
                bordered=True,
            ),
        ],
        justify="center",
        gap=1,
    )

    # Cumulative score chart
    _rounds = [d["round"] for d in _details]
    _fig = _go.Figure()
    _fig.add_trace(
        _go.Scatter(
            x=_rounds,
            y=_h2h["cumulative_a"],
            mode="lines+markers",
            name=_name_a,
            line=dict(color=_color_a, width=3),
            marker=dict(size=8, color=_color_a),
            fill="tozeroy",
            fillcolor=(
                f"rgba({int(_color_a[1:3], 16)}, {int(_color_a[3:5], 16)}, "
                f"{int(_color_a[5:7], 16)}, 0.13)"
            ),
        )
    )
    _fig.add_trace(
        _go.Scatter(
            x=_rounds,
            y=_h2h["cumulative_b"],
            mode="lines+markers",
            name=_name_b,
            line=dict(color=_color_b, width=3),
            marker=dict(size=8, color=_color_b),
            fill="tozeroy",
            fillcolor=(
                f"rgba({int(_color_b[1:3], 16)}, {int(_color_b[3:5], 16)}, "
                f"{int(_color_b[5:7], 16)}, 0.13)"
            ),
        )
    )
    _fig.update_layout(
        title=dict(
            text=f"{_name_a} vs {_name_b}",
            font=dict(size=18),
        ),
        xaxis=dict(
            title="Round",
            dtick=1,
            gridcolor="#eee",
        ),
        yaxis=dict(
            title="Cumulative Score",
            gridcolor="#eee",
            zeroline=True,
            zerolinecolor="#ccc",
        ),
        plot_bgcolor="white",
        paper_bgcolor="white",
        legend=dict(
            orientation="h",
            yanchor="bottom",
            y=1.02,
            xanchor="center",
            x=0.5,
        ),
        margin=dict(t=60, b=40, l=50, r=20),
        height=350,
    )

    # Round-by-round action table — C in bold, D plain
    _action_rows = []
    for _d in _details:
        _ca = "**C**" if _d["action_a"] == "Cooperate" else "D"
        _cb = "**C**" if _d["action_b"] == "Cooperate" else "D"
        _action_rows.append(
            f"| {_d['round']} | {_ca} | {_cb} "
            f"| {_d['payoff_a']:+d} | {_d['payoff_b']:+d} |"
        )
    _action_table = mo.md(
        "| Round | A | B | A pts | B pts |\n"
        "|:---:|:---:|:---:|:---:|:---:|\n" + "\n".join(_action_rows)
    )

    mo.vstack(
        [
            _stats,
            mo.ui.plotly(_fig),
            mo.accordion({"Round-by-Round Details": _action_table}),
        ],
        gap=0.5,
    )
    return ()


# ── Tournament ───────────────────────────────────────────────


@app.cell
def tournament_result(mo, strategy_map, play_round_robin, COLORS):
    import plotly.graph_objects as _go

    _instances = [cls() for cls in strategy_map.values()]
    _tournament = play_round_robin(_instances, rounds_per_match=10)
    _avg = _tournament.avg_scores

    # Sort by avg score
    _sorted = sorted(_avg.items(), key=lambda x: x[1])
    _names = [s[0] for s in _sorted]
    _scores = [s[1] for s in _sorted]
    _bar_colors = [COLORS.get(n, "#888") for n in _names]

    _fig = _go.Figure()
    _fig.add_trace(
        _go.Bar(
            x=_scores,
            y=_names,
            orientation="h",
            marker=dict(
                color=_bar_colors,
                line=dict(color="#333", width=1),
            ),
            text=[f"{s:.1f}" for s in _scores],
            textposition="outside",
            textfont=dict(size=12),
        )
    )
    _fig.update_layout(
        title=dict(
            text="Round-Robin Tournament — Average Score per Match",
            font=dict(size=18),
        ),
        xaxis=dict(
            title="Average Score",
            gridcolor="#eee",
            zeroline=True,
            zerolinecolor="#999",
            zerolinewidth=2,
        ),
        yaxis=dict(
            title="",
            tickfont=dict(size=13),
        ),
        plot_bgcolor="white",
        paper_bgcolor="white",
        margin=dict(t=50, b=40, l=130, r=60),
        height=380,
        showlegend=False,
    )

    # Winner callout
    _best = _sorted[-1]
    _worst = _sorted[0]

    mo.vstack(
        [
            mo.md(
                "---\n\n## Round-Robin Tournament\n\n"
                "Every strategy plays every other "
                "(plus itself) for 10 rounds."
            ),
            mo.ui.plotly(_fig),
            mo.hstack(
                [
                    mo.callout(
                        mo.md(f"**{_best[0]}** leads with {_best[1]:.1f} avg points"),
                        kind="success",
                    ),
                    mo.callout(
                        mo.md(
                            f"**{_worst[0]}** trails with {_worst[1]:.1f} avg points"
                        ),
                        kind="warn",
                    ),
                ],
                gap=1,
            ),
        ],
        gap=0.5,
    )
    return ()


# ── Evolution Controls ───────────────────────────────────────


@app.cell
def evolution_controls(mo):
    slider_gens = mo.ui.slider(
        start=10,
        stop=100,
        step=10,
        value=30,
        label="Generations",
    )
    slider_noise = mo.ui.slider(
        start=0.0,
        stop=0.2,
        step=0.01,
        value=0.05,
        label="Noise",
    )
    slider_match_rounds = mo.ui.slider(
        start=5,
        stop=30,
        step=5,
        value=10,
        label="Rounds per Match",
    )

    mo.vstack(
        [
            mo.md(
                "---\n\n## Evolutionary Dynamics\n\n"
                "Strategies compete over generations. "
                "Each generation, the worst performer "
                "loses a member and the best gains one."
            ),
            mo.hstack(
                [
                    slider_gens,
                    slider_noise,
                    slider_match_rounds,
                ],
                gap=1,
            ),
        ]
    )

    return (slider_gens, slider_noise, slider_match_rounds)


# ── Evolution Result ─────────────────────────────────────────


@app.cell
def evolution_result(
    mo,
    slider_gens,
    slider_noise,
    slider_match_rounds,
    strategy_map,
    run_evolution,
    COLORS,
):
    import plotly.graph_objects as _go

    _initial_pops = {name: 5 for name in strategy_map}
    _snapshots = run_evolution(
        initial_populations=_initial_pops,
        strategy_factories=strategy_map,
        generations=slider_gens.value,
        rounds_per_match=slider_match_rounds.value,
        noise=slider_noise.value,
        seed=42,
    )

    _gens = [s.generation for s in _snapshots]
    _strat_names = list(strategy_map.keys())

    _fig = _go.Figure()

    # Stacked area — add in reverse so first strategy
    # is on top visually
    for _name in reversed(_strat_names):
        _pops = [s.populations.get(_name, 0) for s in _snapshots]
        _c = COLORS.get(_name, "#888888")  # 6-digit hex: sliced into rgba below
        _fig.add_trace(
            _go.Scatter(
                x=_gens,
                y=_pops,
                mode="lines",
                name=_name,
                line=dict(width=0.5, color=_c),
                stackgroup="one",
                fillcolor=(
                    f"rgba({int(_c[1:3], 16)}, {int(_c[3:5], 16)}, "
                    f"{int(_c[5:7], 16)}, 0.8)"
                ),
                hovertemplate=(
                    f"<b>{_name}</b><br>Gen %{{x}}: %{{y}} members<extra></extra>"
                ),
            )
        )

    _fig.update_layout(
        title=dict(
            text="Population Over Generations",
            font=dict(size=18),
        ),
        xaxis=dict(
            title="Generation",
            gridcolor="#eee",
        ),
        yaxis=dict(
            title="Population",
            gridcolor="#eee",
        ),
        plot_bgcolor="white",
        paper_bgcolor="white",
        legend=dict(
            orientation="h",
            yanchor="bottom",
            y=1.02,
            xanchor="center",
            x=0.5,
            font=dict(size=11),
        ),
        margin=dict(t=80, b=40, l=50, r=20),
        height=500,
        hovermode="x unified",
    )

    # Find survivors at final generation
    _final = _snapshots[-1].populations
    _survivors = {k: v for k, v in _final.items() if v > 0}
    _survivor_text = ", ".join(
        f"**{k}** ({v})"
        for k, v in sorted(
            _survivors.items(),
            key=lambda x: x[1],
            reverse=True,
        )
    )

    mo.vstack(
        [
            mo.ui.plotly(_fig),
            mo.callout(
                mo.md(f"After {slider_gens.value} generations: {_survivor_text}"),
                kind="info",
            ),
        ],
        gap=0.5,
    )
    return ()


# ── The Lesson ───────────────────────────────────────────────


@app.cell
def the_lesson(mo):
    mo.vstack(
        [
            mo.md("---\n\n## The Lesson"),
            mo.callout(
                mo.md(
                    '*"What the game of trust teaches us '
                    "is that the success of trust depends "
                    "not just on individual character, but "
                    "on the environment — how many "
                    "interactions there are, whether there "
                    "are mistakes, and what the payoff "
                    'structure looks like."*'
                ),
                kind="neutral",
            ),
            mo.md(
                """
- **Tit-for-Tat** succeeds not by winning
  individual matches, but by building cooperation
  and retaliating against exploitation
- **Always Defect** thrives in short games but
  loses when cooperation has time to establish
- **Noise** (mistakes) favors forgiving strategies
  like **Tit-for-Two-Tats** and **Pavlov**
- The structure of the game matters as much
  as the strategies

---

*Built with [OGS](https://github.com/BlockScience/gds-core)
and inspired by
[The Evolution of Trust](https://ncase.me/trust/)
by Nicky Case.*
"""
            ),
        ]
    )
    return ()


if __name__ == "__main__":
    app.run()

To run locally:

```shell
uv run marimo run packages/gds-examples/notebooks/evolution_of_trust.py

# Run tests (71 tests)
uv run --package gds-examples pytest \
    packages/gds-examples/games/evolution_of_trust/ -v
```

**Source files**

| File | Purpose |
|------|---------|
| `games/prisoners_dilemma_nash/model.py` | OGS structure + Nash solver + verification |
| `games/evolution_of_trust/model.py` | OGS structure with Nicky Case payoffs |
| `games/evolution_of_trust/strategies.py` | 8 strategy implementations |
| `games/evolution_of_trust/tournament.py` | Match, tournament, evolutionary dynamics |
| `notebooks/nash_equilibrium.py` | Interactive Nash analysis notebook |
| `notebooks/evolution_of_trust.py` | Interactive simulation notebook |

All paths are relative to `packages/gds-examples/`.


Connection to Research Boundaries

This work provides concrete evidence bearing on three open questions in Research Boundaries:

**RQ2 (Timestep semantics):** The tournament simulator implements a specific execution model — synchronous iterated play with optional noise — on top of a structural specification that encodes no execution semantics. This is exactly the pattern anticipated in RQ2: "Each DSL defines its own execution contract if/when it adds simulation."
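That execution contract (synchronous simultaneous moves, optional per-action noise) is simple enough to sketch standalone. Everything below is illustrative: `play_match` and the inline strategies are hypothetical stand-ins for the package's tournament module, using the standard (3, 5, 0, 1) payoffs from the spec table:

```python
import random

def play_match(strat_a, strat_b, rounds, noise=0.0, rng=None):
    """Synchronous iterated play: both sides move each round without
    seeing the other's current choice; with probability `noise` an
    intended action is flipped (a "mistake")."""
    rng = rng or random.Random(0)
    payoffs = {  # standard PD values (R=3, T=5, S=0, P=1)
        ("C", "C"): (3, 3), ("C", "D"): (0, 5),
        ("D", "C"): (5, 0), ("D", "D"): (1, 1),
    }
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)  # each strategy sees only the opponent's past
        b = strat_b(hist_a)
        if rng.random() < noise:  # execution noise, not strategy logic
            a = "D" if a == "C" else "C"
        if rng.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = payoffs[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Tit-for-Tat vs Always Defect over 10 noiseless rounds:
tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"
print(play_match(tit_for_tat, always_defect, rounds=10))  # → (9, 14)
```

The contract is chosen by the harness, not the specification: a different DSL could layer sequential moves or asynchronous updates over the same structure.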

**RQ3 (OGS as degenerate dynamical system):** Both projections confirm that OGS games are pure policy (h = g, f = ∅). The Nash solver computes equilibria over the policy layer. The simulator plays strategies through the policy layer. Neither requires a state transition mechanism. The "iterated" aspect of the tournament is handled entirely by the simulation harness, not by GDS temporal loops.
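The degeneracy is easy to see at the code level. The `get_payoff` below is a hypothetical local stand-in for the payoff projection named earlier, again with the standard (3, 5, 0, 1) values:

```python
# A general dynamical-system block pairs a policy g with a state
# transition f:  u_t = g(x_t),  x_{t+1} = f(x_t, u_t).
# An OGS game keeps only the policy layer (h = g, f = ∅): the payoff
# map is a pure, stateless function of the joint action.

def get_payoff(action_a, action_b):
    """Hypothetical stand-in for the spec's payoff projection."""
    table = {
        ("Cooperate", "Cooperate"): (3, 3),
        ("Cooperate", "Defect"): (0, 5),
        ("Defect", "Cooperate"): (5, 0),
        ("Defect", "Defect"): (1, 1),
    }
    return table[(action_a, action_b)]

# No state to advance: repeated calls with the same actions always
# agree, so iteration must live in the harness, not in the block.
assert get_payoff("Defect", "Defect") == (1, 1)
```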

**RQ4 (Cross-lens analysis):** The two projections operate on different analytical lenses — equilibrium (static, game-theoretic) vs. tournament dynamics (iterated, evolutionary). The specification supports both simultaneously. Whether the Nash equilibrium (Defect, Defect) is also an evolutionarily stable strategy is answerable by running both tools on the same specification — a concrete instance of the cross-lens analysis envisioned in RQ4.