# GDS Ecosystem

> Typed compositional specifications for complex systems — framework, visualization, games, and examples

GDS (Generalized Dynamical Systems) is a typed compositional specification framework for complex systems. It provides a composition algebra (stack, parallel, feedback, loop), a 3-stage compiler to flat IR, and a verification engine. Five domain DSLs (games, control, stock-flow, software, business) compile to the same GDS IR, validating it as a universal transition calculus.

# Framework (gds-framework)

# GDS Ecosystem

**Typed compositional specifications for complex systems**, grounded in [Generalized Dynamical Systems](https://doi.org/10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc) theory (Zargham & Shorish, 2022).

GDS gives you a composition algebra for modeling complex systems — from epidemics and control loops to game theory and software architecture — with built-in verification, visualization, and a shared formal foundation.

## Where to Start

| | |
| --- | --- |
| **[Start Here](https://blockscience.github.io/gds-core/tutorials/getting-started/index.md)** | New to GDS? Follow the hands-on tutorial to build your first model in minutes. |
| **[Learning Path](https://blockscience.github.io/gds-core/examples/learning-path/index.md)** | Work through seven example models in recommended order, from simple to complex. |
| **[Choosing a DSL](https://blockscience.github.io/gds-core/guides/choosing-a-dsl/index.md)** | Compare all seven domain DSLs and pick the right one for your problem. |
| **[Rosetta Stone](https://blockscience.github.io/gds-core/guides/rosetta-stone/index.md)** | See the same problem modeled with stockflow, control, and game theory DSLs side by side. |

## Interactive Notebooks

Key guides include embedded [marimo](https://marimo.io) notebooks — run code, tweak parameters, and see results directly in the docs. No local setup required.

| Guide | What You'll Explore |
| --- | --- |
| **[Getting Started](https://blockscience.github.io/gds-core/guides/getting-started/index.md)** | Build a thermostat model in 5 progressive stages |
| **[Rosetta Stone](https://blockscience.github.io/gds-core/guides/rosetta-stone/index.md)** | Same problem modeled with three different DSLs |
| **[Verification](https://blockscience.github.io/gds-core/guides/verification/index.md)** | All 3 verification layers with deliberately broken models |
| **[Visualization](https://blockscience.github.io/gds-core/guides/visualization/index.md)** | 6 view types, 5 themes, cross-DSL rendering |
| **[Interoperability](https://blockscience.github.io/gds-core/guides/interoperability/index.md)** | Cross-DSL composition and data exchange |

## Packages

Install just what you need: `uv add gds-core[control,continuous]`

### Structural Specification

| Package | Import | Description |
| --- | --- | --- |
| [`gds-framework`](https://blockscience.github.io/gds-core/framework/index.md) | `gds` | Core engine — composition algebra, compiler, verification |
| [`gds-viz`](https://blockscience.github.io/gds-core/viz/index.md) | `gds_viz` | Mermaid diagrams + [phase portraits](https://blockscience.github.io/gds-core/viz/index.md) `[phase]` |
| [`gds-interchange`](https://blockscience.github.io/gds-core/owl/index.md) | `gds_interchange.owl` | OWL/SHACL/SPARQL export for formal representability |

### Domain DSLs

| Package | Import | Description |
| --- | --- | --- |
| [`gds-domains`](https://blockscience.github.io/gds-core/stockflow/index.md) | `gds_domains.stockflow` | Declarative stock-flow DSL |
| | `gds_domains.control` | State-space control DSL |
| | `gds_domains.games` | Compositional game theory + [Nash equilibrium](https://blockscience.github.io/gds-core/games/equilibrium/index.md) `[games]` |
| | `gds_domains.software` | Software architecture DSL (DFD, SM, C4, ERD) |
| | `gds_domains.business` | Business dynamics DSL (CLD, SCN, VSM) |
| | `gds_domains.symbolic` | SymPy bridge for control models `[symbolic]` |

### Simulation & Analysis

| Package | Import | Description |
| --- | --- | --- |
| [`gds-sim`](https://pypi.org/project/gds-sim/) | `gds_sim` | Discrete-time simulation engine (standalone) |
| [`gds-continuous`](https://blockscience.github.io/gds-core/continuous/index.md) | `gds_continuous` | Continuous-time ODE engine `[scipy]` |
| [`gds-analysis`](https://blockscience.github.io/gds-core/analysis/index.md) | `gds_analysis` | GDSSpec-to-gds-sim bridge, reachability |
| [`gds-analysis[psuu]`](https://blockscience.github.io/gds-core/psuu/index.md) | `gds_analysis.psuu` | Parameter sweep + Optuna optimization |

### Tutorials

| Package | Description |
| --- | --- |
| `gds-examples` | [Tutorial models](https://blockscience.github.io/gds-core/examples/learning-path/index.md) + [Homicidal Chauffeur](https://blockscience.github.io/gds-core/continuous/getting-started/index.md) notebook |

## Architecture

```
graph TD
classDef core fill:#e0e7ff,stroke:#4f46e5,stroke-width:2px,color:#1e1b4b
classDef dsl fill:#fef3c7,stroke:#d97706,stroke-width:2px,color:#78350f
classDef sim fill:#d1fae5,stroke:#059669,stroke-width:2px,color:#064e3b
classDef tool fill:#f3e8ff,stroke:#7c3aed,stroke-width:2px,color:#4c1d95
classDef ext fill:#e5e7eb,stroke:#6b7280,stroke-width:1px,color:#374151

FW["gds-framework
core engine (pydantic only)"]:::core
VIZ["gds-viz
Mermaid + phase portraits"]:::tool
OWL["gds-interchange
OWL / SHACL / SPARQL"]:::tool
GAMES["gds-domains.games
game theory DSL"]:::dsl
SF["gds-domains.stockflow
stock-flow DSL"]:::dsl
CTRL["gds-domains.control
control systems DSL"]:::dsl
SW["gds-domains.software
software architecture DSL"]:::dsl
BIZ["gds-domains.business
business dynamics DSL"]:::dsl
SYM["gds-domains.symbolic
SymPy + Hamiltonian"]:::tool
EX["gds-examples
tutorials + notebooks"]:::ext
SIM["gds-sim
discrete-time simulation"]:::sim
AN["gds-analysis
reachability + metrics"]:::sim
PSUU["gds-analysis.psuu
parameter sweep"]:::sim
CONT["gds-continuous
ODE engine (scipy)"]:::sim

FW --> VIZ
FW --> OWL
FW --> GAMES
FW --> SF
FW --> CTRL
FW --> SW
FW --> BIZ
CTRL --> SYM
FW --> EX
VIZ --> EX
FW --> AN
SIM --> AN
SIM --> PSUU
CONT --> AN
```

**Legend:** :blue_square: Core | :yellow_square: Domain DSLs | :green_square: Simulation & Analysis | :purple_square: Tooling

## For AI Agents and LLMs

This documentation is available in a machine-readable format for AI coding assistants, agents, and LLMs:

| Resource | URL | Use |
| --- | --- | --- |
| **llms.txt** | [/llms.txt](https://blockscience.github.io/gds-core/llms.txt) | Compact index of all documentation pages with one-line descriptions |
| **llms-full.txt** | [/llms-full.txt](https://blockscience.github.io/gds-core/llms-full.txt) | Full concatenated documentation — feed this to an LLM for complete context on the GDS ecosystem |

**If you are an AI agent** working with gds-core, fetch `llms-full.txt` to get a comprehensive understanding of the framework's architecture, all 14 packages, the composition algebra, verification engine, and domain DSLs. The file follows the [llms.txt](https://llmstxt.org) standard and contains every documentation page in this site as plain Markdown.

## Changelog

See the [Changelog](https://blockscience.github.io/gds-core/changelog/index.md) for a complete history of releases, breaking changes, and new capabilities across all packages.

## License

Apache-2.0 — [BlockScience](https://block.science)

# gds-framework

**Typed compositional specifications for complex systems**, grounded in [Generalized Dynamical Systems](https://doi.org/10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc) theory (Zargham & Shorish, 2022).

## What is this?

`gds-framework` is a **foundation layer** for specifying dynamical systems as compositions of typed blocks.
It provides the domain-neutral primitives — you bring the domain knowledge.

```
gds-framework                    Your domain package
─────────────────                ──────────────────
Block, Interface, Port           PredatorBlock, PreyBlock
>> | .feedback() .loop()         predator >> prey >> environment
TypeDef, Space, Entity           Population(int, ≥0), EcosystemState
GDSSpec, verify()                check_conservation(), check_stability()
compile_system() → SystemIR      visualize(), simulate()
```

A [Generalized Dynamical System](https://doi.org/10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc) is a pair **{h, X}** where **X** is a state space and **h: X → X** is a state transition map. The GDS canonical form decomposes **h** into a pipeline of typed blocks — observations, decisions, and state updates — that compose via wiring:

| GDS concept | Paper notation | gds-framework |
| --- | --- | --- |
| State Space | X | `Entity` with `StateVariable`s |
| Exogenous observation | g(·) | `BoundaryAction` |
| Decision / policy | g: X → U_x | `Policy` |
| State update | f: X × U_x → X | `Mechanism` |
| Admissible input constraint | U: X → ℘(U) | `ControlAction` |
| Transition map | h = f\_x ∘ g | `CanonicalGDS` (via `project_canonical()`) |
| Trajectory | x₀, x₁, ... | Temporal loop (`.loop()`) |

## Quick Start

```
pip install gds-framework
```

```
from gds import (
    BoundaryAction, Policy, ControlAction,
    interface, Wiring, compile_system, verify,
)
from gds.ir.models import FlowDirection

# Define blocks with GDS roles and typed interfaces
sensor = BoundaryAction(
    name="Temperature Sensor",
    interface=interface(forward_out=["Temperature"]),
)
controller = Policy(
    name="PID Controller",
    interface=interface(
        forward_in=["Temperature", "Setpoint"],
        forward_out=["Heater Command"],
        backward_in=["Energy Cost"],
    ),
)
plant = ControlAction(
    name="Room",
    interface=interface(
        forward_in=["Heater Command"],
        forward_out=["Temperature"],
        backward_out=["Energy Cost"],
    ),
)

# Compose with operators — types checked at construction time
system = (sensor >> controller >> plant).feedback([
    Wiring(
        source_block="Room",
        source_port="Energy Cost",
        target_block="PID Controller",
        target_port="Energy Cost",
        direction=FlowDirection.CONTRAVARIANT,
    )
])

# Compile to flat IR and verify
ir = compile_system("Thermostat", system)
report = verify(ir)
print(f"{len(ir.blocks)} blocks, {len(ir.wirings)} wirings")
# 3 blocks, 3 wirings
print(f"{report.checks_passed}/{report.checks_total} checks passed")
# 13/14 checks passed (G-002 flags BoundaryAction for having no inputs — expected)
```

## What's Included

**Layer 1 — Composition Algebra:** Blocks with bidirectional typed interfaces, composed via four operators (`>>`, `|`, `.feedback()`, `.loop()`). A 3-stage compiler flattens composition trees into flat IR. Six generic verification checks validate structural properties.
**Layer 2 — Specification Layer:** `TypeDef` with runtime constraints, typed `Space`s, `Entity` with `StateVariable`s, block roles (`BoundaryAction`, `Policy`, `Mechanism`, `ControlAction`), `GDSSpec` registry, `ParameterSchema` for configuration space Θ, `CanonicalGDS` projection deriving the formal h = f ∘ g decomposition, `Tagged` mixin for inert semantic annotations, semantic verification (completeness, determinism, reachability, type safety, parameter references, canonical well-formedness), `SpecQuery` for dependency analysis, and JSON serialization.

## Status

**v0.2.0 — Alpha.** Both layers are implemented and tested (347 tests, 99% coverage). The composition algebra and specification layer are stable. Domain packages and simulation execution are not yet built — `gds-framework` is the foundation they will build on.

## Credits

**Author:** [Rohan Mehta](https://github.com/rororowyourboat) — [BlockScience](https://block.science/)

**Theoretical foundation:** [Dr. Michael Zargham](https://github.com/mzargham) and [Dr. Jamsheed Shorish](https://github.com/jshorish) — [Generalized Dynamical Systems, Part I: Foundations](https://blog.block.science/generalized-dynamical-systems-part-i-foundations-2/) (2021).

**Architectural inspiration:** [Sean McOwen](https://github.com/SeanMcOwen) — [MSML](https://github.com/BlockScience/MSML) and [bdp-lib](https://github.com/BlockScience/bdp-lib).

**Contributors:**

- [Michael Zargham](https://github.com/mzargham) — Project direction, GDS theory guidance, and technical review (BlockScience).
- [Peter Hacker](https://github.com/phacker3) — Code auditing and review (BlockScience).

**Lineage:** Part of the [cadCAD](https://github.com/cadCAD-org/cadCAD) ecosystem for Complex Adaptive Dynamics.

# API Quick Reference

A compact cheatsheet of every public constructor and helper in `gds-framework`. For full docstrings and source, see the [API Reference](https://blockscience.github.io/gds-core/framework/api/init/index.md) pages.
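One convention worth internalizing before reading the cheatsheet: `>>` auto-wires ports by token overlap of their names. The library exports `tokenize` and `tokens_overlap` for this; the snippet below is a rough, pure-Python sketch of the *idea* (the helper bodies here are illustrative approximations, not the actual gds implementations):

```python
# Illustrative sketch of token-overlap auto-wiring — NOT gds-framework internals.
# Port names are tokenized into lowercase word sets; an upstream forward_out
# port is matched to a downstream forward_in port when the sets overlap.

def tokenize(port_name: str) -> frozenset[str]:
    """'Infection Rate' -> frozenset({'infection', 'rate'})."""
    return frozenset(port_name.lower().split())

def auto_wire(forward_out: list[str], forward_in: list[str]) -> list[tuple[str, str]]:
    """Propose (source_port, target_port) pairs wherever token sets overlap."""
    return [
        (out, inp)
        for out in forward_out
        for inp in forward_in
        if tokenize(out) & tokenize(inp)  # non-empty intersection => wire
    ]

pairs = auto_wire(["Infection Rate"], ["Infection Rate", "Setpoint"])
# "Infection Rate" matches itself; "Setpoint" shares no tokens, so no wire.
```

This is why the examples below can chain `boundary >> policy >> mechanism` without explicit `Wiring` objects, and why `StackComposition` with explicit wiring exists for the cases where names do not share tokens.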
______________________________________________________________________

## Imports

Everything below is importable directly from the top-level `gds` package:

```
from gds import (
    # Types
    TypeDef, typedef,
    Probability, NonNegativeFloat, PositiveInt, TokenAmount, AgentID, Timestamp,
    # State
    Entity, StateVariable, entity, state_var,
    # Spaces
    Space, space, EMPTY, TERMINAL,
    # Blocks & Roles
    AtomicBlock, Block,
    BoundaryAction, Policy, Mechanism, ControlAction,
    Interface, Port, port, interface,
    # Composition
    StackComposition, ParallelComposition, FeedbackLoop, TemporalLoop, Wiring,
    # Specification
    GDSSpec, SpecWiring, Wire,
    # Parameters
    ParameterDef, ParameterSchema,
    # Canonical projection
    CanonicalGDS, project_canonical,
    # Compilation
    compile_system, SystemIR, BlockIR, WiringIR, HierarchyNodeIR,
    flatten_blocks, extract_wirings, extract_hierarchy,
    StructuralWiring, WiringOrigin,
    # Verification
    verify, Finding, Severity, VerificationReport,
    all_checks, gds_check, get_custom_checks,
    # Serialization
    spec_to_dict, spec_to_json, save_ir, load_ir, IRDocument, IRMetadata,
    # Query
    SpecQuery,
    # Tokens
    tokenize, tokens_overlap, tokens_subset,
)
```

______________________________________________________________________

## Types

### `typedef(name, python_type, *, constraint=None, description="", units=None)`

Factory for `TypeDef`. Wraps a Python type with an optional runtime constraint predicate.

```
from gds import typedef

Rate = typedef("Rate", float, constraint=lambda x: x >= 0, units="1/time")
Name = typedef("Name", str, description="Human-readable label")
```

### `TypeDef`

Direct Pydantic constructor (frozen model).

```
from gds import TypeDef

Rate = TypeDef(
    name="Rate",
    python_type=float,
    constraint=lambda x: x >= 0,
    description="Non-negative rate",
    units="1/time",
)
Rate.check_value(0.5)   # True
Rate.check_value(-1.0)  # False
```

### Built-in types

| Name | Python type | Constraint |
| --- | --- | --- |
| `Probability` | `float` | `0.0 <= x <= 1.0` |
| `NonNegativeFloat` | `float` | `x >= 0` |
| `PositiveInt` | `int` | `x > 0` |
| `TokenAmount` | `float` | `x >= 0` (units: tokens) |
| `AgentID` | `str` | none |
| `Timestamp` | `float` | `x >= 0` (units: seconds) |

______________________________________________________________________

## State

### `state_var(td, *, symbol="", description="")`

Create a `StateVariable` from a `TypeDef`. The name is resolved by `entity()` from its keyword argument key.

```
from gds import typedef, state_var

Population = typedef("Population", float, constraint=lambda x: x >= 0)
pop = state_var(Population, symbol="S", description="Susceptible count")
```

### `entity(name, *, description="", **variables)`

Create an `Entity` with `StateVariable` keyword arguments. Each kwarg key becomes the variable name.

```
from gds import typedef, state_var, entity

Pop = typedef("Population", float, constraint=lambda x: x >= 0)
population = entity(
    "Population",
    description="SIR compartments",
    susceptible=state_var(Pop, symbol="S"),
    infected=state_var(Pop, symbol="I"),
    recovered=state_var(Pop, symbol="R"),
)
```

### `StateVariable`

Direct constructor (frozen model).

```
from gds import StateVariable, TypeDef

Pop = TypeDef(name="Population", python_type=float)
sv = StateVariable(name="susceptible", typedef=Pop, symbol="S")
```

### `Entity`

Direct constructor (frozen model, supports tags).

```
from gds import Entity

e = Entity(name="Population", variables={"susceptible": sv, "infected": sv2})
e.validate_state({"susceptible": 100.0, "infected": 10.0})  # [] (no errors)
```

______________________________________________________________________

## Spaces

### `space(name, *, description="", **fields)`

Create a `Space` with `TypeDef` keyword arguments.

```
from gds import typedef, space

Rate = typedef("Rate", float, constraint=lambda x: x >= 0)
infection_space = space(
    "Infection Signal",
    description="Carries the infection rate",
    rate=Rate,
)
```

### `Space`

Direct constructor (frozen model).

```
from gds import Space, TypeDef

Rate = TypeDef(name="Rate", python_type=float)
s = Space(name="Infection Signal", fields={"rate": Rate})
s.validate_data({"rate": 0.1})  # [] (no errors)
```

### Sentinel spaces

| Name | Purpose |
| --- | --- |
| `EMPTY` | No data flows through this port |
| `TERMINAL` | Signal terminates here (state write) |

______________________________________________________________________

## Blocks and Roles

### `interface(*, forward_in=None, forward_out=None, backward_in=None, backward_out=None)`

Create an `Interface` from lists of port name strings. Each string is auto-tokenized into a `Port`.

```
from gds import interface

iface = interface(
    forward_in=["Infection Rate"],
    forward_out=["New Infections"],
)
```

### `port(name)`

Create a single `Port` with auto-tokenized type tokens.

```
from gds import port

p = port("Infection Rate")
# p.type_tokens == frozenset({"infection", "rate"})
```

### `BoundaryAction(name, interface, *, options=[], params_used=[], constraints=[])`

Exogenous input block. Enforces `forward_in = ()` (no internal forward inputs).

```
from gds import BoundaryAction, interface

exo = BoundaryAction(
    name="Environment",
    interface=interface(forward_out=["Temperature"]),
    params_used=["ambient_temp"],
)
```

### `Policy(name, interface, *, options=[], params_used=[], constraints=[])`

Decision logic block. Maps signals to mechanism inputs.

```
from gds import Policy, interface

decide = Policy(
    name="Infection Policy",
    interface=interface(
        forward_in=["Contact Rate", "Population State"],
        forward_out=["Infection Rate"],
    ),
    options=["frequency_dependent", "density_dependent"],
    params_used=["beta"],
)
```

### `Mechanism(name, interface, *, updates=[], params_used=[], constraints=[])`

State update block. The only block type that writes to state. Enforces no backward ports.

```
from gds import Mechanism, interface

update = Mechanism(
    name="Update Infected",
    interface=interface(
        forward_in=["New Infections"],
    ),
    updates=[("Population", "infected"), ("Population", "susceptible")],
)
```

### `ControlAction(name, interface, *, options=[], params_used=[], constraints=[])`

Endogenous control block. Reads state, emits control signals.

```
from gds import ControlAction, interface

ctrl = ControlAction(
    name="Observer",
    interface=interface(
        forward_in=["Population State"],
        forward_out=["Control Signal"],
    ),
)
```

______________________________________________________________________

## Composition Operators

All operators are methods on `Block` and return composite blocks.

### `a >> b` — Stack (sequential) composition

Chains blocks so the first's `forward_out` feeds the second's `forward_in`. Auto-wires by token overlap when no explicit wiring is provided.

```
system = boundary >> policy >> mechanism
```

### `a | b` — Parallel composition

Runs blocks side-by-side with no shared wires.

```
inputs = boundary_a | boundary_b
```

### `block.feedback(wiring)` — Feedback loop

Backward feedback within a single evaluation. Wiring connects `backward_out` to `backward_in`.

```
from gds import Wiring
from gds.ir.models import FlowDirection

system = (policy >> mechanism).feedback([
    Wiring(
        source_block="Update Infected",
        source_port="State Feedback",
        target_block="Infection Policy",
        target_port="Population State",
        direction=FlowDirection.CONTRAVARIANT,
    ),
])
```

### `block.loop(wiring, exit_condition="")` — Temporal loop

Structural recurrence across temporal boundaries. All temporal wiring must be `COVARIANT`.

```
system = (boundary >> policy >> mechanism).loop(
    wiring=[
        Wiring(
            source_block="Update Infected",
            source_port="Population State",
            target_block="Infection Policy",
            target_port="Population State",
        ),
    ],
    exit_condition="t >= 100",
)
```

### `Wiring(source_block, source_port, target_block, target_port, direction=COVARIANT)`

Explicit connection between two blocks (frozen model).

```
from gds import Wiring
from gds.ir.models import FlowDirection

w = Wiring(
    source_block="A",
    source_port="Output",
    target_block="B",
    target_port="Input",
    direction=FlowDirection.COVARIANT,
)
```

### `StackComposition(name, first, second, wiring=[])`

Explicit constructor for `>>`. Use when you need custom wiring between stages.

```
from gds import StackComposition, Wiring

composed = StackComposition(
    name="Custom Stack",
    first=policy,
    second=mechanism,
    wiring=[
        Wiring(
            source_block="Policy",
            source_port="Rate",
            target_block="Mechanism",
            target_port="Delta",
        ),
    ],
)
```

______________________________________________________________________

## Specification

### `GDSSpec(name, description="")`

Central registry that ties types, spaces, entities, blocks, wirings, and parameters into a validated specification. All `register_*` methods are chainable.

```
from gds import GDSSpec

spec = GDSSpec(name="SIR Model", description="Susceptible-Infected-Recovered")
```

### `.collect(*objects)`

Bulk-register `TypeDef`, `Space`, `Entity`, `Block`, and `ParameterDef` instances by type dispatch.

```
spec.collect(Rate, Pop, infection_space, population, boundary, policy, mechanism)
```

### `.register_type(t)` / `.register_space(s)` / `.register_entity(e)` / `.register_block(b)`

Individual registration methods. Raise `ValueError` on duplicate names.

```
spec.register_type(Rate).register_space(infection_space).register_entity(population)
```

### `.register_wiring(w)`

Register a `SpecWiring` (not handled by `.collect()`).

```
from gds import SpecWiring, Wire

spec.register_wiring(SpecWiring(
    name="Main Wiring",
    block_names=["Environment", "Infection Policy", "Update Infected"],
    wires=[
        Wire(source="Environment", target="Infection Policy"),
        Wire(source="Infection Policy", target="Update Infected"),
    ],
))
```

### `.register_parameter(param_or_name, typedef=None)`

Register a `ParameterDef` or use the `(name, typedef)` shorthand.

```
from gds import ParameterDef

spec.register_parameter(ParameterDef(name="beta", typedef=Rate))
# or shorthand:
spec.register_parameter("beta", Rate)
```

### `.validate_spec()`

Run structural validation. Returns a list of error strings (empty means valid).

```
errors = spec.validate_spec()
assert errors == []
```

### `Wire(source, target, space="", optional=False)`

A connection between two blocks in a `SpecWiring` (frozen model).

### `SpecWiring(name, block_names=[], wires=[], description="")`

A named composition of blocks connected by wires (frozen model).

______________________________________________________________________

## Parameters

### `ParameterDef(name, typedef, *, description="", bounds=None)`

Schema definition for a single parameter dimension (frozen model).

```
from gds import ParameterDef, typedef

Rate = typedef("Rate", float, constraint=lambda x: x >= 0)
beta = ParameterDef(name="beta", typedef=Rate, bounds=(0.0, 1.0))
beta.check_value(0.5)  # True
```

### `ParameterSchema`

Immutable registry of `ParameterDef` instances. Usually accessed through `GDSSpec.parameter_schema`.

```
from gds import ParameterSchema, ParameterDef

schema = ParameterSchema()
schema = schema.add(beta)  # returns new schema (immutable)
"beta" in schema           # True
```

______________________________________________________________________

## Canonical Projection

### `project_canonical(spec)`

Pure function: `GDSSpec` to `CanonicalGDS`. Derives the formal `h = f . g` decomposition.

```
from gds import project_canonical

canonical = project_canonical(spec)
print(canonical.formula())
# "h_theta : X -> X (h = f_theta o g_theta, theta in Theta)"
print(canonical.state_variables)   # (("Population", "susceptible"), ...)
print(canonical.boundary_blocks)   # ("Environment",)
print(canonical.policy_blocks)     # ("Infection Policy",)
print(canonical.mechanism_blocks)  # ("Update Infected",)
```

### `CanonicalGDS`

Frozen model with the following fields:

| Field | Type | Description |
| --- | --- | --- |
| `state_variables` | `tuple[tuple[str, str], ...]` | `(entity, variable)` pairs forming X |
| `parameter_schema` | `ParameterSchema` | Parameter space Theta |
| `input_ports` | `tuple[tuple[str, str], ...]` | `(block, port)` from BoundaryAction outputs |
| `decision_ports` | `tuple[tuple[str, str], ...]` | `(block, port)` from Policy outputs |
| `boundary_blocks` | `tuple[str, ...]` | BoundaryAction block names |
| `control_blocks` | `tuple[str, ...]` | ControlAction block names |
| `policy_blocks` | `tuple[str, ...]` | Policy block names |
| `mechanism_blocks` | `tuple[str, ...]` | Mechanism block names |
| `update_map` | `tuple[tuple[str, tuple[...]], ...]` | Mechanism update targets |

______________________________________________________________________

## Compilation

### `compile_system(name, root, *, block_compiler=None, wiring_emitter=None, composition_type=SEQUENTIAL, source="", inputs=None)`

Compile a `Block` composition tree into a flat `SystemIR`.

```
from gds import compile_system

system_ir = compile_system("SIR", root=boundary >> policy >> mechanism)
print(system_ir.blocks)     # list[BlockIR]
print(system_ir.wirings)    # list[WiringIR]
print(system_ir.hierarchy)  # HierarchyNodeIR tree
```

### `flatten_blocks(root, block_compiler)`

Stage 1: walk the composition tree and map each leaf through a callback.

### `extract_wirings(root, wiring_emitter=None)`

Stage 2: walk the tree and emit all wirings (explicit, auto-wired, feedback, temporal).

### `extract_hierarchy(root)`

Stage 3: build a `HierarchyNodeIR` tree with flattened sequential/parallel chains.

______________________________________________________________________

## Verification

### `verify(system, checks=None)`

Run verification checks against a `SystemIR`. Returns a `VerificationReport`.

```
from gds import compile_system, verify

system_ir = compile_system("SIR", root=composed)
report = verify(system_ir)
print(report.errors)         # count of failed ERROR-level checks
print(report.warnings)       # count of failed WARNING-level checks
print(report.checks_passed)  # count of passed checks
print(report.checks_total)   # total checks run
```

### Built-in generic checks (G-001 to G-006)

| Check | ID | What it validates |
| --- | --- | --- |
| `check_g001_domain_codomain_matching` | G-001 | Domain/codomain port matching |
| `check_g002_signature_completeness` | G-002 | All ports have signatures |
| `check_g003_direction_consistency` | G-003 | Wiring direction consistency |
| `check_g004_dangling_wirings` | G-004 | No wirings reference missing blocks |
| `check_g005_sequential_type_compatibility` | G-005 | Sequential type token compatibility |
| `check_g006_covariant_acyclicity` | G-006 | No cycles in covariant wirings |

### `@gds_check(check_id, severity=Severity.ERROR)`

Decorator to register custom verification checks.

```
from gds import gds_check, Finding, Severity
from gds.ir.models import SystemIR

@gds_check("CUSTOM-001", Severity.WARNING)
def check_no_orphan_blocks(system: SystemIR) -> list[Finding]:
    ...
```

### `all_checks()`

Returns built-in generic checks + all custom-registered checks.

### `get_custom_checks()`

Returns only checks registered via `@gds_check`.

______________________________________________________________________

## Common Patterns

### Minimal complete model

```
from gds import (
    typedef, state_var, entity, space, interface,
    BoundaryAction, Policy, Mechanism,
    GDSSpec, ParameterDef,
    compile_system, verify, project_canonical,
)

# 1. Define types
Pop = typedef("Population", float, constraint=lambda x: x >= 0)
Rate = typedef("Rate", float, constraint=lambda x: x >= 0)

# 2. Define entities (state space X)
population = entity(
    "Population",
    susceptible=state_var(Pop, symbol="S"),
    infected=state_var(Pop, symbol="I"),
    recovered=state_var(Pop, symbol="R"),
)

# 3. Define spaces (signal shapes)
infection_signal = space("Infection Signal", rate=Rate)

# 4. Define blocks
env = BoundaryAction(
    name="Environment",
    interface=interface(forward_out=["Contact Rate"]),
)
policy = Policy(
    name="Infection Policy",
    interface=interface(
        forward_in=["Contact Rate"],
        forward_out=["Infection Rate"],
    ),
    params_used=["beta"],
)
update = Mechanism(
    name="Update SIR",
    interface=interface(forward_in=["Infection Rate"]),
    updates=[("Population", "susceptible"), ("Population", "infected")],
)

# 5. Compose (>> auto-wires by token overlap)
composed = env >> policy >> update

# 6. Build specification
spec = GDSSpec(name="SIR Model")
spec.collect(Pop, Rate, population, infection_signal, env, policy, update)
spec.register_parameter("beta", Rate)
errors = spec.validate_spec()
assert errors == [], errors

# 7. Compile to IR
system_ir = compile_system("SIR", root=composed)

# 8. Verify structural properties
report = verify(system_ir)
print(f"Passed: {report.checks_passed}/{report.checks_total}")

# 9. Canonical projection
canonical = project_canonical(spec)
print(canonical.formula())
# h_theta : X -> X (h = f_theta o g_theta, theta in Theta)
```

### Composition with explicit wiring

When token overlap does not hold between stages, use `StackComposition` with explicit `Wiring`:

```
from gds import StackComposition, Wiring

tier_1_to_2 = StackComposition(
    name="Inputs >> Decisions",
    first=inputs_tier,
    second=decisions_tier,
    wiring=[
        Wiring(
            source_block="Sensor",
            source_port="Raw Reading",
            target_block="Controller",
            target_port="Measurement",
        ),
    ],
)
```

### Feedback pattern

```
from gds import Wiring
from gds.ir.models import FlowDirection

system_with_feedback = (env >> policy >> mechanism).feedback([
    Wiring(
        source_block="Update SIR",
        source_port="State",
        target_block="Infection Policy",
        target_port="Population State",
        direction=FlowDirection.CONTRAVARIANT,
    ),
])
```

# Installation

## From PyPI

```
pip install gds-framework
```

Or with [uv](https://docs.astral.sh/uv/):

```
uv add gds-framework
```

## Requirements

- Python 3.12 or later
- [pydantic](https://docs.pydantic.dev/) >= 2.10 (installed automatically)

## Import

The package is installed as `gds-framework` but imported as `gds`:

```
import gds
print(gds.__version__)
```

## Development Setup

```
git clone https://github.com/BlockScience/gds-framework.git
cd gds-framework
uv sync
uv run pytest tests/ -v
```

# Quick Start

## Define Blocks

Every block has a **name**, an **interface** (typed ports), and a **role** that constrains its port layout.
```
from gds import BoundaryAction, Policy, Mechanism, interface

sensor = BoundaryAction(
    name="Contact Process",
    interface=interface(forward_out=["Contact Signal"]),
)

policy = Policy(
    name="Infection Policy",
    interface=interface(
        forward_in=["Contact Signal"],
        forward_out=["Susceptible Delta", "Infected Delta", "Recovered Delta"],
    ),
)

update_s = Mechanism(
    name="Update Susceptible",
    interface=interface(forward_in=["Susceptible Delta"]),
    updates=[("Susceptible", "count")],
)
```

## Compose

Chain blocks with `>>` (sequential) and `|` (parallel). Types are checked at construction time.

```
system = sensor >> policy >> (update_s | update_i | update_r)
```

## Compile & Verify

```
from gds import compile_system, verify

ir = compile_system("SIR Epidemic", system)
report = verify(ir)
print(f"{report.checks_passed}/{report.checks_total} checks passed")
```

## Four Composition Operators

| Operator | Name | Direction | Use Case |
| ------------- | ---------- | ------------------------------------ | ------------------------- |
| `>>` | Sequential | Forward | Pipeline stages |
| `\|` | Parallel | Independent | Concurrent updates |
| `.feedback()` | Feedback | Backward (within evaluation) | Cost signals, constraints |
| `.loop()` | Temporal | Forward (across temporal boundaries) | Recurrence, convergence |

## Next Steps

- [Architecture](https://blockscience.github.io/gds-core/framework/guide/architecture/index.md) — understand the two-layer design
- [Blocks & Roles](https://blockscience.github.io/gds-core/framework/guide/blocks/index.md) — role constraints and composition
- [Examples](https://blockscience.github.io/gds-examples) — six complete tutorial models

# Architecture

## Two-Layer Design

### Layer 0 — Composition Algebra (the engine)

Blocks with bidirectional typed interfaces, composed via four operators (`>>`, `|`, `.feedback()`, `.loop()`). A 3-stage compiler flattens composition trees into flat IR (blocks + wirings + hierarchy). Six generic verification checks (G-001..G-006) validate structural properties on the IR.

### Layer 1 — Specification Layer (the framework)

TypeDef with runtime constraints, typed Spaces, Entities with StateVariables, Block roles (BoundaryAction/Policy/Mechanism/ControlAction), GDSSpec registry, ParameterSchema (Θ), canonical projection (CanonicalGDS), Tagged mixin, semantic verification (SC-001..SC-007), SpecQuery for dependency analysis, and JSON serialization.

### Why Two Layers?

Layer 0 is domain-neutral by design. It knows about blocks with typed ports, four composition operators, and structural topology — nothing about games, stocks, or controllers. This neutrality is what allows five different DSLs to compile to the same IR. Domain judgment enters at Layer 1: when a modeler decides "this is a Mechanism, not a Policy" or "this variable is part of the system state." Layer 0 cannot make these decisions because they require knowledge of the problem being modeled.

The three-stage compiler (flatten, wire, extract hierarchy) is pure algebra. The role annotations (BoundaryAction, Policy, Mechanism) are domain commitments. This separation means Layer 0 specifications stay verifiable without knowing anything about the domain — they can be composed and checked formally. Layer 1 adds the meaning that makes a specification useful for a particular problem.

## Temporal Stack

The core algebra is temporally agnostic. The flag `is_temporal=True` on a wiring asserts structural recurrence -- nothing about discrete steps, continuous flow, or events. Time models are DSL-layer declarations.

```
Layer 0 — gds-framework (core)
  is_temporal=True encodes structural recurrence only.
  h = f . g is a single atemporal map application.

Layer 1 — DSL (ExecutionContract)
  The DSL declares what "temporal boundary" means:
  discrete | continuous | event | atemporal

Layer 2 — Simulation (SolverInterface / runner)
  A solver instantiates the time model concretely.
Specification and verification are valid without a solver. ``` See the full treatment in the [Temporal Agnosticism](https://blockscience.github.io/gds-core/framework/design/temporal-agnosticism/index.md) design document. ## Foundation + Domain Packages ``` gds-framework (pip install gds-framework) │ │ Domain-neutral composition algebra, typed spaces, │ state model, verification engine, flat IR compiler. │ No domain-specific concepts. No simulation. No rendering. │ ├── Domain: Ecology │ └── Predator-prey dynamics, population models, SIR epidemiology │ ├── Domain: Control Systems │ └── Controllers, plants, sensors, stability/controllability checks │ ├── Domain: Game Theory │ └── gds-games — open games DSL, iterated games, equilibrium analysis │ └── Domain: Multi-Agent Systems └── Agent policies, environment dynamics, coordination protocols ``` Each domain package is a thin layer. The heavy lifting — composition, compilation, verification, querying — lives in `gds-framework`. ## Compilation Pipeline The compiler is decomposed into three reusable stages, each exposed as a public function with generic callbacks: ``` flatten_blocks(root, block_compiler) → list[BlockIR] extract_wirings(root, wiring_emitter) → list[WiringIR] extract_hierarchy(root) → HierarchyNodeIR compile_system(name, root, ..., inputs=None) → SystemIR # Thin wrapper: calls the three stages + assembles SystemIR ``` Domain packages supply callbacks to produce their own IR types (e.g., OGS provides `_compile_game` → `OpenGameIR` and `_ogs_wiring_emitter` → `FlowIR`). Layer 0 owns traversal; Layer 1 owns vocabulary. Auto-wiring for `>>` matches `forward_out` ports to `forward_in` ports by token overlap. Feedback marks `is_feedback=True`; temporal marks `is_temporal=True`. `SystemIR.inputs` accepts `list[InputIR]` — typed external inputs with a `metadata` bag. 
Layer 0 never infers inputs; domain packages supply them via `compile_system(..., inputs=...)` or by populating them in their own compilation (e.g., OGS `compile_to_ir()`). ## Canonical Projection `project_canonical(spec: GDSSpec) → CanonicalGDS` derives the formal GDS decomposition: - **X** — state space (all Entity variables) - **Z** — exogenous signal space (BoundaryAction outputs) - **D** — decision space (Policy outputs) - **Θ** — parameter space (ParameterSchema) - **g** — policy map (BoundaryAction + Policy blocks) - **f** — state update map (Mechanism blocks) This operates on `GDSSpec` (not `SystemIR`) because SystemIR is flat and lacks role/entity info. # Blocks & Roles ## Block Hierarchy The composition algebra is **sealed** — only 5 concrete Block types exist: - `AtomicBlock` — leaf node (domain packages subclass this) - `StackComposition` (`>>`) — sequential, validates token overlap - `ParallelComposition` (`|`) — independent, no type validation - `FeedbackLoop` (`.feedback()`) — backward within evaluation - `TemporalLoop` (`.loop()`) — forward across temporal boundaries, enforces COVARIANT only ## GDS Roles Block roles subclass `AtomicBlock` and add interface constraints: | Role | `forward_in` | `forward_out` | `backward_in` | `backward_out` | Purpose | | ------------------ | ------------ | ------------- | ------------- | -------------- | ----------------------------- | | **BoundaryAction** | MUST be `()` | any | any | any | Exogenous observation | | **Policy** | any | any | any | any | Decision logic | | **ControlAction** | any | any | any | any | Output observable y = C(x, d) | | **Mechanism** | any | any | MUST be `()` | MUST be `()` | State update | Violating the MUST constraints raises `GDSCompositionError` immediately at construction time. ### BoundaryAction Models exogenous observations — the system boundary. Has no forward inputs. 
```
from gds import BoundaryAction, interface

sensor = BoundaryAction(
    name="Temperature Sensor",
    interface=interface(forward_out=["Temperature"]),
)
```

### Policy

Core decision logic — maps observations to actions. No port restrictions.

```
from gds import Policy, interface

controller = Policy(
    name="PID Controller",
    interface=interface(
        forward_in=["Temperature", "Setpoint"],
        forward_out=["Heater Command"],
        backward_in=["Energy Cost"],
    ),
    params_used=["Kp", "Ki", "Kd", "setpoint"],
)
```

### ControlAction

Output observable — the system's observable output `y = C(x, d)` for composition with other systems. From the plant (inside) perspective, this is what the system emits. From the controller (outside) perspective at a `>>` boundary, it becomes a control action on the next system.

```
from gds import ControlAction, interface

plant = ControlAction(
    name="Room Plant",
    interface=interface(
        forward_in=["Heater Command"],
        forward_out=["Room State"],
        backward_out=["Energy Cost"],
    ),
)
```

### Mechanism

State update — the only blocks that write to state. Cannot have backward ports.

```
from gds import Mechanism, interface

update = Mechanism(
    name="Update Room",
    interface=interface(forward_in=["Room State"]),
    updates=[("Room", "temperature"), ("Room", "energy_consumed")],
)
```

## Composition Operators

### Sequential (`>>`)

```
pipeline = sensor >> controller >> plant
```

Auto-wires by token overlap between `forward_out` and `forward_in` ports. Raises `GDSTypeError` if no overlap.

### Parallel (`|`)

```
updates = update_s | update_i | update_r
```

Independent blocks — no type validation.

### Feedback (`.feedback()`)

```
from gds import Wiring
from gds.ir.models import FlowDirection

system = pipeline.feedback([
    Wiring(
        source_block="Room Plant",
        source_port="Energy Cost",
        target_block="PID Controller",
        target_port="Energy Cost",
        direction=FlowDirection.CONTRAVARIANT,
    )
])
```

Within-evaluation backward flow. CONTRAVARIANT direction by convention (not enforced at construction).
### Temporal Loop (`.loop()`) ``` system = pipeline.loop([ Wiring( source_block="Update Prey", source_port="Population", target_block="Observe", target_port="Population", direction=FlowDirection.COVARIANT, ) ]) ``` Cross-boundary forward flow. COVARIANT is mandatory -- CONTRAVARIANT raises `GDSTypeError`. ## Tagged Mixin All blocks inherit from `Tagged`, providing semantic annotations: ``` sensor = BoundaryAction( name="Sensor", interface=interface(forward_out=["Temperature"]), tags={"domain": "Observation"}, ) sensor.has_tag("domain") # True sensor.get_tag("domain") # "Observation" ``` Tags are inert — stripped at compile time, never affect verification or composition. # Composition Algebra The composition algebra is the core engine of GDS. Four operators combine blocks into larger systems, forming a composition tree that the compiler flattens into a flat intermediate representation (IR) of blocks + wirings + hierarchy. These operators are domain-neutral — they work identically whether you're composing stock-flow models, control systems, or open games. Domain DSLs build their composition trees using these same four primitives. ## Operator Reference | Operator | Python Syntax | Type | Constraint | Purpose | | ----------------- | -------------------- | --------------------- | --------------------------------------------- | ------------------------------------------ | | **Stack** | `a >> b` | `StackComposition` | Token overlap (auto) or explicit wiring | Sequential data flow | | **Parallel** | `a \| b` | `ParallelComposition` | None | Independent side-by-side blocks | | **Feedback** | `a.feedback(wiring)` | `FeedbackLoop` | Wiring should be CONTRAVARIANT (not enforced) | Backward signals within an evaluation | | **Temporal Loop** | `a.loop(wiring)` | `TemporalLoop` | Wiring must be COVARIANT | Forward signals across temporal boundaries | All four operators return a new `Block`, so they can be chained and nested freely. 
The result is always a tree of composition nodes with `AtomicBlock` leaves. ## Block Interfaces Every block has an `Interface` with four directional port slots: ``` ┌─────────────────────┐ forward_in ───►│ │───► forward_out │ Block │ backward_out ◄───│ │◄─── backward_in └─────────────────────┘ ``` - **forward_in / forward_out** — covariant data flow (domain inputs and outputs) - **backward_in / backward_out** — contravariant feedback flow (backward signals) Ports are created from human-readable names using the `port()` factory or the `interface()` helper: ``` from gds import interface iface = interface( forward_in=["Temperature", "Setpoint"], forward_out=["Heater Command"], backward_in=["Energy Cost"], ) ``` Each port name is automatically tokenized for structural type matching (see [Token-Based Matching](#token-based-matching) below). ______________________________________________________________________ ## Sequential Composition (`>>`) Sequential composition chains two blocks so that the first block's outputs can feed the second block's inputs. This is the most common operator and the backbone of any GDS system. ``` pipeline = sensor >> controller >> actuator ``` ### Data Flow ``` graph LR A["sensor"] -->|"forward_out → forward_in"| B["controller"] B -->|"forward_out → forward_in"| C["actuator"] ``` ### Token-Based Matching When you use `>>` without explicit wiring, the validator checks that the first block's `forward_out` tokens overlap with the second block's `forward_in` tokens. This is the **auto-wiring** mechanism. 
Port names are tokenized by splitting on `+` (space-plus-space) and `,` (comma-space), then lowercasing each part: | Port Name | Tokens | | -------------------------- | ----------------------------- | | `"Temperature"` | `{"temperature"}` | | `"Heater Command"` | `{"heater command"}` | | `"Temperature + Setpoint"` | `{"temperature", "setpoint"}` | | `"Temperature, Pressure"` | `{"temperature", "pressure"}` | Note that plain spaces are **not** delimiters — `"Heater Command"` is a single token `"heater command"`. Only `+` and `,` split. Two ports **match** when their token sets share at least one element. This means `"Temperature + Setpoint"` auto-wires to `"Temperature"` because they share the token `"temperature"`. ``` from gds import AtomicBlock, interface a = AtomicBlock( name="A", interface=interface(forward_out=["Temperature"]), ) b = AtomicBlock( name="B", interface=interface(forward_in=["Temperature"]), ) # Token overlap: {"temperature"} & {"temperature"} = {"temperature"} ✓ pipeline = a >> b ``` ### Token Mismatch Error If the tokens don't overlap at all, construction fails immediately with `GDSTypeError`: ``` a = AtomicBlock( name="A", interface=interface(forward_out=["Temperature"]), ) c = AtomicBlock( name="C", interface=interface(forward_in=["Pressure"]), ) # Token overlap: {"temperature"} & {"pressure"} = {} — empty! 
pipeline = a >> c # Raises GDSTypeError ``` The error message will show exactly which token sets failed to overlap: ``` GDSTypeError: Stack composition 'A >> C': first.forward_out tokens frozenset({'temperature'}) have no overlap with second.forward_in tokens frozenset({'pressure'}) ``` ### Explicit Wiring Fallback When tokens don't naturally overlap — or when you need precise control over connections — use `StackComposition` directly with explicit `Wiring` objects: ``` from gds import Wiring from gds.blocks.composition import StackComposition comp = StackComposition( name="A >> C", first=a, second=c, wiring=[ Wiring( source_block="A", source_port="Temperature", target_block="C", target_port="Pressure", ) ], ) ``` When explicit wiring is provided, the token overlap check is bypassed entirely. This is how domain DSLs handle cross-tier connections where port names use different vocabularies. ### Interface Propagation The composite block's interface is the **union** of both children's interfaces — all ports from both blocks are preserved: ``` a = AtomicBlock( name="A", interface=interface(forward_out=["Temperature"]), ) b = AtomicBlock( name="B", interface=interface( forward_in=["Temperature"], forward_out=["Command"], ), ) comp = a >> b # comp.interface.forward_out = (Port("Temperature"), Port("Command")) # comp.interface.forward_in = (Port("Temperature"),) ``` This union propagation means that outer compositions can see all ports from inner blocks, enabling multi-level wiring. ### Chaining `>>` is left-associative, so `a >> b >> c` creates: ``` graph TD S1["StackComposition: a >> b >> c"] S2["StackComposition: a >> b"] A["a (AtomicBlock)"] B["b (AtomicBlock)"] C["c (AtomicBlock)"] S1 -->|first| S2 S1 -->|second| C S2 -->|first| A S2 -->|second| B ``` The compiler flattens this binary tree into a flat list `[a, b, c]` in evaluation order. 
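The tokenization and matching rule described in this section can be sketched in a few lines of plain Python. This is an illustrative reimplementation of the documented behavior, not the library's internals — `tokenize` and `ports_match` are hypothetical names:

```
def tokenize(port_name: str) -> frozenset[str]:
    """Split a port name on ' + ' and ', ', then lowercase each part.

    Plain spaces are not delimiters, so "Heater Command" stays one token.
    """
    parts = [port_name]
    for delimiter in (" + ", ", "):
        parts = [piece for part in parts for piece in part.split(delimiter)]
    return frozenset(part.lower() for part in parts)


def ports_match(out_port: str, in_port: str) -> bool:
    """Two ports match when their token sets share at least one element."""
    return bool(tokenize(out_port) & tokenize(in_port))


assert ports_match("Temperature + Setpoint", "Temperature")  # shared "temperature"
assert not ports_match("Heater Command", "Command")          # one token, no overlap
```

Under this rule, a `>>` between two ports with disjoint token sets would be rejected, matching the `GDSTypeError` behavior shown above.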
______________________________________________________________________ ## Parallel Composition (`|`) Parallel composition places two blocks side-by-side with no shared wires. The blocks are completely independent — there is no type validation between them. ``` updates = update_prey | update_predator ``` ### Data Flow ``` graph LR subgraph parallel["update_prey | update_predator"] A["update_prey"] B["update_predator"] end ``` ### No Validation Unlike `>>`, the `|` operator performs no token checks. Any two blocks can be composed in parallel: ``` a = AtomicBlock(name="A", interface=interface(forward_out=["Temperature"])) b = AtomicBlock(name="B", interface=interface(forward_in=["Pressure"])) # No error — blocks are independent parallel = a | b ``` ### Interface Union The composite interface is the concatenation of both children's port tuples: ``` parallel = a | b # parallel.interface.forward_out = (Port("Temperature"),) # parallel.interface.forward_in = (Port("Pressure"),) ``` This is how you build "tiers" in the standard composition pattern — group related blocks in parallel, then wire tiers together sequentially. ### Building Tiers A common helper pattern from the DSLs: ``` def parallel_tier(blocks: list[Block]) -> Block: """Compose a list of blocks in parallel.""" tier = blocks[0] for b in blocks[1:]: tier = tier | b return tier # Three stock update mechanisms running in parallel stock_tier = parallel_tier([update_s, update_i, update_r]) ``` ______________________________________________________________________ ## Feedback Loop (`.feedback()`) Feedback wraps a block (or composition) with backward signals that flow within a single evaluation. This models intra-evaluation feedback -- information that propagates backward through the system before the current evaluation completes. 
```
from gds import Wiring
from gds.ir.models import FlowDirection

system = (controller >> plant).feedback([
    Wiring(
        source_block="Room",
        source_port="Energy Cost",
        target_block="PID Controller",
        target_port="Energy Cost",
        direction=FlowDirection.CONTRAVARIANT,
    )
])
```

### Data Flow

```
graph LR
    C["PID Controller"] -->|"Heater Command"| R["Room"]
    R -.->|"Energy Cost (backward)"| C
    style R fill:#f9f,stroke:#333
```

### Contravariant Direction

Feedback wirings carry signals **backward** -- from outputs to inputs within the same evaluation. The wiring direction should be `FlowDirection.CONTRAVARIANT` to reflect this backward flow.

**Not enforced at construction.** Unlike `TemporalLoop`, which **rejects** non-COVARIANT wirings at construction time, `FeedbackLoop` does not validate the direction. Using CONTRAVARIANT is a convention that correctly reflects the semantics, but passing COVARIANT will not raise an error.

Feedback uses the `backward_out` and `backward_in` port slots:

- The **source block** emits the feedback signal on its `backward_out` port
- The **target block** receives it on its `backward_in` port

### Interface Preservation

The feedback wrapper preserves the inner block's interface unchanged. Feedback is an internal routing concern — it doesn't alter the block's external boundary:

```
inner = controller >> plant
fb = inner.feedback([...])
assert fb.interface == inner.interface  # True
```

### Use Cases

- **Cost/utility feedback** — a downstream block reports a cost that upstream blocks use for optimization
- **Constraint propagation** — feasibility constraints flow backward from mechanisms to policies
- **Equilibrium computation** — game-theoretic blocks exchange payoff signals within a step

______________________________________________________________________

## Temporal Loop (`.loop()`)

Temporal loops connect a block's outputs to its inputs across temporal boundaries.
This is how state persists -- the output of one evaluation becomes the input of the subsequent application. ``` system = pipeline.loop( [ Wiring( source_block="Update Prey", source_port="Population", target_block="Observe", target_port="Population", direction=FlowDirection.COVARIANT, ) ], exit_condition="converged", ) ``` ### Data Flow ``` graph LR subgraph "Evaluation k" O["Observe"] -->|"Signal"| D["Decide"] D -->|"Action"| U["Update Prey"] end U -.->|"Population (recurrence)"| O style U fill:#bbf,stroke:#333 ``` ### Covariant Only Temporal wirings must be `FlowDirection.COVARIANT`. This is enforced at construction time — attempting to use `CONTRAVARIANT` raises `GDSTypeError`: ``` # This raises GDSTypeError pipeline.loop([ Wiring( source_block="B", source_port="Command", target_block="A", target_port="Temperature", direction=FlowDirection.CONTRAVARIANT, # Not allowed! ) ]) ``` The error message: ``` GDSTypeError: TemporalLoop 'A >> B [loop]': temporal wiring B.Command → A.Temperature must be COVARIANT (got contravariant) ``` The rationale: temporal loops carry state forward across recurrence boundaries. Backward information flow is not structurally meaningful for state evolution. Use `.feedback()` instead for within-evaluation backward signals. ### Exit Condition The optional `exit_condition` parameter documents when iteration should terminate: ``` system = pipeline.loop(wiring, exit_condition="max_steps reached or population stable") ``` This is structural metadata — GDS does not evaluate or enforce exit conditions. It exists for documentation and downstream tooling. 
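Since `exit_condition` is only metadata, enforcement belongs to downstream tooling. As a hedged sketch of what such a Layer-2 runner *might* do — `run_loop` below is hypothetical and not part of `gds-framework` — it could pair the documented string with a concrete predicate and iterate the transition map:

```
from typing import Callable

def run_loop(
    h: Callable[[float], float],                     # one full step x -> h(x)
    x0: float,
    exit_predicate: Callable[[float, float], bool],  # (x_prev, x_next) -> stop?
    max_steps: int = 1000,
) -> list[float]:
    """Iterate the state transition map until the exit predicate holds."""
    trajectory = [x0]
    for _ in range(max_steps):
        x_next = h(trajectory[-1])
        trajectory.append(x_next)
        if exit_predicate(trajectory[-2], x_next):
            break
    return trajectory

# Interpreting exit_condition="population stable" as a convergence predicate.
traj = run_loop(lambda x: 0.5 * x + 1.0, 0.0,
                lambda prev, nxt: abs(nxt - prev) < 1e-6)
assert abs(traj[-1] - 2.0) < 1e-4  # fixed point of x -> 0.5x + 1
```

The separation mirrors the temporal stack: the loop wiring asserts the recurrence structurally, while a solver decides how (and how long) to run it.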
### Interface Preservation Like feedback, the temporal loop preserves the inner block's interface: ``` inner = observe >> decide >> update looped = inner.loop([...]) assert looped.interface == inner.interface # True ``` ______________________________________________________________________ ## Explicit Wiring The `Wiring` model is the universal connection primitive used by feedback, temporal loops, and explicit stack compositions: ``` from gds import Wiring from gds.ir.models import FlowDirection w = Wiring( source_block="Producer", # Name of the source block source_port="Output Signal", # Port name on the source target_block="Consumer", # Name of the target block target_port="Input Signal", # Port name on the target direction=FlowDirection.COVARIANT, # Default ) ``` ### When to Use Explicit Wiring | Scenario | Operator | Wiring Required? | | ---------------------------------------- | -------------------------------- | --------------------- | | Ports share tokens | `a >> b` | No — auto-wired | | Ports use different vocabularies | `StackComposition(wiring=[...])` | Yes | | Backward feedback within evaluation | `.feedback(wiring)` | Yes — always explicit | | State carried across temporal boundaries | `.loop(wiring)` | Yes — always explicit | Auto-wiring only applies to `>>` (stack composition). All other operators require explicit `Wiring` objects. ### Direction - `FlowDirection.COVARIANT` (default) — forward data flow. Used for temporal loops and most stack wirings. - `FlowDirection.CONTRAVARIANT` — backward feedback flow. Used for feedback loops. 
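The direction rules can be summarized as a toy validator. This mirrors the documented behavior — `TemporalLoop` rejects non-COVARIANT wirings while `FeedbackLoop` accepts either direction — but the names below are illustrative stand-ins, not the library's internals:

```
from enum import Enum

class FlowDirection(Enum):
    COVARIANT = "covariant"          # forward data flow (default)
    CONTRAVARIANT = "contravariant"  # backward feedback flow

class GDSTypeError(TypeError):
    """Stand-in for the framework's type error."""

def check_temporal_wiring(label: str, direction: FlowDirection) -> None:
    """Temporal loops carry state forward, so only COVARIANT is allowed."""
    if direction is not FlowDirection.COVARIANT:
        raise GDSTypeError(
            f"temporal wiring {label} must be COVARIANT (got {direction.value})"
        )

check_temporal_wiring("B.Command -> A.Temperature", FlowDirection.COVARIANT)  # ok
try:
    check_temporal_wiring("B.Command -> A.Temperature", FlowDirection.CONTRAVARIANT)
except GDSTypeError as err:
    assert "must be COVARIANT" in str(err)
```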
______________________________________________________________________

## Common Patterns

### The Standard Tiered Pattern

The stock-flow, control, and games DSLs all converge on the same tiered composition structure:

```
(exogenous inputs | observers) >> (decision logic) >> (state dynamics)
    .loop(state dynamics → observers)
```

```
graph LR
    subgraph "Tier 1: Boundary"
        BA["BoundaryAction"]
    end
    subgraph "Tier 2: Decision"
        P["Policy"]
    end
    subgraph "Tier 3: State"
        M["Mechanism"]
    end
    BA -->|">>"| P
    P -->|">>"| M
    M -.->|".loop()"| BA
```

In code:

```
from gds import BoundaryAction, Policy, Mechanism, Wiring, interface
from gds.ir.models import FlowDirection

# Tier 1: Exogenous inputs
sensor = BoundaryAction(
    name="Temperature Sensor",
    interface=interface(forward_out=["Temperature"]),
)

# Tier 2: Decision logic
controller = Policy(
    name="PID Controller",
    interface=interface(
        forward_in=["Temperature"],
        forward_out=["Heater Command"],
    ),
)

# Tier 3: State update
actuator = Mechanism(
    name="Update Room",
    interface=interface(
        forward_in=["Heater Command"],
        forward_out=["Room Temperature"],
    ),
    updates=[("Room", "temperature")],
)

# Compose: sequential across tiers, temporal loop for state feedback
system = (sensor >> controller >> actuator).loop([
    Wiring(
        source_block="Update Room",
        source_port="Room Temperature",
        target_block="Temperature Sensor",
        target_port="Temperature",
        direction=FlowDirection.COVARIANT,
    )
])
```

### Fan-Out (One-to-Many)

A single block's output feeds multiple downstream blocks.
Use parallel composition for the consumers, then sequence: ``` # One producer, two consumers producer = AtomicBlock( name="Sensor", interface=interface(forward_out=["Signal"]), ) consumer_a = AtomicBlock( name="Display", interface=interface(forward_in=["Signal"]), ) consumer_b = AtomicBlock( name="Logger", interface=interface(forward_in=["Signal"]), ) system = producer >> (consumer_a | consumer_b) ``` ``` graph LR S["Sensor"] --> D["Display"] S --> L["Logger"] ``` Both consumers match on the `"signal"` token, so auto-wiring connects the producer to both. ### Fan-In (Many-to-One) Multiple blocks feed into a single downstream block. Use parallel composition for the producers: ``` sensor_a = AtomicBlock( name="Temp Sensor", interface=interface(forward_out=["Temperature"]), ) sensor_b = AtomicBlock( name="Setpoint Source", interface=interface(forward_out=["Setpoint"]), ) controller = AtomicBlock( name="Controller", interface=interface(forward_in=["Temperature", "Setpoint"]), ) system = (sensor_a | sensor_b) >> controller ``` ``` graph LR T["Temp Sensor"] --> C["Controller"] S["Setpoint Source"] --> C ``` ### Multi-Tier with Explicit Wiring When tiers use different naming conventions, build explicit inter-tier wirings: ``` from gds.blocks.composition import StackComposition tier1 = producer_a | producer_b tier2 = consumer_x | consumer_y # Build explicit wirings by matching ports manually wirings = [ Wiring( source_block="producer_a", source_port="Output", target_block="consumer_x", target_port="Input", ), Wiring( source_block="producer_b", source_port="Result", target_block="consumer_y", target_port="Data", ), ] system = StackComposition( name="tier1 >> tier2", first=tier1, second=tier2, wiring=wirings, ) ``` ______________________________________________________________________ ## Error Reference ### `GDSTypeError` Raised at construction time when token-based type checking fails. 
| Cause | Message Pattern | | --------------------------------- | ----------------------------------------------------------------------------------------------------------- | | No token overlap in `>>` | `Stack composition 'X': first.forward_out tokens {...} have no overlap with second.forward_in tokens {...}` | | Contravariant wiring in `.loop()` | `TemporalLoop 'X': temporal wiring A.port → B.port must be COVARIANT (got contravariant)` | ### `GDSCompositionError` Raised at construction time when role-specific structural constraints are violated. | Cause | Message Pattern | | ------------------------------------- | -------------------------------------------------------------------------------------------------------- | | BoundaryAction has `forward_in` ports | `BoundaryAction 'X': forward_in must be empty (boundary actions receive no internal forward signals)` | | Mechanism has backward ports | `Mechanism 'X': backward ports must be empty (mechanisms write state, they don't pass backward signals)` | Both errors are subclasses of `GDSError` and are raised during Pydantic model validation — they occur the instant you construct the invalid block or composition, not at compile time. ______________________________________________________________________ ## How It Compiles The composition tree you build with these operators is not executed directly. Instead, the compiler walks the tree in three stages: 1. **Flatten** — recursively calls `flatten()` on the tree to extract all `AtomicBlock` leaves in evaluation order 1. **Wire** — walks the tree again to collect all wirings (auto-wired from `>>`, explicit from `StackComposition.wiring`, feedback from `.feedback()`, temporal from `.loop()`) 1. **Hierarchy** — captures the tree structure for visualization, flattening binary chains into n-ary groups The result is a `SystemIR` — a flat list of `BlockIR` nodes + `WiringIR` edges + a `HierarchyNodeIR` tree. This IR is what verification checks operate on. 
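The flatten stage can be illustrated with a toy recursion over a binary composition tree. This is a sketch of the idea only — the real `flatten_blocks()` walks `Block` nodes and takes a `block_compiler` callback to emit `BlockIR`:

```
from dataclasses import dataclass

@dataclass
class Atomic:   # stand-in for AtomicBlock
    name: str

@dataclass
class Stack:    # stand-in for StackComposition
    first: object
    second: object

def flatten(node) -> list[str]:
    """Collect atomic leaves left-to-right — evaluation order."""
    if isinstance(node, Atomic):
        return [node.name]
    return flatten(node.first) + flatten(node.second)

# a >> b >> c is left-associative: Stack(Stack(a, b), c)
tree = Stack(Stack(Atomic("a"), Atomic("b")), Atomic("c"))
assert flatten(tree) == ["a", "b", "c"]
```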
``` graph TD BT["Block Tree
(composition operators)"] F["flatten()"] W["extract_wirings()"] H["extract_hierarchy()"] SIR["SystemIR
(blocks + wirings + hierarchy)"] BT --> F --> SIR BT --> W --> SIR BT --> H --> SIR ``` See the [Architecture](https://blockscience.github.io/gds-core/framework/guide/architecture/index.md) guide for more detail on the compilation pipeline and IR structure. # Glossary GDS terminology mapped to framework concepts. | Term | Definition | In the framework | | -------------------------------- | ------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------- | | **State** (x) | The current configuration of the system — a point in the state space | A value held by `StateVariable`s inside an `Entity` | | **State Space** (X) | All possible configurations; can be any data structure, not just ℝⁿ | Product of all `Entity` variables, each typed by `TypeDef` | | **Exogenous Signal** (z) | An external signal entering the system from outside | `BoundaryAction` outputs flowing through `Port`s. Paper uses u for the selected action; codebase uses z for exogenous signals to avoid conflation. | | **Decision** (d) | The output of the policy mapping d = g(x, z) | `Policy` forward_out ports. Corresponds to the paper's selected action u ∈ U_x. 
| **Admissible Input Space** (U_x) | The set of inputs available *given* the current state x (paper Def 2.5) | Structural skeleton via `AdmissibleInputConstraint`; behavioral constraint requires runtime |
| **Input Map** (g) | Maps state and exogenous signals to a decision: g(x, z) → d | `Policy` blocks (endogenous decision logic) |
| **State Update Map** (f) | Takes current state and decision, produces the next state: f(x, d) → x⁺ | `Mechanism` blocks — the only blocks that write to state |
| **State Transition Map** (h) | The composed pipeline h = f ∘ g — one full step of the system | Derived as `CanonicalGDS` via `project_canonical()` |
| **Trajectory** | A sequence of states under repeated application of h | Structural recurrence via `.loop()` |
| **Reachability** | Can the system reach state y from state x through some sequence of inputs? | `check_reachability()` in the verification engine |
| **Controllability** | Can the system be steered to a target state from any nearby initial condition? | Formal property checked at the spec level |
| **Configuration Space** | The subset of X where every point is reachable from some initial condition | Characterized by transitive closure over the wiring graph |

## Intellectual Lineage

- **GDS formalism** (Roxin 1960s; [Zargham & Shorish 2022](https://doi.org/10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc)) — state transitions composed over arbitrary data structures
- **MSML** (BlockScience) — block roles, parameter tracking, typed transmission channels
- **BDP-lib** (Block Diagram Protocol) — abstract/concrete separation, structural validation
- **Categorical cybernetics** (Ghani, Hedges et al.) — bidirectional composition with contravariant feedback

# Build, Compile, Verify Pipeline

This guide explains the end-to-end pipeline for building a GDS specification, compiling it into an intermediate representation, verifying its structural and semantic properties, and extracting the canonical mathematical decomposition.
## Overview A GDS model moves through six stages from domain definitions to verified formal structure: ``` flowchart LR A["Define
types, spaces,
entities"] --> B["Build
blocks with
interfaces"] B --> C["Compose
operators:
>> | .feedback()
.loop()"] C --> D["Register
GDSSpec
registry"] D --> E["Compile
SystemIR"] E --> F["Verify
generic +
semantic checks"] D --> G["Canonical
h = f ∘ g
projection"] ``` There are two independent paths after registration: - **Compile path**: `GDSSpec` (or raw Block tree) --> `compile_system()` --> `SystemIR` --> `verify()` -- validates structural topology - **Semantic path**: `GDSSpec` --> semantic checks (SC-001..SC-007) + `project_canonical()` -- validates domain properties and extracts the formal decomposition These paths are independent. You can compile without a `GDSSpec`, and you can run semantic checks without compiling. Most real workflows use both. ## Step 1: Define Your Domain Every GDS model starts with domain definitions: what types of data exist, what communication channels carry them, and what stateful entities persist across temporal boundaries. ``` from gds import typedef, space, entity, state_var # Types -- runtime-validated data types Temperature = typedef("Temperature", float, units="K") Command = typedef("Command", float) Energy = typedef("Energy", float, constraint=lambda x: x >= 0) # Spaces -- typed communication channels (transient within an evaluation) sensor_space = space("SensorSpace", measured_temp=Temperature) command_space = space("CommandSpace", heater_command=Command) # Entities -- stateful objects that persist across temporal boundaries (the state space X) room = entity("Room", temperature=state_var(Temperature, symbol="T"), energy_consumed=state_var(Energy, symbol="E"), ) ``` These objects are plain Pydantic models. They do not reference each other and have no behavior. They become meaningful only when registered into a `GDSSpec`. ### What `GDSSpec.collect()` does `GDSSpec` is a mutable registry, not a validator. It stores objects by type and enforces name uniqueness. 
The `collect()` method type-dispatches each argument to the appropriate `register_*()` call: ``` from gds import GDSSpec spec = GDSSpec(name="Thermostat") spec.collect( Temperature, Command, Energy, # TypeDefs sensor_space, command_space, # Spaces room, # Entities ) ``` This is equivalent to calling `register_type()`, `register_space()`, and `register_entity()` individually. `collect()` accepts `TypeDef`, `Space`, `Entity`, `Block`, and `ParameterDef` instances. `SpecWiring` must be registered explicitly via `register_wiring()`. ## Step 2: Build Blocks Blocks are the computational units of a GDS model. Each block has a **role** (what kind of computation it represents) and an **interface** (what ports it exposes for composition). ``` from gds import BoundaryAction, Policy, Mechanism, interface # BoundaryAction -- exogenous input (no forward_in ports) sensor = BoundaryAction( name="Temperature Sensor", interface=interface(forward_out=["Measured Temperature"]), ) # Policy -- decision / observation logic controller = Policy( name="PID Controller", interface=interface( forward_in=["Measured Temperature"], forward_out=["Heater Command"], ), params_used=["Kp", "Ki"], ) # Mechanism -- state update (no backward ports) heater = Mechanism( name="Heater", interface=interface( forward_in=["Heater Command"], forward_out=["Room Temperature"], ), updates=[("Room", "temperature"), ("Room", "energy_consumed")], ) ``` Four roles exist. 
Each enforces constraints at construction time: | Role | Purpose | Constraint | | ---------------- | ---------------------------- | ---------------------------------------------- | | `BoundaryAction` | Exogenous input | `forward_in` must be empty | | `Policy` | Decision / observation logic | No special constraints | | `Mechanism` | State update | `backward_in` and `backward_out` must be empty | | `ControlAction` | Reserved | No special constraints | Register blocks into the spec: ``` spec.collect(sensor, controller, heater) ``` ## Step 3: Compose Compose blocks into a tree using four operators: | Operator | Syntax | Meaning | | ---------- | -------------------- | --------------------------------------- | | Sequential | `a >> b` | Output of `a` feeds input of `b` | | Parallel | `a \| b` | Side-by-side, no shared wires | | Feedback | `a.feedback(wiring)` | Backward flow within an evaluation | | Temporal | `a.loop(wiring)` | Forward flow across temporal boundaries | ``` # Build the composition tree root = sensor >> controller >> heater ``` The `>>` operator auto-wires ports by **token overlap**: port names are tokenized (split on `+` and `,`, then lowercased), and ports with overlapping tokens are connected automatically. For example, `"Temperature + Setpoint"` auto-wires to a block with `"Temperature"` because they share the token `"temperature"`. When auto-wiring is insufficient, use explicit `Wiring` declarations in a `StackComposition`. See the [Blocks & Roles](https://blockscience.github.io/gds-core/framework/guide/blocks/index.md) guide for details. ## Step 4: Compile to SystemIR `compile_system()` transforms a Block composition tree into a flat `SystemIR` -- the intermediate representation used for verification and visualization. The compiler runs three stages: ``` flowchart TB subgraph "compile_system(name, root)" direction TB S1["Stage 1: Flatten
DFS walk extracts all
AtomicBlock leaves"] S2["Stage 2: Wire
Extract explicit wirings +
auto-wire stack compositions"] S3["Stage 3: Hierarchy
Build composition tree
for visualization"] S1 --> IR1["list[BlockIR]"] S2 --> IR2["list[WiringIR]"] S3 --> IR3["HierarchyNodeIR"] end IR1 --> SIR["SystemIR"] IR2 --> SIR IR3 --> SIR ``` ``` from gds import compile_system system_ir = compile_system("Thermostat", root) ``` ### What each stage does **Stage 1 -- Flatten** (`flatten_blocks`): Walks the composition tree depth-first and extracts every `AtomicBlock` leaf. Each leaf is passed through a `block_compiler` callback that produces a `BlockIR` with the block's name and interface signature. **Stage 2 -- Wire** (`extract_wirings`): Walks the tree again, collecting all connections between blocks. For `StackComposition` nodes, it emits either the explicit `Wiring` declarations or auto-wires by token overlap. For `FeedbackLoop` and `TemporalLoop` nodes, it emits the feedback/temporal wirings with appropriate flags. Each connection is tagged with its origin (`AUTO`, `EXPLICIT`, `FEEDBACK`, `TEMPORAL`). **Stage 3 -- Hierarchy** (`extract_hierarchy`): Builds a `HierarchyNodeIR` tree that mirrors the composition structure. Binary sequential/parallel chains are flattened into n-ary groups for cleaner visualization. ### SystemIR structure The output `SystemIR` is a flat, serializable model: ``` class SystemIR(BaseModel): name: str blocks: list[BlockIR] # All atomic blocks wirings: list[WiringIR] # All connections inputs: list[InputIR] # External inputs (domain-supplied) composition_type: CompositionType hierarchy: HierarchyNodeIR | None = None # Composition tree for visualization source: str = "" # Origin identifier metadata: dict[str, Any] = {} # Domain-specific metadata parameter_schema: ParameterSchema = ... # Parameter space (Theta) ``` `SystemIR` has no knowledge of types, spaces, entities, or roles. It is a purely structural graph of blocks and wires. This is by design -- it separates the compilation concern (structural topology) from the specification concern (domain semantics). 
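The three compiler stages above can be sketched in miniature. The following is a toy illustration in plain Python — `Atomic`, `Seq`, and their port fields are hypothetical stand-ins, not the gds-framework classes — showing a depth-first flatten (Stage 1) and token-overlap auto-wiring (Stage 2):

```python
from dataclasses import dataclass

# Hypothetical stand-ins for a block tree (NOT the gds classes).
@dataclass
class Atomic:
    name: str
    in_port: str = ""
    out_port: str = ""

@dataclass
class Seq:
    left: object
    right: object

def flatten(node):
    """Stage 1: depth-first walk that collects atomic leaves."""
    if isinstance(node, Atomic):
        return [node]
    return flatten(node.left) + flatten(node.right)

def tokens(name):
    """Lowercased whitespace tokens of a port name."""
    return frozenset(name.lower().split())

def auto_wire(node):
    """Stage 2: wire sequential neighbours whose port tokens overlap."""
    if isinstance(node, Atomic):
        return []
    wires = auto_wire(node.left) + auto_wire(node.right)
    src, tgt = flatten(node.left)[-1], flatten(node.right)[0]
    shared = tokens(src.out_port) & tokens(tgt.in_port)
    if shared:
        wires.append((src.name, tgt.name, " ".join(sorted(shared))))
    return wires

sensor = Atomic("Sensor", out_port="Measured Temperature")
controller = Atomic("Controller", in_port="Measured Temperature", out_port="Heater Command")
heater = Atomic("Heater", in_port="Heater Command")

root = Seq(Seq(sensor, controller), heater)   # sensor >> controller >> heater
blocks = [b.name for b in flatten(root)]      # ["Sensor", "Controller", "Heater"]
wires = auto_wire(root)
```

Stage 3 (hierarchy extraction) would record the `Seq` nesting itself for visualization; the real compiler emits `BlockIR`/`WiringIR`/`HierarchyNodeIR` models rather than tuples.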
### Domain-specific compilation Domain DSL packages (stockflow, control, games) provide their own `block_compiler` and `wiring_emitter` callbacks to `compile_system()`. These produce `BlockIR` with domain-specific `block_type` and `metadata` fields. The three stages are also available as standalone functions (`flatten_blocks`, `extract_wirings`, `extract_hierarchy`) for custom compiler pipelines. ## Step 5: Verify Verification is where GDS catches specification errors. There are **two independent verification paths** that operate on different objects and catch different classes of problems. ``` flowchart TB subgraph "Generic Checks (Layer 0)" direction TB SIR["SystemIR"] --> V1["verify(system_ir)"] V1 --> G["G-001..G-006
Structural topology"] end subgraph "Semantic Checks (Layer 1)" direction TB SPEC["GDSSpec"] --> SC1["check_completeness(spec)"] SPEC --> SC2["check_determinism(spec)"] SPEC --> SC3["check_reachability(spec, a, b)"] SPEC --> SC4["check_type_safety(spec)"] SPEC --> SC5["check_parameter_references(spec)"] SPEC --> SC6["check_canonical_wellformedness(spec)"] end G --> R1["VerificationReport"] SC1 --> F["list[Finding]"] SC2 --> F SC3 --> F SC4 --> F SC5 --> F SC6 --> F ``` ### Path A: Generic checks on SystemIR Generic checks validate the **structural topology** of a compiled system. They know nothing about types, spaces, or domain semantics -- only blocks, wirings, and their signatures. ``` from gds import verify report = verify(system_ir) print(f"{report.checks_passed}/{report.checks_total} checks passed") print(f"Errors: {report.errors}, Warnings: {report.warnings}") for finding in report.findings: if not finding.passed: print(f" [{finding.severity}] {finding.check_id}: {finding.message}") ``` The six generic checks: | Check | Name | What It Validates | | ----- | ----------------------------- | ------------------------------------------------------------------------------------------------ | | G-001 | Domain/codomain matching | Covariant wiring labels are token-subsets of source output or target input | | G-002 | Signature completeness | Every block has at least one input and one output slot | | G-003 | Direction consistency | No COVARIANT+feedback or CONTRAVARIANT+temporal contradictions; contravariant port-slot matching | | G-004 | Dangling wirings | All wiring endpoints reference blocks or inputs that exist in the system | | G-005 | Sequential type compatibility | Stack wiring labels are token-subsets of BOTH source output AND target input | | G-006 | Covariant acyclicity | The covariant (within-evaluation) flow graph is a DAG | You can also run a subset of checks: ``` from gds.verification.generic_checks import ( check_g001_domain_codomain_matching, 
check_g006_covariant_acyclicity, ) report = verify(system_ir, checks=[ check_g001_domain_codomain_matching, check_g006_covariant_acyclicity, ]) ``` ### Path B: Semantic checks on GDSSpec Semantic checks validate **domain properties** at the specification level. They operate on `GDSSpec` (not `SystemIR`) because they need access to entities, types, spaces, parameters, and block roles -- information that is lost during compilation. ``` from gds.verification.spec_checks import ( check_completeness, check_determinism, check_type_safety, check_parameter_references, check_canonical_wellformedness, ) # Each returns list[Finding] findings = check_completeness(spec) findings += check_determinism(spec) findings += check_type_safety(spec) findings += check_parameter_references(spec) findings += check_canonical_wellformedness(spec) for f in findings: if not f.passed: print(f" [{f.severity}] {f.check_id}: {f.message}") ``` The semantic checks: | Check | Name | What It Validates | | ------ | ------------------------ | -------------------------------------------------------------------------- | | SC-001 | Completeness | Every entity variable is updated by at least one Mechanism | | SC-002 | Determinism | No entity variable is updated by multiple Mechanisms in the same wiring | | SC-003 | Reachability | Signal can propagate from one block to another through wires | | SC-004 | Type safety | Wire space references resolve to registered spaces | | SC-005 | Parameter references | All `params_used` entries on blocks match registered parameter definitions | | SC-006 | Canonical wellformedness | At least one Mechanism exists (f is non-empty) | | SC-007 | Canonical wellformedness | State space X is non-empty (entities with variables exist) | Why two verification paths? Generic checks and semantic checks answer fundamentally different questions: - **Generic checks** ask: "Is this composition graph structurally sound?" They work on any Block tree, with or without a `GDSSpec`. 
- **Semantic checks** ask: "Does this specification make sense as a dynamical system?" They require the full domain context that only `GDSSpec` provides. A model can pass all generic checks (structurally valid graph) but fail semantic checks (e.g., orphan state variables that no Mechanism updates). Conversely, a specification can pass all semantic checks but produce a `SystemIR` with structural problems. ### Findings and reports Both paths produce `Finding` objects: ``` class Finding(BaseModel): check_id: str # e.g., "G-001" or "SC-002" severity: Severity # ERROR, WARNING, or INFO message: str # Human-readable description source_elements: list[str] # Block/variable names involved passed: bool # True if the check passed ``` `verify()` aggregates findings into a `VerificationReport` with convenience properties: ``` class VerificationReport(BaseModel): system_name: str findings: list[Finding] @property def errors(self) -> int: ... # Failed findings with ERROR severity @property def warnings(self) -> int: ... # Failed findings with WARNING severity @property def checks_passed(self) -> int: ... @property def checks_total(self) -> int: ... 
```

### Custom checks

You can register custom verification checks with the `@gds_check` decorator:

```
from gds import gds_check, all_checks, verify, Finding, Severity, SystemIR

@gds_check("CUSTOM-001", Severity.WARNING)
def check_no_orphan_blocks(system: SystemIR) -> list[Finding]:
    """Flag blocks with no wirings."""
    wired = {w.source for w in system.wirings} | {w.target for w in system.wirings}
    return [
        Finding(
            check_id="CUSTOM-001",
            severity=Severity.WARNING,
            message=f"Block '{b.name}' has no wirings",
            source_elements=[b.name],
            passed=False,
        )
        for b in system.blocks
        if b.name not in wired
    ]

# Run built-in + custom checks together
report = verify(system_ir, checks=all_checks())
```

## Step 6: Canonical Projection

`project_canonical()` is a pure function that derives the formal GDS mathematical decomposition from a `GDSSpec`:

$$h_\theta : X \to X \quad \text{where} \quad h = f \circ g, \quad \theta \in \Theta$$

```
from gds import project_canonical

canonical = project_canonical(spec)
print(canonical.formula())
# "h_θ : X → X (h = f_θ ∘ g_θ, θ ∈ Θ)"  (with parameters)
# "h : X → X (h = f ∘ g)"               (without parameters)

# State space X: entity variables
print(f"|X| = {len(canonical.state_variables)}")
for entity_name, var_name in canonical.state_variables:
    print(f"  {entity_name}.{var_name}")

# Decomposition
print(f"|f| = {len(canonical.mechanism_blocks)} mechanisms")
print(f"|g| = {len(canonical.policy_blocks)} policies")
print(f"|U| = {len(canonical.input_ports)} input ports")
print(f"|D| = {len(canonical.decision_ports)} decision ports")
```

The canonical projection classifies every registered block by its role:

| Role | Maps to | Canonical component |
| ---------------- | ------------------------ | ----------------------------- |
| `BoundaryAction` | Exogenous signal space Z | Exogenous inputs |
| `Policy` | Decision space D | g: X x Z --> D |
| `Mechanism` | State transition | f: X x D --> X |
| `ControlAction` | Output space Y | Output observable y = C(x, d) |
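The role-to-component mapping above can be mirrored by a toy classification pass — plain Python with hypothetical block names, not the gds data model:

```python
# Toy role classification in the spirit of project_canonical()
# (illustrative only; block names are hypothetical).
blocks = [
    ("Sensor", "BoundaryAction"),
    ("Controller", "Policy"),
    ("Heater", "Mechanism"),
]

component_for_role = {
    "BoundaryAction": "exogenous",  # exogenous signal space Z
    "Policy": "g",                  # g : X x Z -> D
    "Mechanism": "f",               # f : X x D -> X
    "ControlAction": "outputs",     # output observable y = C(x, d)
}

canonical = {"exogenous": [], "g": [], "f": [], "outputs": []}
for name, role in blocks:
    canonical[component_for_role[role]].append(name)

# SC-006-style wellformedness: f must be non-empty
assert canonical["f"], "no Mechanism registered -- canonical f is empty"
```

The SC-006 wellformedness rule (at least one Mechanism, so f is non-empty) falls out as a one-line assertion on the classified result.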
`project_canonical()` operates on `GDSSpec`, not `SystemIR`, because it needs role information and entity definitions. It is deterministic, stateless, and never mutates the spec. The `CanonicalGDS` result is frozen (immutable).

**When to use canonical projection vs verification**

- Use **verification** to check if a model is well-formed (catches errors).
- Use **canonical projection** to understand what a model *means* mathematically (extracts structure).
- SC-006 and SC-007 bridge both: they run `project_canonical()` internally to check that the canonical form is well-formed.

## Two Type Systems

GDS has two coexisting type systems that serve different purposes at different stages of the pipeline:

### Token-based types (compile-time)

Used during **composition and wiring**. Port names are automatically tokenized -- split on spaces, lowercased into a `frozenset`. The `>>` operator and auto-wiring use token overlap to match ports.

```
# These ports auto-wire because they share the token "temperature"
# "Measured Temperature" -> {"measured", "temperature"}
# "Room Temperature" -> {"room", "temperature"}
# Overlap: {"temperature"} -- match!
```

Token matching is used by the compiler (Stage 2) and by generic checks G-001 and G-005.

### TypeDef-based types (runtime)

Used for **data validation** at the value level. `TypeDef` wraps a Python type with an optional constraint predicate. Used by `Space` and `Entity` to validate actual data values.

```
Temperature = typedef("Temperature", float, units="K")
Energy = typedef("Energy", float, constraint=lambda x: x >= 0)
```

TypeDefs are never invoked during compilation. They exist for runtime validation when actual data flows through the system (e.g., during simulation, which is outside the current scope of GDS).

These type systems do not interact: token-based matching operates on port name strings, while TypeDef validation operates on Python values.
A port named `"Temperature"` will auto-wire to any port whose name contains the token `"temperature"`, regardless of what `TypeDef` (if any) is associated with the data flowing through it. ## Complete Example Putting it all together: ``` from gds import ( GDSSpec, BoundaryAction, Policy, Mechanism, compile_system, verify, project_canonical, typedef, space, entity, state_var, interface, ) from gds.verification.spec_checks import ( check_completeness, check_determinism, check_canonical_wellformedness, ) # --- Step 1: Define domain --- Temperature = typedef("Temperature", float, units="K") Command = typedef("Command", float) room = entity("Room", temperature=state_var(Temperature, symbol="T")) # --- Step 2: Build blocks --- sensor = BoundaryAction( name="Sensor", interface=interface(forward_out=["Temperature"]), ) controller = Policy( name="Controller", interface=interface( forward_in=["Temperature"], forward_out=["Command"], ), ) heater = Mechanism( name="Heater", interface=interface( forward_in=["Command"], forward_out=["Temperature"], ), updates=[("Room", "temperature")], ) # --- Step 3: Compose --- root = sensor >> controller >> heater # --- Step 4: Register --- spec = GDSSpec(name="Thermostat") spec.collect(Temperature, Command, room, sensor, controller, heater) # --- Step 5a: Compile + generic verification --- system_ir = compile_system("Thermostat", root) report = verify(system_ir) print(f"Generic: {report.checks_passed}/{report.checks_total} passed") # --- Step 5b: Semantic verification --- findings = check_completeness(spec) findings += check_determinism(spec) findings += check_canonical_wellformedness(spec) for f in findings: status = "PASS" if f.passed else "FAIL" print(f" [{status}] {f.check_id}: {f.message}") # --- Step 6: Canonical projection --- canonical = project_canonical(spec) print(f"\n{canonical.formula()}") print(f"|X| = {len(canonical.state_variables)}, |f| = {len(canonical.mechanism_blocks)}") ``` # Spaces Spaces define the signal domains that 
flow between blocks. They describe what kind of data travels through ports in a composition. ## Creating Spaces A `Space` wraps a set of named dimensions, each backed by a `TypeDef`: ``` from gds import space, typedef Temperature = typedef("Temperature", float) Humidity = typedef("Humidity", float, constraint=lambda x: 0.0 <= x <= 1.0) env_space = space("Environment", temperature=Temperature, humidity=Humidity) ``` ## Built-in Spaces Two sentinel spaces are provided for common patterns: | Space | Purpose | | ---------- | ------------------------------------------------------------------------------- | | `EMPTY` | No signals — used for unused port groups (e.g. backward ports on a `Mechanism`) | | `TERMINAL` | Terminal signal — marks the end of a signal chain | ``` from gds import EMPTY, TERMINAL ``` ## Spaces in Blocks Spaces connect to blocks through interfaces. Each block has four port groups, and spaces define the data flowing through them: ``` from gds import Policy, interface, space, typedef Command = typedef("Command", float) Signal = typedef("Signal", float) cmd_space = space("Command Space", command=Command) sig_space = space("Signal Space", signal=Signal) controller = Policy( name="Controller", interface=interface( forward_in=["Signal"], forward_out=["Command"], ), ) ``` ## Registering Spaces Spaces are registered with `GDSSpec` for semantic validation: ``` from gds import GDSSpec spec = GDSSpec(name="My System") spec.collect(env_space, cmd_space) # type-dispatched registration ``` ## See Also - [Type System](https://blockscience.github.io/gds-core/framework/guide/types/index.md) — TypeDefs that back space dimensions - [State & Entities](https://blockscience.github.io/gds-core/framework/guide/state/index.md) — state variables that use TypeDefs - [Blocks & Roles](https://blockscience.github.io/gds-core/framework/guide/blocks/index.md) — how spaces connect to block interfaces - [API 
Reference](https://blockscience.github.io/gds-core/framework/api/spaces/index.md) — `gds.spaces` module # Specification ## GDSSpec `GDSSpec` is the central registry that holds all components of a model: types, spaces, entities, blocks, parameters, and wirings. ``` from gds import GDSSpec, typedef, entity, state_var, space, BoundaryAction, interface spec = GDSSpec(name="My Model", description="A simple model") # Register types Count = typedef("Count", int, constraint=lambda x: x >= 0) spec.register_type(Count) # Register entities agent = entity("Agent", wealth=state_var(Count, symbol="W")) spec.register_entity(agent) # Register blocks, spaces, parameters... ``` ### Using `collect()` The `collect()` method type-dispatches objects to the right `register_*()` call: ``` spec.collect( Count, RateType, # TypeDefs signal, # Spaces agent, # Entities sensor, controller, # Blocks ) ``` ### SpecWiring Explicit wiring declarations at the specification level: ``` from gds import SpecWiring, Wire spec.register_wiring( SpecWiring( name="Main Pipeline", block_names=["Sensor", "Controller"], wires=[ Wire(source="Sensor", target="Controller", space="TemperatureSpace"), ], ) ) ``` ## Entities & State Entities define the state space X -- what persists across temporal boundaries. ``` from gds import entity, state_var, typedef Temperature = typedef("Temperature", float) Energy = typedef("Energy", float, constraint=lambda x: x >= 0) room = entity("Room", temperature=state_var(Temperature, symbol="T"), energy_consumed=state_var(Energy, symbol="E"), ) ``` ## Spaces Spaces define typed communication channels -- transient signals within an evaluation. ``` from gds import space temp_space = space("TemperatureSpace", measured_temp=Temperature) ``` ## Parameters Parameters define the configuration space Θ — values fixed for a simulation run. 
``` spec.register_parameter("Kp", typedef("GainType", float)) spec.register_parameter("Ki", typedef("GainType", float)) ``` Blocks reference parameters via `params_used`: ``` controller = Policy( name="PID Controller", interface=interface(forward_in=["Temperature"], forward_out=["Command"]), params_used=["Kp", "Ki"], ) ``` # State & Entities Entities and state variables define the mutable state that a GDS system evolves over time. They live in Layer 1 (the specification framework) and are validated by TypeDefs at runtime. ## Entities An `Entity` groups related state variables into a named container. Each entity represents a distinct stateful component of the system. ``` from gds import entity, state_var, typedef Count = typedef("Count", int, constraint=lambda x: x >= 0) Rate = typedef("Rate", float, constraint=lambda x: 0.0 <= x <= 1.0) population = entity( "Population", susceptible=state_var(Count, symbol="S"), infected=state_var(Count, symbol="I"), recovered=state_var(Count, symbol="R"), ) ``` ### State Variables Each `StateVariable` has: - **type_def** — a `TypeDef` that validates values at runtime - **symbol** — a short mathematical symbol (e.g. `"S"`, `"I"`, `"R"`) - **description** — optional human-readable description ``` from gds import state_var, typedef Temperature = typedef("Temperature", float) temp = state_var(Temperature, symbol="T", description="Current temperature in Celsius") ``` ## Registering Entities Entities are registered with `GDSSpec` either explicitly or via `collect()`: ``` from gds import GDSSpec, entity, state_var, typedef spec = GDSSpec(name="SIR Model") Count = typedef("Count", int, constraint=lambda x: x >= 0) population = entity("Population", s=state_var(Count, symbol="S")) # Explicit registration spec.register_entity(population) # Or via collect() spec.collect(population) ``` ## Role in Canonical Form Entities define the state space **X** in the canonical decomposition `h = f . g`. 
The dimension of X (number of state variables across all entities) determines the character of the system: | |X| | Canonical Form | Character | |---|---|---| | 0 | h = g | Stateless (pure policy) | | n > 0 | h = f . g | Full dynamical system | ## See Also - [Type System](https://blockscience.github.io/gds-core/framework/guide/types/index.md) — TypeDefs used by state variables - [Spaces](https://blockscience.github.io/gds-core/framework/guide/spaces/index.md) — signal spaces that connect blocks - [Specification](https://blockscience.github.io/gds-core/framework/guide/spec/index.md) — registering entities with GDSSpec - [API Reference](https://blockscience.github.io/gds-core/framework/api/state/index.md) — `gds.state` module # Type System gds-framework has two type systems that coexist at different levels. ## Token-Based Types Lightweight structural matching at composition/wiring time. Port names auto-tokenize; `tokens_subset()` and `tokens_overlap()` check set containment. ``` from gds import port, tokenize, tokens_overlap p1 = port("Contact Signal") # tokens: {"contact", "signal"} p2 = port("Signal Strength") # tokens: {"signal", "strength"} tokens_overlap(p1.tokens, p2.tokens) # True — "signal" in common ``` Used by composition validators, auto-wiring, and G-001/G-005 checks. ## TypeDef-Based Types Rich runtime validation at the data level. TypeDef wraps a Python type + optional constraint predicate. ``` from gds import typedef Count = typedef("Count", int, constraint=lambda x: x >= 0, description="Non-negative count") Count.check_value(5) # True Count.check_value(-1) # False ``` Used by Spaces and Entities to validate actual data values. 
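Both type systems can be sketched side by side in plain Python. This is a toy re-implementation, not the gds code — and note the guide describes tokenization as splitting on spaces in one place and on `+`/`,` in another, so this sketch splits on all three:

```python
def tokenize(name: str) -> frozenset:
    """Toy port-name tokenizer: split on whitespace, '+' and ',', lowercase."""
    for sep in "+,":
        name = name.replace(sep, " ")
    return frozenset(name.lower().split())

def tokens_overlap(a: frozenset, b: frozenset) -> bool:
    """Two ports match when they share at least one token."""
    return bool(a & b)

# Compile-time matching operates purely on names
assert tokenize("Temperature + Setpoint") == {"temperature", "setpoint"}
assert tokens_overlap(tokenize("Contact Signal"), tokenize("Signal Strength"))
assert not tokens_overlap(tokenize("Heater Command"), tokenize("Room Temperature"))

# Runtime validation operates purely on values, in the spirit of
# typedef("Count", int, constraint=lambda x: x >= 0)
def check_count(value) -> bool:
    """Toy constraint predicate: non-negative integer."""
    return isinstance(value, int) and value >= 0

assert check_count(5) and not check_count(-1)
```

The two halves never touch the same data: the tokenizer only ever sees strings, the constraint only ever sees values.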
### Built-in TypeDefs | TypeDef | Python type | Constraint | | ------------------ | ----------- | ------------- | | `PositiveInt` | `int` | x > 0 | | `NonNegativeFloat` | `float` | x >= 0 | | `Probability` | `float` | 0 \<= x \<= 1 | | `Timestamp` | `float` | x >= 0 | | `TokenAmount` | `float` | x >= 0 | | `AgentID` | `str` | non-empty | ## Interfaces & Ports Every block has an `Interface` with four port groups: ``` from gds import interface iface = interface( forward_in=["Temperature"], forward_out=["Heater Command"], backward_in=["Energy Cost"], backward_out=[], ) ``` - **forward_in** / **forward_out** — covariant data flow - **backward_in** / **backward_out** — contravariant flow (feedback, cost signals) # Verification Check Catalog GDS runs 15 verification checks across two registries to validate both structural topology and domain semantics. This page is the complete reference for every check. ## Overview Verification answers the question: *is this specification well-formed?* It does not simulate or solve — it validates structure. There are two independent check registries: | Registry | Checks | Operates on | What it validates | | ------------ | --------------------- | ----------- | ------------------------------------------------------------------------------------------------------ | | **Generic** | G-001 through G-006 | `SystemIR` | Structural topology — port matching, acyclicity, dangling references | | **Semantic** | SC-001 through SC-009 | `GDSSpec` | Domain properties — completeness, determinism, type safety, canonical form, admissibility, transitions | ### Formal Specifications Each of the 15 checks has a formal property statement, invariant connection to the canonical form `h = f . g`, failure semantics, and soundness conditions. See [Verification Check Specifications](https://blockscience.github.io/gds-core/framework/design/check-specifications/index.md) for the complete formal treatment. 
Generic checks run on the compiled IR (after `compile_system()`). Semantic checks run on the specification (the `GDSSpec` registry). Both produce `Finding` objects with a check ID, severity, message, and pass/fail status. ### When to run verification - **After building a SystemIR** — run `verify(system)` to check all generic checks - **After building a GDSSpec** — call individual semantic check functions - **During development** — run checks incrementally to catch errors early - **In tests** — assert that all checks pass for valid models ### Running generic checks ``` from gds import compile_system, verify ir = compile_system("My Model", composed_system) report = verify(ir) print(f"{report.checks_passed}/{report.checks_total} checks passed") for finding in report.findings: if not finding.passed: print(f" [{finding.severity.value}] {finding.check_id}: {finding.message}") ``` ### Running semantic checks Semantic checks are called individually against a `GDSSpec`: ``` from gds.verification.spec_checks import ( check_completeness, check_determinism, check_parameter_references, check_type_safety, check_canonical_wellformedness, check_admissibility_references, check_transition_reads, ) spec = build_spec() findings = [] findings += check_completeness(spec) findings += check_determinism(spec) findings += check_parameter_references(spec) findings += check_type_safety(spec) findings += check_canonical_wellformedness(spec) findings += check_admissibility_references(spec) findings += check_transition_reads(spec) for f in findings: if not f.passed: print(f"[{f.severity.value}] {f.check_id}: {f.message}") ``` ______________________________________________________________________ ## Generic Checks (G-001 through G-006) These checks operate on `SystemIR` — the flat intermediate representation produced by `compile_system()`. They validate structural topology without referencing any domain-specific block types or semantics. 
All generic checks run automatically when you call `verify(system)`. ### G-001: Domain/Codomain Matching **What it checks:** For every covariant block-to-block wiring, the wiring label must be a token-subset of the source block's `forward_out` or the target block's `forward_in`. This ensures that signals flowing forward through the system reference ports that actually exist on the connected blocks. **Severity:** ERROR **Skips:** Contravariant wirings (handled by G-003 instead). **Trigger example:** Block A outputs `"Temperature"` but Block B expects `"Pressure"`. A wiring labeled `"humidity"` connects them — the label matches neither side. ``` from gds.ir.models import BlockIR, FlowDirection, SystemIR, WiringIR system = SystemIR( name="Bad Wiring", blocks=[ BlockIR(name="A", signature=("", "Temperature", "", "")), BlockIR(name="B", signature=("Pressure", "", "", "")), ], wirings=[ WiringIR( source="A", target="B", label="humidity", direction=FlowDirection.COVARIANT, ), ], ) ``` **Example finding (failure):** ``` [error] G-001: Wiring 'humidity': A out='Temperature' -> B in='Pressure' — MISMATCH ``` **Example finding (pass):** ``` [error] G-001: Wiring 'temperature': Sensor out='Temperature' -> Controller in='Temperature' ``` Note For generic checks (G-001..G-006), passing findings retain `severity=ERROR` — the severity indicates what *would* be reported if the check failed. For semantic checks (SC-001..SC-009), passing findings use `severity=INFO`. Use the `passed` field to distinguish pass from fail. ______________________________________________________________________ ### G-002: Signature Completeness **What it checks:** Every block must have at least one non-empty input slot (forward_in or backward_in) AND at least one non-empty output slot (forward_out or backward_out). A block with no inputs or no outputs is structurally isolated. **Severity:** ERROR **Trigger example:** A block with a completely empty signature — no ports at all. 
``` system = SystemIR( name="Incomplete", blocks=[ BlockIR(name="Valid", signature=("In", "Out", "", "")), BlockIR(name="Orphan", signature=("", "", "", "")), ], wirings=[], ) ``` **Example finding (failure):** ``` [error] G-002: Orphan: signature ('', '', '', '') — no inputs, no outputs ``` G-002 and BoundaryActions G-002 flags `BoundaryAction` blocks (which have no `forward_in` by design) and terminal `Mechanism` blocks (which may have no `forward_out`). These are valid GDS boundaries — expect G-002 failures on them. When testing, either skip G-002 or accept these as known findings. ______________________________________________________________________ ### G-003: Direction Consistency **What it checks:** Three validations on every wiring: **A) Flag consistency** — the `direction`, `is_feedback`, and `is_temporal` flags must not contradict each other: - `COVARIANT` + `is_feedback=True` is a contradiction (feedback implies contravariant flow) - `CONTRAVARIANT` + `is_temporal=True` is a contradiction (temporal implies covariant flow) **B) Contravariant port-slot matching** — for `CONTRAVARIANT` wirings, the label must be a token-subset of the source's `backward_out` (signature slot 3) or the target's `backward_in` (signature slot 2). This is the backward-flow counterpart of what G-001 does for covariant wirings. **C) Non-empty backward ports** — for `CONTRAVARIANT` wirings, at least one of `src_bwd_out` or `tgt_bwd_in` must be non-empty. If both are empty, G-003 emits `"CONTRAVARIANT but both backward ports are empty"` with `passed=False`.
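The flag rules in (A) amount to two forbidden flag combinations. A stand-alone sketch in plain Python (not the library's implementation; `FlowDirection` is re-declared here as a stand-in for `gds.ir.models.FlowDirection` so the snippet is self-contained):

```python
from enum import Enum


class FlowDirection(Enum):
    # Stand-in for gds.ir.models.FlowDirection
    COVARIANT = "covariant"
    CONTRAVARIANT = "contravariant"


def flag_contradictions(direction, is_feedback=False, is_temporal=False):
    """Return the G-003 style contradiction messages for a wiring's flags."""
    issues = []
    # Rule A1: feedback implies contravariant flow
    if direction is FlowDirection.COVARIANT and is_feedback:
        issues.append("COVARIANT + is_feedback — contradiction")
    # Rule A2: temporal implies covariant flow
    if direction is FlowDirection.CONTRAVARIANT and is_temporal:
        issues.append("CONTRAVARIANT + is_temporal — contradiction")
    return issues
```

Note that `CONTRAVARIANT` plus `is_feedback=True` is a legal combination, as in the contravariant trigger example below.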
**Severity:** ERROR **Trigger example (flag contradiction):** ``` system = SystemIR( name="Contradiction", blocks=[BlockIR(name="A"), BlockIR(name="B")], wirings=[ WiringIR( source="A", target="B", label="x", direction=FlowDirection.COVARIANT, is_feedback=True, # contradicts COVARIANT ), ], ) ``` **Example finding (flag contradiction):** ``` [error] G-003: Wiring 'x' (A -> B): COVARIANT + is_feedback — contradiction ``` **Trigger example (contravariant mismatch):** ``` system = SystemIR( name="Mismatch", blocks=[ BlockIR(name="A", signature=("", "", "", "Cost")), BlockIR(name="B", signature=("", "", "Reward", "")), ], wirings=[ WiringIR( source="A", target="B", label="unrelated", direction=FlowDirection.CONTRAVARIANT, is_feedback=True, ), ], ) ``` **Example finding (contravariant mismatch):** ``` [error] G-003: Wiring 'unrelated': A bwd_out='Cost' -> B bwd_in='Reward' — MISMATCH ``` ______________________________________________________________________ ### G-004: Dangling Wirings **What it checks:** Every wiring's `source` and `target` must reference a block or input that exists in the system. A wiring pointing to a non-existent block is dangling — either a typo or a missing block. **Severity:** ERROR **Recognized endpoints:** Block names (`system.blocks`) and input names (`system.inputs`). Inputs are valid wiring sources (they represent exogenous signals entering the system). 
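The endpoint test reduces to set membership over the block and input names. A minimal stand-alone sketch (a hypothetical helper, not the library code, with wirings simplified to `(source, target, label)` triples):

```python
def dangling_endpoints(block_names, input_names, wirings):
    """Return (wiring_label, endpoint) pairs that reference unknown names.

    ``wirings`` is an iterable of (source, target, label) triples.
    Inputs count as valid sources, mirroring G-004's recognized endpoints.
    """
    known = set(block_names) | set(input_names)
    problems = []
    for source, target, label in wirings:
        if source not in known:
            problems.append((label, source))
        if target not in known:
            problems.append((label, target))
    return problems
```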
**Trigger example:** ``` system = SystemIR( name="Dangling", blocks=[BlockIR(name="B", signature=("Signal", "", "", ""))], wirings=[ WiringIR( source="Ghost", target="B", label="signal", direction=FlowDirection.COVARIANT, ), ], ) ``` **Example finding (failure):** ``` [error] G-004: Wiring 'signal' (Ghost -> B) — source 'Ghost' unknown ``` ______________________________________________________________________ ### G-005: Sequential Type Compatibility **What it checks:** In stack (sequential) composition, the wiring label must be a token-subset of BOTH the source's `forward_out` AND the target's `forward_in`. This is stricter than G-001, which only requires the label to match one side. G-005 enforces that the types are compatible on both ends of a sequential connection. **Severity:** ERROR **Skips:** Temporal wirings (`is_temporal=True`), contravariant wirings, and wirings where either endpoint is not in the block set (e.g., `InputIR` endpoints). **Additional failure mode:** If either `src_out` or `tgt_in` is empty (the block has no forward ports), G-005 emits `"Cannot verify domain/codomain: ..."` with `passed=False`. **Trigger example:** ``` system = SystemIR( name="Incompatible", blocks=[ BlockIR(name="A", signature=("", "X", "", "")), BlockIR(name="B", signature=("Y", "", "", "")), ], wirings=[ WiringIR( source="A", target="B", label="z", direction=FlowDirection.COVARIANT, ), ], ) ``` **Example finding (failure):** ``` [error] G-005: Stack A ; B: out='X', in='Y', wiring='z' — type mismatch ``` ______________________________________________________________________ ### G-006: Covariant Acyclicity **What it checks:** The covariant (forward) flow graph must be a directed acyclic graph (DAG). A cycle in the covariant graph means an algebraic loop within a single evaluation -- Block A depends on Block B which depends on Block A, with no recurrence boundary to break the cycle.
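A cycle of this kind can be found with a standard depth-first search over the forward wirings. A stand-alone illustration of the idea (not the library's implementation), using a plain adjacency map in place of `SystemIR`:

```python
def has_cycle(adjacency):
    """Return True if the directed graph contains a cycle.

    ``adjacency`` maps each node to an iterable of successor nodes,
    standing in for the covariant, non-temporal wirings of a system.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {node: WHITE for node in adjacency}

    def visit(node):
        color[node] = GRAY
        for succ in adjacency.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True  # back edge to the current path: cycle found
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(adjacency))
```

For the three-block cycle in the trigger example below, `has_cycle({"A": ["B"], "B": ["C"], "C": ["A"]})` reports `True`.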
**Severity:** ERROR **Excludes:** Temporal wirings (`is_temporal=True`) and contravariant wirings. These are legitimate backward or cross-boundary connections that do not create algebraic loops. **Detection method:** DFS-based cycle detection on the adjacency graph of covariant, non-temporal wirings. **Trigger example:** ``` system = SystemIR( name="Cycle", blocks=[ BlockIR(name="A", signature=("Signal", "Signal", "", "")), BlockIR(name="B", signature=("Signal", "Signal", "", "")), BlockIR(name="C", signature=("Signal", "Signal", "", "")), ], wirings=[ WiringIR(source="A", target="B", label="signal", direction=FlowDirection.COVARIANT), WiringIR(source="B", target="C", label="signal", direction=FlowDirection.COVARIANT), WiringIR(source="C", target="A", label="signal", direction=FlowDirection.COVARIANT), ], ) ``` **Example finding (failure):** ``` [error] G-006: Covariant flow graph contains a cycle: A -> B -> C ``` **Example finding (pass):** ``` [error] G-006: Covariant flow graph is acyclic (DAG) ``` ______________________________________________________________________ ## Semantic Checks (SC-001 through SC-009) These checks operate on `GDSSpec` — the specification-level registry. They validate domain properties that require knowledge of entities, roles, parameters, and the canonical decomposition. Semantic checks are called individually (not through `verify()`). ### SC-001: Completeness **What it checks:** Every entity variable must be updated by at least one `Mechanism`. A state variable that no mechanism ever updates is an orphan — it was declared but will never change, which is almost always a specification error. 
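The orphan test amounts to a set difference between the declared (entity, variable) pairs and the union of all mechanism `updates`. A minimal stand-alone sketch of that logic (illustrative only, not the library code):

```python
def orphan_variables(entity_vars, mechanism_updates):
    """Return sorted 'Entity.variable' names that no mechanism updates.

    ``entity_vars`` maps entity name -> iterable of variable names;
    ``mechanism_updates`` is an iterable of (entity, variable) pairs
    collected from every mechanism's ``updates`` list.
    """
    declared = {(e, v) for e, vs in entity_vars.items() for v in vs}
    updated = set(mechanism_updates)
    return sorted(f"{e}.{v}" for e, v in declared - updated)
```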
**Severity:** WARNING (orphan variables may be intentional in degenerate cases) **Trigger example:** ``` from gds import GDSSpec, Entity, StateVariable, Policy from gds.types.typedef import TypeDef from gds.types.interface import Interface, port Count = TypeDef(name="Count", python_type=int) spec = GDSSpec(name="Orphan Demo") spec.register_type(Count) spec.register_entity(Entity( name="Reservoir", variables={"level": StateVariable(name="level", typedef=Count, symbol="L")}, )) # No mechanism updates Reservoir.level spec.register_block(Policy( name="Observe", interface=Interface(forward_out=(port("Level Signal"),)), )) ``` **Example finding (failure):** ``` [warning] SC-001: Orphan state variables never updated by any mechanism: ['Reservoir.level'] ``` **Example finding (pass):** ``` [info] SC-001: All state variables are updated by at least one mechanism ``` ______________________________________________________________________ ### SC-002: Determinism **What it checks:** Within each wiring (a named composition), no two mechanisms may update the same entity variable. If `MechanismA` and `MechanismB` both list `("Counter", "value")` in their `updates` and both appear in the same `SpecWiring`, that is a write conflict — the final state is ambiguous. **Severity:** ERROR **Trigger example:** ``` spec = GDSSpec(name="Write Conflict Demo") # ... register Counter entity with "value" variable ... 
inc = Mechanism( name="Increment Counter", interface=Interface(forward_in=(port("Delta Signal"),)), updates=[("Counter", "value")], ) dec = Mechanism( name="Decrement Counter", interface=Interface(forward_in=(port("Delta Signal"),)), updates=[("Counter", "value")], ) spec.register_block(inc) spec.register_block(dec) spec.register_wiring(SpecWiring( name="Counter Pipeline", block_names=["Source", "Increment Counter", "Decrement Counter"], wires=[ Wire(source="Source", target="Increment Counter"), Wire(source="Source", target="Decrement Counter"), ], )) ``` **Example finding (failure):** ``` [error] SC-002: Write conflict in wiring 'Counter Pipeline': Counter.value updated by ['Increment Counter', 'Decrement Counter'] ``` **Example finding (pass):** ``` [info] SC-002: No write conflicts detected ``` ______________________________________________________________________ ### SC-003: Reachability **What it checks:** Whether signals can reach from one named block to another through the wiring graph. This maps to the GDS attainability correspondence — can a boundary input ultimately influence a state update? **Severity:** WARNING (unreachable blocks may indicate disconnected subgraphs) **Note:** Unlike other semantic checks, SC-003 requires two extra arguments (`from_block` and `to_block`). It is not called automatically — you invoke it for specific block pairs. ``` from gds.verification.spec_checks import check_reachability findings = check_reachability(spec, from_block="Sensor", to_block="Update Tank") ``` **Example finding (reachable):** ``` [info] SC-003: Block 'Sensor' can reach 'Update Tank' ``` **Example finding (unreachable):** ``` [warning] SC-003: Block 'Sensor' cannot reach 'Update Tank' ``` ______________________________________________________________________ ### SC-004: Type Safety **What it checks:** Every wire in every `SpecWiring` that references a `space` must reference a space that is registered in the spec. 
An unregistered space name on a wire means the data channel is undefined. **Severity:** ERROR **Trigger example:** ``` spec.register_wiring(SpecWiring( name="Pipeline", block_names=["A", "B"], wires=[ Wire(source="A", target="B", space="NonExistentSpace"), ], )) # "NonExistentSpace" is not registered via spec.register_space() ``` **Example finding (failure):** ``` [error] SC-004: Wire A -> B references unregistered space 'NonExistentSpace' ``` **Example finding (pass):** ``` [info] SC-004: All wire space references are valid ``` ______________________________________________________________________ ### SC-005: Parameter References **What it checks:** Every `params_used` entry on every block must correspond to a parameter registered in the spec's `parameter_schema`. If a block declares that it uses parameter `"flow_rate"` but no such parameter is registered, the reference is dangling. **Severity:** ERROR **Trigger example:** ``` source = BoundaryAction( name="Source", interface=Interface(forward_out=(port("Signal"),)), params_used=["flow_rate"], # references a parameter ) spec.register_block(source) # But spec.register_parameter("flow_rate", ...) is never called ``` **Example finding (failure):** ``` [error] SC-005: Unresolved parameter references: ['Source -> flow_rate'] ``` **Example finding (pass):** ``` [info] SC-005: All parameter references resolve to registered definitions ``` ______________________________________________________________________ ### SC-006: Canonical Wellformedness — Mechanisms **What it checks:** The canonical projection (`project_canonical(spec)`) must contain at least one mechanism. If no mechanisms exist, the state transition function *f* is empty — the system has no state dynamics. 
**Severity:** WARNING (stateless specs like pure game-theoretic models may legitimately have no mechanisms) **Example finding (failure):** ``` [warning] SC-006: No mechanisms found — state transition f is empty ``` **Example finding (pass):** ``` [info] SC-006: State transition f has 3 mechanism(s) ``` ______________________________________________________________________ ### SC-007: Canonical Wellformedness — State Space **What it checks:** The canonical projection must contain at least one state variable. If no entities with variables are defined, the state space *X* is empty. **Severity:** WARNING **Note:** SC-006 and SC-007 are produced by the same function (`check_canonical_wellformedness`). Call it once to get both findings. ``` from gds.verification.spec_checks import check_canonical_wellformedness findings = check_canonical_wellformedness(spec) # Returns findings for both SC-006 and SC-007 ``` **Example finding (failure):** ``` [warning] SC-007: State space X is empty — no entity variables defined ``` **Example finding (pass):** ``` [info] SC-007: State space X has 4 variable(s) ``` ______________________________________________________________________ ### SC-008: Admissibility References **What it checks:** Every registered `AdmissibleInputConstraint` must reference an existing `BoundaryAction` and valid (entity, variable) pairs. This validates that admissibility constraints — which restrict exogenous inputs based on state — are structurally well-formed. **Severity:** ERROR **Note:** If no admissibility constraints are registered, SC-008 emits a passing INFO finding. 
``` from gds.verification.spec_checks import check_admissibility_references findings = check_admissibility_references(spec) ``` **Example finding (failure):** ``` [error] SC-008: Admissibility constraint issues: ["'my_constraint': block 'Ghost' not registered"] ``` **Example finding (pass):** ``` [info] SC-008: All 2 admissibility constraint(s) are well-formed ``` ______________________________________________________________________ ### SC-009: Transition Reads **What it checks:** Every registered `TransitionSignature` must reference an existing `Mechanism`, valid (entity, variable) read pairs, and valid `depends_on_blocks`. This validates that transition metadata — which describes read dependencies of state transitions — is structurally consistent. **Severity:** ERROR **Note:** If no transition signatures are registered, SC-009 emits a passing INFO finding. ``` from gds.verification.spec_checks import check_transition_reads findings = check_transition_reads(spec) ``` **Example finding (failure):** ``` [error] SC-009: Transition signature issues: ["'UpdateTank': reads unknown entity 'Ghost'"] ``` **Example finding (pass):** ``` [info] SC-009: All 3 transition signature(s) are consistent ``` ______________________________________________________________________ ## Understanding the Output ### Finding Every check produces one or more `Finding` objects: ``` class Finding(BaseModel): check_id: str # e.g. "G-001", "SC-002" severity: Severity # ERROR, WARNING, or INFO message: str # Human-readable description source_elements: list[str] # Block/variable names involved passed: bool # True = check passed, False = violation exportable_predicate: str # Reserved for formal export ``` Key points: - **`passed`** is the primary field — it tells you whether the check succeeded. A finding with `passed=True` is informational confirmation. - **`severity`** indicates the importance level. Generic checks (G-001..G-006) retain their failure severity even on pass. 
Semantic checks (SC-001..SC-009) emit `Severity.INFO` on pass. - **`source_elements`** names the blocks, variables, or wirings involved. Useful for tracing back to the specification. ### Severity Levels | Level | Meaning | Action | | --------- | ------------------------------------------- | ----------------------------------- | | `ERROR` | Structural violation — the model is invalid | Must fix before the model is usable | | `WARNING` | Suspicious pattern — may be intentional | Review and either fix or accept | | `INFO` | Informational — no action needed | Confirmation that a check passed | ### VerificationReport The `verify()` function returns a `VerificationReport`: ``` class VerificationReport(BaseModel): system_name: str findings: list[Finding] @property def errors(self) -> int: ... # Count of failed ERROR findings @property def warnings(self) -> int: ... # Count of failed WARNING findings @property def info_count(self) -> int: ... # Count of failed INFO findings @property def checks_passed(self) -> int: ... # Count of passed findings @property def checks_total(self) -> int: ... # Total number of findings ``` Typical usage: ``` report = verify(system) assert report.errors == 0, f"Verification failed: {report.errors} errors" ``` ______________________________________________________________________ ## Writing Custom Checks Use the `@gds_check` decorator to register custom verification functions. Custom checks follow the same `Callable[[SystemIR], list[Finding]]` signature as the built-in generic checks.
### Registration ``` from gds import gds_check, Finding, Severity from gds.ir.models import SystemIR @gds_check("CUSTOM-001", Severity.WARNING) def check_max_block_count(system: SystemIR) -> list[Finding]: """Flag systems with more than 20 blocks.""" count = len(system.blocks) if count > 20: return [Finding( check_id="CUSTOM-001", severity=Severity.WARNING, message=f"System has {count} blocks (limit: 20)", source_elements=[], passed=False, )] return [Finding( check_id="CUSTOM-001", severity=Severity.WARNING, message=f"Block count ({count}) within limit", source_elements=[], passed=True, )] ``` The decorator: 1. Attaches `check_id` and `severity` as function attributes 1. Adds the function to a module-level custom check registry ### Running custom checks Custom checks do not run automatically with `verify()`. Use `all_checks()` to get the combined list of built-in + custom checks: ``` from gds import all_checks, verify report = verify(system, checks=all_checks()) ``` Or pass custom checks explicitly: ``` from gds import verify report = verify(system, checks=[check_max_block_count]) ``` ### Retrieving registered checks ``` from gds import get_custom_checks custom = get_custom_checks() # All @gds_check-decorated functions ``` ______________________________________________________________________ ## Filtering and Suppressing Checks ### Running a subset of checks Pass a specific list to `verify()`: ``` from gds.verification.generic_checks import ( check_g001_domain_codomain_matching, check_g004_dangling_wirings, check_g006_covariant_acyclicity, ) # Only run the checks you care about report = verify(system, checks=[ check_g001_domain_codomain_matching, check_g004_dangling_wirings, check_g006_covariant_acyclicity, ]) ``` ### Skipping G-002 for boundary blocks G-002 flags `BoundaryAction` (no inputs) and terminal `Mechanism` (no outputs) as errors. For valid GDS models these are expected. 
A common pattern in tests: ``` from gds.verification.generic_checks import ( check_g001_domain_codomain_matching, check_g003_direction_consistency, check_g004_dangling_wirings, check_g005_sequential_type_compatibility, check_g006_covariant_acyclicity, ) # All generic checks except G-002 checks_sans_g002 = [ check_g001_domain_codomain_matching, check_g003_direction_consistency, check_g004_dangling_wirings, check_g005_sequential_type_compatibility, check_g006_covariant_acyclicity, ] report = verify(system, checks=checks_sans_g002) ``` ### Filtering findings after the fact ``` from gds import Severity report = verify(system) # Only look at failures failures = [f for f in report.findings if not f.passed] # Only errors (ignore warnings) errors = [f for f in report.findings if not f.passed and f.severity == Severity.ERROR] # Group by check ID from collections import defaultdict by_check = defaultdict(list) for f in report.findings: by_check[f.check_id].append(f) ``` ### Intentional edge cases Some findings are expected in valid models: - **SC-001 (orphan state)** — WARNING severity. A state variable intentionally held constant (e.g., a fixed capacity) will trigger this. Accept the warning or add a no-op mechanism. - **SC-006/SC-007 (empty canonical)** — WARNING severity. Stateless models (pure policy compositions, game-theoretic specs) legitimately have no mechanisms or state variables. - **G-002 (incomplete signature)** — ERROR severity but expected on boundary blocks. Skip this check or filter the findings.
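Putting these edge cases together, one convenient pattern is a small triage helper that separates hard failures from known, accepted findings. A sketch using a hypothetical stand-in dataclass that mirrors the documented `Finding` fields (adjust the accepted set to your model):

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    # Minimal stand-in mirroring the fields of gds.Finding
    check_id: str
    severity: str  # "error" | "warning" | "info"
    message: str
    passed: bool
    source_elements: list = field(default_factory=list)


# Checks whose failures are expected on valid models (see above)
ACCEPTED = {"G-002", "SC-001", "SC-006", "SC-007"}


def triage(findings, accepted=ACCEPTED):
    """Split failed findings into (hard_failures, accepted_failures)."""
    failures = [f for f in findings if not f.passed]
    hard = [f for f in failures if f.check_id not in accepted]
    known = [f for f in failures if f.check_id in accepted]
    return hard, known
```

In a test suite, `assert not hard` then fails only on violations you have not explicitly accepted.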
______________________________________________________________________ ## Quick Reference | Code | Name | Operates on | Severity | What it validates | | ------ | ----------------------------- | ----------- | -------- | -------------------------------------------------------------- | | G-001 | Domain/codomain matching | `SystemIR` | ERROR | Covariant wiring label matches source out or target in | | G-002 | Signature completeness | `SystemIR` | ERROR | Every block has at least one input and one output | | G-003 | Direction consistency | `SystemIR` | ERROR | No flag contradictions; contravariant port-slot matching | | G-004 | Dangling wirings | `SystemIR` | ERROR | Wiring endpoints exist in the block/input set | | G-005 | Sequential type compatibility | `SystemIR` | ERROR | Stack wiring label matches both source out AND target in | | G-006 | Covariant acyclicity | `SystemIR` | ERROR | Forward flow graph is a DAG (no algebraic loops) | | SC-001 | Completeness | `GDSSpec` | WARNING | Every entity variable updated by some mechanism | | SC-002 | Determinism | `GDSSpec` | ERROR | No variable updated by multiple mechanisms in same wiring | | SC-003 | Reachability | `GDSSpec` | WARNING | Signal path exists between two named blocks | | SC-004 | Type safety | `GDSSpec` | ERROR | Wire space references resolve to registered spaces | | SC-005 | Parameter references | `GDSSpec` | ERROR | Block `params_used` match registered parameter names | | SC-006 | Canonical wellformedness (f) | `GDSSpec` | WARNING | At least one mechanism exists (f is non-empty) | | SC-007 | Canonical wellformedness (X) | `GDSSpec` | WARNING | At least one state variable exists (X is non-empty) | | SC-008 | Admissibility references | `GDSSpec` | ERROR | Admissibility constraints reference valid blocks and variables | | SC-009 | Transition reads | `GDSSpec` | ERROR | Transition signatures reference valid mechanisms and variables | ______________________________________________________________________ ## 
Assurance Scope A `verify()` pass checks structural and semantic well-formedness. It proves that the wiring graph is a valid mathematical object and that the specification is internally consistent. It does **not** prove behavioral properties like safety, stability, conservation, or liveness -- those require simulation, formal proof, or domain expert review. For a complete treatment of what verification does and does not prove, the residual obligations by domain, and a one-page verification passport template, see [Assurance Claims and Residual Gaps](https://blockscience.github.io/gds-core/framework/design/assurance-claims/index.md). # API Reference Complete API documentation for `gds-framework`, auto-generated from source docstrings. ## Core | Module | Description | | ----------------------------------------------------------------------------------------- | ----------------------------------------- | | [gds](https://blockscience.github.io/gds-core/framework/api/init/index.md) | Package root — version, top-level imports | | [gds.spec](https://blockscience.github.io/gds-core/framework/api/spec/index.md) | `GDSSpec` central registry | | [gds.canonical](https://blockscience.github.io/gds-core/framework/api/canonical/index.md) | Canonical `h = f . 
g` decomposition | ## Blocks & Composition | Module | Description | | --------------------------------------------------------------------------------------- | ---------------------------------------------------- | | [gds.blocks](https://blockscience.github.io/gds-core/framework/api/blocks/index.md) | `AtomicBlock`, roles, composition operators | | [gds.compiler](https://blockscience.github.io/gds-core/framework/api/compiler/index.md) | 3-stage compiler: flatten, wire, hierarchy | | [gds.ir](https://blockscience.github.io/gds-core/framework/api/ir/index.md) | `SystemIR`, `BlockIR`, `WiringIR`, `HierarchyNodeIR` | ## Type System | Module | Description | | ----------------------------------------------------------------------------------- | ---------------------------------------- | | [gds.types](https://blockscience.github.io/gds-core/framework/api/types/index.md) | `TypeDef`, token utilities, port helpers | | [gds.spaces](https://blockscience.github.io/gds-core/framework/api/spaces/index.md) | `Space`, `EMPTY`, `TERMINAL` | | [gds.state](https://blockscience.github.io/gds-core/framework/api/state/index.md) | `Entity`, `StateVariable` | ## Verification & Query | Module | Description | | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | | [gds.verification](https://blockscience.github.io/gds-core/framework/api/verification/index.md) | Generic checks (G-001..G-006), semantic checks (SC-001..SC-009) | | [gds.query](https://blockscience.github.io/gds-core/framework/api/query/index.md) | Structural queries on specs and IR | | [gds.parameters](https://blockscience.github.io/gds-core/framework/api/parameters/index.md) | `ParameterDef` — structural metadata | ## Utilities | Module | Description | | ----------------------------------------------------------------------------------------- | --------------------- | |
[gds.serialize](https://blockscience.github.io/gds-core/framework/api/serialize/index.md) | Serialization support | # gds.blocks ## Base Bases: `Tagged`, `ABC` Abstract base for all Blocks — both atomic and composite. Every block has a `name` and an `interface` describing its boundary ports. Composition operators (`>>`, `|`, `.feedback()`, `.loop()`) build composite blocks from simpler ones. Source code in `packages/gds-framework/gds/blocks/base.py` ``` class Block(Tagged, ABC): """Abstract base for all Blocks — both atomic and composite. Every block has a ``name`` and an ``interface`` describing its boundary ports. Composition operators (``>>``, ``|``, ``.feedback()``, ``.loop()``) build composite blocks from simpler ones. """ name: str interface: Interface = Interface() @abstractmethod def flatten(self) -> list[AtomicBlock]: """Return all atomic blocks in evaluation order.""" def __rshift__(self, other: Block) -> StackComposition: """``a >> b`` — stack (sequential) composition.""" from gds.blocks.composition import StackComposition return StackComposition( name=f"{self.name} >> {other.name}", first=self, second=other, ) def __or__(self, other: Block) -> ParallelComposition: """``a | b`` — parallel composition.""" from gds.blocks.composition import ParallelComposition return ParallelComposition( name=f"{self.name} | {other.name}", left=self, right=other, ) def feedback(self, wiring: list[Wiring]) -> FeedbackLoop: """Wrap with backward feedback within a single evaluation.""" from gds.blocks.composition import FeedbackLoop return FeedbackLoop( name=f"{self.name} [feedback]", inner=self, feedback_wiring=wiring, ) def loop(self, wiring: list[Wiring], exit_condition: str = "") -> TemporalLoop: """Wrap with structural recurrence across temporal boundaries.""" from gds.blocks.composition import TemporalLoop return TemporalLoop( name=f"{self.name} [loop]", inner=self, temporal_wiring=wiring, exit_condition=exit_condition, ) ``` ## `flatten()` Return all atomic blocks in 
evaluation order. Source code in `packages/gds-framework/gds/blocks/base.py` ``` @abstractmethod def flatten(self) -> list[AtomicBlock]: """Return all atomic blocks in evaluation order.""" ``` ## `__rshift__(other)` `a >> b` — stack (sequential) composition. Source code in `packages/gds-framework/gds/blocks/base.py` ``` def __rshift__(self, other: Block) -> StackComposition: """``a >> b`` — stack (sequential) composition.""" from gds.blocks.composition import StackComposition return StackComposition( name=f"{self.name} >> {other.name}", first=self, second=other, ) ``` ## `__or__(other)` `a | b` — parallel composition. Source code in `packages/gds-framework/gds/blocks/base.py` ``` def __or__(self, other: Block) -> ParallelComposition: """``a | b`` — parallel composition.""" from gds.blocks.composition import ParallelComposition return ParallelComposition( name=f"{self.name} | {other.name}", left=self, right=other, ) ``` ## `feedback(wiring)` Wrap with backward feedback within a single evaluation. Source code in `packages/gds-framework/gds/blocks/base.py` ``` def feedback(self, wiring: list[Wiring]) -> FeedbackLoop: """Wrap with backward feedback within a single evaluation.""" from gds.blocks.composition import FeedbackLoop return FeedbackLoop( name=f"{self.name} [feedback]", inner=self, feedback_wiring=wiring, ) ``` ## `loop(wiring, exit_condition='')` Wrap with structural recurrence across temporal boundaries. Source code in `packages/gds-framework/gds/blocks/base.py` ``` def loop(self, wiring: list[Wiring], exit_condition: str = "") -> TemporalLoop: """Wrap with structural recurrence across temporal boundaries.""" from gds.blocks.composition import TemporalLoop return TemporalLoop( name=f"{self.name} [loop]", inner=self, temporal_wiring=wiring, exit_condition=exit_condition, ) ``` Bases: `Block` Base class for non-decomposable (leaf) blocks. Domain packages subclass this to define their own atomic block types. 
Source code in `packages/gds-framework/gds/blocks/base.py` ``` class AtomicBlock(Block): """Base class for non-decomposable (leaf) blocks. Domain packages subclass this to define their own atomic block types. """ def flatten(self) -> list[AtomicBlock]: return [self] ``` ## Composition Bases: `Block` `a >> b` — stack (sequential) composition. Output of the first block feeds input of the second. If no explicit `wiring` is provided, the validator checks that forward_out tokens overlap with forward_in tokens. Source code in `packages/gds-framework/gds/blocks/composition.py` ``` class StackComposition(Block): """``a >> b`` — stack (sequential) composition. Output of the first block feeds input of the second. If no explicit ``wiring`` is provided, the validator checks that forward_out tokens overlap with forward_in tokens. """ first: Block second: Block wiring: list[Wiring] = Field(default_factory=list) @model_validator(mode="after") def _compute_interface_and_validate(self) -> Self: if not self.wiring: first_out_tokens = _collect_tokens(self.first.interface.forward_out) second_in_tokens = _collect_tokens(self.second.interface.forward_in) if ( first_out_tokens and second_in_tokens and not (first_out_tokens & second_in_tokens) ): raise GDSTypeError( f"Stack composition {self.name!r}: " f"first.forward_out tokens {first_out_tokens} have no overlap with " f"second.forward_in tokens {second_in_tokens}" ) self.interface = Interface( forward_in=self.first.interface.forward_in + self.second.interface.forward_in, forward_out=self.first.interface.forward_out + self.second.interface.forward_out, backward_in=self.first.interface.backward_in + self.second.interface.backward_in, backward_out=self.first.interface.backward_out + self.second.interface.backward_out, ) return self def flatten(self) -> list[AtomicBlock]: return self.first.flatten() + self.second.flatten() ``` Bases: `Block` `a | b` — parallel composition: blocks run independently. 
Source code in `packages/gds-framework/gds/blocks/composition.py` ``` class ParallelComposition(Block): """``a | b`` — parallel composition: blocks run independently.""" left: Block right: Block @model_validator(mode="after") def _compute_interface(self) -> Self: self.interface = Interface( forward_in=self.left.interface.forward_in + self.right.interface.forward_in, forward_out=self.left.interface.forward_out + self.right.interface.forward_out, backward_in=self.left.interface.backward_in + self.right.interface.backward_in, backward_out=self.left.interface.backward_out + self.right.interface.backward_out, ) return self def flatten(self) -> list[AtomicBlock]: return self.left.flatten() + self.right.flatten() ``` Bases: `Block` Backward feedback within a single evaluation (backward_out -> backward_in). Source code in `packages/gds-framework/gds/blocks/composition.py` ``` class FeedbackLoop(Block): """Backward feedback within a single evaluation (backward_out -> backward_in).""" inner: Block feedback_wiring: list[Wiring] @model_validator(mode="after") def _validate_and_compute_interface(self) -> Self: self.interface = self.inner.interface return self def flatten(self) -> list[AtomicBlock]: return self.inner.flatten() ``` Bases: `Block` Structural recurrence across temporal boundaries (forward_out -> forward_in). All temporal wiring must be covariant direction. Source code in `packages/gds-framework/gds/blocks/composition.py` ``` class TemporalLoop(Block): """Structural recurrence across temporal boundaries (forward_out -> forward_in). All temporal wiring must be covariant direction. 
""" inner: Block temporal_wiring: list[Wiring] exit_condition: str = "" @model_validator(mode="after") def _validate_and_compute_interface(self) -> Self: for w in self.temporal_wiring: if w.direction != FlowDirection.COVARIANT: raise GDSTypeError( f"TemporalLoop {self.name!r}: temporal wiring " f"{w.source_block}.{w.source_port} → " f"{w.target_block}.{w.target_port} " f"must be COVARIANT (got {w.direction.value})" ) self.interface = self.inner.interface return self def flatten(self) -> list[AtomicBlock]: return self.inner.flatten() ``` Bases: `BaseModel` An explicit connection between two blocks. Covariant wirings (the default) carry data forward; contravariant wirings carry feedback backward. Source code in `packages/gds-framework/gds/blocks/composition.py` ``` class Wiring(BaseModel, frozen=True): """An explicit connection between two blocks. Covariant wirings (the default) carry data forward; contravariant wirings carry feedback backward. """ source_block: str source_port: str target_block: str target_port: str direction: FlowDirection = FlowDirection.COVARIANT ``` ## Roles Bases: `AtomicBlock` Exogenous signal — enters the system from outside. In GDS terms: part of the exogenous signal space Z. Boundary actions model external agents, oracles, user inputs, environmental signals — anything the system doesn't control. Enforces `forward_in = ()` since boundary actions receive no internal forward signals. Source code in `packages/gds-framework/gds/blocks/roles.py` ``` class BoundaryAction(AtomicBlock): """Exogenous signal — enters the system from outside. In GDS terms: part of the exogenous signal space Z. Boundary actions model external agents, oracles, user inputs, environmental signals — anything the system doesn't control. Enforces ``forward_in = ()`` since boundary actions receive no internal forward signals. 
""" kind: str = "boundary" options: list[str] = Field(default_factory=list) params_used: list[str] = Field(default_factory=list) constraints: list[str] = Field(default_factory=list) @model_validator(mode="after") def _enforce_no_forward_in(self) -> Self: if self.interface.forward_in: raise GDSCompositionError( f"BoundaryAction {self.name!r}: forward_in must be empty " f"(boundary actions receive no internal forward signals)" ) return self ``` Bases: `AtomicBlock` Decision logic — maps signals to mechanism inputs. Policies select from feasible actions. Named options support scenario analysis and A/B testing. In GDS terms: policies implement the decision mapping d = g(x, z) within the canonical form h = f ∘ g. Source code in `packages/gds-framework/gds/blocks/roles.py` ``` class Policy(AtomicBlock): """Decision logic — maps signals to mechanism inputs. Policies select from feasible actions. Named options support scenario analysis and A/B testing. In GDS terms: policies implement the decision mapping d = g(x, z) within the canonical form h = f ∘ g. """ kind: str = "policy" options: list[str] = Field(default_factory=list) params_used: list[str] = Field(default_factory=list) constraints: list[str] = Field(default_factory=list) ``` Bases: `AtomicBlock` Output observable — maps state and decisions to observable output. In GDS terms: the output map y = C(x, d). From the plant (inside) perspective, this is the system's observable output. From the controller (outside) perspective at a >> composition boundary, this output becomes a control action on the next system. Source code in `packages/gds-framework/gds/blocks/roles.py` ``` class ControlAction(AtomicBlock): """Output observable — maps state and decisions to observable output. In GDS terms: the output map y = C(x, d). From the plant (inside) perspective, this is the system's observable output. From the controller (outside) perspective at a >> composition boundary, this output becomes a control action on the next system. 
""" kind: str = "control" options: list[str] = Field(default_factory=list) params_used: list[str] = Field(default_factory=list) constraints: list[str] = Field(default_factory=list) ``` Bases: `AtomicBlock` State update — the only block type that writes to state. Mechanisms are the atomic state transitions that compose into h. They have no backward ports (state writes don't propagate signals). `updates` lists (entity_name, variable_name) pairs specifying which state variables this mechanism modifies. Source code in `packages/gds-framework/gds/blocks/roles.py` ``` class Mechanism(AtomicBlock): """State update — the only block type that writes to state. Mechanisms are the atomic state transitions that compose into h. They have no backward ports (state writes don't propagate signals). ``updates`` lists (entity_name, variable_name) pairs specifying which state variables this mechanism modifies. """ kind: str = "mechanism" updates: list[tuple[str, str]] = Field(default_factory=list) params_used: list[str] = Field(default_factory=list) constraints: list[str] = Field(default_factory=list) @model_validator(mode="after") def _enforce_no_backward(self) -> Self: if self.interface.backward_in or self.interface.backward_out: raise GDSCompositionError( f"Mechanism {self.name!r}: backward ports must be empty " f"(mechanisms write state, they don't pass backward signals)" ) return self ``` ## Errors Bases: `Exception` Base class for all GDS errors. Source code in `packages/gds-framework/gds/blocks/errors.py` ``` class GDSError(Exception): """Base class for all GDS errors.""" ``` Bases: `GDSError` Port type mismatch or invalid port structure during construction. Source code in `packages/gds-framework/gds/blocks/errors.py` ``` class GDSTypeError(GDSError): """Port type mismatch or invalid port structure during construction.""" ``` Bases: `GDSError` Invalid composition structure. 
Source code in `packages/gds-framework/gds/blocks/errors.py` ``` class GDSCompositionError(GDSError): """Invalid composition structure.""" ``` # gds.canonical Bases: `BaseModel` Canonical projection of a GDSSpec to formal GDS structure. Pure derivation — always computable, never authoritative. GDSSpec remains ground truth. Source code in `packages/gds-framework/gds/canonical.py` ``` class CanonicalGDS(BaseModel): """Canonical projection of a GDSSpec to formal GDS structure. Pure derivation — always computable, never authoritative. GDSSpec remains ground truth. """ model_config = ConfigDict(frozen=True) # State space X: entity.variable entries state_variables: tuple[tuple[str, str], ...] = () # Parameter space Θ parameter_schema: ParameterSchema = Field(default_factory=ParameterSchema) # Controlled input space U_c: BoundaryAction forward_out input_ports: tuple[tuple[str, str], ...] = () # Disturbance space W: disturbance-tagged BoundaryAction forward_out disturbance_ports: tuple[tuple[str, str], ...] = () # Decision space D: (block_name, port_name) from Policy forward_out decision_ports: tuple[tuple[str, str], ...] = () # Output space Y: (block_name, port_name) from ControlAction forward_out output_ports: tuple[tuple[str, str], ...] = () # Structural decomposition: block names by role boundary_blocks: tuple[str, ...] = () control_blocks: tuple[str, ...] = () policy_blocks: tuple[str, ...] = () mechanism_blocks: tuple[str, ...] = () # Mechanism update targets: (entity, variable) per mechanism update_map: tuple[tuple[str, tuple[tuple[str, str], ...]], ...] = () # Admissibility deps: (constraint_name, ((entity, var), ...)) admissibility_map: tuple[tuple[str, tuple[tuple[str, str], ...]], ...] = () # Mechanism read deps: (mechanism_name, ((entity, var), ...)) read_map: tuple[tuple[str, tuple[tuple[str, str], ...]], ...] 
= () @property def has_parameters(self) -> bool: """True if the system has any parameters.""" return len(self.parameter_schema) > 0 @property def has_disturbances(self) -> bool: """True if the system has disturbance-tagged inputs.""" return len(self.disturbance_ports) > 0 def formula(self) -> str: """Render as mathematical formula string.""" has_f = len(self.mechanism_blocks) > 0 has_g = len(self.policy_blocks) > 0 has_c = len(self.control_blocks) > 0 if has_f and has_g: decomp = "f ∘ g" elif has_g: decomp = "g" elif has_f: decomp = "f" else: decomp = "id" if self.has_parameters: decomp_theta = decomp.replace("f", "f_θ").replace("g", "g_θ") result = f"h_θ : X → X (h = {decomp_theta}, θ ∈ Θ)" else: result = f"h : X → X (h = {decomp})" if has_c: result += ", y = C(x, d)" if self.has_disturbances: result += "; f : X x D x W → X" return result ``` ## `has_parameters` True if the system has any parameters. ## `has_disturbances` True if the system has disturbance-tagged inputs. ## `formula()` Render as mathematical formula string. Source code in `packages/gds-framework/gds/canonical.py` ``` def formula(self) -> str: """Render as mathematical formula string.""" has_f = len(self.mechanism_blocks) > 0 has_g = len(self.policy_blocks) > 0 has_c = len(self.control_blocks) > 0 if has_f and has_g: decomp = "f ∘ g" elif has_g: decomp = "g" elif has_f: decomp = "f" else: decomp = "id" if self.has_parameters: decomp_theta = decomp.replace("f", "f_θ").replace("g", "g_θ") result = f"h_θ : X → X (h = {decomp_theta}, θ ∈ Θ)" else: result = f"h : X → X (h = {decomp})" if has_c: result += ", y = C(x, d)" if self.has_disturbances: result += "; f : X x D x W → X" return result ``` Pure function: GDSSpec → CanonicalGDS. Deterministic, stateless. Never mutates the spec. Source code in `packages/gds-framework/gds/canonical.py` ``` def project_canonical(spec: GDSSpec) -> CanonicalGDS: """Pure function: GDSSpec → CanonicalGDS. Deterministic, stateless. Never mutates the spec. """ # 1. 
State space X: all entity variables state_variables: list[tuple[str, str]] = [] for entity in spec.entities.values(): for var_name in entity.variables: state_variables.append((entity.name, var_name)) # 2. Parameter space Θ parameter_schema = spec.parameter_schema # 3. Classify blocks by role boundary_blocks: list[str] = [] control_blocks: list[str] = [] policy_blocks: list[str] = [] mechanism_blocks: list[str] = [] for bname, block in spec.blocks.items(): if isinstance(block, BoundaryAction): boundary_blocks.append(bname) elif isinstance(block, ControlAction): control_blocks.append(bname) elif isinstance(block, Policy): policy_blocks.append(bname) elif isinstance(block, Mechanism): mechanism_blocks.append(bname) # 4. U_c / W partition: BoundaryAction forward_out ports input_ports: list[tuple[str, str]] = [] disturbance_ports: list[tuple[str, str]] = [] for bname in boundary_blocks: block = spec.blocks[bname] is_disturbance = getattr(block, "tags", {}).get("role") == "disturbance" target = disturbance_ports if is_disturbance else input_ports for p in block.interface.forward_out: target.append((bname, p.name)) # 5. Decision space D: Policy forward_out ports decision_ports: list[tuple[str, str]] = [] for bname in policy_blocks: block = spec.blocks[bname] for p in block.interface.forward_out: decision_ports.append((bname, p.name)) # 6. Output space Y: ControlAction forward_out ports output_ports: list[tuple[str, str]] = [] for bname in control_blocks: block = spec.blocks[bname] for p in block.interface.forward_out: output_ports.append((bname, p.name)) # 7. Mechanism update targets update_map: list[tuple[str, tuple[tuple[str, str], ...]]] = [] for bname in mechanism_blocks: block = spec.blocks[bname] if isinstance(block, Mechanism): updates = tuple(tuple(pair) for pair in block.updates) update_map.append((bname, updates)) # type: ignore[arg-type] # 8. 
Admissibility dependencies
    admissibility_map: list[tuple[str, tuple[tuple[str, str], ...]]] = []
    for ac_name, ac in spec.admissibility_constraints.items():
        deps = tuple(tuple(pair) for pair in ac.depends_on)
        admissibility_map.append((ac_name, deps))  # type: ignore[arg-type]

    # 9. Transition read map
    read_map: list[tuple[str, tuple[tuple[str, str], ...]]] = []
    for mname, ts in spec.transition_signatures.items():
        reads = tuple(tuple(pair) for pair in ts.reads)
        read_map.append((mname, reads))  # type: ignore[arg-type]

    return CanonicalGDS(
        state_variables=tuple(state_variables),
        parameter_schema=parameter_schema,
        input_ports=tuple(input_ports),
        disturbance_ports=tuple(disturbance_ports),
        decision_ports=tuple(decision_ports),
        output_ports=tuple(output_ports),
        boundary_blocks=tuple(boundary_blocks),
        control_blocks=tuple(control_blocks),
        policy_blocks=tuple(policy_blocks),
        mechanism_blocks=tuple(mechanism_blocks),
        update_map=tuple(update_map),
        admissibility_map=tuple(admissibility_map),
        read_map=tuple(read_map),
    )
```

# gds.compiler

Compile a Block tree into a flat SystemIR.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `name` | `str` | System name. | *required* |
| `root` | `Block` | Root of the composition tree. | *required* |
| `block_compiler` | `Callable[[AtomicBlock], BlockIR] \| None` | Domain-specific function to convert AtomicBlock → BlockIR. If None, uses a default that extracts name + interface. | `None` |
| `wiring_emitter` | `Callable[[StructuralWiring], WiringIR] \| None` | Domain-specific function to convert StructuralWiring → WiringIR. If None, uses the default GDS emitter. | `None` |
| `composition_type` | `CompositionType` | Top-level composition type. | `SEQUENTIAL` |
| `source` | `str` | Source identifier. | `''` |
| `inputs` | `list[InputIR] \| None` | External inputs to include in the SystemIR. Layer 0 never infers inputs — domain packages supply them. | `None` |

Source code in `packages/gds-framework/gds/compiler/compile.py`

```
def compile_system(
    name: str,
    root: Block,
    block_compiler: Callable[[AtomicBlock], BlockIR] | None = None,
    wiring_emitter: Callable[[StructuralWiring], WiringIR] | None = None,
    composition_type: CompositionType = CompositionType.SEQUENTIAL,
    source: str = "",
    inputs: list[InputIR] | None = None,
) -> SystemIR:
    """Compile a Block tree into a flat SystemIR.

    Args:
        name: System name.
        root: Root of the composition tree.
        block_compiler: Domain-specific function to convert AtomicBlock → BlockIR.
            If None, uses a default that extracts name + interface.
        wiring_emitter: Domain-specific function to convert StructuralWiring → WiringIR.
            If None, uses the default GDS emitter.
        composition_type: Top-level composition type.
        source: Source identifier.
        inputs: External inputs to include in the SystemIR. Layer 0 never
            infers inputs — domain packages supply them.
    """
    if block_compiler is None:
        block_compiler = _default_block_compiler
    blocks = flatten_blocks(root, block_compiler)
    wirings = extract_wirings(root, wiring_emitter)
    hierarchy = extract_hierarchy(root)
    return SystemIR(
        name=name,
        blocks=blocks,
        wirings=wirings,
        inputs=inputs or [],
        composition_type=composition_type,
        hierarchy=hierarchy,
        source=source,
    )
```

# gds

Public API — all symbols re-exported from `gds.__init__`.

Generalized Dynamical Systems — typed compositional specs. GDS synthesizes ideas from GDS theory (Roxin, Zargham & Shorish), MSML (BlockScience), BDP-lib (Block Diagram Protocol), and categorical cybernetics (Ghani, Hedges et al.) into a single, dependency-light Python framework.

# gds.ir

## Models

Bases: `BaseModel`

A complete composed system — the top-level IR unit. Domain packages wrap this with additional metadata (e.g., terminal conditions, action spaces for open games).
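Structurally, a compiled system is just blocks + wirings + metadata. A minimal stand-in built from plain dataclasses (illustrative shape only, not the real pydantic models) makes the flat-IR idea concrete:

```python
from dataclasses import dataclass, field


@dataclass
class BlockIR:
    name: str
    block_type: str = ""  # open string taxonomy, e.g. "policy", "mechanism"


@dataclass
class WiringIR:
    source: str
    target: str
    label: str = ""


@dataclass
class SystemIR:
    name: str
    blocks: list = field(default_factory=list)
    wirings: list = field(default_factory=list)
    composition_type: str = "sequential"


# A toy thermostat pipeline: sense -> decide -> heat.
sys_ir = SystemIR(
    name="thermostat",
    blocks=[
        BlockIR("sense", "boundary"),
        BlockIR("decide", "policy"),
        BlockIR("heat", "mechanism"),
    ],
    wirings=[WiringIR("sense", "decide"), WiringIR("decide", "heat")],
)
print([b.name for b in sys_ir.blocks])  # ['sense', 'decide', 'heat']
```

The flat list form is what makes the IR easy to verify and render: no recursion is needed to enumerate blocks or edges.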
Source code in `packages/gds-framework/gds/ir/models.py` ``` class SystemIR(BaseModel): """A complete composed system — the top-level IR unit. Domain packages wrap this with additional metadata (e.g., terminal conditions, action spaces for open games). """ name: str blocks: list[BlockIR] = Field(default_factory=list) wirings: list[WiringIR] = Field(default_factory=list) inputs: list[InputIR] = Field(default_factory=list) composition_type: CompositionType = CompositionType.SEQUENTIAL hierarchy: HierarchyNodeIR | None = None source: str = "" metadata: dict[str, Any] = Field(default_factory=dict) parameter_schema: ParameterSchema = Field(default_factory=ParameterSchema) ``` Bases: `BaseModel` A single block in the flat IR representation. The `block_type` is a plain string — domain packages define their own type taxonomies (e.g., "decision", "policy", "mechanism"). Source code in `packages/gds-framework/gds/ir/models.py` ``` class BlockIR(BaseModel): """A single block in the flat IR representation. The ``block_type`` is a plain string — domain packages define their own type taxonomies (e.g., "decision", "policy", "mechanism"). """ name: str block_type: str = "" signature: tuple[str, str, str, str] = ("", "", "", "") logic: str = "" color_code: int = 1 metadata: dict[str, Any] = Field(default_factory=dict) ``` Bases: `BaseModel` A directed connection (edge) between blocks in the IR. `is_feedback` and `is_temporal` flags distinguish special wiring categories for verification. The `category` field is an open string that domain packages can use for domain-specific edge classification; the generic protocol only interprets `"dataflow"`. Source code in `packages/gds-framework/gds/ir/models.py` ``` class WiringIR(BaseModel): """A directed connection (edge) between blocks in the IR. ``is_feedback`` and ``is_temporal`` flags distinguish special wiring categories for verification. 
The ``category`` field is an open string that domain packages can use for domain-specific edge classification; the generic protocol only interprets ``"dataflow"``. """ source: str target: str label: str wiring_type: str = "" direction: FlowDirection is_feedback: bool = False is_temporal: bool = False # Structural recurrence marker — no time model implied category: str = "dataflow" ``` Bases: `BaseModel` An external input to the system. Layer 0 defines only `name` and a generic `metadata` bag. Domain packages store their richer fields (e.g., input_type, schema_hint) inside `metadata` when projecting to SystemIR. Source code in `packages/gds-framework/gds/ir/models.py` ``` class InputIR(BaseModel, frozen=True): """An external input to the system. Layer 0 defines only ``name`` and a generic ``metadata`` bag. Domain packages store their richer fields (e.g., input_type, schema_hint) inside ``metadata`` when projecting to SystemIR. """ name: str metadata: dict[str, Any] = Field(default_factory=dict) ``` Bases: `BaseModel` A node in the composition tree for visualization. Leaf nodes (composition_type=None) map 1:1 to a BlockIR. Interior nodes represent composition operators and contain children. Source code in `packages/gds-framework/gds/ir/models.py` ``` class HierarchyNodeIR(BaseModel): """A node in the composition tree for visualization. Leaf nodes (composition_type=None) map 1:1 to a BlockIR. Interior nodes represent composition operators and contain children. """ id: str name: str composition_type: CompositionType | None = None children: list[HierarchyNodeIR] = Field(default_factory=list) block_name: str | None = None exit_condition: str = "" ``` Bases: `StrEnum` Direction of an information flow in a block composition. - COVARIANT — forward data flow (forward_in → forward_out direction). - CONTRAVARIANT — backward feedback flow (backward_out → backward_in direction). 
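The covariant/contravariant split can be exercised with a standalone mock. For portability this sketch uses a `str`-backed `Enum` rather than the `StrEnum` the real code uses (which requires Python 3.11+):

```python
from enum import Enum


class FlowDirection(str, Enum):
    COVARIANT = "covariant"          # forward data flow
    CONTRAVARIANT = "contravariant"  # backward feedback flow


# (source, target, direction) triples standing in for WiringIR records.
wires = [
    ("plant", "sensor", FlowDirection.COVARIANT),
    ("controller", "plant", FlowDirection.CONTRAVARIANT),
]

# Verification passes typically filter on direction, e.g. feedback edges:
feedback = [(s, t) for s, t, d in wires if d is FlowDirection.CONTRAVARIANT]
print(feedback)  # [('controller', 'plant')]
```

Because the members are string-valued, they serialize transparently in JSON IR documents and round-trip via `FlowDirection("contravariant")`.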
Source code in `packages/gds-framework/gds/ir/models.py` ``` class FlowDirection(StrEnum): """Direction of an information flow in a block composition. - COVARIANT — forward data flow (forward_in → forward_out direction). - CONTRAVARIANT — backward feedback flow (backward_out → backward_in direction). """ COVARIANT = "covariant" CONTRAVARIANT = "contravariant" ``` Bases: `StrEnum` How blocks are composed within a system. - SEQUENTIAL — output of one feeds input of next (stack). - PARALLEL — blocks run side-by-side with no shared wires. - FEEDBACK — backward_out→backward_in connections within an evaluation. - TEMPORAL — forward_out→forward_in connections across temporal boundaries. Source code in `packages/gds-framework/gds/ir/models.py` ``` class CompositionType(StrEnum): """How blocks are composed within a system. - SEQUENTIAL — output of one feeds input of next (stack). - PARALLEL — blocks run side-by-side with no shared wires. - FEEDBACK — backward_out→backward_in connections within an evaluation. - TEMPORAL — forward_out→forward_in connections across temporal boundaries. """ SEQUENTIAL = "sequential" PARALLEL = "parallel" FEEDBACK = "feedback" TEMPORAL = "temporal" ``` ## Utilities Convert an arbitrary name to a valid IR/Mermaid identifier. Replaces any character that is not alphanumeric or underscore with `_`. Prepends `_` if the result starts with a digit. Source code in `packages/gds-framework/gds/ir/models.py` ``` def sanitize_id(name: str) -> str: """Convert an arbitrary name to a valid IR/Mermaid identifier. Replaces any character that is not alphanumeric or underscore with ``_``. Prepends ``_`` if the result starts with a digit. """ sanitized = re.sub(r"[^A-Za-z0-9_]", "_", name) if sanitized and sanitized[0].isdigit(): sanitized = "_" + sanitized return sanitized ``` ## Serialization Bases: `BaseModel` Top-level IR document containing one or more systems. 
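Such a document is, structurally, versioned JSON on disk. A stdlib-only sketch of the save/load round-trip (the real `save_ir`/`load_ir` go through pydantic's JSON codecs, so this is the shape, not the implementation):

```python
import json
import tempfile
from pathlib import Path

# A hand-built document with the same envelope: version + systems + metadata.
doc = {
    "version": "1.0",
    "systems": [{"name": "thermostat", "blocks": [], "wirings": []}],
    "metadata": {"sources": ["demo.py"]},
}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "model.ir.json"
    path.write_text(json.dumps(doc, indent=2))  # analogous to save_ir
    loaded = json.loads(path.read_text())       # analogous to load_ir

print(loaded["systems"][0]["name"])  # thermostat
```

Keeping the on-disk format plain JSON is what lets downstream tools (visualizers, OWL export) consume the IR without importing the framework.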
Source code in `packages/gds-framework/gds/ir/serialization.py` ``` class IRDocument(BaseModel): """Top-level IR document containing one or more systems.""" version: str = "1.0" systems: list[SystemIR] metadata: IRMetadata ``` Bases: `BaseModel` Metadata envelope for an IR document. Source code in `packages/gds-framework/gds/ir/serialization.py` ``` class IRMetadata(BaseModel): """Metadata envelope for an IR document.""" sources: list[str] = Field(default_factory=list) generated_at: datetime = Field(default_factory=lambda: datetime.now(UTC)) version: str = "0.2.1" ``` Serialize an IR document to JSON. Source code in `packages/gds-framework/gds/ir/serialization.py` ``` def save_ir(doc: IRDocument, path: Path) -> None: """Serialize an IR document to JSON.""" path.write_text(doc.model_dump_json(indent=2)) ``` Deserialize an IR document from JSON. Source code in `packages/gds-framework/gds/ir/serialization.py` ``` def load_ir(path: Path) -> IRDocument: """Deserialize an IR document from JSON.""" return IRDocument.model_validate_json(path.read_text()) ``` # gds.parameters Bases: `BaseModel` Schema definition for a single parameter. Defines one dimension of Θ structurally — type, constraints, and bounds. No values, no binding, no execution semantics. Source code in `packages/gds-framework/gds/parameters.py` ``` class ParameterDef(BaseModel): """Schema definition for a single parameter. Defines one dimension of Θ structurally — type, constraints, and bounds. No values, no binding, no execution semantics. 
""" model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) name: str typedef: TypeDef description: str = "" bounds: tuple[Any, Any] | None = None @model_validator(mode="after") def _validate_bounds(self) -> Self: """Validate that bounds are comparable and correctly ordered.""" if self.bounds is None: return self low, high = self.bounds try: result = low <= high except TypeError as e: raise ValueError( f"ParameterDef '{self.name}': bounds ({low!r}, {high!r}) " f"are not comparable: {e}" ) from None if not result: raise ValueError( f"ParameterDef '{self.name}': lower bound {low!r} " f"exceeds upper bound {high!r}" ) return self def check_value(self, value: Any) -> bool: """Check if a value satisfies this parameter's type and constraints.""" if not self.typedef.check_value(value): return False if self.bounds is not None: try: low, high = self.bounds if not (low <= value <= high): return False except Exception: return False return True ``` ## `check_value(value)` Check if a value satisfies this parameter's type and constraints. Source code in `packages/gds-framework/gds/parameters.py` ``` def check_value(self, value: Any) -> bool: """Check if a value satisfies this parameter's type and constraints.""" if not self.typedef.check_value(value): return False if self.bounds is not None: try: low, high = self.bounds if not (low <= value <= high): return False except Exception: return False return True ``` Bases: `BaseModel` Defines the parameter space Θ at specification level. Immutable registry of parameter definitions. GDS does not interpret values — only validates structural references. Source code in `packages/gds-framework/gds/parameters.py` ``` class ParameterSchema(BaseModel): """Defines the parameter space Θ at specification level. Immutable registry of parameter definitions. GDS does not interpret values — only validates structural references. 
""" model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) parameters: dict[str, ParameterDef] = Field(default_factory=dict) def add(self, param: ParameterDef) -> ParameterSchema: """Return new schema with added parameter (immutable).""" if param.name in self.parameters: raise ValueError(f"Parameter '{param.name}' already registered") new_params = dict(self.parameters) new_params[param.name] = param return self.model_copy(update={"parameters": new_params}) def get(self, name: str) -> ParameterDef: """Get a parameter definition by name.""" return self.parameters[name] def names(self) -> set[str]: """Return all parameter names.""" return set(self.parameters.keys()) def validate_references(self, ref_names: set[str]) -> list[str]: """Validate that all referenced parameter names exist in schema. Returns list of error strings (empty = all references valid). """ errors: list[str] = [] for name in sorted(ref_names): if name not in self.parameters: errors.append(f"Referenced parameter '{name}' not defined in schema") return errors def __len__(self) -> int: return len(self.parameters) def __contains__(self, name: str) -> bool: return name in self.parameters ``` ## `add(param)` Return new schema with added parameter (immutable). Source code in `packages/gds-framework/gds/parameters.py` ``` def add(self, param: ParameterDef) -> ParameterSchema: """Return new schema with added parameter (immutable).""" if param.name in self.parameters: raise ValueError(f"Parameter '{param.name}' already registered") new_params = dict(self.parameters) new_params[param.name] = param return self.model_copy(update={"parameters": new_params}) ``` ## `get(name)` Get a parameter definition by name. Source code in `packages/gds-framework/gds/parameters.py` ``` def get(self, name: str) -> ParameterDef: """Get a parameter definition by name.""" return self.parameters[name] ``` ## `names()` Return all parameter names. 
Source code in `packages/gds-framework/gds/parameters.py` ``` def names(self) -> set[str]: """Return all parameter names.""" return set(self.parameters.keys()) ``` ## `validate_references(ref_names)` Validate that all referenced parameter names exist in schema. Returns list of error strings (empty = all references valid). Source code in `packages/gds-framework/gds/parameters.py` ``` def validate_references(self, ref_names: set[str]) -> list[str]: """Validate that all referenced parameter names exist in schema. Returns list of error strings (empty = all references valid). """ errors: list[str] = [] for name in sorted(ref_names): if name not in self.parameters: errors.append(f"Referenced parameter '{name}' not defined in schema") return errors ``` # gds.query Query engine for exploring GDSSpec structure. Source code in `packages/gds-framework/gds/query.py` ``` class SpecQuery: """Query engine for exploring GDSSpec structure.""" def __init__(self, spec: GDSSpec) -> None: self.spec = spec def param_to_blocks(self) -> dict[str, list[str]]: """Map each parameter to the blocks that use it.""" mapping: dict[str, list[str]] = {p: [] for p in self.spec.parameters} for bname, block in self.spec.blocks.items(): if isinstance(block, HasParams): for param in block.params_used: if param in mapping: mapping[param].append(bname) return mapping def block_to_params(self) -> dict[str, list[str]]: """Map each block to the parameters it uses.""" result: dict[str, list[str]] = {} for bname, block in self.spec.blocks.items(): if isinstance(block, HasParams): result[bname] = list(block.params_used) else: result[bname] = [] return result def entity_update_map(self) -> dict[str, dict[str, list[str]]]: """Map entity -> variable -> list of mechanisms that update it.""" result: dict[str, dict[str, list[str]]] = {} for ename, entity in self.spec.entities.items(): result[ename] = {vname: [] for vname in entity.variables} for bname, block in self.spec.blocks.items(): if isinstance(block, 
Mechanism): for ename, vname in block.updates: if ename in result and vname in result[ename]: result[ename][vname].append(bname) return result def dependency_graph(self) -> dict[str, set[str]]: """Full block dependency DAG (who feeds whom) from all wirings.""" adj: dict[str, set[str]] = defaultdict(set) for wiring in self.spec.wirings.values(): for wire in wiring.wires: adj[wire.source].add(wire.target) return dict(adj) def blocks_by_kind(self) -> dict[str, list[str]]: """Group blocks by their GDS role (kind).""" result: dict[str, list[str]] = { "boundary": [], "control": [], "policy": [], "mechanism": [], "generic": [], } for bname, block in self.spec.blocks.items(): kind = getattr(block, "kind", "generic") if kind in result: result[kind].append(bname) else: result[kind] = [bname] return result def blocks_affecting(self, entity: str, variable: str) -> list[str]: """Which blocks can transitively affect this variable? Finds all mechanisms that directly update the variable, then all blocks that can transitively reach those mechanisms. 
""" direct: list[str] = [] for bname, block in self.spec.blocks.items(): if isinstance(block, Mechanism) and (entity, variable) in block.updates: direct.append(bname) adj: dict[str, set[str]] = defaultdict(set) for wiring in self.spec.wirings.values(): for wire in wiring.wires: adj[wire.source].add(wire.target) all_affecting: set[str] = set(direct) for mech_name in direct: for bname in self.spec.blocks: if self._can_reach(adj, bname, mech_name): all_affecting.add(bname) return sorted(all_affecting) def admissibility_dependency_map(self) -> dict[str, list[tuple[str, str]]]: """Map boundary block -> state variables constraining its inputs.""" result: dict[str, list[tuple[str, str]]] = {} for ac in self.spec.admissibility_constraints.values(): result.setdefault(ac.boundary_block, []).extend(ac.depends_on) return result def mechanism_read_map(self) -> dict[str, list[tuple[str, str]]]: """Map mechanism -> state variables it reads.""" return { mname: list(ts.reads) for mname, ts in self.spec.transition_signatures.items() } def variable_readers(self, entity: str, variable: str) -> list[str]: """Which mechanisms declare reading this state variable?""" ref = (entity, variable) return [ mname for mname, ts in self.spec.transition_signatures.items() if ref in ts.reads ] @staticmethod def _can_reach(adj: dict[str, set[str]], source: str, target: str) -> bool: """BFS reachability check.""" if source == target: return False visited: set[str] = set() queue = [source] while queue: current = queue.pop(0) if current == target: return True if current in visited: continue visited.add(current) queue.extend(adj.get(current, set())) return False ``` ## `param_to_blocks()` Map each parameter to the blocks that use it. 
Source code in `packages/gds-framework/gds/query.py` ``` def param_to_blocks(self) -> dict[str, list[str]]: """Map each parameter to the blocks that use it.""" mapping: dict[str, list[str]] = {p: [] for p in self.spec.parameters} for bname, block in self.spec.blocks.items(): if isinstance(block, HasParams): for param in block.params_used: if param in mapping: mapping[param].append(bname) return mapping ``` ## `block_to_params()` Map each block to the parameters it uses. Source code in `packages/gds-framework/gds/query.py` ``` def block_to_params(self) -> dict[str, list[str]]: """Map each block to the parameters it uses.""" result: dict[str, list[str]] = {} for bname, block in self.spec.blocks.items(): if isinstance(block, HasParams): result[bname] = list(block.params_used) else: result[bname] = [] return result ``` ## `entity_update_map()` Map entity -> variable -> list of mechanisms that update it. Source code in `packages/gds-framework/gds/query.py` ``` def entity_update_map(self) -> dict[str, dict[str, list[str]]]: """Map entity -> variable -> list of mechanisms that update it.""" result: dict[str, dict[str, list[str]]] = {} for ename, entity in self.spec.entities.items(): result[ename] = {vname: [] for vname in entity.variables} for bname, block in self.spec.blocks.items(): if isinstance(block, Mechanism): for ename, vname in block.updates: if ename in result and vname in result[ename]: result[ename][vname].append(bname) return result ``` ## `dependency_graph()` Full block dependency DAG (who feeds whom) from all wirings. Source code in `packages/gds-framework/gds/query.py` ``` def dependency_graph(self) -> dict[str, set[str]]: """Full block dependency DAG (who feeds whom) from all wirings.""" adj: dict[str, set[str]] = defaultdict(set) for wiring in self.spec.wirings.values(): for wire in wiring.wires: adj[wire.source].add(wire.target) return dict(adj) ``` ## `blocks_by_kind()` Group blocks by their GDS role (kind). 
Source code in `packages/gds-framework/gds/query.py` ``` def blocks_by_kind(self) -> dict[str, list[str]]: """Group blocks by their GDS role (kind).""" result: dict[str, list[str]] = { "boundary": [], "control": [], "policy": [], "mechanism": [], "generic": [], } for bname, block in self.spec.blocks.items(): kind = getattr(block, "kind", "generic") if kind in result: result[kind].append(bname) else: result[kind] = [bname] return result ``` ## `blocks_affecting(entity, variable)` Which blocks can transitively affect this variable? Finds all mechanisms that directly update the variable, then all blocks that can transitively reach those mechanisms. Source code in `packages/gds-framework/gds/query.py` ``` def blocks_affecting(self, entity: str, variable: str) -> list[str]: """Which blocks can transitively affect this variable? Finds all mechanisms that directly update the variable, then all blocks that can transitively reach those mechanisms. """ direct: list[str] = [] for bname, block in self.spec.blocks.items(): if isinstance(block, Mechanism) and (entity, variable) in block.updates: direct.append(bname) adj: dict[str, set[str]] = defaultdict(set) for wiring in self.spec.wirings.values(): for wire in wiring.wires: adj[wire.source].add(wire.target) all_affecting: set[str] = set(direct) for mech_name in direct: for bname in self.spec.blocks: if self._can_reach(adj, bname, mech_name): all_affecting.add(bname) return sorted(all_affecting) ``` ## `admissibility_dependency_map()` Map boundary block -> state variables constraining its inputs. 
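The aggregation is a setdefault-and-extend over constraints; a standalone sketch with an illustrative stand-in for `AdmissibleInputConstraint`:

```python
from dataclasses import dataclass

@dataclass
class FakeConstraint:  # illustrative stand-in for AdmissibleInputConstraint
    boundary_block: str
    depends_on: list[tuple[str, str]]

def admissibility_dependency_map(constraints):
    """Collect, per boundary block, every (entity, variable) it depends on."""
    result: dict[str, list[tuple[str, str]]] = {}
    for ac in constraints:
        result.setdefault(ac.boundary_block, []).extend(ac.depends_on)
    return result

cs = [
    FakeConstraint("user_action", [("account", "balance")]),
    FakeConstraint("user_action", [("account", "frozen")]),
]
print(admissibility_dependency_map(cs))
# {'user_action': [('account', 'balance'), ('account', 'frozen')]}
```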
Source code in `packages/gds-framework/gds/query.py` ``` def admissibility_dependency_map(self) -> dict[str, list[tuple[str, str]]]: """Map boundary block -> state variables constraining its inputs.""" result: dict[str, list[tuple[str, str]]] = {} for ac in self.spec.admissibility_constraints.values(): result.setdefault(ac.boundary_block, []).extend(ac.depends_on) return result ``` ## `mechanism_read_map()` Map mechanism -> state variables it reads. Source code in `packages/gds-framework/gds/query.py` ``` def mechanism_read_map(self) -> dict[str, list[tuple[str, str]]]: """Map mechanism -> state variables it reads.""" return { mname: list(ts.reads) for mname, ts in self.spec.transition_signatures.items() } ``` ## `variable_readers(entity, variable)` Which mechanisms declare reading this state variable? Source code in `packages/gds-framework/gds/query.py` ``` def variable_readers(self, entity: str, variable: str) -> list[str]: """Which mechanisms declare reading this state variable?""" ref = (entity, variable) return [ mname for mname, ts in self.spec.transition_signatures.items() if ref in ts.reads ] ``` # gds.serialize Serialize a GDSSpec to a plain dict (JSON-compatible). 
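As a shape reference, here is a hand-built dict mirroring the top-level keys `spec_to_dict()` emits for a near-empty spec (the `"thermostat"` name and `Temp` type are illustrative), round-tripped through `json` the way `spec_to_json()` does:

```python
import json

# Hand-built stand-in for spec_to_dict() output; keys follow the listing below.
spec_dict = {
    "name": "thermostat",
    "description": "",
    "types": {"Temp": {"name": "Temp", "python_type": "float",
                       "description": "", "units": "C"}},
    "spaces": {}, "entities": {}, "blocks": {}, "wirings": {},
    "parameters": {}, "admissibility_constraints": {},
    "transition_signatures": {},
}
text = json.dumps(spec_dict, indent=2)  # spec_to_json() applies the same default
assert json.loads(text) == spec_dict    # the dict form is JSON-lossless
```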
Source code in `packages/gds-framework/gds/serialize.py` ``` def spec_to_dict(spec: GDSSpec) -> dict[str, Any]: """Serialize a GDSSpec to a plain dict (JSON-compatible).""" return { "name": spec.name, "description": spec.description, "types": { name: { "name": t.name, "python_type": t.python_type.__name__, "description": t.description, "units": t.units, } for name, t in spec.types.items() }, "spaces": { name: { "name": s.name, "schema": {fname: tdef.name for fname, tdef in s.fields.items()}, "description": s.description, } for name, s in spec.spaces.items() }, "entities": { name: { "name": e.name, "description": e.description, "variables": { vname: { "name": v.name, "type": v.typedef.name, "description": v.description, "symbol": v.symbol, } for vname, v in e.variables.items() }, } for name, e in spec.entities.items() }, "blocks": {name: _block_to_dict(b) for name, b in spec.blocks.items()}, "wirings": { name: { "name": w.name, "description": w.description, "blocks": list(w.block_names), "wires": [ { "source": wire.source, "target": wire.target, "space": wire.space, "optional": wire.optional, } for wire in w.wires ], } for name, w in spec.wirings.items() }, "parameters": { name: { "name": p.name, "typedef": p.typedef.name, "python_type": p.typedef.python_type.__name__, "description": p.description, "bounds": list(p.bounds) if p.bounds is not None else None, } for name, p in spec.parameter_schema.parameters.items() }, "admissibility_constraints": { name: { "name": ac.name, "boundary_block": ac.boundary_block, "depends_on": [list(pair) for pair in ac.depends_on], "has_constraint": ac.constraint is not None, "description": ac.description, } for name, ac in spec.admissibility_constraints.items() }, "transition_signatures": { name: { "mechanism": ts.mechanism, "reads": [list(pair) for pair in ts.reads], "depends_on_blocks": list(ts.depends_on_blocks), "preserves_invariant": ts.preserves_invariant, } for name, ts in spec.transition_signatures.items() }, } ``` Serialize a 
GDSSpec to a JSON string. Source code in `packages/gds-framework/gds/serialize.py` ``` def spec_to_json(spec: GDSSpec, indent: int = 2) -> str: """Serialize a GDSSpec to a JSON string.""" return json.dumps(spec_to_dict(spec), indent=indent) ``` # gds.spaces Bases: `BaseModel` A typed product space — defines the shape of data flowing between blocks. Each field in the schema maps a name to a TypeDef. `validate_data()` checks a data dict against the schema, returning a list of error strings. Source code in `packages/gds-framework/gds/spaces.py` ``` class Space(BaseModel): """A typed product space — defines the shape of data flowing between blocks. Each field in the schema maps a name to a TypeDef. ``validate_data()`` checks a data dict against the schema, returning a list of error strings. """ model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) name: str fields: dict[str, TypeDef] = Field(default_factory=dict) description: str = "" def validate_data(self, data: dict[str, Any]) -> list[str]: """Validate a data dict against this space's field schema. Returns a list of error strings (empty means valid). """ errors: list[str] = [] for field_name, typedef in self.fields.items(): if field_name not in data: errors.append(f"Missing field: {field_name}") elif not typedef.check_value(data[field_name]): errors.append( f"{field_name}: expected {typedef.name}, " f"got {type(data[field_name]).__name__} " f"with value {data[field_name]!r}" ) extra_fields = set(data.keys()) - set(self.fields.keys()) if extra_fields: errors.append(f"Unexpected fields: {extra_fields}") return errors def is_compatible(self, other: Space) -> bool: """Check if another space has the same structure (field names and types).""" if set(self.fields.keys()) != set(other.fields.keys()): return False return all(self.fields[k] == other.fields[k] for k in self.fields) ``` ## `validate_data(data)` Validate a data dict against this space's field schema. 
Returns a list of error strings (empty means valid). Source code in `packages/gds-framework/gds/spaces.py` ``` def validate_data(self, data: dict[str, Any]) -> list[str]: """Validate a data dict against this space's field schema. Returns a list of error strings (empty means valid). """ errors: list[str] = [] for field_name, typedef in self.fields.items(): if field_name not in data: errors.append(f"Missing field: {field_name}") elif not typedef.check_value(data[field_name]): errors.append( f"{field_name}: expected {typedef.name}, " f"got {type(data[field_name]).__name__} " f"with value {data[field_name]!r}" ) extra_fields = set(data.keys()) - set(self.fields.keys()) if extra_fields: errors.append(f"Unexpected fields: {extra_fields}") return errors ``` ## `is_compatible(other)` Check if another space has the same structure (field names and types). Source code in `packages/gds-framework/gds/spaces.py` ``` def is_compatible(self, other: Space) -> bool: """Check if another space has the same structure (field names and types).""" if set(self.fields.keys()) != set(other.fields.keys()): return False return all(self.fields[k] == other.fields[k] for k in self.fields) ``` # gds.spec Bases: `Tagged` Complete Generalized Dynamical System specification. GDS = {h, X} where X = state space (product of entity states) h = transition map (composed from wirings) Registration methods are chainable spec.register_type(t).register_space(s).register_entity(e) Source code in `packages/gds-framework/gds/spec.py` ``` class GDSSpec(Tagged): """Complete Generalized Dynamical System specification. 
Mathematically: GDS = {h, X} where X = state space (product of entity states) h = transition map (composed from wirings) Registration methods are chainable: spec.register_type(t).register_space(s).register_entity(e) """ model_config = ConfigDict(arbitrary_types_allowed=True) name: str description: str = "" types: dict[str, TypeDef] = Field(default_factory=dict) spaces: dict[str, Space] = Field(default_factory=dict) entities: dict[str, Entity] = Field(default_factory=dict) blocks: dict[str, Block] = Field(default_factory=dict) wirings: dict[str, SpecWiring] = Field(default_factory=dict) parameter_schema: ParameterSchema = Field(default_factory=ParameterSchema) admissibility_constraints: dict[str, AdmissibleInputConstraint] = Field( default_factory=dict ) transition_signatures: dict[str, TransitionSignature] = Field(default_factory=dict) state_metrics: dict[str, StateMetric] = Field(default_factory=dict) execution_contract: ExecutionContract | None = None # ── Registration ──────────────────────────────────────── def register_type(self, t: TypeDef) -> GDSSpec: """Register a TypeDef. Raises if name already registered.""" if t.name in self.types: raise ValueError(f"Type '{t.name}' already registered") self.types[t.name] = t return self def register_space(self, s: Space) -> GDSSpec: """Register a Space. Raises if name already registered.""" if s.name in self.spaces: raise ValueError(f"Space '{s.name}' already registered") self.spaces[s.name] = s return self def register_entity(self, e: Entity) -> GDSSpec: """Register an Entity. Raises if name already registered.""" if e.name in self.entities: raise ValueError(f"Entity '{e.name}' already registered") self.entities[e.name] = e return self def register_block(self, b: Block) -> GDSSpec: """Register a Block. 
Raises if name already registered.""" if b.name in self.blocks: raise ValueError(f"Block '{b.name}' already registered") self.blocks[b.name] = b return self def register_wiring(self, w: SpecWiring) -> GDSSpec: """Register a SpecWiring. Raises if name already registered.""" if w.name in self.wirings: raise ValueError(f"Wiring '{w.name}' already registered") self.wirings[w.name] = w return self def register_parameter( self, param_or_name: ParameterDef | str, typedef: TypeDef | None = None ) -> GDSSpec: """Register a parameter definition. Accepts either: spec.register_parameter(ParameterDef(name="rate", typedef=Rate)) spec.register_parameter("rate", Rate) # legacy convenience """ if isinstance(param_or_name, str): if typedef is None: raise ValueError("typedef is required when registering by name string") param = ParameterDef(name=param_or_name, typedef=typedef) else: param = param_or_name self.parameter_schema = self.parameter_schema.add(param) return self def register_admissibility(self, ac: AdmissibleInputConstraint) -> GDSSpec: """Register an admissible input constraint. Raises if name already registered. """ if ac.name in self.admissibility_constraints: raise ValueError(f"Admissibility constraint '{ac.name}' already registered") self.admissibility_constraints[ac.name] = ac return self def register_transition_signature(self, ts: TransitionSignature) -> GDSSpec: """Register a transition signature. Raises if mechanism already has one.""" if ts.mechanism in self.transition_signatures: raise ValueError( f"Transition signature for '{ts.mechanism}' already registered" ) self.transition_signatures[ts.mechanism] = ts return self def register_state_metric(self, sm: StateMetric) -> GDSSpec: """Register a state metric. 
Raises if name already registered.""" if sm.name in self.state_metrics: raise ValueError(f"State metric '{sm.name}' already registered") self.state_metrics[sm.name] = sm return self @property def parameters(self) -> dict[str, TypeDef]: """Legacy access: parameter name → TypeDef mapping.""" return {name: p.typedef for name, p in self.parameter_schema.parameters.items()} # ── Bulk registration ───────────────────────────────── def collect( self, *objects: TypeDef | Space | Entity | Block | ParameterDef ) -> GDSSpec: """Register multiple objects by type-dispatching each. Accepts any mix of TypeDef, Space, Entity, Block, and ParameterDef instances. Does not handle SpecWiring, AdmissibleInputConstraint, TransitionSignature, or (name, typedef) parameter shorthand --- those stay explicit via their respective ``register_*()`` methods. Raises TypeError for unrecognized types. """ for obj in objects: if isinstance(obj, TypeDef): self.register_type(obj) elif isinstance(obj, Space): self.register_space(obj) elif isinstance(obj, Entity): self.register_entity(obj) elif isinstance(obj, ParameterDef): self.register_parameter(obj) elif isinstance(obj, Block): self.register_block(obj) else: raise TypeError( f"collect() does not accept {type(obj).__name__!r}; " f"expected TypeDef, Space, Entity, Block, or ParameterDef" ) return self # ── Validation ────────────────────────────────────────── def validate_spec(self) -> list[str]: """Full structural validation. 
Returns list of error strings.""" errors: list[str] = [] errors += self._validate_space_types() errors += self._validate_wiring_blocks() errors += self._validate_mechanism_updates() errors += self._validate_param_references() errors += self._validate_admissibility_constraints() errors += self._validate_transition_signatures() errors += self._validate_state_metrics() return errors def _validate_space_types(self) -> list[str]: """Every TypeDef used in a Space is registered.""" errors: list[str] = [] for space in self.spaces.values(): for field_name, typedef in space.fields.items(): if typedef.name not in self.types: errors.append( f"Space '{space.name}' field '{field_name}' uses " f"unregistered type '{typedef.name}'" ) return errors def _validate_wiring_blocks(self) -> list[str]: """Every block referenced in a wiring is registered.""" errors: list[str] = [] for wiring in self.wirings.values(): for bname in wiring.block_names: if bname not in self.blocks: errors.append( f"Wiring '{wiring.name}' references " f"unregistered block '{bname}'" ) for wire in wiring.wires: if wire.source not in self.blocks: errors.append( f"Wiring '{wiring.name}' wire source " f"'{wire.source}' not in registered blocks" ) if wire.target not in self.blocks: errors.append( f"Wiring '{wiring.name}' wire target " f"'{wire.target}' not in registered blocks" ) if wire.space and wire.space not in self.spaces: errors.append( f"Wiring '{wiring.name}' wire references " f"unregistered space '{wire.space}'" ) return errors def _validate_mechanism_updates(self) -> list[str]: """Mechanisms only update existing entity variables.""" errors: list[str] = [] for block in self.blocks.values(): if isinstance(block, Mechanism): for entity_name, var_name in block.updates: if entity_name not in self.entities: errors.append( f"Mechanism '{block.name}' updates " f"unknown entity '{entity_name}'" ) elif var_name not in self.entities[entity_name].variables: errors.append( f"Mechanism '{block.name}' updates " f"unknown 
variable '{entity_name}.{var_name}'" ) return errors def _validate_param_references(self) -> list[str]: """All parameter references in blocks are registered.""" errors: list[str] = [] param_names = self.parameter_schema.names() for block in self.blocks.values(): if isinstance(block, HasParams): for param in block.params_used: if param not in param_names: errors.append( f"Block '{block.name}' references " f"unregistered parameter '{param}'" ) return errors def _validate_admissibility_constraints(self) -> list[str]: """Admissibility constraints reference existing blocks and variables.""" errors: list[str] = [] for ac in self.admissibility_constraints.values(): if ac.boundary_block not in self.blocks: errors.append( f"Admissibility constraint '{ac.name}' references " f"unregistered block '{ac.boundary_block}'" ) elif not isinstance(self.blocks[ac.boundary_block], BoundaryAction): errors.append( f"Admissibility constraint '{ac.name}': " f"block '{ac.boundary_block}' is not a BoundaryAction" ) for entity_name, var_name in ac.depends_on: if entity_name not in self.entities: errors.append( f"Admissibility constraint '{ac.name}' depends on " f"unknown entity '{entity_name}'" ) elif var_name not in self.entities[entity_name].variables: errors.append( f"Admissibility constraint '{ac.name}' depends on " f"unknown variable '{entity_name}.{var_name}'" ) return errors def _validate_transition_signatures(self) -> list[str]: """Transition signatures reference existing Mechanisms and variables.""" errors: list[str] = [] for ts in self.transition_signatures.values(): if ts.mechanism not in self.blocks: errors.append( f"Transition signature references " f"unregistered block '{ts.mechanism}'" ) elif not isinstance(self.blocks[ts.mechanism], Mechanism): errors.append( f"Transition signature for '{ts.mechanism}': " f"block is not a Mechanism" ) for entity_name, var_name in ts.reads: if entity_name not in self.entities: errors.append( f"Transition signature for '{ts.mechanism}' reads " 
f"unknown entity '{entity_name}'" ) elif var_name not in self.entities[entity_name].variables: errors.append( f"Transition signature for '{ts.mechanism}' reads " f"unknown variable '{entity_name}.{var_name}'" ) for bname in ts.depends_on_blocks: if bname not in self.blocks: errors.append( f"Transition signature for '{ts.mechanism}' " f"depends on unregistered block '{bname}'" ) return errors def _validate_state_metrics(self) -> list[str]: """State metrics reference existing entities and variables.""" errors: list[str] = [] for sm in self.state_metrics.values(): if not sm.variables: errors.append(f"State metric '{sm.name}' has no variables") for entity_name, var_name in sm.variables: if entity_name not in self.entities: errors.append( f"State metric '{sm.name}' references " f"unknown entity '{entity_name}'" ) elif var_name not in self.entities[entity_name].variables: errors.append( f"State metric '{sm.name}' references " f"unknown variable '{entity_name}.{var_name}'" ) return errors ``` ## `parameters` Legacy access: parameter name → TypeDef mapping. ## `register_type(t)` Register a TypeDef. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_type(self, t: TypeDef) -> GDSSpec: """Register a TypeDef. Raises if name already registered.""" if t.name in self.types: raise ValueError(f"Type '{t.name}' already registered") self.types[t.name] = t return self ``` ## `register_space(s)` Register a Space. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_space(self, s: Space) -> GDSSpec: """Register a Space. Raises if name already registered.""" if s.name in self.spaces: raise ValueError(f"Space '{s.name}' already registered") self.spaces[s.name] = s return self ``` ## `register_entity(e)` Register an Entity. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_entity(self, e: Entity) -> GDSSpec: """Register an Entity. 
Raises if name already registered.""" if e.name in self.entities: raise ValueError(f"Entity '{e.name}' already registered") self.entities[e.name] = e return self ``` ## `register_block(b)` Register a Block. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_block(self, b: Block) -> GDSSpec: """Register a Block. Raises if name already registered.""" if b.name in self.blocks: raise ValueError(f"Block '{b.name}' already registered") self.blocks[b.name] = b return self ``` ## `register_wiring(w)` Register a SpecWiring. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_wiring(self, w: SpecWiring) -> GDSSpec: """Register a SpecWiring. Raises if name already registered.""" if w.name in self.wirings: raise ValueError(f"Wiring '{w.name}' already registered") self.wirings[w.name] = w return self ``` ## `register_parameter(param_or_name, typedef=None)` Register a parameter definition. Accepts either spec.register_parameter(ParameterDef(name="rate", typedef=Rate)) spec.register_parameter("rate", Rate) # legacy convenience Source code in `packages/gds-framework/gds/spec.py` ``` def register_parameter( self, param_or_name: ParameterDef | str, typedef: TypeDef | None = None ) -> GDSSpec: """Register a parameter definition. Accepts either: spec.register_parameter(ParameterDef(name="rate", typedef=Rate)) spec.register_parameter("rate", Rate) # legacy convenience """ if isinstance(param_or_name, str): if typedef is None: raise ValueError("typedef is required when registering by name string") param = ParameterDef(name=param_or_name, typedef=typedef) else: param = param_or_name self.parameter_schema = self.parameter_schema.add(param) return self ``` ## `register_admissibility(ac)` Register an admissible input constraint. Raises if name already registered. 
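Every `register_*()` method follows the same convention visible in the listings: reject a duplicate name with `ValueError`, store, and return `self` so registrations chain. A standalone sketch of that pattern (the `Registry` class is illustrative, not part of the library):

```python
class Registry:
    """Duplicate-guarded, chainable registration, as in GDSSpec.register_*()."""

    def __init__(self):
        self.items: dict[str, object] = {}

    def register(self, name: str, obj: object) -> "Registry":
        if name in self.items:
            raise ValueError(f"'{name}' already registered")
        self.items[name] = obj
        return self  # chainable, like spec.register_type(t).register_space(s)

r = Registry().register("a", 1).register("b", 2)
print(sorted(r.items))  # ['a', 'b']
try:
    r.register("a", 3)
except ValueError as e:
    print(e)            # 'a' already registered
```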
Source code in `packages/gds-framework/gds/spec.py` ``` def register_admissibility(self, ac: AdmissibleInputConstraint) -> GDSSpec: """Register an admissible input constraint. Raises if name already registered. """ if ac.name in self.admissibility_constraints: raise ValueError(f"Admissibility constraint '{ac.name}' already registered") self.admissibility_constraints[ac.name] = ac return self ``` ## `register_transition_signature(ts)` Register a transition signature. Raises if mechanism already has one. Source code in `packages/gds-framework/gds/spec.py` ``` def register_transition_signature(self, ts: TransitionSignature) -> GDSSpec: """Register a transition signature. Raises if mechanism already has one.""" if ts.mechanism in self.transition_signatures: raise ValueError( f"Transition signature for '{ts.mechanism}' already registered" ) self.transition_signatures[ts.mechanism] = ts return self ``` ## `register_state_metric(sm)` Register a state metric. Raises if name already registered. Source code in `packages/gds-framework/gds/spec.py` ``` def register_state_metric(self, sm: StateMetric) -> GDSSpec: """Register a state metric. Raises if name already registered.""" if sm.name in self.state_metrics: raise ValueError(f"State metric '{sm.name}' already registered") self.state_metrics[sm.name] = sm return self ``` ## `collect(*objects)` Register multiple objects by type-dispatching each. Accepts any mix of TypeDef, Space, Entity, Block, and ParameterDef instances. Does not handle SpecWiring, AdmissibleInputConstraint, TransitionSignature, or (name, typedef) parameter shorthand --- those stay explicit via their respective `register_*()` methods. Raises TypeError for unrecognized types. Source code in `packages/gds-framework/gds/spec.py` ``` def collect( self, *objects: TypeDef | Space | Entity | Block | ParameterDef ) -> GDSSpec: """Register multiple objects by type-dispatching each. Accepts any mix of TypeDef, Space, Entity, Block, and ParameterDef instances. 
Does not handle SpecWiring, AdmissibleInputConstraint, TransitionSignature, or (name, typedef) parameter shorthand --- those stay explicit via their respective ``register_*()`` methods. Raises TypeError for unrecognized types. """ for obj in objects: if isinstance(obj, TypeDef): self.register_type(obj) elif isinstance(obj, Space): self.register_space(obj) elif isinstance(obj, Entity): self.register_entity(obj) elif isinstance(obj, ParameterDef): self.register_parameter(obj) elif isinstance(obj, Block): self.register_block(obj) else: raise TypeError( f"collect() does not accept {type(obj).__name__!r}; " f"expected TypeDef, Space, Entity, Block, or ParameterDef" ) return self ``` ## `validate_spec()` Full structural validation. Returns list of error strings. Source code in `packages/gds-framework/gds/spec.py` ``` def validate_spec(self) -> list[str]: """Full structural validation. Returns list of error strings.""" errors: list[str] = [] errors += self._validate_space_types() errors += self._validate_wiring_blocks() errors += self._validate_mechanism_updates() errors += self._validate_param_references() errors += self._validate_admissibility_constraints() errors += self._validate_transition_signatures() errors += self._validate_state_metrics() return errors ``` Bases: `BaseModel` A named composition of blocks connected by wires. Source code in `packages/gds-framework/gds/spec.py` ``` class SpecWiring(BaseModel, frozen=True): """A named composition of blocks connected by wires.""" name: str block_names: list[str] = Field(default_factory=list) wires: list[Wire] = Field(default_factory=list) description: str = "" ``` Bases: `BaseModel` A connection between two blocks within a wiring. Source code in `packages/gds-framework/gds/spec.py` ``` class Wire(BaseModel, frozen=True): """A connection between two blocks within a wiring.""" source: str target: str space: str = "" optional: bool = False ``` # gds.state Bases: `Tagged` A named component of the system state. 
In GDS terms, the full state space X is the product of all entity state spaces. Entities correspond to actors, resources, registries — anything that persists across temporal boundaries. Source code in `packages/gds-framework/gds/state.py` ``` class Entity(Tagged): """A named component of the system state. In GDS terms, the full state space X is the product of all entity state spaces. Entities correspond to actors, resources, registries — anything that persists across temporal boundaries. """ name: str variables: dict[str, StateVariable] = Field(default_factory=dict) description: str = "" model_config = ConfigDict(frozen=True) def validate_state(self, data: dict[str, Any]) -> list[str]: """Validate a state snapshot for this entity. Returns a list of error strings (empty means valid). """ errors: list[str] = [] for vname, var in self.variables.items(): if vname not in data: errors.append(f"{self.name}.{vname}: missing") elif not var.check_value(data[vname]): errors.append(f"{self.name}.{vname}: type/constraint violation") return errors ``` ## `validate_state(data)` Validate a state snapshot for this entity. Returns a list of error strings (empty means valid). Source code in `packages/gds-framework/gds/state.py` ``` def validate_state(self, data: dict[str, Any]) -> list[str]: """Validate a state snapshot for this entity. Returns a list of error strings (empty means valid). """ errors: list[str] = [] for vname, var in self.variables.items(): if vname not in data: errors.append(f"{self.name}.{vname}: missing") elif not var.check_value(data[vname]): errors.append(f"{self.name}.{vname}: type/constraint violation") return errors ``` Bases: `BaseModel` A single typed variable within an entity's state. Each variable has a TypeDef (with runtime constraints), a human-readable description, and an optional math symbol. Source code in `packages/gds-framework/gds/state.py` ``` class StateVariable(BaseModel): """A single typed variable within an entity's state. 
Each variable has a TypeDef (with runtime constraints), a human-readable description, and an optional math symbol. """ model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) name: str typedef: TypeDef description: str = "" symbol: str = "" def check_value(self, value: Any) -> bool: """Check if a value satisfies this variable's type definition.""" return self.typedef.check_value(value) ``` ## `check_value(value)` Check if a value satisfies this variable's type definition. Source code in `packages/gds-framework/gds/state.py` ``` def check_value(self, value: Any) -> bool: """Check if a value satisfies this variable's type definition.""" return self.typedef.check_value(value) ``` # gds.types ## Interface Bases: `BaseModel` The directional-pair boundary of a Block. Four port slots organized by direction forward_in — domain inputs (covariant) forward_out — codomain outputs (covariant) backward_in — backward inputs (contravariant) backward_out — backward outputs (contravariant) Source code in `packages/gds-framework/gds/types/interface.py` ``` class Interface(BaseModel, frozen=True): """The directional-pair boundary of a Block. Four port slots organized by direction: forward_in — domain inputs (covariant) forward_out — codomain outputs (covariant) backward_in — backward inputs (contravariant) backward_out — backward outputs (contravariant) """ forward_in: tuple[Port, ...] = () forward_out: tuple[Port, ...] = () backward_in: tuple[Port, ...] = () backward_out: tuple[Port, ...] = () ``` Bases: `BaseModel` A named, typed connection point on a block's interface. Source code in `packages/gds-framework/gds/types/interface.py` ``` class Port(BaseModel, frozen=True): """A named, typed connection point on a block's interface.""" name: str type_tokens: frozenset[str] ``` ## TypeDef Bases: `BaseModel` A named, constrained type used in spaces and state. Each TypeDef wraps a Python type with an optional constraint predicate. 
`check_value()` checks both the Python type and the constraint at runtime. The optional `constraint_kind` field enables lossless round-tripping of common constraint patterns through OWL/SHACL export. When set, the OWL exporter emits SHACL property shapes instead of an opaque `hasConstraint: true` boolean, promoting the constraint from R3 (lossy) to R2 (structurally representable). Source code in `packages/gds-framework/gds/types/typedef.py` ``` class TypeDef(BaseModel): """A named, constrained type used in spaces and state. Each TypeDef wraps a Python type with an optional constraint predicate. ``check_value()`` checks both the Python type and the constraint at runtime. The optional ``constraint_kind`` field enables lossless round-tripping of common constraint patterns through OWL/SHACL export. When set, the OWL exporter emits SHACL property shapes instead of an opaque ``hasConstraint: true`` boolean, promoting the constraint from R3 (lossy) to R2 (structurally representable). """ model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) name: str python_type: type description: str = "" constraint: Callable[[Any], bool] | None = None units: str | None = None constraint_kind: ConstraintKind | None = None constraint_bounds: tuple[float, float] | None = None constraint_values: tuple[Any, ...] | None = None def check_value(self, value: Any) -> bool: """Check if a value satisfies this type definition.""" if not isinstance(value, self.python_type): return False if self.constraint is None: return True try: return bool(self.constraint(value)) except Exception: return False ``` ## `check_value(value)` Check if a value satisfies this type definition. 
Source code in `packages/gds-framework/gds/types/typedef.py` ``` def check_value(self, value: Any) -> bool: """Check if a value satisfies this type definition.""" if not isinstance(value, self.python_type): return False if self.constraint is None: return True try: return bool(self.constraint(value)) except Exception: return False ``` ## Tokens Tokenize a signature string into a normalized frozen set of tokens. Splitting rules (applied in order): 1. Apply Unicode NFC normalization (so that e.g. é as base+combining matches precomposed é). 1. Split on ' + ' (the compound-type joiner). 1. Split each part on ', ' (comma-space). 1. Strip whitespace and lowercase each token. 1. Discard empty strings. Source code in `packages/gds-framework/gds/types/tokens.py` ``` def tokenize(signature: str) -> frozenset[str]: """Tokenize a signature string into a normalized frozen set of tokens. Splitting rules (applied in order): 1. Apply Unicode NFC normalization (so that e.g. é as base+combining matches precomposed é). 2. Split on ' + ' (the compound-type joiner). 3. Split each part on ', ' (comma-space). 4. Strip whitespace and lowercase each token. 5. Discard empty strings. """ if not signature: return frozenset() signature = unicodedata.normalize("NFC", signature) tokens: set[str] = set() for plus_part in signature.split(" + "): for comma_part in plus_part.split(", "): normalized = comma_part.strip().lower() if normalized: tokens.add(normalized) return frozenset(tokens) ``` Return True if *a* and *b* share at least one token. Source code in `packages/gds-framework/gds/types/tokens.py` ``` def tokens_overlap(a: str, b: str) -> bool: """Return True if *a* and *b* share at least one token.""" a_tokens = tokenize(a) b_tokens = tokenize(b) if not a_tokens or not b_tokens: return False return bool(a_tokens & b_tokens) ``` Return True if every token in *child* appears in *parent*. Returns True if child is empty (vacuous truth). 
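The three token helpers compose; below, `tokenize()` is reproduced from the listing above so the demos run standalone:

```python
import unicodedata

def tokenize(signature: str) -> frozenset[str]:
    """As listed above: NFC-normalize, split on ' + ' then ', ', lowercase."""
    if not signature:
        return frozenset()
    signature = unicodedata.normalize("NFC", signature)
    tokens: set[str] = set()
    for plus_part in signature.split(" + "):
        for comma_part in plus_part.split(", "):
            normalized = comma_part.strip().lower()
            if normalized:
                tokens.add(normalized)
    return frozenset(tokens)

def tokens_overlap(a: str, b: str) -> bool:
    a_tokens, b_tokens = tokenize(a), tokenize(b)
    return bool(a_tokens and b_tokens and (a_tokens & b_tokens))

def tokens_subset(child: str, parent: str) -> bool:
    return tokenize(child) <= tokenize(parent)  # empty child is vacuously True

print(tokens_overlap("Price, Volume", "volume + signal"))  # True ('volume')
print(tokens_subset("price", "Price, Volume"))             # True
print(tokens_subset("", "anything"))                       # True (vacuous)
```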
Source code in `packages/gds-framework/gds/types/tokens.py` ``` def tokens_subset(child: str, parent: str) -> bool: """Return True if every token in *child* appears in *parent*. Returns True if child is empty (vacuous truth). """ child_tokens = tokenize(child) if not child_tokens: return True return child_tokens <= tokenize(parent) ``` # gds.verification ## Engine Run verification checks against a SystemIR. Parameters: | Name | Type | Description | Default | | -------- | ----------------------------------------------- | --------------------- | ---------------------------------------------------------- | | `system` | `SystemIR` | The system to verify. | *required* | | `checks` | `list[Callable[[SystemIR], list[Finding]]] \| None` | Optional subset of checks. Defaults to all generic checks. | `None` | Source code in `packages/gds-framework/gds/verification/engine.py` ``` def verify( system: SystemIR, checks: list[Callable[[SystemIR], list[Finding]]] | None = None, ) -> VerificationReport: """Run verification checks against a SystemIR. Args: system: The system to verify. checks: Optional subset of checks. Defaults to all generic checks. """ checks = checks or ALL_CHECKS findings: list[Finding] = [] for check_fn in checks: findings.extend(check_fn(system)) return VerificationReport(system_name=system.name, findings=findings) ``` ## Findings Bases: `BaseModel` A single verification check result — pass or fail with context. Source code in `packages/gds-framework/gds/verification/findings.py` ``` class Finding(BaseModel): """A single verification check result — pass or fail with context.""" check_id: str severity: Severity message: str source_elements: list[str] = Field(default_factory=list) passed: bool exportable_predicate: str = "" ``` Bases: `StrEnum` Severity level of a verification finding.
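As a concrete sketch of how findings roll up into severity counts: plain dataclasses stand in for the Pydantic models here (and `(str, Enum)` for `StrEnum`, for broad Python compatibility), and the counting rule is copied from `VerificationReport.errors` and `.warnings`, where only *failed* findings count toward each bucket.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(str, Enum):  # source uses StrEnum; (str, Enum) is equivalent here
    ERROR = "error"
    WARNING = "warning"
    INFO = "info"

@dataclass
class Finding:  # dataclass stand-in for the Pydantic model above
    check_id: str
    severity: Severity
    message: str
    passed: bool
    source_elements: list[str] = field(default_factory=list)

findings = [
    Finding("G-001", Severity.ERROR, "wiring label mismatch", passed=False),
    Finding("G-002", Severity.ERROR, "signature complete", passed=True),
    Finding("SC-001", Severity.WARNING, "orphan state variable", passed=False),
]

# Same rule as VerificationReport: passing findings never count as errors
# or warnings, so the passed G-002 check does not inflate the error count.
errors = sum(1 for f in findings if not f.passed and f.severity == Severity.ERROR)
warnings = sum(1 for f in findings if not f.passed and f.severity == Severity.WARNING)
print(errors, warnings)  # 1 1
```

The check IDs and messages are illustrative; real findings come from the check functions documented below.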
Source code in `packages/gds-framework/gds/verification/findings.py` ``` class Severity(StrEnum): """Severity level of a verification finding.""" ERROR = "error" WARNING = "warning" INFO = "info" ``` Bases: `BaseModel` Aggregated verification results for a system. Source code in `packages/gds-framework/gds/verification/findings.py` ``` class VerificationReport(BaseModel): """Aggregated verification results for a system.""" system_name: str findings: list[Finding] = Field(default_factory=list) @property def errors(self) -> int: return sum( 1 for f in self.findings if not f.passed and f.severity == Severity.ERROR ) @property def warnings(self) -> int: return sum( 1 for f in self.findings if not f.passed and f.severity == Severity.WARNING ) @property def info_count(self) -> int: return sum( 1 for f in self.findings if not f.passed and f.severity == Severity.INFO ) @property def checks_passed(self) -> int: return sum(1 for f in self.findings if f.passed) @property def checks_total(self) -> int: return len(self.findings) ``` ## Generic Checks Generic verification checks G-001 through G-006. These checks operate on the domain-neutral SystemIR. They verify type consistency, structural completeness, and graph topology without referencing any domain-specific block types or semantics. ## `check_g001_domain_codomain_matching(system)` G-001: Domain/Codomain Matching. For every covariant block-to-block wiring, verify the label is consistent with source forward_out or target forward_in. Contravariant wirings are skipped (handled by G-003). Property: For every wiring w where w.direction = COVARIANT and both endpoints are blocks: tokens(w.label) is a subset of tokens(source.forward_out) OR tokens(target.forward_in). See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g001_domain_codomain_matching(system: SystemIR) -> list[Finding]: """G-001: Domain/Codomain Matching. 
For every covariant block-to-block wiring, verify the label is consistent with source forward_out or target forward_in. Contravariant wirings are skipped (handled by G-003). Property: For every wiring w where w.direction = COVARIANT and both endpoints are blocks: tokens(w.label) is a subset of tokens(source.forward_out) OR tokens(target.forward_in). See: docs/framework/design/check-specifications.md """ findings = [] block_sigs = {b.name: b.signature for b in system.blocks} for wiring in system.wirings: if wiring.direction != FlowDirection.COVARIANT: continue if wiring.source not in block_sigs or wiring.target not in block_sigs: continue src_out = block_sigs[wiring.source][1] # forward_out tgt_in = block_sigs[wiring.target][0] # forward_in if not src_out or not tgt_in: findings.append( Finding( check_id="G-001", severity=Severity.ERROR, message=( f"Cannot verify domain/codomain: " f"{wiring.source} out={src_out!r}, " f"{wiring.target} in={tgt_in!r}" ), source_elements=[wiring.source, wiring.target], passed=False, ) ) continue compatible = tokens_subset(wiring.label, src_out) or tokens_subset( wiring.label, tgt_in ) findings.append( Finding( check_id="G-001", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r}: " f"{wiring.source} out={src_out!r} -> {wiring.target} in={tgt_in!r}" + ("" if compatible else " — MISMATCH") ), source_elements=[wiring.source, wiring.target], passed=compatible, ) ) return findings ``` ## `check_g002_signature_completeness(system)` G-002: Signature Completeness. Every block must have at least one non-empty input slot and at least one non-empty output slot. BoundaryAction blocks (block_type == "boundary") are exempt from the input requirement -- they have no inputs by design, since they model exogenous signals entering the system from outside. 
Property: For every block b: has_output(b) is True, and (if b is not a BoundaryAction) has_input(b) is True, where has_input/has_output check that at least one of the forward/backward slots is non-empty. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g002_signature_completeness(system: SystemIR) -> list[Finding]: """G-002: Signature Completeness. Every block must have at least one non-empty input slot and at least one non-empty output slot. BoundaryAction blocks (block_type == "boundary") are exempt from the input requirement -- they have no inputs by design, since they model exogenous signals entering the system from outside. Property: For every block b: has_output(b) is True, and (if b is not a BoundaryAction) has_input(b) is True, where has_input/has_output check that at least one of the forward/backward slots is non-empty. See: docs/framework/design/check-specifications.md """ findings = [] for block in system.blocks: fwd_in, fwd_out, bwd_in, bwd_out = block.signature has_input = bool(fwd_in) or bool(bwd_in) has_output = bool(fwd_out) or bool(bwd_out) # BoundaryAction blocks have no inputs by design — only check outputs is_boundary = block.block_type == "boundary" has_required = has_output if is_boundary else has_input and has_output missing = [] if not has_input: missing.append("no inputs") if not has_output: missing.append("no outputs") findings.append( Finding( check_id="G-002", severity=Severity.ERROR, message=( f"{block.name}: signature " f"({fwd_in!r}, {fwd_out!r}, " f"{bwd_in!r}, {bwd_out!r})" + (f" — {', '.join(missing)}" if missing else "") ), source_elements=[block.name], passed=has_required, ) ) return findings ``` ## `check_g003_direction_consistency(system)` G-003: Direction Consistency. Validate direction flag consistency and contravariant port-slot matching. 
Two validations: A) Flag consistency -- `direction`, `is_feedback`, `is_temporal` must not contradict: - COVARIANT + is_feedback -> ERROR (feedback implies contravariant) - CONTRAVARIANT + is_temporal -> ERROR (temporal implies covariant) B) Contravariant port-slot matching -- for CONTRAVARIANT wirings, the label must be a token-subset of the source's backward_out (signature[3]) or the target's backward_in (signature[2]). G-001 already covers the covariant side. Property: (A) NOT (COVARIANT AND is_feedback) and NOT (CONTRAVARIANT AND is_temporal). (B) For contravariant wirings: tokens(label) is a subset of tokens(source.backward_out) OR tokens(target.backward_in). See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g003_direction_consistency(system: SystemIR) -> list[Finding]: """G-003: Direction Consistency. Validate direction flag consistency and contravariant port-slot matching. Two validations: A) Flag consistency -- ``direction``, ``is_feedback``, ``is_temporal`` must not contradict: - COVARIANT + is_feedback -> ERROR (feedback implies contravariant) - CONTRAVARIANT + is_temporal -> ERROR (temporal implies covariant) B) Contravariant port-slot matching -- for CONTRAVARIANT wirings, the label must be a token-subset of the source's backward_out (signature[3]) or the target's backward_in (signature[2]). G-001 already covers the covariant side. Property: (A) NOT (COVARIANT AND is_feedback) and NOT (CONTRAVARIANT AND is_temporal). (B) For contravariant wirings: tokens(label) is a subset of tokens(source.backward_out) OR tokens(target.backward_in). 
See: docs/framework/design/check-specifications.md """ findings = [] block_sigs = {b.name: b.signature for b in system.blocks} for wiring in system.wirings: # A) Flag consistency if wiring.direction == FlowDirection.COVARIANT and wiring.is_feedback: findings.append( Finding( check_id="G-003", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r} " f"({wiring.source} -> {wiring.target}): " f"COVARIANT + is_feedback — contradiction" ), source_elements=[wiring.source, wiring.target], passed=False, ) ) continue if wiring.direction == FlowDirection.CONTRAVARIANT and wiring.is_temporal: findings.append( Finding( check_id="G-003", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r} " f"({wiring.source} -> {wiring.target}): " f"CONTRAVARIANT + is_temporal — contradiction" ), source_elements=[wiring.source, wiring.target], passed=False, ) ) continue # B) Contravariant port-slot matching (G-001 covers covariant) if wiring.direction == FlowDirection.CONTRAVARIANT: if wiring.source not in block_sigs or wiring.target not in block_sigs: # Non-block endpoints — G-004 handles dangling references continue src_bwd_out = block_sigs[wiring.source][3] # backward_out tgt_bwd_in = block_sigs[wiring.target][2] # backward_in if not src_bwd_out and not tgt_bwd_in: findings.append( Finding( check_id="G-003", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r} " f"({wiring.source} -> {wiring.target}): " f"CONTRAVARIANT but both backward " f"ports are empty" ), source_elements=[wiring.source, wiring.target], passed=False, ) ) continue compatible = tokens_subset(wiring.label, src_bwd_out) or tokens_subset( wiring.label, tgt_bwd_in ) findings.append( Finding( check_id="G-003", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r}: " f"{wiring.source} bwd_out={src_bwd_out!r} -> " f"{wiring.target} bwd_in={tgt_bwd_in!r}" + ("" if compatible else " — MISMATCH") ), source_elements=[wiring.source, wiring.target], passed=compatible, ) ) return findings ``` ## 
`check_g004_dangling_wirings(system)` G-004: Dangling Wirings. Flag wirings whose source or target is not in the system's block or input set. A dangling reference indicates a typo or missing block. Property: For every wiring w: w.source in N and w.target in N, where N = {b.name for b in blocks} union {i.name for i in inputs}. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g004_dangling_wirings(system: SystemIR) -> list[Finding]: """G-004: Dangling Wirings. Flag wirings whose source or target is not in the system's block or input set. A dangling reference indicates a typo or missing block. Property: For every wiring w: w.source in N and w.target in N, where N = {b.name for b in blocks} union {i.name for i in inputs}. See: docs/framework/design/check-specifications.md """ findings = [] known_names = {b.name for b in system.blocks} for inp in system.inputs: known_names.add(inp.name) for wiring in system.wirings: src_ok = wiring.source in known_names tgt_ok = wiring.target in known_names ok = src_ok and tgt_ok issues = [] if not src_ok: issues.append(f"source {wiring.source!r} unknown") if not tgt_ok: issues.append(f"target {wiring.target!r} unknown") findings.append( Finding( check_id="G-004", severity=Severity.ERROR, message=( f"Wiring {wiring.label!r} ({wiring.source} -> {wiring.target})" + (f" — {', '.join(issues)}" if issues else "") ), source_elements=[wiring.source, wiring.target], passed=ok, ) ) return findings ``` ## `check_g005_sequential_type_compatibility(system)` G-005: Sequential Type Compatibility. In stack (sequential) composition, the wiring label must be a token-subset of BOTH the source's forward_out AND the target's forward_in. This is stricter than G-001, which only requires matching one side. 
Property: For every covariant, non-temporal wiring w between blocks: tokens(w.label) is a subset of tokens(source.forward_out) AND tokens(w.label) is a subset of tokens(target.forward_in). See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g005_sequential_type_compatibility(system: SystemIR) -> list[Finding]: """G-005: Sequential Type Compatibility. In stack (sequential) composition, the wiring label must be a token-subset of BOTH the source's forward_out AND the target's forward_in. This is stricter than G-001, which only requires matching one side. Property: For every covariant, non-temporal wiring w between blocks: tokens(w.label) is a subset of tokens(source.forward_out) AND tokens(w.label) is a subset of tokens(target.forward_in). See: docs/framework/design/check-specifications.md """ findings = [] block_sigs = {b.name: b.signature for b in system.blocks} block_names = set(block_sigs.keys()) for wiring in system.wirings: if wiring.direction != FlowDirection.COVARIANT: continue if wiring.is_temporal: continue if wiring.source not in block_names or wiring.target not in block_names: continue src_out = block_sigs[wiring.source][1] # forward_out tgt_in = block_sigs[wiring.target][0] # forward_in if not src_out or not tgt_in: continue label_in_out = tokens_subset(wiring.label, src_out) label_in_in = tokens_subset(wiring.label, tgt_in) compatible = label_in_out and label_in_in findings.append( Finding( check_id="G-005", severity=Severity.ERROR, message=( f"Stack {wiring.source} ; {wiring.target}: " f"out={src_out!r}, in={tgt_in!r}, wiring={wiring.label!r}" + ("" if compatible else " — type mismatch") ), source_elements=[wiring.source, wiring.target], passed=compatible, ) ) return findings ``` ## `check_g006_covariant_acyclicity(system)` G-006: Covariant Acyclicity. The covariant (forward) flow graph must be a directed acyclic graph (DAG). 
Temporal wirings and contravariant wirings are excluded because they do not create within-evaluation algebraic dependencies. Property: Let G_cov = (V, E_cov) where V = {b.name for b in blocks} and E_cov = {(w.source, w.target) for w in wirings if w.direction = COVARIANT and not w.is_temporal}. G_cov is acyclic. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/generic_checks.py` ``` def check_g006_covariant_acyclicity(system: SystemIR) -> list[Finding]: """G-006: Covariant Acyclicity. The covariant (forward) flow graph must be a directed acyclic graph (DAG). Temporal wirings and contravariant wirings are excluded because they do not create within-evaluation algebraic dependencies. Property: Let G_cov = (V, E_cov) where V = {b.name for b in blocks} and E_cov = {(w.source, w.target) for w in wirings if w.direction = COVARIANT and not w.is_temporal}. G_cov is acyclic. See: docs/framework/design/check-specifications.md """ block_names = {b.name for b in system.blocks} adj: dict[str, list[str]] = {name: [] for name in block_names} for wiring in system.wirings: if wiring.direction != FlowDirection.COVARIANT: continue if wiring.is_temporal: continue if wiring.source in block_names and wiring.target in block_names: adj[wiring.source].append(wiring.target) # DFS cycle detection WHITE, GRAY, BLACK = 0, 1, 2 color = {name: WHITE for name in block_names} cycle_path: list[str] = [] has_cycle = False def dfs(node: str) -> bool: nonlocal has_cycle color[node] = GRAY cycle_path.append(node) for neighbor in adj[node]: if color[neighbor] == GRAY: has_cycle = True return True if color[neighbor] == WHITE and dfs(neighbor): return True cycle_path.pop() color[node] = BLACK return False for node in block_names: if color[node] == WHITE and dfs(node): break if has_cycle: return [ Finding( check_id="G-006", severity=Severity.ERROR, message=( f"Covariant flow graph contains a cycle: {' -> '.join(cycle_path)}" ), 
source_elements=cycle_path, passed=False, ) ] return [ Finding( check_id="G-006", severity=Severity.ERROR, message="Covariant flow graph is acyclic (DAG)", source_elements=[], passed=True, ) ] ``` ## Spec Checks SC-001: Completeness. Every entity variable is updated by at least one mechanism. Detects orphan state variables that can never change -- a likely specification error. Property: Let U = {(e, v) for m in Mechanisms for (e, v) in m.updates}. For every entity e and variable v in e.variables: (e.name, v) in U. The mechanism update map is surjective onto the state variable set. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_completeness(spec: GDSSpec) -> list[Finding]: """SC-001: Completeness. Every entity variable is updated by at least one mechanism. Detects orphan state variables that can never change -- a likely specification error. Property: Let U = {(e, v) for m in Mechanisms for (e, v) in m.updates}. For every entity e and variable v in e.variables: (e.name, v) in U. The mechanism update map is surjective onto the state variable set. 
See: docs/framework/design/check-specifications.md """ findings: list[Finding] = [] all_updates: set[tuple[str, str]] = set() for block in spec.blocks.values(): if isinstance(block, Mechanism): for entity_name, var_name in block.updates: all_updates.add((entity_name, var_name)) orphans: list[str] = [] for entity in spec.entities.values(): for var_name in entity.variables: if (entity.name, var_name) not in all_updates: orphans.append(f"{entity.name}.{var_name}") if orphans: findings.append( Finding( check_id="SC-001", severity=Severity.WARNING, message=( f"Orphan state variables never updated by any mechanism: {orphans}" ), source_elements=orphans, passed=False, ) ) else: findings.append( Finding( check_id="SC-001", severity=Severity.INFO, message="All state variables are updated by at least one mechanism", passed=True, ) ) return findings ``` SC-002: Determinism. Within each wiring, no two mechanisms update the same variable. Detects write conflicts where multiple mechanisms try to modify the same state variable within the same composition. Property: For every wiring w and every (entity, variable) pair (e, v): |{m in w.block_names : m is Mechanism, (e, v) in m.updates}| \<= 1. The state transition f must be a function, not a multi-valued relation. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_determinism(spec: GDSSpec) -> list[Finding]: """SC-002: Determinism. Within each wiring, no two mechanisms update the same variable. Detects write conflicts where multiple mechanisms try to modify the same state variable within the same composition. Property: For every wiring w and every (entity, variable) pair (e, v): |{m in w.block_names : m is Mechanism, (e, v) in m.updates}| <= 1. The state transition f must be a function, not a multi-valued relation. 
See: docs/framework/design/check-specifications.md """ findings: list[Finding] = [] for wiring in spec.wirings.values(): update_map: dict[tuple[str, str], list[str]] = defaultdict(list) for bname in wiring.block_names: block = spec.blocks.get(bname) if block is not None and isinstance(block, Mechanism): for entity_name, var_name in block.updates: update_map[(entity_name, var_name)].append(bname) for (ename, vname), mechs in update_map.items(): if len(mechs) > 1: findings.append( Finding( check_id="SC-002", severity=Severity.ERROR, message=( f"Write conflict in wiring '{wiring.name}': " f"{ename}.{vname} updated by {mechs}" ), source_elements=mechs, passed=False, ) ) if not any(f.check_id == "SC-002" for f in findings): findings.append( Finding( check_id="SC-002", severity=Severity.INFO, message="No write conflicts detected", passed=True, ) ) return findings ``` SC-003: Reachability. Can signals reach from one block to another through the wiring graph? Maps to the GDS attainability correspondence. Property: There exists a directed path in the wire graph from from_block to to_block, where edges are (wire.source, wire.target) across all SpecWiring instances. Unlike other semantic checks, requires explicit from_block and to_block arguments. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_reachability(spec: GDSSpec, from_block: str, to_block: str) -> list[Finding]: """SC-003: Reachability. Can signals reach from one block to another through the wiring graph? Maps to the GDS attainability correspondence. Property: There exists a directed path in the wire graph from from_block to to_block, where edges are (wire.source, wire.target) across all SpecWiring instances. Unlike other semantic checks, requires explicit from_block and to_block arguments. 
See: docs/framework/design/check-specifications.md """ adj: dict[str, set[str]] = defaultdict(set) for wiring in spec.wirings.values(): for wire in wiring.wires: adj[wire.source].add(wire.target) visited: set[str] = set() queue = [from_block] reachable = False while queue: current = queue.pop(0) if current == to_block: reachable = True break if current in visited: continue visited.add(current) queue.extend(adj.get(current, set())) if reachable: return [ Finding( check_id="SC-003", severity=Severity.INFO, message=f"Block '{from_block}' can reach '{to_block}'", source_elements=[from_block, to_block], passed=True, ) ] return [ Finding( check_id="SC-003", severity=Severity.WARNING, message=f"Block '{from_block}' cannot reach '{to_block}'", source_elements=[from_block, to_block], passed=False, ) ] ``` SC-004: Type Safety. Wire spaces match source and target block expectations. Verifies that space references on wires correspond to registered spaces. Property: For every wire in every SpecWiring: if wire.space is non-empty, then wire.space is in spec.spaces. Referential integrity of space declarations on wiring channels. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_type_safety(spec: GDSSpec) -> list[Finding]: """SC-004: Type Safety. Wire spaces match source and target block expectations. Verifies that space references on wires correspond to registered spaces. Property: For every wire in every SpecWiring: if wire.space is non-empty, then wire.space is in spec.spaces. Referential integrity of space declarations on wiring channels. 
See: docs/framework/design/check-specifications.md """ findings: list[Finding] = [] for wiring in spec.wirings.values(): for wire in wiring.wires: if wire.space and wire.space not in spec.spaces: findings.append( Finding( check_id="SC-004", severity=Severity.ERROR, message=( f"Wire {wire.source} -> {wire.target} references " f"unregistered space '{wire.space}'" ), source_elements=[wire.source, wire.target], passed=False, ) ) if not any(f.check_id == "SC-004" for f in findings): findings.append( Finding( check_id="SC-004", severity=Severity.INFO, message="All wire space references are valid", passed=True, ) ) return findings ``` SC-005: Parameter References. All parameter references in blocks resolve to registered parameters. Validates that every `params_used` entry on blocks corresponds to a parameter definition in the spec's `parameter_schema`. Property: For every block b implementing HasParams: {p for p in b.params_used} is a subset of spec.parameter_schema.names(). See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_parameter_references(spec: GDSSpec) -> list[Finding]: """SC-005: Parameter References. All parameter references in blocks resolve to registered parameters. Validates that every ``params_used`` entry on blocks corresponds to a parameter definition in the spec's ``parameter_schema``. Property: For every block b implementing HasParams: {p for p in b.params_used} is a subset of spec.parameter_schema.names(). 
See: docs/framework/design/check-specifications.md """ findings: list[Finding] = [] param_names = spec.parameter_schema.names() unresolved: list[str] = [] for bname, block in spec.blocks.items(): if isinstance(block, HasParams): for param in block.params_used: if param not in param_names: unresolved.append(f"{bname} -> {param}") if unresolved: findings.append( Finding( check_id="SC-005", severity=Severity.ERROR, message=f"Unresolved parameter references: {unresolved}", source_elements=unresolved, passed=False, ) ) else: findings.append( Finding( check_id="SC-005", severity=Severity.INFO, message="All parameter references resolve to registered definitions", passed=True, ) ) return findings ``` SC-006/SC-007: Canonical Wellformedness. Canonical projection structural validity. Two sub-checks: - SC-006: At least one mechanism exists (f is non-empty). Property: |project_canonical(spec).mechanism_blocks| >= 1. - SC-007: State space X is non-empty (entities with variables exist). Property: |project_canonical(spec).state_variables| >= 1. Together these ensure the canonical form h = f . g is non-degenerate. See: docs/framework/design/check-specifications.md Source code in `packages/gds-framework/gds/verification/spec_checks.py` ``` def check_canonical_wellformedness(spec: GDSSpec) -> list[Finding]: """SC-006/SC-007: Canonical Wellformedness. Canonical projection structural validity. Two sub-checks: - SC-006: At least one mechanism exists (f is non-empty). Property: |project_canonical(spec).mechanism_blocks| >= 1. - SC-007: State space X is non-empty (entities with variables exist). Property: |project_canonical(spec).state_variables| >= 1. Together these ensure the canonical form h = f . g is non-degenerate. 
See: docs/framework/design/check-specifications.md """ findings: list[Finding] = [] canonical = project_canonical(spec) if not canonical.mechanism_blocks: findings.append( Finding( check_id="SC-006", severity=Severity.WARNING, message="No mechanisms found — state transition f is empty", passed=False, ) ) else: findings.append( Finding( check_id="SC-006", severity=Severity.INFO, message=( "State transition f has " f"{len(canonical.mechanism_blocks)} mechanism(s)" ), passed=True, ) ) if not canonical.state_variables: findings.append( Finding( check_id="SC-007", severity=Severity.WARNING, message="State space X is empty — no entity variables defined", passed=False, ) ) else: findings.append( Finding( check_id="SC-007", severity=Severity.INFO, message=( f"State space X has {len(canonical.state_variables)} variable(s)" ), passed=True, ) ) return findings ``` # Visualization (gds-viz) # gds-viz **Mermaid diagram renderers** for [gds-framework](https://blockscience.github.io/gds-framework) specifications. ## Six Views gds-viz provides six views — each a different projection of the GDS specification `{h, X}`: | View | Function | Input | Answers | | ------------------------ | ------------------------------- | -------------- | ------------------------------------------- | | 1. Structural | `system_to_mermaid()` | `SystemIR` | What blocks exist and how are they wired? | | 2. Canonical GDS | `canonical_to_mermaid()` | `CanonicalGDS` | What is the formal decomposition h = f ∘ g? | | 3. Architecture (role) | `spec_to_mermaid()` | `GDSSpec` | How do blocks group by GDS role? | | 4. Architecture (domain) | `spec_to_mermaid(group_by=...)` | `GDSSpec` | How do blocks group by domain/agent? | | 5. Parameter influence | `params_to_mermaid()` | `GDSSpec` | What does each parameter control? | | 6. Traceability | `trace_to_mermaid()` | `GDSSpec` | What can affect a specific state variable? 
| ## Quick Start ``` uv add gds-viz # or: pip install gds-viz ``` ``` from gds_viz import system_to_mermaid, canonical_to_mermaid, spec_to_mermaid # View 1: Structural mermaid = system_to_mermaid(system_ir) # View 2: Canonical from gds.canonical import project_canonical mermaid = canonical_to_mermaid(project_canonical(spec)) # View 3: Architecture by role mermaid = spec_to_mermaid(spec) ``` ## Sample Output Here is a canonical GDS diagram generated from the SIR epidemic model -- `canonical_to_mermaid(project_canonical(spec))`: ``` %%{init:{"theme":"neutral"}}%% flowchart LR classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764 classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12 classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569 X_t(["X_t
Susceptible.count, Infected.count, Recovered.count"]):::state X_next(["X_{t+1}
Susceptible.count, Infected.count, Recovered.count"]):::state Theta{{"Θ
gamma, beta, contact_rate"}}:::param subgraph U ["Boundary (U)"] Contact_Process[Contact Process]:::boundary end subgraph g ["Policy (g)"] Infection_Policy[Infection Policy]:::policy end subgraph f ["Mechanism (f)"] Update_Susceptible[Update Susceptible]:::mechanism Update_Infected[Update Infected]:::mechanism Update_Recovered[Update Recovered]:::mechanism end X_t --> U U --> g g --> f Update_Susceptible -.-> |Susceptible.count| X_next Update_Infected -.-> |Infected.count| X_next Update_Recovered -.-> |Recovered.count| X_next Theta -.-> g Theta -.-> f style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534 ``` See all six views in the [Views Gallery](https://blockscience.github.io/gds-core/viz/guide/views/index.md). ## What gds-viz Does NOT Cover The six views exhaust what is **derivable from the GDS specification** `{h, X}`. Two views are deliberately excluded: - **State Machine View** — requires discrete states and transition guards. GDS defines a continuous state space X, not a finite set of named states. - **Simulation / Execution Order View** — requires operational semantics. GDS specifies structure, not runtime. ## Credits **Author:** [Rohan Mehta](https://github.com/rororowyourboat) — [BlockScience](https://block.science/) **Theoretical foundation:** [Dr. Michael Zargham](https://github.com/mzargham) and [Dr. Jamsheed Shorish](https://github.com/jshorish) **Lineage:** Part of the [cadCAD](https://github.com/cadCAD-org/cadCAD) ecosystem for Complex Adaptive Dynamics. 
# Getting Started ## Installation ``` pip install gds-viz ``` Or with [uv](https://docs.astral.sh/uv/): ``` uv add gds-viz ``` gds-viz imports as `gds_viz`: ``` from gds_viz import system_to_mermaid, canonical_to_mermaid, spec_to_mermaid ``` ## Requirements - Python 3.12 or later - [gds-framework](https://pypi.org/project/gds-framework/) >= 0.2.3 (installed automatically) ## How It Works gds-viz takes GDS objects (`SystemIR`, `GDSSpec`, `CanonicalGDS`) and returns **Mermaid markdown strings**. It does not render images directly -- you paste the output into any Mermaid-compatible renderer. ``` from gds_viz import system_to_mermaid mermaid_str = system_to_mermaid(system) print(mermaid_str) # paste into GitHub markdown, mermaid.live, etc. ``` ## Quick Start: SIR Epidemic Model This example uses the SIR epidemic model from `gds-examples` to demonstrate five of the six views (View 4, architecture by domain, is the same `spec_to_mermaid()` call with a `group_by` argument). The model has three entities (Susceptible, Infected, Recovered), one boundary action, one policy, and three mechanisms. ### Step 1: Build the Model ``` from sir_epidemic.model import build_spec, build_system from gds.canonical import project_canonical spec = build_spec() system = build_system() canonical = project_canonical(spec) ``` ### Step 2: Generate Views ``` from gds_viz import ( system_to_mermaid, canonical_to_mermaid, spec_to_mermaid, params_to_mermaid, trace_to_mermaid, ) # View 1: Structural -- compiled block topology print(system_to_mermaid(system)) # View 2: Canonical -- h = f . g decomposition print(canonical_to_mermaid(canonical)) # View 3: Architecture by role -- blocks grouped by GDS role print(spec_to_mermaid(spec)) # View 5: Parameter influence -- Theta -> blocks -> entities print(params_to_mermaid(spec)) # View 6: Traceability -- what affects Susceptible.count?
print(trace_to_mermaid(spec, "Susceptible", "count"))
```

### Step 3: Render

Paste the Mermaid output into any compatible renderer:

- **GitHub / GitLab** -- native Mermaid support in markdown files
- **VS Code** -- with a Mermaid extension
- **Obsidian** -- built-in support
- **[mermaid.live](https://mermaid.live)** -- online editor
- **MkDocs** -- with `pymdownx.superfences` Mermaid fence (used by this documentation)
- **marimo** -- `mo.mermaid(mermaid_str)` for interactive notebooks

## Rendered Output

Here is the actual output from the SIR epidemic model, rendered inline.

### View 1: Structural

The compiled block graph from `SystemIR`. Shows composition topology with role-based shapes and wiring types.

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  Contact_Process([Contact Process]):::boundary
  Infection_Policy[Infection Policy]:::generic
  Update_Susceptible[[Update Susceptible]]:::mechanism
  Update_Infected[[Update Infected]]:::mechanism
  Update_Recovered[[Update Recovered]]:::mechanism
  Contact_Process --Contact Signal--> Infection_Policy
  Infection_Policy --Susceptible Delta--> Update_Susceptible
  Infection_Policy --Infected Delta--> Update_Infected
  Infection_Policy --Recovered Delta--> Update_Recovered
```

### View 2: Canonical GDS

The mathematical decomposition: X_t --> U --> g --> f --> X\_{t+1}. Shows the abstract dynamical system with state (X), input (U), policy (g), mechanism (f), and parameter space (Theta).
```
%%{init:{"theme":"neutral"}}%%
flowchart LR
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  X_t(["X_t<br/>Susceptible.count, Infected.count, Recovered.count"]):::state
  X_next(["X_{t+1}<br/>Susceptible.count, Infected.count, Recovered.count"]):::state
  Theta{{"Θ<br/>gamma, beta, contact_rate"}}:::param
  subgraph U ["Boundary (U)"]
    Contact_Process[Contact Process]:::boundary
  end
  subgraph g ["Policy (g)"]
    Infection_Policy[Infection Policy]:::policy
  end
  subgraph f ["Mechanism (f)"]
    Update_Susceptible[Update Susceptible]:::mechanism
    Update_Infected[Update Infected]:::mechanism
    Update_Recovered[Update Recovered]:::mechanism
  end
  X_t --> U
  U --> g
  g --> f
  Update_Susceptible -.-> |Susceptible.count| X_next
  Update_Infected -.-> |Infected.count| X_next
  Update_Recovered -.-> |Recovered.count| X_next
  Theta -.-> g
  Theta -.-> f
  style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af
  style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e
  style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534
```

### View 3: Architecture by Role

Blocks grouped by GDS role. Entity cylinders show which state variables each mechanism writes.

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  subgraph boundary ["Boundary (U)"]
    Contact_Process([Contact Process]):::boundary
  end
  subgraph policy ["Policy (g)"]
    Infection_Policy[Infection Policy]:::policy
  end
  subgraph mechanism ["Mechanism (f)"]
    Update_Susceptible[[Update Susceptible]]:::mechanism
    Update_Infected[[Update Infected]]:::mechanism
    Update_Recovered[[Update Recovered]]:::mechanism
  end
  entity_Susceptible[("Susceptible<br/>count: S")]:::entity
  entity_Infected[("Infected<br/>count: I")]:::entity
  entity_Recovered[("Recovered<br/>count: R")]:::entity
  Update_Susceptible -.-> entity_Susceptible
  Update_Infected -.-> entity_Infected
  Update_Recovered -.-> entity_Recovered
  Contact_Process --ContactSignalSpace--> Infection_Policy
  Infection_Policy --DeltaSpace--> Update_Infected
  Infection_Policy --DeltaSpace--> Update_Recovered
  Infection_Policy --DeltaSpace--> Update_Susceptible
  style boundary fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af
  style policy fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e
  style mechanism fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534
```

## Where Visualization Fits

Visualization is a **post-compilation** concern. It operates on the same compiled artifacts that verification uses:

```
Define model  →  build_spec() / build_system()
      ↓
Compile       →  GDSSpec, SystemIR, CanonicalGDS
      ↓
   ┌──────────┴───────────┐
   ↓                      ↓
Verify (gds)      Visualize (gds-viz)
```

The six views are different projections of the same specification -- they never modify it, only read it. You can generate views at any point after compilation.

## Next Steps

- **[Views Guide](https://blockscience.github.io/gds-core/viz/guide/views/index.md)** -- detailed gallery of all six view types with rendered output
- **[Theming Guide](https://blockscience.github.io/gds-core/viz/guide/theming/index.md)** -- customizing diagram appearance with 5 built-in themes
- **[API Reference](https://blockscience.github.io/gds-core/viz/api/init/index.md)** -- full function signatures and options
- **[Visualization Guide](https://blockscience.github.io/gds-core/guides/visualization/index.md)** -- cross-DSL examples and interactive notebooks

# Theming

All gds-viz functions accept an optional `theme` parameter to control diagram appearance. The `MermaidTheme` type restricts values to Mermaid's five built-in themes.
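Conceptually, the five accepted values can be captured as a `typing.Literal`. A minimal sketch of the assumed shape of `MermaidTheme` (the actual definition lives inside `gds_viz`; this standalone version is for illustration only):

```python
from typing import Literal, get_args

# Sketch: assumed shape of gds_viz's MermaidTheme type -- a Literal over
# Mermaid's five built-in theme names (hypothetical standalone copy).
MermaidTheme = Literal["default", "neutral", "dark", "forest", "base"]

def is_valid_theme(name: str) -> bool:
    """Check a theme name against the literal's allowed values."""
    return name in get_args(MermaidTheme)

print(is_valid_theme("forest"))     # True
print(is_valid_theme("solarized"))  # False
```

Because the type is a `Literal`, a type checker flags misspelled theme names at annotation sites before any diagram is generated.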
## Usage

```
from gds_viz import system_to_mermaid, MermaidTheme

# Pass any theme name
mermaid = system_to_mermaid(system, theme="dark")

# Type-safe with MermaidTheme literal
theme: MermaidTheme = "forest"
mermaid = system_to_mermaid(system, theme=theme)
```

All view functions support theming:

```
from gds_viz import (
    system_to_mermaid,
    canonical_to_mermaid,
    spec_to_mermaid,
    block_to_mermaid,
    params_to_mermaid,
    trace_to_mermaid,
)

system_to_mermaid(system, theme="neutral")
canonical_to_mermaid(canonical, theme="dark")
spec_to_mermaid(spec, theme="forest")
block_to_mermaid(block, theme="neutral")
params_to_mermaid(spec, theme="base")
trace_to_mermaid(spec, "Entity", "var", theme="default")
```

## Available Themes

| Theme | Best For | Canvas |
| --------- | --------------------------------- | ------------------------- |
| `neutral` | Light backgrounds (GitHub, docs) | Muted gray -- **default** |
| `default` | Mermaid's built-in Material style | Blue-toned |
| `dark` | Dark-mode renderers | Dark canvas, light text |
| `forest` | Nature-inspired presentations | Green-tinted |
| `base` | Minimal chrome, very light fills | Near-white |

## Theme Comparison

The same SIR epidemic structural diagram rendered with different themes.
### Neutral (default)

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  Contact_Process([Contact Process]):::boundary
  Infection_Policy[Infection Policy]:::generic
  Update_Susceptible[[Update Susceptible]]:::mechanism
  Update_Infected[[Update Infected]]:::mechanism
  Update_Recovered[[Update Recovered]]:::mechanism
  Contact_Process --Contact Signal--> Infection_Policy
  Infection_Policy --Susceptible Delta--> Update_Susceptible
  Infection_Policy --Infected Delta--> Update_Infected
  Infection_Policy --Recovered Delta--> Update_Recovered
```

### Dark

```
%%{init:{"theme":"dark"}}%%
flowchart TD
  classDef boundary fill:#1e40af,stroke:#60a5fa,stroke-width:2px,color:#dbeafe
  classDef policy fill:#92400e,stroke:#fbbf24,stroke-width:2px,color:#fef3c7
  classDef mechanism fill:#166534,stroke:#4ade80,stroke-width:2px,color:#dcfce7
  classDef control fill:#581c87,stroke:#c084fc,stroke-width:2px,color:#f3e8ff
  classDef generic fill:#334155,stroke:#94a3b8,stroke-width:1px,color:#e2e8f0
  Contact_Process([Contact Process]):::boundary
  Infection_Policy[Infection Policy]:::generic
  Update_Susceptible[[Update Susceptible]]:::mechanism
  Update_Infected[[Update Infected]]:::mechanism
  Update_Recovered[[Update Recovered]]:::mechanism
  Contact_Process --Contact Signal--> Infection_Policy
  Infection_Policy --Susceptible Delta--> Update_Susceptible
  Infection_Policy --Infected Delta--> Update_Infected
  Infection_Policy --Recovered Delta--> Update_Recovered
```

### Forest

```
%%{init:{"theme":"forest"}}%%
flowchart TD
  classDef boundary fill:#a7f3d0,stroke:#059669,stroke-width:2px,color:#064e3b
  classDef policy fill:#fde68a,stroke:#b45309,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#15803d,stroke-width:2px,color:#14532d
  classDef control fill:#d9f99d,stroke:#65a30d,stroke-width:2px,color:#365314
  classDef generic fill:#d1d5db,stroke:#6b7280,stroke-width:1px,color:#1f2937
  Contact_Process([Contact Process]):::boundary
  Infection_Policy[Infection Policy]:::generic
  Update_Susceptible[[Update Susceptible]]:::mechanism
  Update_Infected[[Update Infected]]:::mechanism
  Update_Recovered[[Update Recovered]]:::mechanism
  Contact_Process --Contact Signal--> Infection_Policy
  Infection_Policy --Susceptible Delta--> Update_Susceptible
  Infection_Policy --Infected Delta--> Update_Infected
  Infection_Policy --Recovered Delta--> Update_Recovered
```

## Color Scheme

gds-viz uses a consistent color scheme across all views. Each role and element type has a dedicated palette that adapts to the selected theme.

### Neutral Theme Palette

| Element | Fill | Stroke | CSS Class |
| -------------- | ---------------------- | --------- | ----------- |
| BoundaryAction | `#93c5fd` (light blue) | `#2563eb` | `boundary` |
| Policy | `#fcd34d` (yellow) | `#d97706` | `policy` |
| Mechanism | `#86efac` (green) | `#16a34a` | `mechanism` |
| ControlAction | `#d8b4fe` (purple) | `#9333ea` | `control` |
| Generic | `#cbd5e1` (gray) | `#64748b` | `generic` |
| Entity | `#e2e8f0` (light gray) | `#475569` | `entity` |
| Parameter | `#fdba74` (orange) | `#ea580c` | `param` |
| State (X_t) | `#5eead4` (teal) | `#0d9488` | `state` |
| Target | `#fca5a5` (red) | `#dc2626` | `target` |

## Rendering Targets

Output is standard Mermaid markdown.
It renders in:

- **GitHub / GitLab** -- native Mermaid support in markdown files and issues
- **VS Code** -- with a Mermaid extension
- **Obsidian** -- built-in support
- **[mermaid.live](https://mermaid.live)** -- online editor and playground
- **MkDocs** -- with `pymdownx.superfences` Mermaid fence (used by this documentation)
- **marimo** -- `mo.mermaid(mermaid_str)` for interactive notebooks

# Views Gallery

gds-viz provides six complementary views of a GDS specification. Each view is a different projection of the same compiled artifacts, answering a different question about the system.

All examples on this page use the **SIR epidemic model** from `gds-examples`.

## View 1: Structural

The compiled block graph from `SystemIR`. Shows composition topology -- sequential, parallel, feedback, temporal -- with role-based shapes and wiring types.

```
from gds_viz import system_to_mermaid

mermaid = system_to_mermaid(system)
```

**Shape conventions:**

| Shape | Meaning | Role |
| ------------------------ | ------------------------------- | ------------------ |
| Stadium `([...])` | Exogenous input (no forward_in) | BoundaryAction |
| Double-bracket `[[...]]` | State sink (no forward_out) | Terminal Mechanism |
| Rectangle `[...]` | Has both inputs and outputs | Policy / other |

**Arrow conventions:**

| Arrow | Meaning |
| ------------- | ------------------------------------------ |
| Solid `-->` | Covariant forward flow |
| Thick `==>` | Contravariant feedback (within-evaluation) |
| Dashed `-.->` | Temporal loop (cross-boundary) |

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  Contact_Process([Contact Process]):::boundary
  Infection_Policy[Infection Policy]:::generic
  Update_Susceptible[[Update Susceptible]]:::mechanism
  Update_Infected[[Update Infected]]:::mechanism
  Update_Recovered[[Update Recovered]]:::mechanism
  Contact_Process --Contact Signal--> Infection_Policy
  Infection_Policy --Susceptible Delta--> Update_Susceptible
  Infection_Policy --Infected Delta--> Update_Infected
  Infection_Policy --Recovered Delta--> Update_Recovered
```

**Reading this diagram:** The Contact Process boundary action (stadium shape) feeds into the Infection Policy, which fans out to three terminal mechanisms (double-bracket shapes). Arrow labels show the port names used for auto-wiring.

### Options

| Parameter | Type | Default | Description |
| ---------------- | -------------- | -------- | ------------------------------------------------------ |
| `system` | `SystemIR` | required | The compiled system to visualize |
| `show_hierarchy` | `bool` | `False` | If True, uses subgraphs for composition tree structure |
| `theme` | `MermaidTheme` | `None` | Mermaid theme (`"neutral"`, `"dark"`, etc.) |

______________________________________________________________________

## View 2: Canonical GDS

The mathematical decomposition: X_t --> U --> g --> f --> X\_{t+1}. Derives from `CanonicalGDS` via `project_canonical()`.
```
from gds.canonical import project_canonical
from gds_viz import canonical_to_mermaid

canonical = project_canonical(spec)
mermaid = canonical_to_mermaid(canonical)
```

Shows:

- **X_t / X\_{t+1}** -- state variable nodes listing all entity variables
- **U** -- boundary subgraph (exogenous inputs)
- **g** -- policy subgraph (decision logic)
- **f** -- mechanism subgraph (state dynamics)
- **Theta** -- parameter space (hexagon) with dashed edges to g and f
- **Update edges** -- labeled dashed arrows from mechanisms to X\_{t+1}

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart LR
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  X_t(["X_t<br/>Susceptible.count, Infected.count, Recovered.count"]):::state
  X_next(["X_{t+1}<br/>Susceptible.count, Infected.count, Recovered.count"]):::state
  Theta{{"Θ<br/>gamma, beta, contact_rate"}}:::param
  subgraph U ["Boundary (U)"]
    Contact_Process[Contact Process]:::boundary
  end
  subgraph g ["Policy (g)"]
    Infection_Policy[Infection Policy]:::policy
  end
  subgraph f ["Mechanism (f)"]
    Update_Susceptible[Update Susceptible]:::mechanism
    Update_Infected[Update Infected]:::mechanism
    Update_Recovered[Update Recovered]:::mechanism
  end
  X_t --> U
  U --> g
  g --> f
  Update_Susceptible -.-> |Susceptible.count| X_next
  Update_Infected -.-> |Infected.count| X_next
  Update_Recovered -.-> |Recovered.count| X_next
  Theta -.-> g
  Theta -.-> f
  style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af
  style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e
  style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534
```

**Reading this diagram:** Left-to-right flow from state X_t through boundary inputs (U), decision logic (g), and state dynamics (f) to the next state X\_{t+1}. The Theta hexagon shows parameters feeding into g and f. Dashed arrows from mechanisms to X\_{t+1} are labeled with the specific entity.variable they update.

### Options

| Parameter | Type | Default | Description |
| ----------------- | -------------- | -------- | ----------------------------------------------- |
| `canonical` | `CanonicalGDS` | required | The canonical projection to visualize |
| `show_updates` | `bool` | `True` | Label mechanism-to-X edges with entity.variable |
| `show_parameters` | `bool` | `True` | Show Theta node when parameters exist |
| `theme` | `MermaidTheme` | `None` | Mermaid theme |

______________________________________________________________________

## View 3: Architecture by Role

Blocks grouped by GDS role: Boundary (U), Policy (g), Mechanism (f). Entity cylinders show state variables and which mechanisms write to them.
```
from gds_viz import spec_to_mermaid

mermaid = spec_to_mermaid(spec)
```

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  subgraph boundary ["Boundary (U)"]
    Contact_Process([Contact Process]):::boundary
  end
  subgraph policy ["Policy (g)"]
    Infection_Policy[Infection Policy]:::policy
  end
  subgraph mechanism ["Mechanism (f)"]
    Update_Susceptible[[Update Susceptible]]:::mechanism
    Update_Infected[[Update Infected]]:::mechanism
    Update_Recovered[[Update Recovered]]:::mechanism
  end
  entity_Susceptible[("Susceptible<br/>count: S")]:::entity
  entity_Infected[("Infected<br/>count: I")]:::entity
  entity_Recovered[("Recovered<br/>count: R")]:::entity
  Update_Susceptible -.-> entity_Susceptible
  Update_Infected -.-> entity_Infected
  Update_Recovered -.-> entity_Recovered
  Contact_Process --ContactSignalSpace--> Infection_Policy
  Infection_Policy --DeltaSpace--> Update_Infected
  Infection_Policy --DeltaSpace--> Update_Recovered
  Infection_Policy --DeltaSpace--> Update_Susceptible
  style boundary fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af
  style policy fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e
  style mechanism fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534
```

**Reading this diagram:** Blocks are organized into role subgraphs. Wire labels show the Space used for communication (e.g., `ContactSignalSpace`, `DeltaSpace`). Entity cylinders at the bottom show which state variables exist and which mechanisms write to them.

______________________________________________________________________

## View 4: Architecture by Domain

Blocks grouped by a tag key instead of GDS role. Useful for showing organizational ownership -- which team or subsystem owns each block.
```
from gds_viz import spec_to_mermaid

mermaid = spec_to_mermaid(spec, group_by="domain")
```

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart TD
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  subgraph Observation ["Observation"]
    Contact_Process([Contact Process]):::boundary
  end
  subgraph Decision ["Decision"]
    Infection_Policy[Infection Policy]:::policy
  end
  subgraph State_Update ["State Update"]
    Update_Susceptible[[Update Susceptible]]:::mechanism
    Update_Infected[[Update Infected]]:::mechanism
    Update_Recovered[[Update Recovered]]:::mechanism
  end
  entity_Susceptible[("Susceptible<br/>count: S")]:::entity
  entity_Infected[("Infected<br/>count: I")]:::entity
  entity_Recovered[("Recovered<br/>count: R")]:::entity
  Update_Susceptible -.-> entity_Susceptible
  Update_Infected -.-> entity_Infected
  Update_Recovered -.-> entity_Recovered
  Contact_Process --ContactSignalSpace--> Infection_Policy
  Infection_Policy --DeltaSpace--> Update_Infected
  Infection_Policy --DeltaSpace--> Update_Recovered
  Infection_Policy --DeltaSpace--> Update_Susceptible
```

**Reading this diagram:** Same blocks and wires as View 3, but grouped by the `"domain"` tag set on each block at definition time. The subgraph labels ("Observation", "Decision", "State Update") come from tag values, not GDS roles.

**Setting domain tags.** Tags are set when defining blocks:

```
sensor = BoundaryAction(
    name="Contact Process",
    ...,
    tags={"domain": "Observation"},
)
```

______________________________________________________________________

## View 5: Parameter Influence

Shows the causal map from parameters (Theta) through blocks to entities. Answers: "if I change parameter X, which state variables are affected?"

```
from gds_viz import params_to_mermaid

mermaid = params_to_mermaid(spec)
```

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart LR
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  param_beta{{"beta"}}:::param
  param_contact_rate{{"contact_rate"}}:::param
  param_gamma{{"gamma"}}:::param
  Contact_Process[Contact Process]
  Infection_Policy[Infection Policy]
  entity_Infected[("Infected<br/>I")]:::entity
  entity_Recovered[("Recovered<br/>R")]:::entity
  entity_Susceptible[("Susceptible<br/>S")]:::entity
  param_beta -.-> Infection_Policy
  param_contact_rate -.-> Contact_Process
  param_gamma -.-> Infection_Policy
  Update_Infected -.-> entity_Infected
  Update_Susceptible -.-> entity_Susceptible
  Update_Recovered -.-> entity_Recovered
  Contact_Process --> Infection_Policy
  Infection_Policy --> Update_Infected
  Infection_Policy --> Update_Recovered
  Infection_Policy --> Update_Susceptible
```

**Reading this diagram:** Parameter hexagons (orange) on the left feed into blocks via dashed arrows. Blocks flow through the dependency graph (solid arrows) to mechanisms, which update entity cylinders on the right. For example, `beta` feeds `Infection Policy`, which drives all three update mechanisms.

______________________________________________________________________

## View 6: Traceability

For a single entity variable, traces every block that can transitively affect it and every parameter feeding those blocks. Right-to-left layout.

```
from gds_viz import trace_to_mermaid

mermaid = trace_to_mermaid(spec, "Susceptible", "count")
```

### Rendered Output

```
%%{init:{"theme":"neutral"}}%%
flowchart RL
  classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f
  classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f
  classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d
  classDef control fill:#d8b4fe,stroke:#9333ea,stroke-width:2px,color:#3b0764
  classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b
  classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a
  classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12
  classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a
  classDef target fill:#fca5a5,stroke:#dc2626,stroke-width:2px,color:#7f1d1d
  classDef empty fill:#e2e8f0,stroke:#94a3b8,stroke-width:1px,color:#475569
  target(["Susceptible.count (S)"]):::target
  Contact_Process[Contact Process]
  Infection_Policy[Infection Policy]
  Update_Susceptible[Update Susceptible]
  param_beta{{"beta"}}:::param
  param_contact_rate{{"contact_rate"}}:::param
  param_gamma{{"gamma"}}:::param
  Update_Susceptible ==> target
  Contact_Process --> Infection_Policy
  Infection_Policy --> Update_Susceptible
  param_contact_rate -.-> Contact_Process
  param_beta -.-> Infection_Policy
  param_gamma -.-> Infection_Policy
```

**Reading this diagram:** The red target node on the right is the variable being traced (`Susceptible.count`). Thick arrows (`==>`) show direct updates from mechanisms. Normal arrows show transitive dependencies. Dashed arrows show parameter influences. Reading right-to-left: `Susceptible.count` is directly updated by `Update Susceptible`, which depends on `Infection Policy`, which depends on `Contact Process` and parameters `beta`, `gamma`, and `contact_rate`.

### Options

| Parameter | Type | Default | Description |
| ---------- | -------------- | -------- | ----------------------------------- |
| `spec` | `GDSSpec` | required | The GDS specification |
| `entity` | `str` | required | Entity name (e.g., `"Susceptible"`) |
| `variable` | `str` | required | Variable name (e.g., `"count"`) |
| `theme` | `MermaidTheme` | `None` | Mermaid theme |

______________________________________________________________________

## View Summary

| # | View | Function | Input | Layout | Question |
| --- | --------------------- | ------------------------------- | -------------- | ---------- | ------------------------------------------- |
| 1 | Structural | `system_to_mermaid()` | `SystemIR` | Top-down | What blocks exist and how are they wired? |
| 2 | Canonical | `canonical_to_mermaid()` | `CanonicalGDS` | Left-right | What is the formal h = f . g decomposition? |
| 3 | Architecture (role) | `spec_to_mermaid()` | `GDSSpec` | Top-down | How do blocks group by GDS role? |
| 4 | Architecture (domain) | `spec_to_mermaid(group_by=...)` | `GDSSpec` | Top-down | How do blocks group by domain/agent?
| 5 | Parameter influence | `params_to_mermaid()` | `GDSSpec` | Left-right | What does each parameter control? |
| 6 | Traceability | `trace_to_mermaid()` | `GDSSpec` | Right-left | What can affect a specific state variable? |

## Cross-DSL Compatibility

All view functions operate on `GDSSpec` and `SystemIR`, which every compilation path produces. The same functions work unchanged regardless of whether the model was built with raw GDS blocks, stockflow DSL, control DSL, or games DSL.

```
# All of these produce the same types -- gds-viz works with all of them:
from gds_domains.stockflow.dsl.compile import compile_model, compile_to_system
from gds_domains.control.dsl.compile import compile_model, compile_to_system
from gds_domains.games.dsl.spec_bridge import compile_pattern_to_spec
```

# gds_viz.architecture

Generate a Mermaid flowchart from a GDSSpec.

Renders an architecture-level view with blocks grouped by role or tag, entity cylinders, and dependency wires.

Parameters:

| Name | Type | Description | Default |
| --------------- | ---------------------- | ------------------------------------------------------ | ---------- |
| `spec` | `GDSSpec` | The GDS specification to visualize. | *required* |
| `group_by` | `str \| None` | Tag key to group blocks by. None groups by GDS role. | `None` |
| `show_entities` | `bool` | If True, render entity cylinders with state variables. | `True` |
| `show_wires` | `bool` | If True, render dependency edges from wirings. | `True` |
| `theme` | `MermaidTheme \| None` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | `None` |

Returns:

| Type | Description |
| ----- | -------------------------------------- |
| `str` | Mermaid flowchart diagram as a string. |
| Source code in `packages/gds-viz/gds_viz/architecture.py` ``` def spec_to_mermaid( spec: GDSSpec, *, group_by: str | None = None, show_entities: bool = True, show_wires: bool = True, theme: MermaidTheme | None = None, ) -> str: """Generate a Mermaid flowchart from a GDSSpec. Renders an architecture-level view with blocks grouped by role or tag, entity cylinders, and dependency wires. Args: spec: The GDS specification to visualize. group_by: Tag key to group blocks by. None groups by GDS role. show_entities: If True, render entity cylinders with state variables. show_wires: If True, render dependency edges from wirings. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. """ lines = [theme_directive(theme), "flowchart TD"] query = SpecQuery(spec) # Class definitions lines.extend(classdefs_for_all(theme)) # Render grouped blocks if group_by is not None: sg_styles = _render_tag_groups(lines, spec, group_by) else: sg_styles = _render_role_groups(lines, query, spec) # Entity cylinders if show_entities: _render_entities(lines, spec, query) # Dependency wires if show_wires: _render_wires(lines, spec, query) # Subgraph background styling lines.extend(subgraph_style_lines(sg_styles, theme)) return "\n".join(lines) ``` # gds_viz.canonical Generate a Mermaid flowchart from a CanonicalGDS projection. Renders the formal GDS decomposition: X_t -> U -> g -> f -> X\_{t+1} with optional parameter space (Theta) and update map labels. Parameters: | Name | Type | Description | Default | | ----------------- | -------------- | ------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | | `canonical` | `CanonicalGDS` | The canonical GDS projection to visualize. | *required* | | `show_updates` | `bool` | If True, label mechanism->X edges with entity.variable. 
| `True` | | `show_parameters` | `bool` | If True, show the Theta node when parameters exist. | `True` | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Source code in `packages/gds-viz/gds_viz/canonical.py` ``` def canonical_to_mermaid( canonical: CanonicalGDS, *, show_updates: bool = True, show_parameters: bool = True, theme: MermaidTheme | None = None, ) -> str: """Generate a Mermaid flowchart from a CanonicalGDS projection. Renders the formal GDS decomposition: X_t -> U -> g -> f -> X_{t+1} with optional parameter space (Theta) and update map labels. Args: canonical: The canonical GDS projection to visualize. show_updates: If True, label mechanism->X edges with entity.variable. show_parameters: If True, show the Theta node when parameters exist. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. """ lines = [theme_directive(theme), "flowchart LR"] # Class definitions lines.extend(classdefs_for_all(theme)) # State variable listing for X_t / X_{t+1} # Use entity.var format to disambiguate variables with the same name var_names = [v for _, v in canonical.state_variables] has_dupes = len(var_names) != len(set(var_names)) if has_dupes: var_list = ", ".join(f"{e}.{v}" for e, v in canonical.state_variables) else: var_list = ", ".join(var_names) if var_list: x_label = f"X_t
{var_list}" x_next_label = f"X_{{t+1}}
{var_list}" else: x_label = "X_t" x_next_label = "X_{t+1}" lines.append(f' X_t(["{x_label}"]):::state') lines.append(f' X_next(["{x_next_label}"]):::state') # Parameter node (Theta) if show_parameters and canonical.has_parameters: param_names = ", ".join(canonical.parameter_schema.names()) lines.append(f' Theta{{{{"\u0398
{param_names}"}}}}:::param') # Role subgraphs — only render non-empty ones rendered_sgs: dict[str, str] = {} for sg_id, label, blocks, role in [ ("U", "Boundary (U)", canonical.boundary_blocks, "boundary"), ("g", "Policy (g)", canonical.policy_blocks, "policy"), ("f", "Mechanism (f)", canonical.mechanism_blocks, "mechanism"), ("ctrl", "Control", canonical.control_blocks, "control"), ]: if blocks: _render_subgraph(lines, sg_id, label, blocks, role) rendered_sgs[sg_id] = role # Edges between layers _render_flow_edges(lines, canonical) # Update edges: mechanism -> X_{t+1} _render_update_edges(lines, canonical, show_updates) # Control feedback edges if canonical.control_blocks: for cname in canonical.control_blocks: cid = sanitize_id(cname) # f -> ctrl (dashed) lines.append(f" f -.-> {cid}") # ctrl -> g (dashed) lines.append(f" {cid} -.-> g") # Parameter edges if show_parameters and canonical.has_parameters: if canonical.policy_blocks: lines.append(" Theta -.-> g") if canonical.mechanism_blocks: lines.append(" Theta -.-> f") # Subgraph background styling lines.extend(subgraph_style_lines(rendered_sgs, theme)) return "\n".join(lines) ``` # gds_viz Public API — all visualization functions. Visualization utilities for GDS specifications. ## `spec_to_mermaid(spec, *, group_by=None, show_entities=True, show_wires=True, theme=None)` Generate a Mermaid flowchart from a GDSSpec. Renders an architecture-level view with blocks grouped by role or tag, entity cylinders, and dependency wires. Parameters: | Name | Type | Description | Default | | --------------- | -------------- | ------------------------------------------------------ | --------------------------------------------------------------------------------------------------------- | | `spec` | `GDSSpec` | The GDS specification to visualize. | *required* | | `group_by` | \`str | None\` | Tag key to group blocks by. None groups by GDS role. | | `show_entities` | `bool` | If True, render entity cylinders with state variables. 
| `True` | | `show_wires` | `bool` | If True, render dependency edges from wirings. | `True` | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Source code in `packages/gds-viz/gds_viz/architecture.py` ``` def spec_to_mermaid( spec: GDSSpec, *, group_by: str | None = None, show_entities: bool = True, show_wires: bool = True, theme: MermaidTheme | None = None, ) -> str: """Generate a Mermaid flowchart from a GDSSpec. Renders an architecture-level view with blocks grouped by role or tag, entity cylinders, and dependency wires. Args: spec: The GDS specification to visualize. group_by: Tag key to group blocks by. None groups by GDS role. show_entities: If True, render entity cylinders with state variables. show_wires: If True, render dependency edges from wirings. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. """ lines = [theme_directive(theme), "flowchart TD"] query = SpecQuery(spec) # Class definitions lines.extend(classdefs_for_all(theme)) # Render grouped blocks if group_by is not None: sg_styles = _render_tag_groups(lines, spec, group_by) else: sg_styles = _render_role_groups(lines, query, spec) # Entity cylinders if show_entities: _render_entities(lines, spec, query) # Dependency wires if show_wires: _render_wires(lines, spec, query) # Subgraph background styling lines.extend(subgraph_style_lines(sg_styles, theme)) return "\n".join(lines) ``` ## `canonical_to_mermaid(canonical, *, show_updates=True, show_parameters=True, theme=None)` Generate a Mermaid flowchart from a CanonicalGDS projection. Renders the formal GDS decomposition: X_t -> U -> g -> f -> X\_{t+1} with optional parameter space (Theta) and update map labels. 
Parameters: | Name | Type | Description | Default | | ----------------- | -------------- | ------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- | | `canonical` | `CanonicalGDS` | The canonical GDS projection to visualize. | *required* | | `show_updates` | `bool` | If True, label mechanism->X edges with entity.variable. | `True` | | `show_parameters` | `bool` | If True, show the Theta node when parameters exist. | `True` | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Source code in `packages/gds-viz/gds_viz/canonical.py` ``` def canonical_to_mermaid( canonical: CanonicalGDS, *, show_updates: bool = True, show_parameters: bool = True, theme: MermaidTheme | None = None, ) -> str: """Generate a Mermaid flowchart from a CanonicalGDS projection. Renders the formal GDS decomposition: X_t -> U -> g -> f -> X_{t+1} with optional parameter space (Theta) and update map labels. Args: canonical: The canonical GDS projection to visualize. show_updates: If True, label mechanism->X edges with entity.variable. show_parameters: If True, show the Theta node when parameters exist. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. 
""" lines = [theme_directive(theme), "flowchart LR"] # Class definitions lines.extend(classdefs_for_all(theme)) # State variable listing for X_t / X_{t+1} # Use entity.var format to disambiguate variables with the same name var_names = [v for _, v in canonical.state_variables] has_dupes = len(var_names) != len(set(var_names)) if has_dupes: var_list = ", ".join(f"{e}.{v}" for e, v in canonical.state_variables) else: var_list = ", ".join(var_names) if var_list: x_label = f"X_t
{var_list}" x_next_label = f"X_{{t+1}}
{var_list}" else: x_label = "X_t" x_next_label = "X_{t+1}" lines.append(f' X_t(["{x_label}"]):::state') lines.append(f' X_next(["{x_next_label}"]):::state') # Parameter node (Theta) if show_parameters and canonical.has_parameters: param_names = ", ".join(canonical.parameter_schema.names()) lines.append(f' Theta{{{{"\u0398
{param_names}"}}}}:::param') # Role subgraphs — only render non-empty ones rendered_sgs: dict[str, str] = {} for sg_id, label, blocks, role in [ ("U", "Boundary (U)", canonical.boundary_blocks, "boundary"), ("g", "Policy (g)", canonical.policy_blocks, "policy"), ("f", "Mechanism (f)", canonical.mechanism_blocks, "mechanism"), ("ctrl", "Control", canonical.control_blocks, "control"), ]: if blocks: _render_subgraph(lines, sg_id, label, blocks, role) rendered_sgs[sg_id] = role # Edges between layers _render_flow_edges(lines, canonical) # Update edges: mechanism -> X_{t+1} _render_update_edges(lines, canonical, show_updates) # Control feedback edges if canonical.control_blocks: for cname in canonical.control_blocks: cid = sanitize_id(cname) # f -> ctrl (dashed) lines.append(f" f -.-> {cid}") # ctrl -> g (dashed) lines.append(f" {cid} -.-> g") # Parameter edges if show_parameters and canonical.has_parameters: if canonical.policy_blocks: lines.append(" Theta -.-> g") if canonical.mechanism_blocks: lines.append(" Theta -.-> f") # Subgraph background styling lines.extend(subgraph_style_lines(rendered_sgs, theme)) return "\n".join(lines) ``` ## `block_to_mermaid(block, *, theme=None)` Generate a Mermaid flowchart from a Block composition tree. This is a convenience wrapper that flattens the block and creates a minimal diagram showing the composition structure. Parameters: | Name | Type | Description | Default | | ------- | -------------- | ------------------------------------- | --------------------------------------------------------------------------------------------------------- | | `block` | `Block` | The root block (atomic or composite). | *required* | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. 
| Example ``` from gds.blocks.roles import BoundaryAction, Policy, Mechanism from gds.types.interface import Interface, port from gds_viz import block_to_mermaid observe = BoundaryAction( name="Observe", interface=Interface(forward_out=(port("Signal"),)) ) decide = Policy( name="Decide", interface=Interface( forward_in=(port("Signal"),), forward_out=(port("Action"),) ) ) update = Mechanism( name="Update", interface=Interface(forward_in=(port("Action"),)), updates=[("Entity", "state")] ) pipeline = observe >> decide >> update print(block_to_mermaid(pipeline)) ``` Source code in `packages/gds-viz/gds_viz/mermaid.py` ```` def block_to_mermaid(block: Block, *, theme: MermaidTheme | None = None) -> str: """Generate a Mermaid flowchart from a Block composition tree. This is a convenience wrapper that flattens the block and creates a minimal diagram showing the composition structure. Args: block: The root block (atomic or composite). theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. Example: ```python from gds.blocks.roles import BoundaryAction, Policy, Mechanism from gds.types.interface import Interface, port from gds_viz import block_to_mermaid observe = BoundaryAction( name="Observe", interface=Interface(forward_out=(port("Signal"),)) ) decide = Policy( name="Decide", interface=Interface( forward_in=(port("Signal"),), forward_out=(port("Action"),) ) ) update = Mechanism( name="Update", interface=Interface(forward_in=(port("Action"),)), updates=[("Entity", "state")] ) pipeline = observe >> decide >> update print(block_to_mermaid(pipeline)) ``` """ from gds.compiler.compile import compile_system # Compile with default settings system = compile_system(name=block.name, root=block) return system_to_mermaid(system, show_hierarchy=False, theme=theme) ```` ## `system_to_mermaid(system, show_hierarchy=False, *, theme=None)` Generate a Mermaid flowchart from a SystemIR. 
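The wiring arrows carry semantics: temporal loops render as dashed lines, feedback as thick arrows, and contravariant wires point backwards. A self-contained sketch of that dispatch, simplified from the source code in this section (the `Wiring` dataclass below is an illustrative stand-in, not the gds-framework type, though the real SystemIR wiring exposes the same flags):

```python
from dataclasses import dataclass


@dataclass
class Wiring:
    """Illustrative stand-in for a compiled wiring record."""
    source: str
    target: str
    label: str
    is_temporal: bool = False
    is_feedback: bool = False
    contravariant: bool = False


def wiring_to_mermaid(w: Wiring) -> str:
    """Render one wiring as a Mermaid edge, styled by its semantics."""
    if w.is_temporal:
        return f" {w.source} -.{w.label}..-> {w.target}"  # dashed temporal loop
    if w.is_feedback:
        return f" {w.source} =={w.label}==> {w.target}"  # thick feedback arrow
    if w.contravariant:
        return f" {w.target} <--{w.label}--- {w.source}"  # backward arrow
    return f" {w.source} --{w.label}--> {w.target}"  # plain forward arrow


print(wiring_to_mermaid(Wiring("Decide", "Update", "Action")))
# Decide --Action--> Update
```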
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `system` | `SystemIR` | The compiled system to visualize. | *required* |
| `show_hierarchy` | `bool` | If True, uses the hierarchy tree to organize subgraphs. If False, renders a flat graph of all blocks. | `False` |
| `theme` | `MermaidTheme \| None` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `str` | Mermaid flowchart diagram as a string. |

Example

```
from examples.sir_epidemic.model import build_system
from gds_viz import system_to_mermaid

system = build_system()
mermaid = system_to_mermaid(system)
print(mermaid)
```

Source code in `packages/gds-viz/gds_viz/mermaid.py`

````
def system_to_mermaid(
    system: SystemIR,
    show_hierarchy: bool = False,
    *,
    theme: MermaidTheme | None = None,
) -> str:
    """Generate a Mermaid flowchart from a SystemIR.

    Args:
        system: The compiled system to visualize.
        show_hierarchy: If True, uses the hierarchy tree to organize
            subgraphs. If False, renders a flat graph of all blocks.
        theme: Mermaid theme — one of 'default', 'neutral', 'dark',
            'forest', 'base'. None uses the default ('neutral').

    Returns:
        Mermaid flowchart diagram as a string.
    Example:
        ```python
        from examples.sir_epidemic.model import build_system
        from gds_viz import system_to_mermaid

        system = build_system()
        mermaid = system_to_mermaid(system)
        print(mermaid)
        ```
    """
    lines = [theme_directive(theme), "flowchart TD"]

    # Class definitions for role-based styling
    lines.extend(classdefs_for_roles(theme))

    if show_hierarchy and system.hierarchy:
        lines.append(_hierarchy_to_mermaid(system.hierarchy, indent=1))
    else:
        # Flat block diagram with role-based classes
        block_shapes = _get_block_shapes(system)
        block_roles = _get_block_roles(system)
        for block in system.blocks:
            shape_open, shape_close = block_shapes.get(block.name, ("[", "]"))
            safe_name = sanitize_id(block.name)
            role = block_roles.get(block.name, "generic")
            lines.append(
                f" {safe_name}{shape_open}{block.name}{shape_close}:::{role}"
            )

    # Add wirings
    for wiring in system.wirings:
        src = sanitize_id(wiring.source)
        tgt = sanitize_id(wiring.target)
        label = wiring.label
        if wiring.is_temporal:
            # Temporal loop: dashed line with arrow back
            lines.append(f" {src} -.{label}..-> {tgt}")
        elif wiring.is_feedback:
            # Feedback: thick arrow
            lines.append(f" {src} =={label}==> {tgt}")
        elif wiring.direction == FlowDirection.CONTRAVARIANT:
            # Contravariant: backward arrow
            lines.append(f" {tgt} <--{label}--- {src}")
        else:
            # Covariant forward: normal arrow
            lines.append(f" {src} --{label}--> {tgt}")

    return "\n".join(lines)
````

## `params_to_mermaid(spec, *, theme=None)`

Generate a parameter influence diagram from a GDSSpec.

Shows Θ parameters → blocks that use them → entities they update. Only includes blocks that reference at least one parameter, and entities reachable from those blocks via the update map.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `spec` | `GDSSpec` | The GDS specification.
| *required* | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Source code in `packages/gds-viz/gds_viz/traceability.py` ``` def params_to_mermaid(spec: GDSSpec, *, theme: MermaidTheme | None = None) -> str: """Generate a parameter influence diagram from a GDSSpec. Shows Θ parameters → blocks that use them → entities they update. Only includes blocks that reference at least one parameter, and entities reachable from those blocks via the update map. Args: spec: The GDS specification. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. """ lines = [theme_directive(theme), "flowchart LR"] query = SpecQuery(spec) # Class definitions lines.extend(classdefs_for_all(theme)) param_to_blocks = query.param_to_blocks() entity_update_map = query.entity_update_map() # Collect which params and blocks are actually connected active_params = {p for p, blocks in param_to_blocks.items() if blocks} if not active_params: lines.append(" no_params[No parameters defined]:::empty") return "\n".join(lines) # Parameter nodes (hexagons) for pname in sorted(active_params): pid = _param_id(pname) lines.append(f' {pid}{{{{"{pname}"}}}}:::param') # Block nodes — only those referenced by parameters param_blocks: set[str] = set() for blocks in param_to_blocks.values(): param_blocks.update(blocks) for bname in sorted(param_blocks): bid = sanitize_id(bname) lines.append(f" {bid}[{bname}]") # Entity nodes — only those updated by param-connected blocks # Build reverse map: mechanism -> [(entity, var)] mech_to_updates: dict[str, list[tuple[str, str]]] = {} for ename, var_map in entity_update_map.items(): for vname, mechs in var_map.items(): for mname in mechs: 
                mech_to_updates.setdefault(mname, []).append((ename, vname))

    active_entities: set[str] = set()
    for bname in param_blocks:
        if bname in mech_to_updates:
            for ename, _ in mech_to_updates[bname]:
                active_entities.add(ename)

    # Also include entities reachable via dependency chain from param blocks
    dep_graph = query.dependency_graph()
    visited: set[str] = set()
    frontier = list(param_blocks)
    while frontier:
        current = frontier.pop()
        if current in visited:
            continue
        visited.add(current)
        if current in mech_to_updates:
            for ename, _ in mech_to_updates[current]:
                active_entities.add(ename)
        for target in dep_graph.get(current, set()):
            frontier.append(target)

    for ename in sorted(active_entities):
        entity = spec.entities[ename]
        var_parts = []
        for vname, var in entity.variables.items():
            var_parts.append(var.symbol if var.symbol else vname)
        var_str = ", ".join(var_parts)
        eid = _entity_id(ename)
        lines.append(f' {eid}[("{ename}<br/>{var_str}")]:::entity')

    # Edges: param -> block
    for pname in sorted(active_params):
        pid = _param_id(pname)
        for bname in param_to_blocks[pname]:
            bid = sanitize_id(bname)
            lines.append(f" {pid} -.-> {bid}")

    # Edges: block -> entity (for blocks in the param-reachable set)
    seen_edges: set[tuple[str, str]] = set()
    for bname in visited:
        if bname in mech_to_updates:
            bid = sanitize_id(bname)
            for ename, _vname in mech_to_updates[bname]:
                eid = _entity_id(ename)
                if (bid, eid) not in seen_edges:
                    seen_edges.add((bid, eid))
                    lines.append(f" {bid} -.-> {eid}")

    # Edges: block -> block (dependency flow within param-reachable set)
    for source in sorted(visited):
        for target in sorted(dep_graph.get(source, set())):
            if target in visited:
                sid = sanitize_id(source)
                tid = sanitize_id(target)
                lines.append(f" {sid} --> {tid}")

    return "\n".join(lines)
```

## `trace_to_mermaid(spec, entity, variable, *, theme=None)`

Generate a traceability diagram for a single entity variable.

Shows every block that can transitively affect the variable, the parameters feeding those blocks, and the causal chain.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `spec` | `GDSSpec` | The GDS specification. | *required* |
| `entity` | `str` | Entity name (e.g. "Susceptible"). | *required* |
| `variable` | `str` | Variable name (e.g. "count"). | *required* |
| `theme` | `MermaidTheme \| None` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | `None` |

Returns:

| Type | Description |
| --- | --- |
| `str` | Mermaid flowchart diagram as a string.
Source code in `packages/gds-viz/gds_viz/traceability.py`

```
def trace_to_mermaid(
    spec: GDSSpec,
    entity: str,
    variable: str,
    *,
    theme: MermaidTheme | None = None,
) -> str:
    """Generate a traceability diagram for a single entity variable.

    Shows every block that can transitively affect the variable, the
    parameters feeding those blocks, and the causal chain.

    Args:
        spec: The GDS specification.
        entity: Entity name (e.g. "Susceptible").
        variable: Variable name (e.g. "count").
        theme: Mermaid theme — one of 'default', 'neutral', 'dark',
            'forest', 'base'. None uses the default ('neutral').

    Returns:
        Mermaid flowchart diagram as a string.
    """
    lines = [theme_directive(theme), "flowchart RL"]
    query = SpecQuery(spec)

    # Class definitions
    lines.extend(classdefs_for_all(theme))

    affecting = query.blocks_affecting(entity, variable)
    if not affecting:
        lines.append(f" target[{entity}.{variable}]:::target")
        lines.append(" none[No affecting blocks]:::empty")
        return "\n".join(lines)

    # Target node
    ent = spec.entities[entity]
    var = ent.variables[variable]
    symbol = var.symbol if var.symbol else variable
    lines.append(f' target(["{entity}.{variable} ({symbol})"]):::target')

    # Block nodes
    for bname in affecting:
        bid = sanitize_id(bname)
        lines.append(f" {bid}[{bname}]")

    # Parameter nodes for affecting blocks
    block_to_params = query.block_to_params()
    active_params: set[str] = set()
    for bname in affecting:
        for pname in block_to_params.get(bname, []):
            active_params.add(pname)
    for pname in sorted(active_params):
        pid = _param_id(pname)
        lines.append(f' {pid}{{{{"{pname}"}}}}:::param')

    # Edges: mechanism -> target
    entity_update_map = query.entity_update_map()
    direct_mechs = entity_update_map.get(entity, {}).get(variable, [])
    for mname in direct_mechs:
        mid = sanitize_id(mname)
        lines.append(f" {mid} ==> target")

    # Edges: block -> block (dependency within affecting set)
    dep_graph = query.dependency_graph()
    for source in affecting:
        sid = sanitize_id(source)
        for target in dep_graph.get(source, set()):
            if target in affecting:
                tid = sanitize_id(target)
                lines.append(f" {sid} --> {tid}")

    # Edges: param -> block
    for bname in affecting:
        bid = sanitize_id(bname)
        for pname in block_to_params.get(bname, []):
            pid = _param_id(pname)
            lines.append(f" {pid} -.-> {bid}")

    return "\n".join(lines)
```

## `__getattr__(name)`

Lazy import for optional phase portrait module.

Source code in `packages/gds-viz/gds_viz/__init__.py`

```
def __getattr__(name: str) -> object:
    """Lazy import for optional phase portrait module."""
    if name == "phase_portrait":
        from gds_viz.phase import phase_portrait

        return phase_portrait
    raise AttributeError(f"module 'gds_viz' has no attribute {name!r}")
```

# gds_viz.mermaid

Core Mermaid syntax generation — flowchart and subgraph building utilities.

Lightweight visualization utilities for GDS systems. Generates Mermaid flowchart diagrams from SystemIR or Block compositions.

Mermaid diagrams can be rendered in:

- GitHub markdown
- GitLab markdown
- VS Code markdown preview
- mermaid.live
- Any tool with Mermaid support

The module provides two visualization strategies:

1. **Flat diagrams** (`system_to_mermaid()`) — show the compiled block structure with automatic shape/arrow styling based on block roles and wiring types.
2. **Architecture-aware diagrams** — domain-specific visualizations that encode semantic information (agent/environment boundaries, private/public data flow, stateful vs stateless components). See examples/prisoners_dilemma/visualize.py for a reference implementation.

## `system_to_mermaid(system, show_hierarchy=False, *, theme=None)`

Generate a Mermaid flowchart from a SystemIR.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `system` | `SystemIR` | The compiled system to visualize.
| *required* | | `show_hierarchy` | `bool` | If True, uses the hierarchy tree to organize subgraphs. If False, renders a flat graph of all blocks. | `False` | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Example ``` from examples.sir_epidemic.model import build_system from gds_viz import system_to_mermaid system = build_system() mermaid = system_to_mermaid(system) print(mermaid) ``` Source code in `packages/gds-viz/gds_viz/mermaid.py` ```` def system_to_mermaid( system: SystemIR, show_hierarchy: bool = False, *, theme: MermaidTheme | None = None, ) -> str: """Generate a Mermaid flowchart from a SystemIR. Args: system: The compiled system to visualize. show_hierarchy: If True, uses the hierarchy tree to organize subgraphs. If False, renders a flat graph of all blocks. theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. 
Example: ```python from examples.sir_epidemic.model import build_system from gds_viz import system_to_mermaid system = build_system() mermaid = system_to_mermaid(system) print(mermaid) ``` """ lines = [theme_directive(theme), "flowchart TD"] # Class definitions for role-based styling lines.extend(classdefs_for_roles(theme)) if show_hierarchy and system.hierarchy: lines.append(_hierarchy_to_mermaid(system.hierarchy, indent=1)) else: # Flat block diagram with role-based classes block_shapes = _get_block_shapes(system) block_roles = _get_block_roles(system) for block in system.blocks: shape_open, shape_close = block_shapes.get(block.name, ("[", "]")) safe_name = sanitize_id(block.name) role = block_roles.get(block.name, "generic") lines.append( f" {safe_name}{shape_open}{block.name}{shape_close}:::{role}" ) # Add wirings for wiring in system.wirings: src = sanitize_id(wiring.source) tgt = sanitize_id(wiring.target) label = wiring.label if wiring.is_temporal: # Temporal loop: dashed line with arrow back lines.append(f" {src} -.{label}..-> {tgt}") elif wiring.is_feedback: # Feedback: thick arrow lines.append(f" {src} =={label}==> {tgt}") elif wiring.direction == FlowDirection.CONTRAVARIANT: # Contravariant: backward arrow lines.append(f" {tgt} <--{label}--- {src}") else: # Covariant forward: normal arrow lines.append(f" {src} --{label}--> {tgt}") return "\n".join(lines) ```` ## `block_to_mermaid(block, *, theme=None)` Generate a Mermaid flowchart from a Block composition tree. This is a convenience wrapper that flattens the block and creates a minimal diagram showing the composition structure. Parameters: | Name | Type | Description | Default | | ------- | -------------- | ------------------------------------- | --------------------------------------------------------------------------------------------------------- | | `block` | `Block` | The root block (atomic or composite). 
| *required* | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Example ``` from gds.blocks.roles import BoundaryAction, Policy, Mechanism from gds.types.interface import Interface, port from gds_viz import block_to_mermaid observe = BoundaryAction( name="Observe", interface=Interface(forward_out=(port("Signal"),)) ) decide = Policy( name="Decide", interface=Interface( forward_in=(port("Signal"),), forward_out=(port("Action"),) ) ) update = Mechanism( name="Update", interface=Interface(forward_in=(port("Action"),)), updates=[("Entity", "state")] ) pipeline = observe >> decide >> update print(block_to_mermaid(pipeline)) ``` Source code in `packages/gds-viz/gds_viz/mermaid.py` ```` def block_to_mermaid(block: Block, *, theme: MermaidTheme | None = None) -> str: """Generate a Mermaid flowchart from a Block composition tree. This is a convenience wrapper that flattens the block and creates a minimal diagram showing the composition structure. Args: block: The root block (atomic or composite). theme: Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). Returns: Mermaid flowchart diagram as a string. 
Example: ```python from gds.blocks.roles import BoundaryAction, Policy, Mechanism from gds.types.interface import Interface, port from gds_viz import block_to_mermaid observe = BoundaryAction( name="Observe", interface=Interface(forward_out=(port("Signal"),)) ) decide = Policy( name="Decide", interface=Interface( forward_in=(port("Signal"),), forward_out=(port("Action"),) ) ) update = Mechanism( name="Update", interface=Interface(forward_in=(port("Action"),)), updates=[("Entity", "state")] ) pipeline = observe >> decide >> update print(block_to_mermaid(pipeline)) ``` """ from gds.compiler.compile import compile_system # Compile with default settings system = compile_system(name=block.name, root=block) return system_to_mermaid(system, show_hierarchy=False, theme=theme) ```` # gds_viz.traceability Generate a traceability diagram for a single entity variable. Shows every block that can transitively affect the variable, the parameters feeding those blocks, and the causal chain. Parameters: | Name | Type | Description | Default | | ---------- | -------------- | --------------------------------- | --------------------------------------------------------------------------------------------------------- | | `spec` | `GDSSpec` | The GDS specification. | *required* | | `entity` | `str` | Entity name (e.g. "Susceptible"). | *required* | | `variable` | `str` | Variable name (e.g. "count"). | *required* | | `theme` | \`MermaidTheme | None\` | Mermaid theme — one of 'default', 'neutral', 'dark', 'forest', 'base'. None uses the default ('neutral'). | Returns: | Type | Description | | ----- | -------------------------------------- | | `str` | Mermaid flowchart diagram as a string. | Source code in `packages/gds-viz/gds_viz/traceability.py` ``` def trace_to_mermaid( spec: GDSSpec, entity: str, variable: str, *, theme: MermaidTheme | None = None, ) -> str: """Generate a traceability diagram for a single entity variable. 
    Shows every block that can transitively affect the variable, the
    parameters feeding those blocks, and the causal chain.

    Args:
        spec: The GDS specification.
        entity: Entity name (e.g. "Susceptible").
        variable: Variable name (e.g. "count").
        theme: Mermaid theme — one of 'default', 'neutral', 'dark',
            'forest', 'base'. None uses the default ('neutral').

    Returns:
        Mermaid flowchart diagram as a string.
    """
    lines = [theme_directive(theme), "flowchart RL"]
    query = SpecQuery(spec)

    # Class definitions
    lines.extend(classdefs_for_all(theme))

    affecting = query.blocks_affecting(entity, variable)
    if not affecting:
        lines.append(f" target[{entity}.{variable}]:::target")
        lines.append(" none[No affecting blocks]:::empty")
        return "\n".join(lines)

    # Target node
    ent = spec.entities[entity]
    var = ent.variables[variable]
    symbol = var.symbol if var.symbol else variable
    lines.append(f' target(["{entity}.{variable} ({symbol})"]):::target')

    # Block nodes
    for bname in affecting:
        bid = sanitize_id(bname)
        lines.append(f" {bid}[{bname}]")

    # Parameter nodes for affecting blocks
    block_to_params = query.block_to_params()
    active_params: set[str] = set()
    for bname in affecting:
        for pname in block_to_params.get(bname, []):
            active_params.add(pname)
    for pname in sorted(active_params):
        pid = _param_id(pname)
        lines.append(f' {pid}{{{{"{pname}"}}}}:::param')

    # Edges: mechanism -> target
    entity_update_map = query.entity_update_map()
    direct_mechs = entity_update_map.get(entity, {}).get(variable, [])
    for mname in direct_mechs:
        mid = sanitize_id(mname)
        lines.append(f" {mid} ==> target")

    # Edges: block -> block (dependency within affecting set)
    dep_graph = query.dependency_graph()
    for source in affecting:
        sid = sanitize_id(source)
        for target in dep_graph.get(source, set()):
            if target in affecting:
                tid = sanitize_id(target)
                lines.append(f" {sid} --> {tid}")

    # Edges: param -> block
    for bname in affecting:
        bid = sanitize_id(bname)
        for pname in block_to_params.get(bname, []):
            pid = _param_id(pname)
            lines.append(f" {pid} -.-> {bid}")

    return "\n".join(lines)
```

# Games (gds-domains)

# gds-games

**Typed DSL for compositional game theory**, built on [gds-framework](https://blockscience.github.io/gds-framework).

## What is this?

`gds-games` extends the GDS framework with game-theoretic vocabulary — open games, strategic interactions, and compositional game patterns. It provides:

- **6 atomic game types** — DecisionGame, CovariantFunction, ContravariantFunction, DeletionGame, DuplicationGame, CounitGame
- **Pattern composition** — Sequential, Parallel, Feedback, and Corecursive composition operators
- **IR compilation** — flatten game patterns into a JSON-serializable intermediate representation
- **13 verification checks** — type matching (T-001..T-006) and structural validation (S-001..S-007)
- **7 Markdown report templates** — system overview, verification summary, state machine, interface contracts, and more
- **6 Mermaid diagram generators** — structural, hierarchy, flow topology, architecture views
- **CLI** — `ogs compile`, `ogs verify`, `ogs report`

## Architecture

```
gds-framework (pip install gds-framework)
│
│   Domain-neutral composition algebra, typed spaces,
│   state model, verification engine, flat IR compiler.
│
└── gds-games (pip install gds-domains[games])
    │
    │   Game-theoretic DSL: OpenGame types, Pattern composition,
    │   compile_to_ir(), domain verification, reports, visualization.
    │
    └── Your application
        │
        │   Concrete pattern definitions, analysis notebooks,
        │   verification runners.
``` ## Quick Start ``` uv add gds-games # or: pip install gds-domains[games] ``` ``` from gds_domains.games.dsl.games import DecisionGame, CovariantFunction from gds_domains.games.dsl.pattern import Pattern from gds_domains.games import compile_to_ir, verify # Define atomic games with typed signatures sensor = CovariantFunction(name="Sensor", x="observation", y="signal") agent = DecisionGame(name="Agent", x="signal", y="action", r="reward", s="experience") # Compose sequentially (auto-wires by token matching) game = sensor >> agent # Wrap in a Pattern and compile to IR pattern = Pattern(name="Simple Decision", game=game) ir = compile_to_ir(pattern) # Run verification checks report = verify(ir) print(f"{report.checks_passed}/{report.checks_total} checks passed") ``` ## Credits **Author:** [Rohan Mehta](https://github.com/rororowyourboat) — [BlockScience](https://block.science/) **Theoretical foundation:** [Dr. Michael Zargham](https://github.com/mzargham) and [Dr. Jamsheed Shorish](https://github.com/jshorish) **Lineage:** Part of the [cadCAD](https://github.com/cadCAD-org/cadCAD) ecosystem for Complex Adaptive Dynamics. 
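The Quick Start's `sensor >> agent` composition auto-wires by matching tokens between the left game's output (`y`) and the right game's input (`x`). A dependency-free sketch of that matching rule — the names `tokenize` and `tokens_overlap` mirror the framework's helpers, but the splitting logic here is illustrative, not the real implementation:

```python
def tokenize(label: str) -> set[str]:
    # Split a port label such as "signal+noise" into its token set.
    # (Illustrative splitting rule -- the framework's tokenizer may differ.)
    return {t for t in label.replace("+", " ").split() if t}

def tokens_overlap(out_label: str, in_label: str) -> bool:
    # Sequential auto-wiring connects y -> x when the token sets intersect.
    return bool(tokenize(out_label) & tokenize(in_label))

# Sensor's y="signal" overlaps Agent's x="signal", so `sensor >> agent` wires them;
# "observation" shares no token with "signal", so no wire is created.
assert tokens_overlap("signal", "signal")
assert not tokens_overlap("observation", "signal")
```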
# Getting Started ## Installation ``` pip install gds-domains[games] ``` Or with [uv](https://docs.astral.sh/uv/): ``` uv add gds-games ``` ## Requirements - Python 3.12 or later - [gds-framework](https://pypi.org/project/gds-framework/) >= 0.1 (installed automatically) - [pydantic](https://docs.pydantic.dev/) >= 2.10 - [typer](https://typer.tiangolo.com/) >= 0.15 (for CLI) - [jinja2](https://jinja.palletsprojects.com/) >= 3.1 (for reports) ## Import The package is installed as `gds-domains[games]` and imported as `gds_domains.games`: ``` import gds_domains.games from gds_domains.games.dsl.games import DecisionGame from gds_domains.games import compile_to_ir, verify ``` ## Basic Workflow ``` from gds_domains.games.dsl.games import DecisionGame, CovariantFunction from gds_domains.games.dsl.pattern import Pattern from gds_domains.games import compile_to_ir, verify, generate_reports, save_ir # 1. Define atomic games sensor = CovariantFunction(name="Sensor", x="observation", y="signal") agent = DecisionGame(name="Agent", x="signal", y="action", r="reward", s="experience") # 2. Compose game = sensor >> agent # 3. Wrap in a Pattern pattern = Pattern(name="Simple Decision", game=game) # 4. Compile to IR ir = compile_to_ir(pattern) # 5. Verify report = verify(ir) # 6. Generate reports reports = generate_reports(ir) # 7. Save IR to JSON save_ir(ir, "simple_decision.json") ``` ## CLI ``` # Compile a pattern to IR ogs compile pattern.json -o output.json # Run verification ogs verify output.json # Generate reports ogs report output.json -o reports/ ``` # Equilibrium Analysis The `ogs.equilibrium` module computes Nash equilibria for two-player normal-form games using [Nashpy](https://nashpy.readthedocs.io/). ## Installation ``` uv add "gds-games[nash]" ``` The `[nash]` extra installs `nashpy` and `numpy`. 
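It can help to see concretely what the solver is looking for. Below is a dependency-free sketch that finds the pure-strategy Nash equilibrium of the Prisoner's Dilemma by a best-response check, using the same payoffs as the example in this guide — a hand-rolled illustration, not `nashpy`:

```python
# Prisoner's Dilemma payoffs, indexed 0 = Cooperate, 1 = Defect.
# A[i][j] is Player 1's payoff, B[i][j] is Player 2's.
A = [[3, 0], [5, 1]]
B = [[3, 5], [0, 1]]

def is_pure_nash(i: int, j: int) -> bool:
    # Neither player gains by unilaterally deviating from (i, j).
    row_best = all(A[i][j] >= A[k][j] for k in range(2))
    col_best = all(B[i][j] >= B[i][k] for k in range(2))
    return row_best and col_best

pure_equilibria = [(i, j) for i in range(2) for j in range(2) if is_pure_nash(i, j)]
print(pure_equilibria)  # [(1, 1)] -- mutual defection is the unique pure equilibrium
```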
## Key Types and Functions | Name | Purpose | | ----------------------------- | ------------------------------------------------------------------------------ | | `extract_payoff_matrices(ir)` | Extract `(A, B)` payoff matrices from a two-player `PatternIR` | | `compute_nash(ir, method)` | Compute Nash equilibria from a compiled `PatternIR` | | `NashResult` | Container for equilibrium strategies with `support()` and `expected_payoffs()` | ## Solver Methods | Method | Algorithm | Notes | | --------------------------------- | --------------------- | -------------------------------- | | `"support_enumeration"` (default) | Support enumeration | Exact, finds all Nash equilibria | | `"vertex_enumeration"` | Vertex enumeration | Alternative exact enumeration | | `"lemke_howson"` | Lemke-Howson pivoting | Fast, returns one equilibrium | ## Example: Prisoner's Dilemma Define payoffs via `TerminalCondition` entries, then compute equilibria: ``` from gds_domains.games.dsl.pattern import TerminalCondition # Prisoner's Dilemma payoff structure terminal_conditions = [ TerminalCondition( name="CC", actions={"Player1": "Cooperate", "Player2": "Cooperate"}, outcome="mutual_cooperation", payoffs={"Player1": 3.0, "Player2": 3.0}, ), TerminalCondition( name="CD", actions={"Player1": "Cooperate", "Player2": "Defect"}, outcome="sucker", payoffs={"Player1": 0.0, "Player2": 5.0}, ), TerminalCondition( name="DC", actions={"Player1": "Defect", "Player2": "Cooperate"}, outcome="temptation", payoffs={"Player1": 5.0, "Player2": 0.0}, ), TerminalCondition( name="DD", actions={"Player1": "Defect", "Player2": "Defect"}, outcome="mutual_defection", payoffs={"Player1": 1.0, "Player2": 1.0}, ), ] ``` Extract matrices and solve: ``` from gds_domains.games.equilibrium import extract_payoff_matrices, compute_nash # Extract payoff matrices from a compiled PatternIR matrices = extract_payoff_matrices(pattern_ir) print(matrices.A) # Player 1's payoff matrix print(matrices.B) # Player 2's payoff matrix # 
Find Nash equilibria equilibria = compute_nash(pattern_ir) for ne in equilibria: print(f"Player 1: {dict(zip(ne.actions1, ne.sigma1))}") print(f"Player 2: {dict(zip(ne.actions2, ne.sigma2))}") print(f"Support: {ne.support()}") print(f"Expected payoffs: {ne.expected_payoffs(matrices)}") ``` ## NashResult Each `NashResult` contains: - **`sigma1`** / **`sigma2`** -- mixed strategy vectors (numpy arrays) - **`actions1`** / **`actions2`** -- action labels corresponding to each strategy index - **`support()`** -- returns the set of actions played with positive probability for each player - **`expected_payoffs(matrices)`** -- computes `(E[payoff1], E[payoff2])` under the equilibrium strategies ## Direct Matrix Input You can also bypass IR extraction and supply payoff matrices directly: ``` import numpy as np from gds_domains.games.equilibrium import compute_nash_from_matrices A = np.array([[3, 0], [5, 1]]) # Player 1 payoffs B = np.array([[3, 5], [0, 1]]) # Player 2 payoffs equilibria = compute_nash_from_matrices(A, B, method="support_enumeration") ``` ## Limitations - **2-player only** -- games with more than 2 action spaces raise `ValueError` - **Complete information** -- all joint action profiles must have numeric payoffs - **Normal form** -- extensive-form games must be converted to normal form first - **Numerical precision** -- mixed strategy equilibria may have floating-point rounding ## Next Steps - [Game Types](https://blockscience.github.io/gds-core/games/guide/game-types/index.md) -- all 6 atomic game types - [Patterns & Composition](https://blockscience.github.io/gds-core/games/guide/patterns/index.md) -- composing complex multi-player games - [Getting Started](https://blockscience.github.io/gds-core/games/getting-started/index.md) -- basic game definition workflow # Architecture ## Layered Design ``` GDS Framework ← core engine (generic blocks, IR, verification) ↑ OGS (this pkg) ← game-theory DSL extension ↑ Domain packages ← applications using OGS patterns 
``` **GDS (dependency):** Provides Block, Interface, Port, composition operators (`>>`, `|`, `.feedback()`, `.loop()`), compiler, token-based type matching, generic IR models, and generic verification (G-001..G-006). **OGS (this package):** Adds the game-theoretic DSL on top: - `OpenGame(Block)` — abstract base mapping `Signature(x,y,r,s)` to `Interface` - 6 atomic game types with port-constraint validators - `Pattern` — groups games + flows + metadata - `compile_to_ir()` — flatten patterns into `PatternIR` - `PatternIR.to_system_ir()` — project to GDS `SystemIR` for interop - 13 verification checks (T-001..T-006, S-001..S-007) - 7 Markdown report generators via Jinja2 templates - 6 Mermaid diagram generators ## Compilation Pipeline ``` Pattern(games, flows) → compile_to_ir() → flatten games into AtomicBlocks → walk explicit flows + auto-wire sequential chains → extract hierarchy tree → flatten sequential chains in hierarchy → IRDocument(patterns=[PatternIR(...)], metadata=IRMetadata(...)) ``` ## IR Layering Domain enums and metadata models are canonically defined in the DSL layer and re-exported by `ogs/ir/models.py`: - Enums: `ogs.dsl.types` → `ogs.ir.models` (re-export) - Hierarchy: `gds.ir.models.HierarchyNodeIR` → `ogs.ir.models.HierarchyNodeIR` (subclass with CORECURSIVE) - Projection: `ogs.ir.models.PatternIR.to_system_ir()` → `gds.ir.models.SystemIR` # CLI gds-games provides the `ogs` command-line interface built with [Typer](https://typer.tiangolo.com/). ## Commands ### `ogs compile` Compile a pattern definition to IR. ``` ogs compile pattern.json -o output.json ``` ### `ogs verify` Run verification checks on compiled IR. ``` ogs verify output.json ``` Options: - `--include-gds-checks` — also run generic GDS verification checks (G-001..G-006) ### `ogs report` Generate Markdown reports from compiled IR. ``` ogs report output.json -o reports/ ``` Generates all 7 report templates to the specified output directory. 
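The IR layering above hinges on projecting domain IR down to the generic framework IR. A toy sketch of that projection idea — the class and field names echo `OpenGameIR` and `BlockIR` from these docs, but they are stand-in dataclasses, not the real `gds` / `gds_domains` models:

```python
from dataclasses import dataclass

# Stand-in models (illustrative only).
@dataclass
class OpenGameIR:
    name: str
    game_type: str  # domain-only field, unknown to the framework layer

@dataclass
class BlockIR:
    name: str

def to_system_blocks(games: list[OpenGameIR]) -> list[BlockIR]:
    # The projection drops domain-only fields, leaving generic blocks
    # that framework-level checks (G-001..G-006) can operate on.
    return [BlockIR(name=g.name) for g in games]

blocks = to_system_blocks([
    OpenGameIR(name="Sensor", game_type="covariant_function"),
    OpenGameIR(name="Agent", game_type="decision"),
])
print([b.name for b in blocks])  # ['Sensor', 'Agent']
```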
# Game Types ## Signature Every game has a `Signature` — a mapping from game-theoretic ports `(x, y, r, s)` to GDS interface ports: | Game port | Direction | GDS port | Meaning | | --------- | ------------ | --------- | ------------------- | | `x` | forward_in | Input | Observation / state | | `y` | forward_out | Output | Action / decision | | `r` | backward_in | Utility | Reward / payoff | | `s` | backward_out | Coutility | Experience / cost | ## Six Atomic Game Types ### DecisionGame A strategic agent that observes state, chooses an action, receives utility, and emits coutility. ``` from gds_domains.games.dsl.games import DecisionGame agent = DecisionGame(name="Agent", x="signal", y="action", r="reward", s="experience") ``` Has all four ports: x, y, r, s. ### CovariantFunction A forward-only transformation — no backward ports. ``` from gds_domains.games.dsl.games import CovariantFunction sensor = CovariantFunction(name="Sensor", x="observation", y="signal") ``` Has x and y only. ### ContravariantFunction A backward-only transformation — no forward ports. ``` from gds_domains.games.dsl.games import ContravariantFunction cost = ContravariantFunction(name="Cost", r="total_cost", s="unit_cost") ``` Has r and s only. ### DeletionGame Discards a forward signal — has x but no y. ``` from gds_domains.games.dsl.games import DeletionGame sink = DeletionGame(name="Sink", x="unused_signal") ``` ### DuplicationGame Copies a forward signal — has x and produces two copies. ``` from gds_domains.games.dsl.games import DuplicationGame split = DuplicationGame(name="Split", x="signal", y="signal+signal") ``` ### CounitGame Terminal evaluation — has r but no s. ``` from gds_domains.games.dsl.games import CounitGame evaluate = CounitGame(name="Evaluate", r="final_utility") ``` # Patterns & Composition ## Pattern A `Pattern` groups games, flows, and metadata into a compilable unit. 
``` from gds_domains.games.dsl.pattern import Pattern pattern = Pattern( name="Simple Decision", game=sensor >> agent, description="A sensor feeds an agent", ) ``` ## Composition Operators ### Sequential (`>>`) ``` pipeline = sensor >> agent >> evaluator ``` Auto-wires by token overlap between y (output) and x (input) ports. ### Parallel (`|`) ``` agents = alice | bob ``` Independent games — no auto-wiring. ### Feedback (`.feedback()`) ``` game = (sensor >> agent).feedback(wirings) ``` Within-timestep backward flow. Requires CONTRAVARIANT wirings. ### Corecursive (`.corecursive()`) ``` game = (sensor >> agent).corecursive(wirings) ``` Extends GDS with a CORECURSIVE composition type — cross-timestep feedback specific to game-theoretic patterns. ## Flow Types | Flow | Direction | Use Case | | ----------- | -------------- | ----------------- | | SEQUENTIAL | Forward | Pipeline stages | | PARALLEL | Independent | Concurrent agents | | FEEDBACK | Backward | Utility signals | | CORECURSIVE | Cross-timestep | Iterated games | ## Pattern Metadata Patterns can include domain-specific metadata: ``` from gds_domains.games.dsl.pattern import TerminalCondition, ActionSpace, StateInitialization pattern = Pattern( name="Iterated PD", game=game, terminal_conditions=[ TerminalCondition(name="max_rounds", description="Stop after N rounds"), ], action_spaces=[ ActionSpace(name="cooperate_defect", values=["C", "D"]), ], ) ``` # Reports gds-games includes 7 Markdown report generators powered by Jinja2 templates. 
## Generating Reports ``` from gds_domains.games import compile_to_ir, generate_reports ir = compile_to_ir(pattern) reports = generate_reports(ir) for name, content in reports.items(): print(f"--- {name} ---") print(content) ``` ## Report Types | Report | Description | | -------------------- | --------------------------------------------------- | | System Overview | High-level summary of games, flows, and composition | | Verification Summary | Check results with pass/fail status | | State Machine | Terminal conditions and state transitions | | Interface Contracts | Port signatures for each game | | Domain Analysis | Cross-domain flow detection and coupling metrics | | Hierarchy | Composition tree structure | | Flow Topology | Covariant flow graph | ## Templates Reports are generated from Jinja2 templates in `ogs/reports/templates/`. Each template receives the full `IRDocument` as context. ## CLI ``` ogs report compiled.json -o reports/ ``` Generates all reports to the specified directory. 
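Printing reports suits notebooks; writing them to disk mirrors what the CLI does. A minimal sketch, using a stand-in dict in place of the value returned by `generate_reports(ir)` (the report names and contents here are illustrative):

```python
from pathlib import Path

# Stand-in for generate_reports(ir): report name -> Markdown content.
reports = {
    "system_overview": "# System Overview\n\nGames, flows, and composition.\n",
    "verification_summary": "# Verification Summary\n\nCheck results.\n",
}

# Write each report to reports/<name>.md.
out = Path("reports")
out.mkdir(exist_ok=True)
for name, content in reports.items():
    (out / f"{name}.md").write_text(content)
```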
# Verification ## Running Verification ``` from gds_domains.games import compile_to_ir, verify ir = compile_to_ir(pattern) report = verify(ir) print(f"{report.checks_passed}/{report.checks_total} checks passed") for finding in report.findings: print(f" [{finding.severity.value}] {finding.check_id}: {finding.message}") ``` ## Type Checks (T-001..T-006) | Check | Name | What It Validates | | ----- | ---------------------- | ----------------------------------------------- | | T-001 | Sequential type match | y tokens of left game overlap x tokens of right | | T-002 | Feedback type match | Backward port tokens match across feedback | | T-003 | Corecursive type match | Cross-timestep port compatibility | | T-004 | Signature completeness | All required ports are present | | T-005 | Port uniqueness | No duplicate port names | | T-006 | Token consistency | Port token sets are well-formed | ## Structural Checks (S-001..S-007) | Check | Name | What It Validates | | ----- | -------------------- | ------------------------------------------------ | | S-001 | Pattern completeness | All games are connected | | S-002 | Flow consistency | Flows match declared composition type | | S-003 | Hierarchy validity | Composition tree is well-formed | | S-004 | Terminal conditions | Terminal games are properly placed | | S-005 | Cycle detection | No unintended cycles (only feedback/corecursive) | | S-006 | Metadata consistency | Action spaces reference valid games | | S-007 | IR integrity | Compiled IR matches source pattern | ## GDS Check Delegation Include generic GDS checks alongside OGS checks: ``` report = verify(ir, include_gds_checks=True) ``` This projects the PatternIR to SystemIR and runs G-001..G-006 checks. # Visualization gds-games includes 6 Mermaid diagram generators in `ogs.viz`. 
## Generating Diagrams ```` from gds_domains.games import compile_to_ir from gds_domains.games.viz import generate_all_views ir = compile_to_ir(pattern) views = generate_all_views(ir.patterns[0]) for name, mermaid in views.items(): print(f"## {name}\n```mermaid\n{mermaid}\n```\n") ```` ## Available Views | View | Function | Description | | ---------------------- | ------------------------------------- | --------------------------------- | | Structural | `structural_to_mermaid()` | Full game topology with all flows | | Architecture by Role | `architecture_by_role_to_mermaid()` | Games grouped by GameType | | Architecture by Domain | `architecture_by_domain_to_mermaid()` | Games grouped by domain tag | | Hierarchy | `hierarchy_to_mermaid()` | Composition tree nesting | | Flow Topology | `flow_topology_to_mermaid()` | Covariant flows only | | Terminal Conditions | `terminal_conditions_to_mermaid()` | State transition diagram | ## Individual Views ``` from gds_domains.games.viz import structural_to_mermaid, hierarchy_to_mermaid structural = structural_to_mermaid(pattern_ir) hierarchy = hierarchy_to_mermaid(pattern_ir) ``` All functions take `PatternIR` and return a Mermaid markdown string. # GDS Ecosystem Architecture ## The Three Layers ``` +------------------------------------------------------------------+ | CLIENT LAYER | | (Your application — the delivery layer) | | | | - Concrete pattern definitions (specifications) | | - Analysis notebooks and visualization | | - Runs verification, generates reports | | - Can fork DSL packages for custom needs | +------------------------------------------------------------------+ | depends on gds-games (which pulls in gds-framework) v +------------------------------------------------------------------+ | DSL LAYER | | (gds-games — game theory DSL) | | (future: msml-spec, cadcad-spec, etc.) 
| | | | - Domain-specific types and game vocabulary | | - Composition patterns, compilation to IR | | - Domain-specific verification checks | | - Mermaid visualization, Markdown reports | | - Projection back to GDS IR for generic tooling | +------------------------------------------------------------------+ | depends on gds-framework v +------------------------------------------------------------------+ | FRAMEWORK LAYER | | (gds-framework — the core engine) | | | | - Generic composition algebra (Block, >>, |, .feedback, .loop) | | - Bidirectional type system (Port, Interface, tokens) | | - IR data models (BlockIR, WiringIR, SystemIR, HierarchyNodeIR) | | - Compiler (flatten, wire, hierarchy extraction) | | - Generic verification checks (G-001 through G-006) | +------------------------------------------------------------------+ ``` ## Ownership and Delivery All three packages are **developed by BlockScience**. During delivery: - **GDS Framework** and **DSL packages** (like OGS) are made public and pip-installable - **Client repos** are delivered to the client team who can: - Use DSL packages as-is - Fork a DSL to customize game types, checks, or reports - Build entirely new DSL packages on top of GDS ``` BlockScience develops ┌─────────────────────────────────┐ │ gds-framework (open source) │ │ gds-games (open source) │ │ client-app (private) │ └─────────────────────────────────┘ │ delivery │ v ┌─────────────────────────────────┐ │ Client team receives: │ │ │ │ pip install gds-domains[games] │ │ + their own repo with patterns │ │ notebooks, and analysis │ │ │ │ Can fork OGS if needed. │ │ GDS stays stable underneath. │ └─────────────────────────────────┘ ``` ______________________________________________________________________ ## What Each Layer Owns ### GDS Framework — `gds-framework` The **core engine**. Domain-agnostic. Knows nothing about games, agents, or negotiation. 
| Owns | Details | | ------------------------ | ------------------------------------------------------------------------------------------------------- | | **Composition algebra** | `Block`, `AtomicBlock`, `StackComposition`, `ParallelComposition`, `FeedbackLoop`, `TemporalLoop` | | **Type system** | `Port`, `Interface`, token-based matching (`tokenize`, `tokens_overlap`, `tokens_subset`) | | **IR models** | `BlockIR`, `WiringIR`, `HierarchyNodeIR`, `SystemIR` — the canonical flat representation | | **Compiler** | `compile_system()` — flatten blocks, extract wirings, build hierarchy tree | | **Generic verification** | 6 checks (G-001..G-006): type matching, completeness, direction consistency, dangling wires, acyclicity | | **Specification layer** | `TypeDef`, `Space`, `Entity`, `GDSSpec`, `ParameterSchema`, `CanonicalGDS` | | **Mixins** | `Tagged` — inert annotation tags for domain grouping | **Does not own:** Any domain vocabulary. No game types, no flow semantics, no patterns. ### DSL Packages — e.g. `gds-games` **Domain extensions** built on top of GDS. Each DSL is an independent package. A DSL package: - **Subclasses** `AtomicBlock` → domain-specific block types - **Defines** domain enums (game types, flow types, etc.) - **Extends** GDS IR with domain fields (OGS adds `game_type`, `constraints`, `tags`) - **Compiles** DSL constructs → domain IR - **Projects** domain IR → GDS `SystemIR` (enabling GDS generic checks) - **Adds** domain-specific verification, visualization, and reports **What OGS specifically owns:** ``` ogs/dsl/types.py → Domain enums: GameType, FlowType, CompositionType, InputType ogs/dsl/base.py → OpenGame(Block) — abstract base with Signature(x,y,r,s) ogs/dsl/games.py → 6 atomic game types (Decision, CovariantFunction, etc.) ogs/dsl/composition.py→ Flow, SequentialComposition, ParallelComposition, etc. ogs/dsl/pattern.py → Pattern container + metadata models (TerminalCondition, etc.) 
ogs/dsl/compile.py → compile_to_ir() — DSL tree → PatternIR ogs/dsl/library.py → Reusable factories (reactive_decision_agent, etc.) ogs/ir/models.py → OpenGameIR, FlowIR, PatternIR (with .to_system_ir()) HierarchyNodeIR (extends GDS HierarchyNodeIR) ogs/verification/ → 13 domain checks (T-001..T-006, S-001..S-007) ogs/viz.py → 6 Mermaid diagram generators ogs/reports/ → 7 Markdown report templates via Jinja2 ogs/cli.py → CLI: ogs compile, ogs verify, ogs report ``` **Does not own:** Concrete pattern definitions, analysis notebooks, application logic. ### Client Application **Application layer**. Defines concrete specifications and runs analysis. | Owns | Details | | ----------------------- | ------------------------------------------------------------------------------------------------------- | | **Pattern definitions** | Concrete game compositions (reactive_decision, bilateral_negotiation, multi_party_agreement, etc.) | | **Notebooks** | Interactive marimo notebooks for stakeholders (system_specification, pattern_explorer, msml_components) | | **Analysis scripts** | Verification runners, report generators, comparison tools | | **Configuration** | Which patterns to verify, report output locations, etc. | **Does not own:** The DSL itself, the framework, verification logic, or report templates. ______________________________________________________________________ ## Where IR Fits IR (Intermediate Representation) is the **contract between layers**. 
``` DSL Layer Framework Layer (domain-specific) (generic) ┌─────────────────────┐ ┌──────────────────────┐ │ PatternIR │ │ SystemIR │ │ games: OpenGameIR │ ────────> │ blocks: BlockIR │ │ flows: FlowIR │ project │ wirings: WiringIR │ │ + terminal conds │ │ hierarchy │ │ + action spaces │ │ │ │ + hierarchy (OGS) │ │ │ └─────────────────────┘ └──────────────────────┘ ^ │ │ compile_to_ir() │ GDS generic checks │ │ (G-001..G-006) ┌───────┴─────────────┐ v │ Pattern │ ┌──────────────────────┐ │ game: OpenGame │ │ VerificationReport │ │ inputs │ └──────────────────────┘ │ metadata │ └─────────────────────┘ ``` **GDS owns** the IR concept — `BlockIR`, `WiringIR`, `SystemIR`, `HierarchyNodeIR`. **DSL packages extend** with domain fields: - `OpenGameIR` adds `game_type`, `constraints`, `tags` (not in `BlockIR`) - `FlowIR` adds `flow_type`, `is_corecursive` (not in `WiringIR`) - OGS `HierarchyNodeIR` subclasses GDS's, adding `CORECURSIVE` composition type **DSL packages project back** via `to_system_ir()`: - Maps `OpenGameIR` → `BlockIR` - Maps `FlowIR` → `WiringIR` - Maps `CORECURSIVE` → GDS `TEMPORAL` - Enables GDS generic verification on any DSL's output ______________________________________________________________________ ## Repo Structure After Separation ### Repo structure ``` BlockScience/gds-framework ← public, on PyPI as gds-framework BlockScience/gds-games ← public, on PyPI as gds-games pure library: ogs/, tests/ depends on gds-framework (PyPI) client-app/ ← private, delivered to client depends on gds-games (PyPI) ``` ### Target: GDS Framework repo ``` gds-framework/ ├── gds/ # the package ├── tests/ ├── docs/ ├── pyproject.toml # dependencies: pydantic>=2.10 └── README.md # "pip install gds-framework" ``` ### Target: Open Games Spec repo ``` gds-games/ ├── ogs/ # the package (dsl, ir, verification, viz, reports) ├── tests/ # DSL unit tests + IR tests + verification tests ├── docs/ ├── pyproject.toml # dependencies: gds-framework>=0.1, pydantic, typer, jinja2 └── 
README.md # "pip install gds-domains[games]" ``` No `examples/`, no `notebooks/` — those belong to the client. ### Target: Client application repo ``` client-app/ ├── patterns/ # the specifications │ ├── reactive_decision.py # Pattern definitions using OGS DSL │ ├── bilateral_negotiation.py │ └── multi_party_agreement.py ├── notebooks/ # interactive analysis ├── reports/ # generated output (gitignored) ├── tests/ │ └── test_patterns.py # verify patterns compile + pass checks ├── pyproject.toml # dependencies: gds-games>=0.1 └── README.md ``` ______________________________________________________________________ ## Separation Steps ### Step 1: Publish GDS to PyPI ``` # In gds-framework repo git tag v0.1.0 uv build # wheel already exists in dist/ uv publish # or: twine upload dist/* ``` Prerequisite: make the repo public (or use private PyPI index). ### Step 2: Decouple OGS from submodule ``` # In gds-games repo git submodule deinit gds-framework git rm gds-framework rm -rf .gitmodules ``` Update `pyproject.toml`: ``` # Remove this: [tool.uv.sources] gds-framework = { path = "gds-framework", editable = true } # The dependency line stays: dependencies = ["gds-framework>=0.1", ...] ``` ### Step 3: Move examples + notebooks to client repo ``` # Create client repo mkdir -p client-app/{patterns,notebooks,tests} mv gds-games/examples/*.py client-app/patterns/ mv gds-games/notebooks/*.py client-app/notebooks/ ``` ### Step 4: Publish OGS to PyPI ``` # In gds-games repo git tag v0.1.0 uv build && uv publish ``` ### Step 5: Verify client app works ``` # In client repo uv init && uv add gds-games uv run python patterns/reactive_decision.py # should work ``` ______________________________________________________________________ ## Forkability DSL packages are designed to be forkable by clients: ``` Option A: Use OGS as-is (recommended) pip install gds-domains[games] Option B: Fork and extend 1. Fork gds-games 2. Add custom game types, checks, or reports 3. 
Keep as private package or publish as your-games-spec 4. Still depends on gds-framework (unchanged) Option C: Build a new DSL from scratch 1. pip install gds-framework 2. Subclass AtomicBlock for your domain 3. Build your own compiler, verification, reports 4. GDS generic checks work automatically via SystemIR projection ``` The key guarantee: **GDS is stable infrastructure.** DSL packages can diverge, fork, or be replaced without affecting the core framework or each other. ______________________________________________________________________ ## Package Summary | Package | PyPI | Import | Layer | Visibility | | ------------- | -------------------- | ------------------- | --------- | ----------------------------- | | GDS Framework | `gds-framework` | `gds` | Framework | Public (open source) | | GDS Games | `gds-domains[games]` | `gds_domains.games` | DSL | Public (open source) | | Client App | — | — | Client | Private (delivered to client) | # gds_domains.games.cli CLI entry point for the open-games package. ## `compile_dsl(dsl_file, output=None)` Compile a Python DSL file into IR JSON. 
Source code in `packages/gds-domains/gds_domains/games/cli.py` ``` @app.command(name="compile") def compile_dsl( dsl_file: Annotated[ Path, typer.Argument(help="Path to a Python DSL file defining a 'pattern' variable"), ], output: Annotated[ Path | None, typer.Option("--output", "-o", help="Output IR JSON path") ] = None, ) -> None: """Compile a Python DSL file into IR JSON.""" from gds_domains.games.dsl.compile import compile_to_ir from gds_domains.games.ir.serialization import IRDocument, IRMetadata, save_ir if not dsl_file.exists(): typer.echo(f"Error: DSL file not found: {dsl_file}", err=True) raise typer.Exit(1) # Load the Python file as a module spec = importlib.util.spec_from_file_location("_dsl_input", dsl_file) if spec is None or spec.loader is None: typer.echo(f"Error: could not load {dsl_file}", err=True) raise typer.Exit(1) mod = importlib.util.module_from_spec(spec) spec.loader.exec_module(mod) if not hasattr(mod, "pattern"): typer.echo(f"Error: {dsl_file} must define a 'pattern' variable", err=True) raise typer.Exit(1) ir = compile_to_ir(mod.pattern) doc = IRDocument( patterns=[ir], metadata=IRMetadata(source_canvases=[str(dsl_file)]), ) out_path = output or Path(dsl_file.stem + ".json") save_ir(doc, out_path) typer.echo(f"IR written to {out_path}") typer.echo(f" Pattern: {ir.name}") typer.echo( f" Games: {len(ir.games)}, Flows: {len(ir.flows)}, Inputs: {len(ir.inputs)}" ) ``` ## `verify_cmd(ir_file)` Run verification checks against an IR file. 
Source code in `packages/gds-domains/gds_domains/games/cli.py` ``` @app.command(name="verify") def verify_cmd( ir_file: Annotated[Path, typer.Argument(help="Path to IR JSON file")], ) -> None: """Run verification checks against an IR file.""" from gds_domains.games.ir.serialization import load_ir from gds_domains.games.verification.engine import verify from gds_domains.games.verification.findings import Severity if not ir_file.exists(): typer.echo(f"Error: IR file not found: {ir_file}", err=True) raise typer.Exit(1) doc = load_ir(ir_file) for pattern in doc.patterns: report = verify(pattern) typer.echo(f"\nVerification: {report.pattern_name}") typer.echo(f" Checks: {report.checks_passed}/{report.checks_total} passed") typer.echo( f" Errors: {report.errors}, Warnings: {report.warnings}, " f"Info: {report.info_count}" ) failed = [f for f in report.findings if not f.passed] if failed: typer.echo("\nFindings:") for f in failed: marker = ( "ERROR" if f.severity == Severity.ERROR else f.severity.value.upper() ) typer.echo(f" [{marker}] {f.check_id}: {f.message}") else: typer.echo("\n All checks passed.") if report.errors > 0: raise typer.Exit(1) ``` ## `report(ir_file, output_dir=Path('reports'), report_type='all')` Generate Markdown specification reports from an IR file. Creates a subdirectory for each pattern under the output directory, organizing all reports by pattern name. Source code in `packages/gds-domains/gds_domains/games/cli.py` ``` @app.command() def report( ir_file: Annotated[Path, typer.Argument(help="Path to IR JSON file")], output_dir: Annotated[ Path, typer.Option("--output", "-o", help="Base output directory for reports") ] = Path("reports"), report_type: Annotated[ str, typer.Option( "--type", "-t", help="Report type: all, overview, contracts, schema, " "state_machine, checklist, or verification", ), ] = "all", ) -> None: """Generate Markdown specification reports from an IR file. 
Creates a subdirectory for each pattern under the output directory, organizing all reports by pattern name. """ from gds_domains.games.ir.serialization import load_ir from gds_domains.games.reports.generator import generate_reports if not ir_file.exists(): typer.echo(f"Error: IR file not found: {ir_file}", err=True) raise typer.Exit(1) doc = load_ir(ir_file) types = None if report_type == "all" else [report_type] for pattern in doc.patterns: paths = generate_reports(pattern, output_dir, report_types=types) slug = pattern.name.lower().replace(" ", "_") typer.echo(f"\nReports for {pattern.name} in {output_dir}/{slug}/:") for p in paths: typer.echo(f" {p.name}") ``` # gds_domains.games.dsl.base Bases: `Block` Abstract base for all open games — both atomic components and composites. Every open game has a `name` and a `signature` describing its boundary ports using the (X, Y, R, S) convention. The `signature` keyword is accepted at construction and stored in the GDS `interface` field. Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` class OpenGame(Block): """Abstract base for all open games — both atomic components and composites. Every open game has a ``name`` and a ``signature`` describing its boundary ports using the (X, Y, R, S) convention. The ``signature`` keyword is accepted at construction and stored in the GDS ``interface`` field. 
""" @model_validator(mode="before") @classmethod def _accept_signature_kwarg(cls, data: dict) -> dict: """Accept 'signature' as an alias for 'interface'.""" if isinstance(data, dict) and "signature" in data: data["interface"] = data.pop("signature") return data @property def signature(self) -> Signature: """Game-theory alias for self.interface with x/y/r/s accessors.""" iface = self.interface if isinstance(iface, Signature): return iface return Signature( forward_in=iface.forward_in, forward_out=iface.forward_out, backward_in=iface.backward_in, backward_out=iface.backward_out, ) @abstractmethod def flatten(self) -> list[AtomicGame]: # type: ignore[override] """Return all atomic games in evaluation order.""" def __rshift__(self, other: OpenGame) -> SequentialComposition: # type: ignore[override] """``g1 >> g2`` — sequential composition.""" from gds_domains.games.dsl.composition import SequentialComposition return SequentialComposition( name=f"{self.name} >> {other.name}", first=self, second=other, ) def __or__(self, other: OpenGame) -> ParallelComposition: # type: ignore[override] """``g1 | g2`` — parallel composition.""" from gds_domains.games.dsl.composition import ParallelComposition return ParallelComposition( name=f"{self.name} | {other.name}", left=self, right=other, ) def feedback(self, wiring: list[Flow]) -> FeedbackLoop: # type: ignore[override] """Wrap with contravariant S→R feedback within a single timestep.""" from gds_domains.games.dsl.composition import FeedbackLoop return FeedbackLoop( name=f"{self.name} [feedback]", inner=self, feedback_wiring=wiring, ) def corecursive( self, wiring: list[Flow], exit_condition: str = "" ) -> CorecursiveLoop: """Wrap with covariant Y→X temporal iteration across timesteps.""" from gds_domains.games.dsl.composition import CorecursiveLoop return CorecursiveLoop( name=f"{self.name} [corecursive]", inner=self, corecursive_wiring=wiring, exit_condition=exit_condition, ) ``` ## `signature` Game-theory alias for 
self.interface with x/y/r/s accessors. ## `flatten()` Return all atomic games in evaluation order. Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` @abstractmethod def flatten(self) -> list[AtomicGame]: # type: ignore[override] """Return all atomic games in evaluation order.""" ``` ## `__rshift__(other)` `g1 >> g2` — sequential composition. Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` def __rshift__(self, other: OpenGame) -> SequentialComposition: # type: ignore[override] """``g1 >> g2`` — sequential composition.""" from gds_domains.games.dsl.composition import SequentialComposition return SequentialComposition( name=f"{self.name} >> {other.name}", first=self, second=other, ) ``` ## `__or__(other)` `g1 | g2` — parallel composition. Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` def __or__(self, other: OpenGame) -> ParallelComposition: # type: ignore[override] """``g1 | g2`` — parallel composition.""" from gds_domains.games.dsl.composition import ParallelComposition return ParallelComposition( name=f"{self.name} | {other.name}", left=self, right=other, ) ``` ## `feedback(wiring)` Wrap with contravariant S→R feedback within a single timestep. Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` def feedback(self, wiring: list[Flow]) -> FeedbackLoop: # type: ignore[override] """Wrap with contravariant S→R feedback within a single timestep.""" from gds_domains.games.dsl.composition import FeedbackLoop return FeedbackLoop( name=f"{self.name} [feedback]", inner=self, feedback_wiring=wiring, ) ``` ## `corecursive(wiring, exit_condition='')` Wrap with covariant Y→X temporal iteration across timesteps. 
Source code in `packages/gds-domains/gds_domains/games/dsl/base.py` ``` def corecursive( self, wiring: list[Flow], exit_condition: str = "" ) -> CorecursiveLoop: """Wrap with covariant Y→X temporal iteration across timesteps.""" from gds_domains.games.dsl.composition import CorecursiveLoop return CorecursiveLoop( name=f"{self.name} [corecursive]", inner=self, corecursive_wiring=wiring, exit_condition=exit_condition, ) ``` # gds_domains.games.dsl.compile Compile a DSL Pattern into PatternIR. Source code in `packages/gds-domains/gds_domains/games/dsl/compile.py` ``` def compile_to_ir(pattern: Pattern) -> PatternIR: """Compile a DSL Pattern into PatternIR.""" # 1. Flatten games (GDS stage 1) game_irs = flatten_blocks(pattern.game, _compile_game) # 2. Extract flows (GDS stage 2 with OGS emitter) flows: list[FlowIR] = extract_wirings(pattern.game, _ogs_wiring_emitter) # 3. Map inputs and generate input flows input_irs = [] for inp in pattern.inputs: input_irs.append( InputIR( name=inp.name, input_type=inp.input_type, schema_hint=inp.schema_hint, ) ) if inp.target_game: flows.append( FlowIR( source=inp.name, target=inp.target_game, label=inp.flow_label or inp.name, flow_type=FlowType.OBSERVATION, direction=FlowDirection.COVARIANT, ) ) # 4. Extract composition hierarchy (OGS-specific for CORECURSIVE) counter = [0] hierarchy = _extract_hierarchy(pattern.game, counter) hierarchy = _flatten_sequential_chains(hierarchy) return PatternIR( name=pattern.name, games=game_irs, flows=flows, inputs=input_irs, composition_type=pattern.composition_type, terminal_conditions=pattern.terminal_conditions, action_spaces=pattern.action_spaces, initialization=pattern.initializations, hierarchy=hierarchy, source_canvas=pattern.source, ) ``` # gds_domains.games.dsl.composition Bases: `BaseModel` An explicit wiring between two games. Uses game-theory naming (`source_game`/`target_game`). 
Provides `source_block`/`target_block` properties for GDS interop (GDS composition validators access these attributes). `source_game` and `target_game` accept either a `str` (game name) or an `OpenGame` instance. When an `OpenGame` is provided it is coerced to `game.name` immediately at construction time, so the IR and verifier always receive plain strings. Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` class Flow(BaseModel, frozen=True): """An explicit wiring between two games. Uses game-theory naming (``source_game``/``target_game``). Provides ``source_block``/``target_block`` properties for GDS interop (GDS composition validators access these attributes). ``source_game`` and ``target_game`` accept either a ``str`` (game name) or an ``OpenGame`` instance. When an ``OpenGame`` is provided it is coerced to ``game.name`` immediately at construction time, so the IR and verifier always receive plain strings. """ source_game: str source_port: str target_game: str target_port: str direction: FlowDirection = FlowDirection.COVARIANT @model_validator(mode="before") @classmethod def _resolve_game_refs(cls, data: Any) -> Any: """Coerce OpenGame instances to their name strings.""" if not isinstance(data, dict): return data for field in ("source_game", "target_game"): val = data.get(field) if isinstance(val, OpenGame): data[field] = val.name return data @property def source_block(self) -> str: """GDS-compatible alias for ``source_game``.""" return self.source_game @property def target_block(self) -> str: """GDS-compatible alias for ``target_game``.""" return self.target_game ``` ## `source_block` GDS-compatible alias for `source_game`. ## `target_block` GDS-compatible alias for `target_game`. Bases: `StackComposition`, `OpenGame` `g1 >> g2` — sequential composition where output of g1 feeds input of g2. Extends GDS `StackComposition` so `isinstance(seq, StackComposition)` is True. 
GDS's validator handles token-overlap checking and interface computation. The `OpenGame.signature` property provides x/y/r/s access. ### Mathematical Notation In category theory, written as G1 ; G2 (semicolon denotes composition). The output Y1 of the first game becomes the input X2 of the second:: ``` X1 -> G1 -> Y1 = X2 -> G2 -> Y2 ``` Or as a composite:: ``` X1 -> (G1 ; G2) -> Y2 ``` With contravariant feedback:: ``` R1 <- G1 <- S1 = R2 <- G2 <- S2 ``` ### Signature Transformation - X = X1 + X2 (observations from both games) - Y = Y1 + Y2 (choices from both games) - R = R1 + R2 (utilities to both games) - S = S1 + S2 (coutilities from both games) ### Type Matching Sequential composition requires type compatibility between Y1 and X2. If no explicit wiring is provided, the validator checks that the type tokens of Y1 overlap with X2 (at least one shared token). ### Example A policy game feeding into a decision game:: ``` policy >> decision ``` Where Policy.Y = "Latest Policy" and Decision.X = "Latest Policy" (automatic wiring via type token matching). ### See Also Specification Notes: Sequential composition via type matching Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` class SequentialComposition(StackComposition, OpenGame): """``g1 >> g2`` — sequential composition where output of g1 feeds input of g2. Extends GDS ``StackComposition`` so ``isinstance(seq, StackComposition)`` is True. GDS's validator handles token-overlap checking and interface computation. The ``OpenGame.signature`` property provides x/y/r/s access. Mathematical Notation --------------------- In category theory, written as G1 ; G2 (semicolon denotes composition). 
The output Y1 of the first game becomes the input X2 of the second:: X1 -> G1 -> Y1 = X2 -> G2 -> Y2 Or as a composite:: X1 -> (G1 ; G2) -> Y2 With contravariant feedback:: R1 <- G1 <- S1 = R2 <- G2 <- S2 Signature Transformation ------------------------- - X = X1 + X2 (observations from both games) - Y = Y1 + Y2 (choices from both games) - R = R1 + R2 (utilities to both games) - S = S1 + S2 (coutilities from both games) Type Matching ------------- Sequential composition requires type compatibility between Y1 and X2. If no explicit wiring is provided, the validator checks that the type tokens of Y1 overlap with X2 (at least one shared token). Example ------- A policy game feeding into a decision game:: policy >> decision Where Policy.Y = "Latest Policy" and Decision.X = "Latest Policy" (automatic wiring via type token matching). See Also -------- Specification Notes: Sequential composition via type matching """ first: OpenGame # type: ignore[assignment] # narrower than Block second: OpenGame # type: ignore[assignment] # narrower than Block wiring: list[Flow] = Field(default_factory=list) # type: ignore[assignment] # narrower than list[Wiring] def flatten(self) -> list[AtomicGame]: # type: ignore[override] return self.first.flatten() + self.second.flatten() ``` Bases: `ParallelComposition`, `OpenGame` `g1 | g2` — parallel (tensor) composition: games run independently. Extends GDS `ParallelComposition` so `isinstance(par, GDSParallelComposition)` is True. GDS's validator handles interface computation. ### Mathematical Notation In category theory, written as G1 || G2 (parallel bar denotes tensor product). 
Games run side-by-side with no shared information flows:: ``` X1 -> G1 -> Y1 X2 -> G2 -> Y2 ``` As a composite:: ``` (X1 x X2) -> (G1 || G2) -> (Y1 x Y2) ``` ### Signature Transformation - X = X1 + X2 (concatenated observations) - Y = Y1 + Y2 (concatenated choices) - R = R1 + R2 (concatenated utilities) - S = S1 + S2 (concatenated coutilities) ### Independence No game-to-game flows allowed between left and right components. Each game operates independently with separate observations, choices, utilities, and coutilities. ### Example Two agents acting in parallel:: ``` agent1 | agent2 ``` Each agent has its own context builder, policy, and decision game. Their outputs feed into a shared decision router via separate wires. ### See Also Specification Notes: Parallel composition for multi-agent patterns Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` class ParallelComposition(_GDSParallelComposition, OpenGame): """``g1 | g2`` — parallel (tensor) composition: games run independently. Extends GDS ``ParallelComposition`` so ``isinstance(par, GDSParallelComposition)`` is True. GDS's validator handles interface computation. Mathematical Notation --------------------- In category theory, written as G1 || G2 (parallel bar denotes tensor product). Games run side-by-side with no shared information flows:: X1 -> G1 -> Y1 X2 -> G2 -> Y2 As a composite:: (X1 x X2) -> (G1 || G2) -> (Y1 x Y2) Signature Transformation ------------------------- - X = X1 + X2 (concatenated observations) - Y = Y1 + Y2 (concatenated choices) - R = R1 + R2 (concatenated utilities) - S = S1 + S2 (concatenated coutilities) Independence ------------ No game-to-game flows allowed between left and right components. Each game operates independently with separate observations, choices, utilities, and coutilities. Example ------- Two agents acting in parallel:: agent1 | agent2 Each agent has its own context builder, policy, and decision game. 
Their outputs feed into a shared decision router via separate wires. See Also -------- Specification Notes: Parallel composition for multi-agent patterns """ left: OpenGame # type: ignore[assignment] # narrower than Block right: OpenGame # type: ignore[assignment] # narrower than Block def flatten(self) -> list[AtomicGame]: # type: ignore[override] return self.left.flatten() + self.right.flatten() @classmethod def from_list( cls, games: list[OpenGame], name: str | None = None, ) -> ParallelComposition: """Compose a list of games in parallel. Equivalent to ``games[0] | games[1] | ... | games[N-1]`` but accepts a dynamic list, enabling N-agent patterns without manually enumerating the ``|`` chain. Args: games: At least 2 ``OpenGame`` instances. name: Optional name override for the resulting composition. Defaults to ``" | ".join(g.name for g in games)``. Raises: ValueError: If fewer than 2 games are provided. Example:: agents = [reactive_decision_agent(f"Agent {i}") for i in range(1, 4)] agents_parallel = ParallelComposition.from_list(agents) """ if len(games) < 2: raise ValueError( f"ParallelComposition.from_list() requires at least 2 games, got {len(games)}" ) result: ParallelComposition = games[0] | games[1] # type: ignore[assignment] for g in games[2:]: result = result | g # type: ignore[assignment] if name is not None: result = result.model_copy(update={"name": name}) return result ``` ## `from_list(games, name=None)` Compose a list of games in parallel. Equivalent to `games[0] | games[1] | ... | games[N-1]` but accepts a dynamic list, enabling N-agent patterns without manually enumerating the `|` chain. Parameters: | Name | Type | Description | Default | | ------- | ---------------- | ------------------------------ | ------------------------------------------------------------------- | | `games` | `list[OpenGame]` | At least 2 OpenGame instances. | *required* | | `name` | `str \| None` | Optional name override for the resulting composition. Defaults to `" \| ".join(g.name for g in games)`. | `None` | 
Raises: | Type | Description | | ------------ | ----------------------------------- | | `ValueError` | If fewer than 2 games are provided. | Example:: ``` agents = [reactive_decision_agent(f"Agent {i}") for i in range(1, 4)] agents_parallel = ParallelComposition.from_list(agents) ``` Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` @classmethod def from_list( cls, games: list[OpenGame], name: str | None = None, ) -> ParallelComposition: """Compose a list of games in parallel. Equivalent to ``games[0] | games[1] | ... | games[N-1]`` but accepts a dynamic list, enabling N-agent patterns without manually enumerating the ``|`` chain. Args: games: At least 2 ``OpenGame`` instances. name: Optional name override for the resulting composition. Defaults to ``" | ".join(g.name for g in games)``. Raises: ValueError: If fewer than 2 games are provided. Example:: agents = [reactive_decision_agent(f"Agent {i}") for i in range(1, 4)] agents_parallel = ParallelComposition.from_list(agents) """ if len(games) < 2: raise ValueError( f"ParallelComposition.from_list() requires at least 2 games, got {len(games)}" ) result: ParallelComposition = games[0] | games[1] # type: ignore[assignment] for g in games[2:]: result = result | g # type: ignore[assignment] if name is not None: result = result.model_copy(update={"name": name}) return result ``` Bases: `FeedbackLoop`, `OpenGame` Wraps a game with contravariant S->R feedback within a single timestep. Extends GDS `FeedbackLoop` so `isinstance(fb, GDSFeedbackLoop)` is True. GDS's validator sets the interface. ### Mathematical Notation In category theory, written as feedback(G) or with a feedback loop symbol. Creates backward information flow within a single game execution:: ``` X -> G -> Y ^ | (feedback) v R <- S ``` The coutility S of the inner game feeds back as utility R within the same timestep (before the game "completes"). 
### Information Flow Contravariant (dashed arrows): S -> R - S: Coutility produced by inner game - R: Utility received by inner game - Direction: Right-to-left (backward) This enables learning within a single decision cycle: 1. Inner game produces choice Y 1. Choice generates outcome (external) 1. Outcome feeds back as utility R 1. Inner game produces coutility S (experience) 1. S feeds back to R via feedback_wiring ### Example A reactive decision agent with learning:: ``` agent = (cb >> hist >> pol >> rd >> out).feedback([ Flow("Outcome", "Outcome", "Reactive Decision", "Outcome", CONTRAVARIANT), Flow("Experience", "Experience", "Policy", "Experience", CONTRAVARIANT), Flow( "History Update", "History Update", "History", "History Update", CONTRAVARIANT, ), ]) ``` ### See Also Specification Notes: Feedback within Reactive Decision Pattern Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` class FeedbackLoop(_GDSFeedbackLoop, OpenGame): """Wraps a game with contravariant S->R feedback within a single timestep. Extends GDS ``FeedbackLoop`` so ``isinstance(fb, GDSFeedbackLoop)`` is True. GDS's validator sets the interface. Mathematical Notation --------------------- In category theory, written as feedback(G) or with a feedback loop symbol. Creates backward information flow within a single game execution:: X -> G -> Y ^ | (feedback) v R <- S The coutility S of the inner game feeds back as utility R within the same timestep (before the game "completes"). Information Flow ---------------- Contravariant (dashed arrows): S -> R - S: Coutility produced by inner game - R: Utility received by inner game - Direction: Right-to-left (backward) This enables learning within a single decision cycle: 1. Inner game produces choice Y 2. Choice generates outcome (external) 3. Outcome feeds back as utility R 4. Inner game produces coutility S (experience) 5. 
S feeds back to R via feedback_wiring Example ------- A reactive decision agent with learning:: agent = (cb >> hist >> pol >> rd >> out).feedback([ Flow("Outcome", "Outcome", "Reactive Decision", "Outcome", CONTRAVARIANT), Flow("Experience", "Experience", "Policy", "Experience", CONTRAVARIANT), Flow( "History Update", "History Update", "History", "History Update", CONTRAVARIANT, ), ]) See Also -------- Specification Notes: Feedback within Reactive Decision Pattern """ inner: OpenGame # type: ignore[assignment] # narrower than Block feedback_wiring: list[Flow] # type: ignore[assignment] # narrower than list[Wiring] if TYPE_CHECKING: def __init__( self, *, name: str, inner: OpenGame, feedback_wiring: list[Flow], signature: Signature | None = None, ) -> None: ... def flatten(self) -> list[AtomicGame]: # type: ignore[override] return self.inner.flatten() ``` Bases: `TemporalLoop`, `OpenGame` Wraps a game with temporal corecursion: covariant Y->X across timesteps. Extends GDS `TemporalLoop` so `isinstance(cl, TemporalLoop)` is True. Accepts `corecursive_wiring` as an alias for `temporal_wiring`. GDS's validator enforces COVARIANT-only wiring and sets the interface. ### Mathematical Notation In category theory, written as corec(G) or with a temporal loop symbol. Creates forward information flow across multiple timesteps (iterations):: ``` X -> G -> Y ^ | | | (corecursive) | v ---(loop) ``` The choice Y of one iteration becomes the observation X of the next iteration. This creates a temporal loop that continues until an exit_condition is satisfied. ### Information Flow Covariant (solid arrows): Y -> X - Y: Choice produced by inner game in iteration n - X: Observation received by inner game in iteration n+1 - Direction: Forward across time All corecursive_wiring must be COVARIANT direction. CONTRAVARIANT wiring in corecursive loops is prohibited. ### Temporal Structure 1. Iteration n: Inner game observes X_n, produces Y_n 1. 
Y_n propagates through corecursive_wiring to become X\_{n+1} 1. Iteration n+1: Inner game observes X\_{n+1} (= Y_n), produces Y\_{n+1} 1. Loop continues until exit_condition is True ### Exit Conditions The exit_condition is a string description of when the loop terminates. Common conditions: - "Agreement reached" (bilateral negotiation) - "Consensus threshold met" (multi-party agreement) - "Maximum iterations exceeded" (timeout) - "Both agents reject" (failure state) ### Example Bilateral negotiation with corecursive message passing:: ``` negotiation = feedback_loop.corecursive( wiring=[ Flow("Decision", "Decision", "Agent 2 Context Builder", "Decision"), Flow("Decision", "Decision", "Agent 1 Context Builder", "Decision"), ], exit_condition="Agreement reached or timeout", ) ``` ### See Also Specification Notes: Corecursive loops in Cyclic Interaction Pattern Source code in `packages/gds-domains/gds_domains/games/dsl/composition.py` ``` class CorecursiveLoop(TemporalLoop, OpenGame): """Wraps a game with temporal corecursion: covariant Y->X across timesteps. Extends GDS ``TemporalLoop`` so ``isinstance(cl, TemporalLoop)`` is True. Accepts ``corecursive_wiring`` as an alias for ``temporal_wiring``. GDS's validator enforces COVARIANT-only wiring and sets the interface. Mathematical Notation --------------------- In category theory, written as corec(G) or with a temporal loop symbol. Creates forward information flow across multiple timesteps (iterations):: X -> G -> Y ^ | | | (corecursive) | v ---(loop) The choice Y of one iteration becomes the observation X of the next iteration. This creates a temporal loop that continues until an exit_condition is satisfied. Information Flow ---------------- Covariant (solid arrows): Y -> X - Y: Choice produced by inner game in iteration n - X: Observation received by inner game in iteration n+1 - Direction: Forward across time All corecursive_wiring must be COVARIANT direction. 
CONTRAVARIANT wiring in corecursive loops is prohibited. Temporal Structure ------------------ 1. Iteration n: Inner game observes X_n, produces Y_n 2. Y_n propagates through corecursive_wiring to become X_{n+1} 3. Iteration n+1: Inner game observes X_{n+1} (= Y_n), produces Y_{n+1} 4. Loop continues until exit_condition is True Exit Conditions --------------- The exit_condition is a string description of when the loop terminates. Common conditions: - "Agreement reached" (bilateral negotiation) - "Consensus threshold met" (multi-party agreement) - "Maximum iterations exceeded" (timeout) - "Both agents reject" (failure state) Example ------- Bilateral negotiation with corecursive message passing:: negotiation = feedback_loop.corecursive( wiring=[ Flow("Decision", "Decision", "Agent 2 Context Builder", "Decision"), Flow("Decision", "Decision", "Agent 1 Context Builder", "Decision"), ], exit_condition="Agreement reached or timeout", ) See Also -------- Specification Notes: Corecursive loops in Cyclic Interaction Pattern """ inner: OpenGame # type: ignore[assignment] # narrower than Block temporal_wiring: list[Flow] # type: ignore[assignment] # narrower than list[Wiring] if TYPE_CHECKING: def __init__( self, *, name: str, inner: OpenGame, corecursive_wiring: list[Flow] | None = None, temporal_wiring: list[Flow] | None = None, exit_condition: str = "", ) -> None: ... @model_validator(mode="before") @classmethod def _map_corecursive_to_temporal(cls, data: dict) -> dict: """Accept corecursive_wiring as alias for temporal_wiring.""" if isinstance(data, dict) and "corecursive_wiring" in data: data["temporal_wiring"] = data.pop("corecursive_wiring") return data @property def corecursive_wiring(self) -> list[Flow]: """Game-theory alias for ``temporal_wiring``.""" return self.temporal_wiring def flatten(self) -> list[AtomicGame]: # type: ignore[override] return self.inner.flatten() ``` ## `corecursive_wiring` Game-theory alias for `temporal_wiring`. 
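The signature rules that `SequentialComposition` and `ParallelComposition` document above — port concatenation plus the Y1/X2 type-token overlap check for sequential wiring — can be sketched with plain dataclasses. This is an illustrative stand-in only, not the real `gds_domains` `Signature`/`OpenGame` pydantic models; the `Sig`, `sequential`, and `parallel` names are hypothetical:

```python
# Illustrative sketch only: toy stand-ins for the (X, Y, R, S) signature
# algebra described above, NOT the real gds_domains pydantic models.
from dataclasses import dataclass


@dataclass(frozen=True)
class Sig:
    x: tuple[str, ...] = ()  # observations (X)
    y: tuple[str, ...] = ()  # choices (Y)
    r: tuple[str, ...] = ()  # utilities (R)
    s: tuple[str, ...] = ()  # coutilities (S)


def sequential(g1: Sig, g2: Sig) -> Sig:
    """g1 >> g2: requires at least one shared type token between Y1 and X2."""
    if not set(g1.y) & set(g2.x):
        raise TypeError("no type-token overlap between Y1 and X2")
    return Sig(g1.x + g2.x, g1.y + g2.y, g1.r + g2.r, g1.s + g2.s)


def parallel(g1: Sig, g2: Sig) -> Sig:
    """g1 | g2: tensor product -- ports are concatenated, no shared flows."""
    return Sig(g1.x + g2.x, g1.y + g2.y, g1.r + g2.r, g1.s + g2.s)


policy = Sig(x=("Observation, Context",), y=("Latest Policy",))
decision = Sig(x=("Latest Policy",), y=("Decision",), r=("Outcome",), s=("Experience",))

combined = sequential(policy, decision)
print(combined.y)  # ('Latest Policy', 'Decision')
print(combined.s)  # ('Experience',)
```

Note that under these rules sequential and parallel composition produce the same combined signature; they differ only in the wiring constraint, which is why the validator's token-overlap check lives in `sequential` alone.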
# gds_domains.games.dsl.games Bases: `AtomicGame` A strategic decision game -- a player who chooses an action. Has all four port categories: X, Y, R, S. ### Mathematical Definition A decision game has no transmission value s in S output--it simply outputs a choice y in Y based upon x in X and a context k, resulting in an outcome r in R:: ``` X -> G -> Y R <- G ``` Where: - X: Set of input observations (may be empty, singleton, or multi-element) - Y: Set of output choices (may include "NO-OP", "NO ACTION", or other "non-choice" choices) - R: Set of input resolved outcomes/utilities (may be numbers, vectors, etc.) - S = {}: No coutility transmission (decision game endpoint) The decision game G contains the logic for selecting a choice from Y (which may be the empty set). The logic defines Sigma_G (strategies), P_G (play function), and B_G (best response). ### Diagram :: ``` X["Observations X"] --> D{"Decision G"} D --> Y["Choices Y"] D ~~~ R["Utility R"] R -.-> D R ~~~ D ``` ### Example A "Reactive Decision" game observes context and policy, then decides on an action that generates an outcome:: ``` DecisionGame( name="Reactive Decision", signature=Signature( x=(port("Observation, Context"), port("Latest Policy")), y=(port("Decision"),), r=(port("Outcome"),), s=(port("Experience"),), # Coutility for learning ), ) ``` ### See Also Specification Notes: "Decision Open Game" section Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class DecisionGame(AtomicGame): """A strategic decision game -- a player who chooses an action. Has all four port categories: X, Y, R, S. Mathematical Definition ----------------------- A decision game has no transmission value s in S output--it simply outputs a choice y in Y based upon x in X and a context k, resulting in an outcome r in R:: X -> G -> Y R <- G 
Where: - X: Set of input observations (may be empty, singleton, or multi-element) - Y: Set of output choices (may include "NO-OP", "NO ACTION", or other "non-choice" choices) - R: Set of input resolved outcomes/utilities (may be numbers, vectors, etc.) - S = {}: No coutility transmission (decision game endpoint) The decision game G contains the logic for selecting a choice from Y (which may be the empty set). The logic defines Sigma_G (strategies), P_G (play function), and B_G (best response). Diagram ------- :: X["Observations X"] --> D{"Decision G"} D --> Y["Choices Y"] D ~~~ R["Utility R"] R -.-> D R ~~~ D Example ------- A "Reactive Decision" game observes context and policy, then decides on an action that generates an outcome:: DecisionGame( name="Reactive Decision", signature=Signature( x=(port("Observation, Context"), port("Latest Policy")), y=(port("Decision"),), r=(port("Outcome"),), s=(port("Experience"),), # Coutility for learning ), ) See Also -------- Specification Notes: "Decision Open Game" section """ game_type: GameType = GameType.DECISION if TYPE_CHECKING: def __init__( self, *, name: str, signature: Signature | None = None, logic: str = "", color_code: int = 1, tags: dict[str, str] | None = None, ) -> None: ... ``` Bases: `AtomicGame` A pure deterministic function with only forward ports: X -> Y. Also known as a "lifting" in category theory. Has no utility r in R or transmission value s in S--it simply associates to an x in X a choice y in Y. ### Mathematical Definition A covariant function (lifting) f: X -> Y:: ``` X -> f -> Y ``` Where: - X: Domain (input observations) - Y: Codomain (output choices) - R = {}: No utility input - S = {}: No coutility output This is the simplest atomic game type--pure functional transformation without feedback or strategic choice. 
### Diagram :: ``` X["X"] --> D[/"Function f"\] D --> Y["Y"] ``` ### Example A "Context Builder" transforms trigger data into usable context:: ``` CovariantFunction( name="Context Builder", signature=Signature( x=(port("Event"), port("Constraint")), y=(port("Observation, Context"),), ), ) ``` ### Validation Must have empty R and S ports (enforced by validator). ### See Also Specification Notes: "Function Open Game" section Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class CovariantFunction(AtomicGame): """A pure deterministic function with only forward ports: X -> Y. Also known as a "lifting" in category theory. Has no utility r in R or transmission value s in S--it simply associates to an x in X a choice y in Y. Mathematical Definition ----------------------- A covariant function (lifting) f: X -> Y:: X -> f -> Y Where: - X: Domain (input observations) - Y: Codomain (output choices) - R = {}: No utility input - S = {}: No coutility output This is the simplest atomic game type--pure functional transformation without feedback or strategic choice. Diagram ------- :: X["X"] --> D[/"Function f"\\] D --> Y["Y"] Example ------- A "Context Builder" transforms trigger data into usable context:: CovariantFunction( name="Context Builder", signature=Signature( x=(port("Event"), port("Constraint")), y=(port("Observation, Context"),), ), ) Validation ---------- Must have empty R and S ports (enforced by validator). See Also -------- Specification Notes: "Function Open Game" section """ game_type: GameType = GameType.FUNCTION_COVARIANT if TYPE_CHECKING: def __init__( self, *, name: str, signature: Signature | None = None, logic: str = "", color_code: int = 1, tags: dict[str, str] | None = None, ) -> None: ... 
@model_validator(mode="after") def _no_contravariant(self) -> Self: if self.signature.r or self.signature.s: raise DSLTypeError( f"CovariantFunction {self.name!r} cannot have contravariant ports " f"(R={self.signature.r}, S={self.signature.s})" ) return self ``` Bases: `AtomicGame` A pure backward function with only contravariant ports: R -> S. The "dual" or contravariant lifting f\*: R -> S. Associates a transmission value s in S with a utility r in R, with no observations x in X or choices y in Y. ### Mathematical Definition A contravariant function (dual lifting) f\*: R -> S:: ``` R <- f* <- S ``` Or equivalently, reading right-to-left:: ``` S -> f* -> R ``` Where: - R: Input utility/outcome - S: Output coutility/valuation - X = {}: No observation input - Y = {}: No choice output The f\* notation indicates this relationship may be thought of as a 'dual' or 'inverse' of a covariant lifting f: S -> R, providing sufficient structure to create the standard open game via the tensor product of one covariant and one contravariant lifting. ### Diagram :: ``` R["R"] -.-> D[/"Function f*"\] D -.-> S["S"] ``` ### Validation Must have empty X and Y ports (enforced by validator). ### See Also Specification Notes: "Function Open Game" section (contravariant subsection) Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class ContravariantFunction(AtomicGame): """A pure backward function with only contravariant ports: R -> S. The "dual" or contravariant lifting f*: R -> S. Associates a transmission value s in S with a utility r in R, with no observations x in X or choices y in Y. 
Mathematical Definition ----------------------- A contravariant function (dual lifting) f*: R -> S:: R <- f* <- S Or equivalently, reading right-to-left:: S -> f* -> R Where: - R: Input utility/outcome - S: Output coutility/valuation - X = {}: No observation input - Y = {}: No choice output The f* notation indicates this relationship may be thought of as a 'dual' or 'inverse' of a covariant lifting f: S -> R, providing sufficient structure to create the standard open game via the tensor product of one covariant and one contravariant lifting. Diagram ------- :: R["R"] -.-> D[/"Function f*"\\] D -.-> S["S"] Validation ---------- Must have empty X and Y ports (enforced by validator). See Also -------- Specification Notes: "Function Open Game" section (contravariant subsection) """ game_type: GameType = GameType.FUNCTION_CONTRAVARIANT @model_validator(mode="after") def _no_covariant(self) -> Self: if self.signature.x or self.signature.y: raise DSLTypeError( f"ContravariantFunction {self.name!r} cannot have covariant ports " f"(X={self.signature.x}, Y={self.signature.y})" ) return self ``` Bases: `AtomicGame` Discards an input channel: X -> {}. Y must be empty. A special function game where x in X is discarded, returning Y := {}. ### Mathematical Definition A deletion game discards observations without producing choices:: ``` X -> f -> {} ``` Where Y = {} (empty set). This represents intentional information loss or filtering--receiving input but producing no output. ### Diagram (Full) :: ``` X["X"] --> D[/"Function f"\] --> Y["{}"] ``` ### Diagram (Shorthand) :: ``` X["X"] --> D@{shape: dbl-circ, label: " "} ``` The double-circle shorthand is commonly used in string diagram notation to represent deletion (discarding) of information. ### Validation Must have empty Y (enforced by validator). 
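The port constraint can be sketched stand-alone. The `Sig` dataclass and `validate_deletion` function below are illustrative stand-ins for the DSL's `Signature` model and the pydantic validator (which raises `DSLTypeError`), not the real API:

```python
from dataclasses import dataclass

# Illustrative stand-ins for Signature and the DeletionGame validator;
# the real DSL uses pydantic models and raises DSLTypeError.
@dataclass(frozen=True)
class Sig:
    x: tuple[str, ...] = ()
    y: tuple[str, ...] = ()
    r: tuple[str, ...] = ()
    s: tuple[str, ...] = ()

def validate_deletion(name: str, sig: Sig) -> Sig:
    # Mirrors _y_must_be_empty: a deletion game may consume X but never emit Y.
    if sig.y:
        raise TypeError(f"DeletionGame {name!r} must have empty Y (got {sig.y})")
    return sig

validate_deletion("Drop Telemetry", Sig(x=("Telemetry",)))  # ok: X -> {}
```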
### See Also Specification Notes: "Deletion Open Game" section Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class DeletionGame(AtomicGame): """Discards an input channel: X -> {}. Y must be empty. A special function game where x in X is discarded, returning Y := {}. Mathematical Definition ----------------------- A deletion game discards observations without producing choices:: X -> f -> {} Where Y = {} (empty set). This represents intentional information loss or filtering--receiving input but producing no output. Diagram (Full) -------------- :: X["X"] --> D[/"Function f"\\] --> Y["{}"] Diagram (Shorthand) ------------------- :: X["X"] --> D@{shape: dbl-circ, label: " "} The double-circle shorthand is commonly used in string diagram notation to represent deletion (discarding) of information. Validation ---------- Must have empty Y (enforced by validator). See Also -------- Specification Notes: "Deletion Open Game" section """ game_type: GameType = GameType.DELETION @model_validator(mode="after") def _y_must_be_empty(self) -> Self: if self.signature.y: raise DSLTypeError( f"DeletionGame {self.name!r} must have empty Y (got {self.signature.y})" ) return self ``` Bases: `AtomicGame` Copies an input to multiple outputs: X -> X x X. Y must have 2+ ports. A special function game where x in X is copied, returning Y := X x X. ### Mathematical Definition A duplication game copies observations to multiple outputs:: ``` X -> f -> X x X ``` Where Y = X x X (cartesian product). This represents information broadcasting--receiving input and sending copies to multiple downstream consumers. Has a natural contravariant counterpart when r in R is copied. ### Diagram (Full) :: ``` X["X"] --> D[/"Function f"\] --> Y["X x X"] ``` ### Diagram (Shorthand) :: ``` X0["X"] --> X1["X"] X0 --> X2["X"] ``` The fork shorthand is commonly used in string diagram notation to represent duplication (copying) of information. 
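Operationally, duplication is just fan-out. A minimal sketch in plain Python (not the DSL API) of copying one input to n >= 2 output ports, with the 2-port minimum mirroring the validator:

```python
# Duplication semantics as a plain function: one input, n identical outputs
# (Y = X x ... x X). The 2-port minimum mirrors the DuplicationGame validator.
def duplicate(x, n: int = 2) -> tuple:
    if n < 2:
        raise ValueError(f"DuplicationGame must have 2+ Y ports (got {n})")
    return (x,) * n

assert duplicate("price") == ("price", "price")  # broadcast to 2 consumers
assert duplicate("price", 3) == ("price",) * 3   # or any wider fan-out
```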
### Contravariant Counterpart When r in R is copied:: ``` R["R"] -.-> D[/"Function f*"\] -.-> Y["R x R"] ``` Or in shorthand:: ``` R0["R"] -.-> R1["R"] R0 -.-> R2["R"] ``` ### Validation Must have 2+ Y ports (enforced by validator). ### See Also Specification Notes: "Duplication Open Game" section Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class DuplicationGame(AtomicGame): """Copies an input to multiple outputs: X -> X x X. Y must have 2+ ports. A special function game where x in X is copied, returning Y := X x X. Mathematical Definition ----------------------- A duplication game copies observations to multiple outputs:: X -> f -> X x X Where Y = X x X (cartesian product). This represents information broadcasting--receiving input and sending copies to multiple downstream consumers. Has a natural contravariant counterpart when r in R is copied. Diagram (Full) -------------- :: X["X"] --> D[/"Function f"\\] --> Y["X x X"] Diagram (Shorthand) ------------------- :: X0["X"] --> X1["X"] X0 --> X2["X"] The fork shorthand is commonly used in string diagram notation to represent duplication (copying) of information. Contravariant Counterpart ------------------------- When r in R is copied:: R["R"] -.-> D[/"Function f*"\\] -.-> Y["R x R"] Or in shorthand:: R0["R"] -.-> R1["R"] R0 -.-> R2["R"] Validation ---------- Must have 2+ Y ports (enforced by validator). See Also -------- Specification Notes: "Duplication Open Game" section """ game_type: GameType = GameType.DUPLICATION @model_validator(mode="after") def _y_must_have_multiple_ports(self) -> Self: if len(self.signature.y) < 2: raise DSLTypeError( f"DuplicationGame {self.name!r} must have 2+ Y ports " f"(got {len(self.signature.y)})" ) return self ``` Bases: `AtomicGame` Future-conditioned observation: X -> {}, with S = X. Y and R must be empty. The counit open game is mainly used for technical purposes--to specify future data that may be important to a present decision-maker. 
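The data flow can be sketched as a plain function before the formal definition: the forward pass produces no choice, and the observation itself is re-emitted on the contravariant side. The names here are illustrative, not the DSL API:

```python
# Counit semantics: Y = {} (no choice), S = X (the observation is re-emitted
# as coutility so upstream games can condition on it). Illustrative sketch.
def counit(x):
    y = ()   # no forward output
    s = x    # coutility equals the observation
    return y, s

y, s = counit("future price signal")
assert y == () and s == "future price signal"
```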
### Mathematical Definition A counit game with special structure S = X and R = Y = {}:: ``` X -> ? ? -.-> X ``` Where: - X: Input observations (also output as coutility) - Y = {}: No choices - R = {}: No utility input - S = X: Coutility equals observations This defines how a future piece of observational data x in X is able to be conditioned upon in the present time. The observation flows through but also becomes available as coutility for upstream games to access. ### Diagram (Full) :: ``` S["Coutility X"] ~~~ G G -.-> S G ~~~ S X["Observations X"] --> G["Open Game ?"] ``` ### Diagram (Shorthand) :: ``` S["Coutility X"] ~~~ X["Observations X"] X -.-> S X ~~~ S ``` ### Use Cases - Propagating context from downstream to upstream games - Making future observations available for present decisions - Creating feedback loops that carry state information ### Validation Must have empty Y and R (enforced by validator). ### See Also Specification Notes: "Counit Open Game" section Source code in `packages/gds-domains/gds_domains/games/dsl/games.py` ``` class CounitGame(AtomicGame): """Future-conditioned observation: X -> {}, with S = X. Y and R must be empty. The counit open game is mainly used for technical purposes--to specify future data that may be important to a present decision-maker. Mathematical Definition ----------------------- A counit game with special structure S = X and R = Y = {}:: X -> ? ? -.-> X Where: - X: Input observations (also output as coutility) - Y = {}: No choices - R = {}: No utility input - S = X: Coutility equals observations This defines how a future piece of observational data x in X is able to be conditioned upon in the present time. The observation flows through but also becomes available as coutility for upstream games to access. 
Diagram (Full) -------------- :: S["Coutility X"] ~~~ G G -.-> S G ~~~ S X["Observations X"] --> G["Open Game ?"] Diagram (Shorthand) ------------------- :: S["Coutility X"] ~~~ X["Observations X"] X -.-> S X ~~~ S Use Cases --------- - Propagating context from downstream to upstream games - Making future observations available for present decisions - Creating feedback loops that carry state information Validation ---------- Must have empty Y and R (enforced by validator). See Also -------- Specification Notes: "Counit Open Game" section """ game_type: GameType = GameType.COUNIT @model_validator(mode="after") def _validate_counit(self) -> Self: if self.signature.y: raise DSLTypeError( f"CounitGame {self.name!r} must have empty Y (got {self.signature.y})" ) if self.signature.r: raise DSLTypeError( f"CounitGame {self.name!r} must have empty R (got {self.signature.r})" ) return self ``` # gds_domains.games.dsl.library Reusable game factories for common patterns. Reusable component factories for the Reactive Decision Pattern. Each factory returns a pre-configured atomic game with the correct signature, encoding the shared structure found across negotiation and coalition patterns. The Reactive Decision Pattern implements a decision-with-learning cycle triggered by environmental events. The order of operations is: 1. Trigger Detection — sensors detect events (network, timer, market signals) 1. Context Building — event processor transforms triggers + resources into an observation x in X and feasible decision set Y' ⊆ Y 1. Reactive Decision — agent selects action y in Y' given observation, policy, and continuation context k: Y → R 1. Outcome Evaluation — action evaluated against external world → utility r in R 1. 
Learning — experience (coutility s in S) fed back to update policy and history

State evolution per step:

- g_0: X_T × X_C × P → U -- action decider (context + policy → action)
- g_1: H × U × R × P → P -- policy update (history + outcome → new policy)
- g_2: P × U × R × H → H -- history update (append (policy, action, outcome))
- g_3: X_T × X_C × U → X_T × X_C -- trigger/resource update

## `context_builder(name='Context Builder', tags=None)`

Context Builder — aggregates environmental inputs into a unified observation.

Observes trigger events from the outside world (x_T) and available resources/constraints (x_C), then builds a unified observation x in X together with the feasible decision set Y' = U(x_T, x_C) ⊆ Y. Also constructs the continuation context k: Y → R that the Reactive Decision game uses to evaluate candidate actions.

This is a covariant lifting — a pure function with no utility or coutility.

Source code in `packages/gds-domains/gds_domains/games/dsl/library.py`

```
def context_builder(
    name: str = "Context Builder", tags: dict[str, str] | None = None
) -> CovariantFunction:
    """Context Builder — aggregates environmental inputs into a unified observation.

    Observes trigger events from the outside world (x_T) and available
    resources/constraints (x_C), then builds a unified observation x in X
    together with the feasible decision set Y' = U(x_T, x_C) ⊆ Y. Also
    constructs the continuation context k: Y → R that the Reactive Decision
    game uses to evaluate candidate actions.

    This is a covariant lifting — a pure function with no utility or coutility.
    """
    game = CovariantFunction(
        name=name,
        signature=Signature(
            x=(port("Event"), port("Constraint"), port("Primitive")),
            y=(port("Observation, Context"),),
        ),
        logic=(
            "Aggregate trigger events (x_T) and resource constraints (x_C) "
            "into observation x and feasible decision set Y' = U(x_T, x_C) ⊆ Y. "
            "Construct continuation context k: Y → R for downstream decision."
        ),
        color_code=1,
    )
    if tags:
        for key, value in tags.items():
            game = game.with_tag(key, value)
    return game
```

## `history(name='History', tags=None)`

History — accumulates past observations and decisions over time.

Maintains an append-only record of (policy, action, outcome) tuples. Initialized from h_0 and updated via the contravariant History Update port: h' = g_2(p, u, r, h) := (h, (p, u, r)). The latest history is forwarded to the Policy game so it can condition its strategy selection on past experience.

Source code in `packages/gds-domains/gds_domains/games/dsl/library.py`

```
def history(name: str = "History", tags: dict[str, str] | None = None) -> DecisionGame:
    """History — accumulates past observations and decisions over time.

    Maintains an append-only record of (policy, action, outcome) tuples.
    Initialized from h_0 and updated via the contravariant History Update
    port: h' = g_2(p, u, r, h) := (h, (p, u, r)). The latest history is
    forwarded to the Policy game so it can condition its strategy selection
    on past experience.
    """
    game = DecisionGame(
        name=name,
        signature=Signature(
            x=(port("Primitive"),),
            y=(port("Latest History"),),
            r=(port("History Update"),),
        ),
        logic=(
            "Append-only record of (policy, action, outcome) tuples. "
            "Initialized from h_0 in H, updated each round via "
            "h' = g_2(p, u, r, h) := (h, (p, u, r)). "
            "Forwards latest history to Policy for strategy conditioning."
        ),
        color_code=1,
    )
    if tags:
        for key, value in tags.items():
            game = game.with_tag(key, value)
    return game
```

## `policy(name='Policy', tags=None)`

Policy — maps history to a strategy (policy function `p ∈ P`).

Selects a strategy σ: X → Y from the policy space P, conditioned on the accumulated history. Receives experience feedback (coutility) from the Reactive Decision game and uses it to update the policy: p' = g_1(h, u, r; p). Emits a History Update (coutility s) back to the History game so the record includes the latest round. Initialized from p_0 (e.g., uniform over actions).
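The factory fixes only the ports and the logic text; the update map g_1 itself is left abstract. As one concrete illustration (our choice of rule, not anything prescribed by the library), a policy kept as per-action running average rewards with greedy selection:

```python
# One possible instantiation of p' = g_1(h, u, r; p): track a running average
# reward per action and select greedily. Illustrative only; the DSL does not
# fix a particular update rule.
Policy = dict[str, tuple[int, float]]  # action -> (count, average reward)

def g1_update(policy: Policy, action: str, reward: float) -> Policy:
    count, avg = policy.get(action, (0, 0.0))
    count += 1
    avg += (reward - avg) / count              # incremental mean
    return {**policy, action: (count, avg)}

def select(policy: Policy, feasible: list[str]) -> str:
    # sigma: X -> Y restricted to the feasible set Y'; unseen actions score
    # 0.0, loosely playing the role of the uniform initialization p_0.
    return max(feasible, key=lambda a: policy.get(a, (0, 0.0))[1])

p: Policy = {}                                 # p_0
p = g1_update(p, "accept", 1.0)
p = g1_update(p, "reject", -0.5)
assert select(p, ["accept", "reject"]) == "accept"
```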
Source code in `packages/gds-domains/gds_domains/games/dsl/library.py` ``` def policy(name: str = "Policy", tags: dict[str, str] | None = None) -> DecisionGame: """Policy — maps history to a strategy (policy function ``p ∈ P``). Selects a strategy σ: X → Y from the policy space P, conditioned on the accumulated history. Receives experience feedback (coutility) from the Reactive Decision game and uses it to update the policy: p' = g_1(h, u, r; p). Emits a History Update (coutility s) back to the History game so the record includes the latest round. Initialized from p_0 (e.g., uniform over actions). """ game = DecisionGame( name=name, signature=Signature( x=(port("Latest History"), port("Primitive")), y=(port("Latest Policy"),), r=(port("Experience"),), s=(port("History Update"),), ), logic=( "Select strategy σ: X → Y from policy space P given " "history. Update policy via p' = g_1(h, u, r; p) using " "experience feedback. Emit history update s to record " "the latest (policy, action, outcome) tuple." ), color_code=1, ) if tags: for key, value in tags.items(): game = game.with_tag(key, value) return game ``` ## `outcome(name='Outcome', tags=None)` Outcome — evaluates decisions against the external world to compute payoff. Takes the agent's chosen action u and the external world state ¬u (the counterfactual — what would have happened under alternative actions) and computes the realized utility r = Q(u, ¬u). This outcome is fed back contravariantly to the Reactive Decision game as its resolved payoff, closing the decision-evaluation loop. Source code in `packages/gds-domains/gds_domains/games/dsl/library.py` ``` def outcome(name: str = "Outcome", tags: dict[str, str] | None = None) -> DecisionGame: """Outcome — evaluates decisions against the external world to compute payoff. Takes the agent's chosen action u and the external world state ¬u (the counterfactual — what would have happened under alternative actions) and computes the realized utility r = Q(u, ¬u). 
This outcome is fed back contravariantly to the Reactive Decision game as its resolved payoff, closing the decision-evaluation loop. """ game = DecisionGame( name=name, signature=Signature( x=(port("Decision"), port("Primitive")), s=(port("Outcome"),), ), logic=( "Evaluate action u against external world state ¬u to compute " "realized utility r = Q(u, ¬u). Fed back contravariantly as the " "resolved outcome for the decision game." ), color_code=2, ) if tags: for key, value in tags.items(): game = game.with_tag(key, value) return game ``` ## `reactive_decision(name='Reactive Decision', tags=None)` Reactive Decision — the core decision game where the agent chooses an action. The central decision point. Observes the context (x, Y', k) built by the Context Builder and the current policy p from the Policy game. Selects an action y = σ(x) from the feasible set Y' ⊆ Y according to strategy σ: X → Y parameterized by policy p. Receives resolved outcome r in R (utility) from the Outcome game. Transmits experience s in S (coutility) back to the Policy game for learning. The best-response function B(x, k) identifies which strategies are rational given the continuation. Source code in `packages/gds-domains/gds_domains/games/dsl/library.py` ``` def reactive_decision( name: str = "Reactive Decision", tags: dict[str, str] | None = None ) -> DecisionGame: """Reactive Decision — the core decision game where the agent chooses an action. The central decision point. Observes the context (x, Y', k) built by the Context Builder and the current policy p from the Policy game. Selects an action y = σ(x) from the feasible set Y' ⊆ Y according to strategy σ: X → Y parameterized by policy p. Receives resolved outcome r in R (utility) from the Outcome game. Transmits experience s in S (coutility) back to the Policy game for learning. The best-response function B(x, k) identifies which strategies are rational given the continuation. 
""" game = DecisionGame( name=name, signature=Signature( x=(port("Observation, Context"), port("Latest Policy")), y=(port("Decision"),), r=(port("Outcome"),), s=(port("Experience"),), ), logic=( "Select action y = σ(x) from feasible set Y' ⊆ Y " "using policy p. Receive resolved outcome r (utility) " "from evaluation. Transmit experience s (coutility) " "for policy learning. Best-response B(x, k) identifies " "rational strategies given continuation k: Y → R." ), color_code=1, ) if tags: for key, value in tags.items(): game = game.with_tag(key, value) return game ``` ## `reactive_decision_agent(name='Reactive Decision Agent', include_outcome=True, include_feedback=True)` ``` reactive_decision_agent( name: str = ..., include_outcome: Literal[True] = ..., include_feedback: Literal[True] = ..., ) -> FeedbackLoop ``` ``` reactive_decision_agent( name: str = ..., include_outcome: Literal[False] = ..., include_feedback: Literal[True] = ..., ) -> FeedbackLoop ``` ``` reactive_decision_agent( name: str = ..., include_outcome: bool = ..., include_feedback: Literal[False] = ..., ) -> SequentialComposition ``` Reactive decision agent — configurable single-agent decision loop. Builds a Reactive Decision Pattern chain from atomic games. 
The two boolean flags control which components are included:

| `include_outcome` | `include_feedback` | Returns |
| ----------------- | ------------------ | ------- |
| `True` (default) | `True` (default) | `FeedbackLoop` — full 5-game loop (CB→Hist→Pol→RD→Out + 3 feedback flows) |
| `False` | `True` | `FeedbackLoop` — 4-game loop without Outcome game |
| `True` | `False` | `SequentialComposition` — 5-game open chain, no feedback wrap |
| `False` | `False` | `SequentialComposition` — 4-game open-loop chain (CB→Hist→Pol→RD), suited for multi-agent patterns where Outcome and feedback are wired at pattern level |

Parameters:

| Name | Type | Description | Default |
| ------------------ | ------ | ----------- | ------- |
| `name` | `str` | Base name for the agent; used as the composition/loop name and as the domain tag on each atomic game (`{"domain": name}`). | `'Reactive Decision Agent'` |
| `include_outcome` | `bool` | When True (default), appends the Outcome game and wires Reactive Decision → Outcome. When False, the chain stops at Reactive Decision — useful in multi-agent patterns where a shared Decision Router owns the Outcome game. | `True` |
| `include_feedback` | `bool` | When True (default), wraps the sequential chain in a FeedbackLoop with contravariant flows for outcome, experience, and history-update feedback. When False, returns the raw SequentialComposition chain. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `FeedbackLoop \| SequentialComposition` | `FeedbackLoop` when `include_feedback=True`; `SequentialComposition` when `include_feedback=False`. |

Source code in `packages/gds-domains/gds_domains/games/dsl/library.py`

```
def reactive_decision_agent(
    name: str = "Reactive Decision Agent",
    include_outcome: bool = True,
    include_feedback: bool = True,
) -> FeedbackLoop | SequentialComposition:
    """Reactive decision agent — configurable single-agent decision loop.

    Builds a Reactive Decision Pattern chain from atomic games.

    The two boolean flags control which components are included:

    +------------------+------------------+------------------------------+
    | ``include_outcome`` | ``include_feedback`` | Returns                |
    +==================+==================+==============================+
    | ``True`` (default) | ``True`` (default) | ``FeedbackLoop`` — full  |
    |                  |                  | 5-game loop (CB→Hist→Pol     |
    |                  |                  | →RD→Out + 3 feedback flows)  |
    +------------------+------------------+------------------------------+
    | ``False``        | ``True``         | ``FeedbackLoop`` — 4-game    |
    |                  |                  | loop without Outcome game    |
    +------------------+------------------+------------------------------+
    | ``True``         | ``False``        | ``SequentialComposition``    |
    |                  |                  | — 5-game open chain,         |
    |                  |                  | no feedback wrap             |
    +------------------+------------------+------------------------------+
    | ``False``        | ``False``        | ``SequentialComposition``    |
    |                  |                  | — 4-game open-loop chain     |
    |                  |                  | (CB→Hist→Pol→RD), suited     |
    |                  |                  | for multi-agent patterns     |
    |                  |                  | where Outcome and feedback   |
    |                  |                  | are wired at pattern level   |
    +------------------+------------------+------------------------------+

    Args:
        name: Base name for the agent; used as the composition/loop name and as the domain tag on
each atomic game (``{"domain": name}``). include_outcome: When ``True`` (default), appends the ``Outcome`` game and wires ``Reactive Decision → Outcome``. When ``False``, the chain stops at ``Reactive Decision`` — useful in multi-agent patterns where a shared Decision Router owns the Outcome game. include_feedback: When ``True`` (default), wraps the sequential chain in a ``FeedbackLoop`` with contravariant flows for outcome, experience, and history-update feedback. When ``False``, returns the raw ``SequentialComposition`` chain. Returns: ``FeedbackLoop`` when ``include_feedback=True``, ``SequentialComposition`` when ``include_feedback=False``. """ tags = {"domain": name} cb = context_builder(tags=tags) hist = history(tags=tags) pol = policy(tags=tags) rd = reactive_decision(tags=tags) # innermost: Policy >> Reactive Decision pol_rd = SequentialComposition( name=f"{name} Policy+RD", first=pol, second=rd, wiring=[ Flow( source_game=pol, source_port="Latest Policy", target_game=rd, target_port="Latest Policy", ), ], ) # History >> (Policy >> RD) hist_pol_rd = SequentialComposition( name=f"{name} Core", first=hist, second=pol_rd, wiring=[ Flow( source_game=hist, source_port="Latest History", target_game=pol, target_port="Latest History", ), ], ) if include_outcome: out = outcome(tags=tags) # (Policy >> RD) >> Outcome — reuse pol_rd as first rd_out = SequentialComposition( name=f"{name} RD+Outcome", first=rd, second=out, wiring=[ Flow( source_game=rd, source_port="Decision", target_game=out, target_port="Decision", ), ], ) # History >> (Policy >> RD >> Outcome) hist_chain = SequentialComposition( name=f"{name} Core", first=hist, second=SequentialComposition( name=f"{name} Policy+RD+Outcome", first=pol, second=rd_out, wiring=[ Flow( source_game=pol, source_port="Latest Policy", target_game=rd, target_port="Latest Policy", ), ], ), wiring=[ Flow( source_game=hist, source_port="Latest History", target_game=pol, target_port="Latest History", ), ], ) chain = 
SequentialComposition( name=name, first=cb, second=hist_chain, wiring=[ Flow( source_game=cb, source_port="Observation, Context", target_game=rd, target_port="Observation, Context", ), ], ) if not include_feedback: return chain return FeedbackLoop( name=name, inner=chain, feedback_wiring=[ FeedbackFlow( source_game=out, source_port="Outcome", target_game=rd, target_port="Outcome", ), FeedbackFlow( source_game=rd, source_port="Experience", target_game=pol, target_port="Experience", ), FeedbackFlow( source_game=pol, source_port="History Update", target_game=hist, target_port="History Update", ), ], signature=Signature(), ) # include_outcome=False — 4-game chain: CB >> Hist >> Pol >> RD chain = SequentialComposition( name=name, first=cb, second=hist_pol_rd, wiring=[ Flow( source_game=cb, source_port="Observation, Context", target_game=rd, target_port="Observation, Context", ), ], ) if not include_feedback: return chain # include_outcome=False, include_feedback=True — wrap 4-game chain return FeedbackLoop( name=name, inner=chain, feedback_wiring=[ FeedbackFlow( source_game=rd, source_port="Experience", target_game=pol, target_port="Experience", ), FeedbackFlow( source_game=pol, source_port="History Update", target_game=hist, target_port="History Update", ), ], signature=Signature(), ) ``` ## `parallel(games, name=None)` Compose a list of games in parallel. Convenience wrapper for `ParallelComposition.from_list()`. Use this when building N-agent patterns where the number of agents may vary:: ``` agents = [ reactive_decision_agent(f"Agent {i}", include_outcome=False, include_feedback=False) for i in range(1, n + 1) ] agents_parallel = parallel(agents) ``` Parameters: | Name | Type | Description | Default | | ------- | ---------------- | ------------------------------ | ------------------------------------- | | `games` | `list[OpenGame]` | At least 2 OpenGame instances. | *required* | | `name` | \`str | None\` | Optional name override. 
Defaults to " | Raises: | Type | Description | | ------------ | ----------------------------------- | | `ValueError` | If fewer than 2 games are provided. | Source code in `packages/gds-domains/gds_domains/games/dsl/library.py` ``` def parallel(games: list[OpenGame], name: str | None = None) -> ParallelComposition: """Compose a list of games in parallel. Convenience wrapper for ``ParallelComposition.from_list()``. Use this when building N-agent patterns where the number of agents may vary:: agents = [ reactive_decision_agent(f"Agent {i}", include_outcome=False, include_feedback=False) for i in range(1, n + 1) ] agents_parallel = parallel(agents) Args: games: At least 2 ``OpenGame`` instances. name: Optional name override. Defaults to ``" | ".join(g.name for g in games)``. Raises: ValueError: If fewer than 2 games are provided. """ return ParallelComposition.from_list(games, name=name) ``` ## `multi_agent_composition(agents, router, feedback_port_map, wiring=None, name=None)` Compose N open-loop agents in parallel, wire them into a router, and generate all feedback flows automatically. This helper encodes the three-step structure that every multi-agent pattern follows: 1. **Parallel composition** — all agents run side-by-side 1. **Sequential composition** — agents feed into the shared `router` 1. 
**FeedbackLoop** — `N × K` contravariant flows (one per agent per feedback channel) route the router's outputs back into each agent Parameters: | Name | Type | Description | Default | | ------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------- | | `agents` | `list[OpenGame]` | Open-loop agent games (typically built with reactive_decision_agent(..., include_outcome=False, include_feedback=False)). Must contain at least 2 agents. | *required* | | `router` | `OpenGame` | The shared game that receives all agent decisions and produces per-agent outcomes/feedback signals (e.g. a Decision Router). | *required* | | `feedback_port_map` | `dict[str, tuple[str, str]]` | Maps a semantic label to a (source_port, target_port) pair. For each entry and each agent, a FeedbackFlow is generated from router.source_port → agent.target_port. Port names are used verbatim; they are NOT prefixed with the agent name. Example:: feedback_port_map={ "outcome": ("Outcome", "Outcome"), "experience": ("Experience", "Experience"), "history": ("History Update", "History Update"), } | *required* | | `wiring` | \`list[Flow] | None\` | Optional explicit Flow overrides for the sequential composition step (agents_parallel >> router). If omitted, relies on token-overlap auto-wiring. | | `name` | \`str | None\` | Name for the resulting FeedbackLoop. Defaults to f"{router.name} [multi-agent feedback]". 
Returns:

| Type | Description |
| -------------- | ----------- |
| `FeedbackLoop` | A `FeedbackLoop` wrapping `(agents_parallel >> router)` with all `len(agents) × len(feedback_port_map)` contravariant flows. |

Raises:

| Type | Description |
| ------------ | ------------------------------------ |
| `ValueError` | If fewer than 2 agents are provided. |

Example::

```
agent1 = reactive_decision_agent("Agent 1", include_outcome=False, include_feedback=False)
agent2 = reactive_decision_agent("Agent 2", include_outcome=False, include_feedback=False)
router = my_decision_router()

game = multi_agent_composition(
    agents=[agent1, agent2],
    router=router,
    feedback_port_map={
        "outcome": ("Outcome", "Outcome"),
        "experience": ("Experience", "Experience"),
        "history": ("History Update", "History Update"),
    },
)
```

Source code in `packages/gds-domains/gds_domains/games/dsl/library.py`

```
def multi_agent_composition(
    agents: list[OpenGame],
    router: OpenGame,
    feedback_port_map: dict[str, tuple[str, str]],
    wiring: list[Flow] | None = None,
    name: str | None = None,
) -> FeedbackLoop:
    """Compose N open-loop agents in parallel, wire them into a router, and
    generate all feedback flows automatically.

    This helper encodes the three-step structure that every multi-agent
    pattern follows:

    1. **Parallel composition** — all agents run side-by-side
    2. **Sequential composition** — agents feed into the shared ``router``
    3. **FeedbackLoop** — ``N × K`` contravariant flows (one per agent per
       feedback channel) route the router's outputs back into each agent

    Args:
        agents: Open-loop agent games (typically built with
            ``reactive_decision_agent(..., include_outcome=False, include_feedback=False)``).
            Must contain at least 2 agents.
        router: The shared game that receives all agent decisions and produces
            per-agent outcomes/feedback signals (e.g. a Decision Router).
        feedback_port_map: Maps a semantic label to a ``(source_port, target_port)`` pair.
For each entry and each agent, a ``FeedbackFlow`` is generated from ``router.source_port`` → ``agent.target_port``. Port names are used verbatim; they are NOT prefixed with the agent name. Example:: feedback_port_map={ "outcome": ("Outcome", "Outcome"), "experience": ("Experience", "Experience"), "history": ("History Update", "History Update"), } wiring: Optional explicit ``Flow`` overrides for the sequential composition step (agents_parallel >> router). If omitted, relies on token-overlap auto-wiring. name: Name for the resulting ``FeedbackLoop``. Defaults to ``f"{router.name} [multi-agent feedback]"``. Returns: A ``FeedbackLoop`` wrapping ``(agents_parallel >> router)`` with all ``len(agents) × len(feedback_port_map)`` contravariant flows. Raises: ValueError: If fewer than 2 agents are provided. Example:: agent1 = reactive_decision_agent("Agent 1", include_outcome=False, include_feedback=False) agent2 = reactive_decision_agent("Agent 2", include_outcome=False, include_feedback=False) router = my_decision_router() game = multi_agent_composition( agents=[agent1, agent2], router=router, feedback_port_map={ "outcome": ("Outcome", "Outcome"), "experience": ("Experience", "Experience"), "history": ("History Update", "History Update"), }, ) """ if len(agents) < 2: raise ValueError( f"multi_agent_composition() requires at least 2 agents, got {len(agents)}" ) # Step 1: parallel composition of all agents agents_parallel = ParallelComposition.from_list(agents) # Step 2: sequential into router inner = SequentialComposition( name=f"{agents_parallel.name} >> {router.name}", first=agents_parallel, second=router, wiring=wiring or [], ) # Step 3: generate N × K contravariant feedback flows feedback_wiring: list[Flow] = [] for agent in agents: for _label, (source_port, target_port) in feedback_port_map.items(): feedback_wiring.append( FeedbackFlow( source_game=router, source_port=source_port, target_game=agent, target_port=target_port, ) ) loop_name = name or f"{router.name} 
[multi-agent feedback]" return FeedbackLoop( name=loop_name, inner=inner, feedback_wiring=feedback_wiring, signature=Signature(), ) ``` # gds_domains.games.dsl.pattern Bases: `BaseModel` A complete named composite pattern — the top-level specification unit. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` class Pattern(BaseModel): """A complete named composite pattern — the top-level specification unit.""" name: str game: OpenGame inputs: list[PatternInput] = [] composition_type: CompositionType = CompositionType.FEEDBACK terminal_conditions: list[TerminalCondition] | None = None action_spaces: list[ActionSpace] | None = None initializations: list[StateInitialization] | None = None source: str = "dsl" def specialize( self, name: str, terminal_conditions: list[TerminalCondition] | None = None, action_spaces: list[ActionSpace] | None = None, initializations: list[StateInitialization] | None = None, inputs: list[PatternInput] | None = None, composition_type: CompositionType | None = None, source: str | None = None, ) -> Pattern: """Create a derived pattern that inherits this pattern's game tree. Produces a new ``Pattern`` with the same ``game`` composition tree, overriding only the fields explicitly provided. ``inputs`` are inherited from the base pattern unless a replacement list is supplied, preventing the input-drift problem where derived patterns silently fall out of sync with their base. Args: name: Required name for the derived pattern. terminal_conditions: Domain-specific terminal conditions. Replaces the base value if provided; otherwise inherits. action_spaces: Domain-specific action spaces. Replaces the base value if provided; otherwise inherits. initializations: Domain-specific state initializations. Replaces the base value if provided; otherwise inherits. inputs: If provided, replaces the inherited ``PatternInput`` list entirely. If omitted, a copy of the base pattern's inputs is used. 
composition_type: Override the composition type. Defaults to the base pattern's ``composition_type``. source: Override the provenance tag. Defaults to the base pattern's ``source``. Returns: A new ``Pattern`` instance. The ``game`` object is shared (not deep-copied) — modifications to the game tree after calling ``specialize()`` will affect both patterns. Example:: from patterns.multi_party_agreement_zoomed_in import pattern as base resource_exchange = base.specialize( name="Multi-Party Resource Exchange", terminal_conditions=[ TerminalCondition(name="Agreement", actions={...}, outcome="..."), ], action_spaces=[ ActionSpace(game="Agent 1 Reactive Decision", actions=["accept", "reject"]), ], # inputs inherited from base automatically ) """ return Pattern( name=name, game=self.game, inputs=list(inputs) if inputs is not None else list(self.inputs), composition_type=composition_type if composition_type is not None else self.composition_type, terminal_conditions=terminal_conditions if terminal_conditions is not None else self.terminal_conditions, action_spaces=action_spaces if action_spaces is not None else self.action_spaces, initializations=initializations if initializations is not None else self.initializations, source=source if source is not None else self.source, ) # ── Compilation ───────────────────────────────────────── def compile(self) -> GDSSpec: """Compile this pattern to a GDS specification.""" from gds_domains.games.dsl.spec_bridge import compile_pattern_to_spec return compile_pattern_to_spec(self) def compile_system(self) -> SystemIR: """Compile this pattern to a flat SystemIR for verification + visualization.""" from gds_domains.games.dsl.compile import compile_to_ir return compile_to_ir(self).to_system_ir() ``` ## `specialize(name, terminal_conditions=None, action_spaces=None, initializations=None, inputs=None, composition_type=None, source=None)` Create a derived pattern that inherits this pattern's game tree. 
Produces a new `Pattern` with the same `game` composition tree, overriding only the fields explicitly provided. `inputs` are inherited from the base pattern unless a replacement list is supplied, preventing the input-drift problem where derived patterns silently fall out of sync with their base.

Parameters:

| Name | Type | Description | Default |
| --------------------- | ----------------------------------- | -------------------------------------- | ---------- |
| `name` | `str` | Required name for the derived pattern. | *required* |
| `terminal_conditions` | `list[TerminalCondition] \| None` | Domain-specific terminal conditions. Replaces the base value if provided; otherwise inherits. | `None` |
| `action_spaces` | `list[ActionSpace] \| None` | Domain-specific action spaces. Replaces the base value if provided; otherwise inherits. | `None` |
| `initializations` | `list[StateInitialization] \| None` | Domain-specific state initializations. Replaces the base value if provided; otherwise inherits. | `None` |
| `inputs` | `list[PatternInput] \| None` | If provided, replaces the inherited `PatternInput` list entirely. If omitted, a copy of the base pattern's inputs is used. | `None` |
| `composition_type` | `CompositionType \| None` | Override the composition type. Defaults to the base pattern's `composition_type`. | `None` |
| `source` | `str \| None` | Override the provenance tag. Defaults to the base pattern's `source`. | `None` |

Returns:

| Type | Description |
| --------- | ----------------------------------------------------------- |
| `Pattern` | A new `Pattern` instance. The `game` object is shared (not deep-copied) — modifications to the game tree after calling `specialize()` will affect both patterns. |
| Example:: ``` from patterns.multi_party_agreement_zoomed_in import pattern as base resource_exchange = base.specialize( name="Multi-Party Resource Exchange", terminal_conditions=[ TerminalCondition(name="Agreement", actions={...}, outcome="..."), ], action_spaces=[ ActionSpace(game="Agent 1 Reactive Decision", actions=["accept", "reject"]), ], # inputs inherited from base automatically ) ``` Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` def specialize( self, name: str, terminal_conditions: list[TerminalCondition] | None = None, action_spaces: list[ActionSpace] | None = None, initializations: list[StateInitialization] | None = None, inputs: list[PatternInput] | None = None, composition_type: CompositionType | None = None, source: str | None = None, ) -> Pattern: """Create a derived pattern that inherits this pattern's game tree. Produces a new ``Pattern`` with the same ``game`` composition tree, overriding only the fields explicitly provided. ``inputs`` are inherited from the base pattern unless a replacement list is supplied, preventing the input-drift problem where derived patterns silently fall out of sync with their base. Args: name: Required name for the derived pattern. terminal_conditions: Domain-specific terminal conditions. Replaces the base value if provided; otherwise inherits. action_spaces: Domain-specific action spaces. Replaces the base value if provided; otherwise inherits. initializations: Domain-specific state initializations. Replaces the base value if provided; otherwise inherits. inputs: If provided, replaces the inherited ``PatternInput`` list entirely. If omitted, a copy of the base pattern's inputs is used. composition_type: Override the composition type. Defaults to the base pattern's ``composition_type``. source: Override the provenance tag. Defaults to the base pattern's ``source``. Returns: A new ``Pattern`` instance. 
The ``game`` object is shared (not deep-copied) — modifications to the game tree after calling ``specialize()`` will affect both patterns. Example:: from patterns.multi_party_agreement_zoomed_in import pattern as base resource_exchange = base.specialize( name="Multi-Party Resource Exchange", terminal_conditions=[ TerminalCondition(name="Agreement", actions={...}, outcome="..."), ], action_spaces=[ ActionSpace(game="Agent 1 Reactive Decision", actions=["accept", "reject"]), ], # inputs inherited from base automatically ) """ return Pattern( name=name, game=self.game, inputs=list(inputs) if inputs is not None else list(self.inputs), composition_type=composition_type if composition_type is not None else self.composition_type, terminal_conditions=terminal_conditions if terminal_conditions is not None else self.terminal_conditions, action_spaces=action_spaces if action_spaces is not None else self.action_spaces, initializations=initializations if initializations is not None else self.initializations, source=source if source is not None else self.source, ) ``` ## `compile()` Compile this pattern to a GDS specification. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` def compile(self) -> GDSSpec: """Compile this pattern to a GDS specification.""" from gds_domains.games.dsl.spec_bridge import compile_pattern_to_spec return compile_pattern_to_spec(self) ``` ## `compile_system()` Compile this pattern to a flat SystemIR for verification + visualization. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` def compile_system(self) -> SystemIR: """Compile this pattern to a flat SystemIR for verification + visualization.""" from gds_domains.games.dsl.compile import compile_to_ir return compile_to_ir(self).to_system_ir() ``` Bases: `BaseModel` An external input that crosses the pattern boundary. Links to a target game via `target_game` and `flow_label` so the compiler can generate input→game flows automatically. 
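As a sketch of how these fields fit together (hypothetical values throughout: "Price Feed", "Decision Router", and the stand-in dataclass are illustrative, not part of the API):

```python
# Illustration only: a plain dataclass standing in for the pydantic
# PatternInput model, using the documented field names.
from dataclasses import dataclass

@dataclass
class PatternInputSketch:
    name: str
    input_type: str        # the real field is an InputType enum member
    schema_hint: str = ""
    target_game: str = ""  # game that receives the generated input->game flow
    flow_label: str = ""   # label placed on that generated flow

# A sensor input crossing the pattern boundary into a (hypothetical) router:
price_feed = PatternInputSketch(
    name="Price Feed",
    input_type="sensor",            # InputType.SENSOR in the real DSL
    schema_hint="float",
    target_game="Decision Router",  # hypothetical target game name
    flow_label="Price Feed",
)
```

Because `target_game` and `flow_label` are plain strings, the compiler can generate the input→game flow without the input holding a reference to the game object itself.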
Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` class PatternInput(BaseModel): """An external input that crosses the pattern boundary. Links to a target game via ``target_game`` and ``flow_label`` so the compiler can generate input→game flows automatically. """ name: str input_type: InputType schema_hint: str = "" target_game: str = "" flow_label: str = "" ``` Bases: `BaseModel` A condition under which a corecursive loop should terminate. Each terminal condition specifies a combination of actions from named games (or agents) that triggers termination, along with the resulting outcome and optional payoff description. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` class TerminalCondition(BaseModel): """A condition under which a corecursive loop should terminate. Each terminal condition specifies a combination of actions from named games (or agents) that triggers termination, along with the resulting outcome and optional payoff description. """ name: str actions: dict[str, str] outcome: str description: str = "" payoff_description: str = "" payoffs: dict[str, float] = Field(default_factory=dict) ``` Bases: `BaseModel` The set of available actions for a decision game, with optional constraints. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` class ActionSpace(BaseModel): """The set of available actions for a decision game, with optional constraints.""" game: str actions: list[str] constraints: list[str] = Field(default_factory=list) ``` Bases: `BaseModel` An initial state variable for simulation in mathematical notation. Source code in `packages/gds-domains/gds_domains/games/dsl/pattern.py` ``` class StateInitialization(BaseModel): """An initial state variable for simulation in mathematical notation.""" symbol: str space: str description: str = "" game: str = "" ``` # gds_domains.games.dsl.types Bases: `Interface` The `(X, Y, R, S)` 4-tuple boundary of an open game. 
Backwards-compatible constructor that maps game theory conventions to GDS directional pairs: - **x** → forward_in (observation inputs, covariant) - **y** → forward_out (decision outputs, covariant) - **r** → backward_in (utility inputs, contravariant) - **s** → backward_out (coutility outputs, contravariant) Source code in `packages/gds-domains/gds_domains/games/dsl/types.py` ``` class Signature(Interface, frozen=True): """The ``(X, Y, R, S)`` 4-tuple boundary of an open game. Backwards-compatible constructor that maps game theory conventions to GDS directional pairs: - **x** → forward_in (observation inputs, covariant) - **y** → forward_out (decision outputs, covariant) - **r** → backward_in (utility inputs, contravariant) - **s** → backward_out (coutility outputs, contravariant) """ if TYPE_CHECKING: def __init__( self, *, x: tuple[Port, ...] = (), y: tuple[Port, ...] = (), r: tuple[Port, ...] = (), s: tuple[Port, ...] = (), forward_in: tuple[Port, ...] = (), forward_out: tuple[Port, ...] = (), backward_in: tuple[Port, ...] = (), backward_out: tuple[Port, ...] = (), ) -> None: ... @model_validator(mode="before") @classmethod def _map_xyrs(cls, data: dict) -> dict: if isinstance(data, dict): mapping = { "x": "forward_in", "y": "forward_out", "r": "backward_in", "s": "backward_out", } for old, new in mapping.items(): if old in data: data[new] = data.pop(old) return data @property def x(self) -> tuple[Port, ...]: return self.forward_in @property def y(self) -> tuple[Port, ...]: return self.forward_out @property def r(self) -> tuple[Port, ...]: return self.backward_in @property def s(self) -> tuple[Port, ...]: return self.backward_out ``` Bases: `str`, `Enum` Classification of an open game component by its port structure. 
Source code in `packages/gds-domains/gds_domains/games/dsl/types.py` ``` class GameType(str, Enum): """Classification of an open game component by its port structure.""" DECISION = "decision" FUNCTION_COVARIANT = "function_covariant" FUNCTION_CONTRAVARIANT = "function_contravariant" DELETION = "deletion" DUPLICATION = "duplication" COUNIT = "counit" ``` Bases: `str`, `Enum` Semantic classification of an information flow between components. Source code in `packages/gds-domains/gds_domains/games/dsl/types.py` ``` class FlowType(str, Enum): """Semantic classification of an information flow between components.""" OBSERVATION = "observation" CHOICE_OBSERVATION = "choice_observation" UTILITY_COUTILITY = "utility_coutility" PRIMITIVE = "primitive" ``` Bases: `str`, `Enum` How games are composed within a pattern. Extends GDS composition types with game-theory naming. Source code in `packages/gds-domains/gds_domains/games/dsl/types.py` ``` class CompositionType(str, Enum): """How games are composed within a pattern. Extends GDS composition types with game-theory naming. """ SEQUENTIAL = "sequential" PARALLEL = "parallel" FEEDBACK = "feedback" CORECURSIVE = "corecursive" ``` Bases: `str`, `Enum` Classification of external inputs that cross the pattern boundary. Source code in `packages/gds-domains/gds_domains/games/dsl/types.py` ``` class InputType(str, Enum): """Classification of external inputs that cross the pattern boundary.""" SENSOR = "sensor" RESOURCE = "resource" INITIALIZATION = "initialization" EXTERNAL_WORLD = "external_world" ``` # gds_domains.games Public API — top-level exports. Open Games — Typed DSL for Compositional Game Theory. # gds_domains.games.ir ## Models Bases: `BaseModel` A single open game component in the flat IR representation. 
Source code in `packages/gds-domains/gds_domains/games/ir/models.py` ``` class OpenGameIR(BaseModel): """A single open game component in the flat IR representation.""" name: str game_type: GameType signature: tuple[str, str, str, str] # (X, Y, R, S) logic: str = "" gds_function: str | None = None constraints: list[str] = Field(default_factory=list) parent_pattern: str | None = None color_code: int contained_nodes: list[str] = Field(default_factory=list) tags: dict[str, str] = Field(default_factory=dict) ``` Bases: `BaseModel` A directed information flow (edge) between components in the IR. Source code in `packages/gds-domains/gds_domains/games/ir/models.py` ``` class FlowIR(BaseModel): """A directed information flow (edge) between components in the IR.""" source: str target: str label: str flow_type: FlowType direction: FlowDirection is_feedback: bool = False is_corecursive: bool = False ``` Bases: `BaseModel` A complete composite pattern — the top-level unit of specification. Source code in `packages/gds-domains/gds_domains/games/ir/models.py` ``` class PatternIR(BaseModel): """A complete composite pattern — the top-level unit of specification.""" name: str games: list[OpenGameIR] = Field(default_factory=list) flows: list[FlowIR] = Field(default_factory=list) inputs: list[InputIR] = Field(default_factory=list) composition_type: CompositionType terminal_conditions: list[TerminalCondition] | None = None action_spaces: list[ActionSpace] | None = None initialization: list[StateInitialization] | None = None hierarchy: HierarchyNodeIR | None = None source_canvas: str source_spec_notes: str | None = None def to_system_ir(self) -> SystemIR: """Project this OGS PatternIR to a GDS SystemIR. Enables interop with any GDS tool that accepts SystemIR, including GDS generic verification checks (G-001 through G-006). 
Mapping: - OpenGameIR → BlockIR (game_type → block_type, constraints/tags → metadata) - FlowIR → WiringIR (flow_type → wiring_type, is_corecursive → is_temporal) - OGS CORECURSIVE → GDS TEMPORAL """ from gds.ir.models import BlockIR, SystemIR, WiringIR from gds.ir.models import CompositionType as GDSCompositionType from gds.ir.models import InputIR as GDSInputIR blocks = [ BlockIR( name=g.name, block_type=g.game_type.value, signature=g.signature, logic=g.logic, color_code=g.color_code, metadata={"constraints": g.constraints, "tags": g.tags}, ) for g in self.games ] wirings = [ WiringIR( source=f.source, target=f.target, label=f.label, wiring_type=f.flow_type.value, direction=f.direction, is_feedback=f.is_feedback, is_temporal=f.is_corecursive, ) for f in self.flows ] # Map OGS composition types to GDS (CORECURSIVE → TEMPORAL) comp_map = { "sequential": "SEQUENTIAL", "parallel": "PARALLEL", "feedback": "FEEDBACK", "corecursive": "TEMPORAL", } gds_comp = GDSCompositionType[comp_map[self.composition_type.value]] inputs = [ GDSInputIR( name=i.name, metadata={ "input_type": i.input_type.value, "schema_hint": i.schema_hint, "shape": i.shape, }, ) for i in self.inputs ] return SystemIR( name=self.name, blocks=blocks, wirings=wirings, inputs=inputs, composition_type=gds_comp, source=self.source_canvas, ) ``` ## `to_system_ir()` Project this OGS PatternIR to a GDS SystemIR. Enables interop with any GDS tool that accepts SystemIR, including GDS generic verification checks (G-001 through G-006). Mapping: - OpenGameIR → BlockIR (game_type → block_type, constraints/tags → metadata) - FlowIR → WiringIR (flow_type → wiring_type, is_corecursive → is_temporal) - OGS CORECURSIVE → GDS TEMPORAL Source code in `packages/gds-domains/gds_domains/games/ir/models.py` ``` def to_system_ir(self) -> SystemIR: """Project this OGS PatternIR to a GDS SystemIR. Enables interop with any GDS tool that accepts SystemIR, including GDS generic verification checks (G-001 through G-006). 
Mapping: - OpenGameIR → BlockIR (game_type → block_type, constraints/tags → metadata) - FlowIR → WiringIR (flow_type → wiring_type, is_corecursive → is_temporal) - OGS CORECURSIVE → GDS TEMPORAL """ from gds.ir.models import BlockIR, SystemIR, WiringIR from gds.ir.models import CompositionType as GDSCompositionType from gds.ir.models import InputIR as GDSInputIR blocks = [ BlockIR( name=g.name, block_type=g.game_type.value, signature=g.signature, logic=g.logic, color_code=g.color_code, metadata={"constraints": g.constraints, "tags": g.tags}, ) for g in self.games ] wirings = [ WiringIR( source=f.source, target=f.target, label=f.label, wiring_type=f.flow_type.value, direction=f.direction, is_feedback=f.is_feedback, is_temporal=f.is_corecursive, ) for f in self.flows ] # Map OGS composition types to GDS (CORECURSIVE → TEMPORAL) comp_map = { "sequential": "SEQUENTIAL", "parallel": "PARALLEL", "feedback": "FEEDBACK", "corecursive": "TEMPORAL", } gds_comp = GDSCompositionType[comp_map[self.composition_type.value]] inputs = [ GDSInputIR( name=i.name, metadata={ "input_type": i.input_type.value, "schema_hint": i.schema_hint, "shape": i.shape, }, ) for i in self.inputs ] return SystemIR( name=self.name, blocks=blocks, wirings=wirings, inputs=inputs, composition_type=gds_comp, source=self.source_canvas, ) ``` Bases: `HierarchyNodeIR` OGS hierarchy node — extends GDS with CORECURSIVE composition type. Inherits `id`, `name`, `block_name`, `exit_condition`, and `children` from GDS. Overrides `composition_type` to use the OGS enum which includes CORECURSIVE (mapped to GDS TEMPORAL). The `game_name` property is a backwards-compatible alias for `block_name` (inherited from GDS). Source code in `packages/gds-domains/gds_domains/games/ir/models.py` ``` class HierarchyNodeIR(_GDSHierarchyNodeIR): """OGS hierarchy node — extends GDS with CORECURSIVE composition type. Inherits ``id``, ``name``, ``block_name``, ``exit_condition``, and ``children`` from GDS. 
Overrides ``composition_type`` to use the OGS enum which includes CORECURSIVE (mapped to GDS TEMPORAL). The ``game_name`` property is a backwards-compatible alias for ``block_name`` (inherited from GDS). """ composition_type: CompositionType | None = None # type: ignore[assignment] # OGS enum (has CORECURSIVE) children: list[HierarchyNodeIR] = Field(default_factory=list) # type: ignore[assignment] @property def game_name(self) -> str | None: """Backwards-compatible alias for ``block_name``.""" return self.block_name ``` ## `game_name` Backwards-compatible alias for `block_name`. ## Serialization Bases: `BaseModel` Top-level IR document containing one or more patterns. Source code in `packages/gds-domains/gds_domains/games/ir/serialization.py` ``` class IRDocument(BaseModel): """Top-level IR document containing one or more patterns.""" version: str = "1.0" patterns: list[PatternIR] metadata: IRMetadata ``` Serialize an IR document to JSON. Source code in `packages/gds-domains/gds_domains/games/ir/serialization.py` ``` def save_ir(doc: IRDocument, path: Path) -> None: """Serialize an IR document to JSON.""" path.write_text(doc.model_dump_json(indent=2)) ``` Deserialize an IR document from JSON. Source code in `packages/gds-domains/gds_domains/games/ir/serialization.py` ``` def load_ir(path: Path) -> IRDocument: """Deserialize an IR document from JSON.""" return IRDocument.model_validate_json(path.read_text()) ``` # gds_domains.games.reports ## Generator Generate all requested reports and write them to output_dir. Creates a subdirectory named after the pattern (slugified) and puts all reports in that subdirectory for organized storage. 
Parameters:

| Name | Type | Description | Default |
| --------------------- | ---------------------------- | ----------- | ---------- |
| `pattern` | `PatternIR` | The pattern to generate reports for. | *required* |
| `output_dir` | `Path` | Base directory to write reports into. A subdirectory for the pattern will be created here. | *required* |
| `report_types` | `list[str] \| None` | List of report types to generate. Defaults to all. Options: "overview", "contracts", "schema", "state_machine", "checklist", "verification", "domain_analysis" | `None` |
| `verification_report` | `VerificationReport \| None` | Optional pre-computed verification report. | `None` |

Returns:

| Type | Description |
| ------------ | ---------------------------------------- |
| `list[Path]` | List of paths to generated report files. |

Source code in `packages/gds-domains/gds_domains/games/reports/generator.py` ``` def generate_reports( pattern: PatternIR, output_dir: Path, report_types: list[str] | None = None, verification_report: VerificationReport | None = None, ) -> list[Path]: """Generate all requested reports and write them to output_dir. Creates a subdirectory named after the pattern (slugified) and puts all reports in that subdirectory for organized storage. Args: pattern: The pattern to generate reports for. output_dir: Base directory to write reports into. A subdirectory for the pattern will be created here. report_types: List of report types to generate. Defaults to all. Options: "overview", "contracts", "schema", "state_machine", "checklist", "verification" verification_report: Optional pre-computed verification report. Returns: List of paths to generated report files.
""" all_types = [ "overview", "contracts", "schema", "state_machine", "checklist", "verification", "domain_analysis", ] if report_types is None: report_types = all_types needs_verification = any(t in report_types for t in ("overview", "verification")) if verification_report is None and needs_verification: verification_report = verify(pattern) # Create pattern-specific subdirectory slug = pattern.name.lower().replace(" ", "_") pattern_dir = output_dir / slug pattern_dir.mkdir(parents=True, exist_ok=True) written: list[Path] = [] if "overview" in report_types: content = generate_system_overview(pattern, verification_report) path = pattern_dir / f"{slug}_system_overview.md" path.write_text(content) written.append(path) if "contracts" in report_types: content = generate_interface_contracts(pattern) path = pattern_dir / f"{slug}_interface_contracts.md" path.write_text(content) written.append(path) if "schema" in report_types: content = generate_schema_catalog(pattern) path = pattern_dir / f"{slug}_schema_catalog.md" path.write_text(content) written.append(path) if "state_machine" in report_types: content = generate_state_machine(pattern) path = pattern_dir / f"{slug}_state_machine.md" path.write_text(content) written.append(path) if "checklist" in report_types: content = generate_implementation_checklist(pattern) path = pattern_dir / f"{slug}_implementation_checklist.md" path.write_text(content) written.append(path) if "verification" in report_types: content = generate_verification_summary(pattern, verification_report) path = pattern_dir / f"{slug}_verification_summary.md" path.write_text(content) written.append(path) if "domain_analysis" in report_types: content = generate_domain_analysis(pattern) path = pattern_dir / f"{slug}_domain_analysis.md" path.write_text(content) written.append(path) return written ``` ## Domain Analysis Domain analysis report generator with advanced tag-based insights. 
## `generate_domain_analysis(pattern, tag_key='domain')`

Generate a domain analysis report with tag-based insights.

Source code in `packages/gds-domains/gds_domains/games/reports/domain_analysis.py` ``` def generate_domain_analysis(pattern: PatternIR, tag_key: str = "domain") -> str: """Generate a domain analysis report with tag-based insights.""" env = _get_jinja_env() template = env.get_template("domain_analysis.md.j2") analysis = _analyze_domains(pattern, tag_key) return template.render( pattern=pattern, tag_key=tag_key, **analysis, ) ```

# gds_domains.games.verification

## Engine

Run verification checks against a PatternIR.

Parameters:

| Name | Type | Description | Default |
| -------------------- | ---------------------------------------------------- | ----------- | ---------- |
| `pattern` | `PatternIR` | The pattern to verify. | *required* |
| `domain_checks` | `list[Callable[[PatternIR], list[Finding]]] \| None` | Optional subset of OGS domain checks to run. Defaults to `ALL_CHECKS` (8 OGS-specific checks). | `None` |
| `include_gds_checks` | `bool` | Run GDS generic checks (G-001..G-006) via `to_system_ir()` projection. Defaults to True. | `True` |

Returns:

| Type | Description |
| -------------------- | --------------------------------------- |
| `VerificationReport` | A VerificationReport with all findings. |

Source code in `packages/gds-domains/gds_domains/games/verification/engine.py` ``` def verify( pattern: PatternIR, domain_checks: list[Callable[[PatternIR], list[Finding]]] | None = None, include_gds_checks: bool = True, ) -> VerificationReport: """Run verification checks against a PatternIR. Args: pattern: The pattern to verify. domain_checks: Optional subset of OGS domain checks to run. Defaults to ``ALL_CHECKS`` (8 OGS-specific checks).
include_gds_checks: Run GDS generic checks (G-001..G-006) via ``to_system_ir()`` projection. Defaults to True. Returns: A VerificationReport with all findings. """ checks = domain_checks or ALL_CHECKS findings: list[Finding] = [] for check_fn in checks: findings.extend(check_fn(pattern)) if include_gds_checks: system_ir = pattern.to_system_ir() for check_fn_gds in GDS_ALL_CHECKS: findings.extend(check_fn_gds(system_ir)) return VerificationReport(pattern_name=pattern.name, findings=findings) ``` ## Findings Bases: `VerificationReport` OGS-compatible verification report with pattern_name alias. Source code in `packages/gds-domains/gds_domains/games/verification/findings.py` ``` class VerificationReport(_GDSVerificationReport): """OGS-compatible verification report with pattern_name alias.""" def __init__(self, **data): # Map pattern_name to system_name for GDS base if "pattern_name" in data and "system_name" not in data: data["system_name"] = data["pattern_name"] super().__init__(**data) @property def pattern_name(self) -> str: """Alias for system_name — OGS calls it 'pattern_name'.""" return self.system_name ``` ## `pattern_name` Alias for system_name — OGS calls it 'pattern_name'. ## Type Checks Type consistency checks T-001 through T-006. These checks verify that individual flows and game signatures are internally consistent — that flow labels match game port types, that games have complete signatures, and that flow directions match their semantic types. All comparisons use token-based matching (see `tokens.py`): signature strings are split on `+` and `,` into normalized tokens, and compatibility is checked via set containment (subset) or overlap. Key distinction from structural checks (S-series): type checks validate per-flow / per-game properties, while structural checks validate global composition invariants (acyclicity, independence, etc.). 
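The token-based matching described above can be sketched in a few lines. This is an illustration of the stated behavior (split on `+` and `,`, normalize, compare by set containment), not the actual `tokens.py` implementation:

```python
import re

def tokens(s: str) -> set[str]:
    """Split a signature string on '+' and ',' into normalized tokens."""
    return {t.strip().lower() for t in re.split(r"[+,]", s) if t.strip()}

def tokens_subset(label: str, signature: str) -> bool:
    """Compatible when every token of the label appears in the signature."""
    return tokens(label) <= tokens(signature)
```

Under this sketch, a flow labeled `"Outcome"` is compatible with a codomain `"Outcome + History"`, while a flow labeled `"Price"` is not.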
## `check_t001_domain_codomain_matching(pattern)` T-001: For every covariant game-to-game flow, verify the flow label is consistent with source codomain (Y) or target domain (X). Uses token-based comparison: flow label tokens must be a subset of the source Y tokens OR the target X tokens. Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t001_domain_codomain_matching(pattern: PatternIR) -> list[Finding]: """T-001: For every covariant game-to-game flow, verify the flow label is consistent with source codomain (Y) or target domain (X). Uses token-based comparison: flow label tokens must be a subset of the source Y tokens OR the target X tokens. """ findings = [] game_sigs = {g.name: g.signature for g in pattern.games} for flow in pattern.flows: if flow.direction != FlowDirection.COVARIANT: continue if flow.source not in game_sigs or flow.target not in game_sigs: continue # input-to-game flows handled by T-006 src_y = game_sigs[flow.source][1] # codomain (Y) tgt_x = game_sigs[flow.target][0] # domain (X) if not src_y or not tgt_x: findings.append( Finding( check_id="T-001", severity=Severity.ERROR, message=( f"Cannot verify domain/codomain: " f"{flow.source} Y={src_y!r}, {flow.target} X={tgt_x!r}" ), source_elements=[flow.source, flow.target], passed=False, ) ) continue compatible = tokens_subset(flow.label, src_y) or tokens_subset( flow.label, tgt_x ) findings.append( Finding( check_id="T-001", severity=Severity.ERROR, message=( f"Flow {flow.label!r}: {flow.source} " f"Y={src_y!r} -> {flow.target} X={tgt_x!r}" + ("" if compatible else " — MISMATCH") ), source_elements=[flow.source, flow.target], passed=compatible, ) ) return findings ``` ## `check_t002_signature_completeness(pattern)` T-002: Every OpenGameIR must have all four (X,Y,R,S) slots. Slots must be defined (even if empty set). A game is valid if it has at least one non-empty input slot (X or R) and at least one non-empty output slot (Y or S). 
Games that only produce contravariant output (utility computations) have empty Y but non-empty S, which is valid. Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t002_signature_completeness(pattern: PatternIR) -> list[Finding]: """T-002: Every OpenGameIR must have all four (X,Y,R,S) slots. Slots must be defined (even if empty set). A game is valid if it has at least one non-empty input slot (X or R) and at least one non-empty output slot (Y or S). Games that only produce contravariant output (utility computations) have empty Y but non-empty S, which is valid. """ findings = [] for game in pattern.games: x, y, r, s = game.signature has_input = bool(x) or bool(r) has_output = bool(y) or bool(s) has_required = has_input and has_output missing = [] if not has_input: missing.append("no inputs (X or R)") if not has_output: missing.append("no outputs (Y or S)") findings.append( Finding( check_id="T-002", severity=Severity.ERROR, message=( f"{game.name}: signature ({x!r}, {y!r}, {r!r}, {s!r})" + (f" — {', '.join(missing)}" if missing else "") ), source_elements=[game.name], passed=has_required, ) ) return findings ``` ## `check_t003_flow_type_consistency(pattern)` T-003: Covariant flows must not be utility/coutility; contravariant must be. 
Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t003_flow_type_consistency(pattern: PatternIR) -> list[Finding]: """T-003: Covariant flows must not be utility/coutility; contravariant must be.""" findings = [] for flow in pattern.flows: if flow.direction == FlowDirection.COVARIANT: ok = flow.flow_type != FlowType.UTILITY_COUTILITY findings.append( Finding( check_id="T-003", severity=Severity.ERROR, message=( f"Covariant flow {flow.label!r} " f"({flow.source} -> {flow.target})" f" has type {flow.flow_type.value}" + ("" if ok else " — should not be utility/coutility") ), source_elements=[flow.source, flow.target], passed=ok, ) ) else: ok = flow.flow_type == FlowType.UTILITY_COUTILITY findings.append( Finding( check_id="T-003", severity=Severity.ERROR, message=( f"Contravariant flow {flow.label!r} " f"({flow.source} -> {flow.target})" f" has type {flow.flow_type.value}" + ("" if ok else " — should be utility/coutility") ), source_elements=[flow.source, flow.target], passed=ok, ) ) return findings ``` ## `check_t004_input_type_resolution(pattern)` T-004: Every InputIR.schema_hint must resolve to a known type. Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t004_input_type_resolution(pattern: PatternIR) -> list[Finding]: """T-004: Every InputIR.schema_hint must resolve to a known type.""" findings = [] for inp in pattern.inputs: has_hint = bool(inp.schema_hint) findings.append( Finding( check_id="T-004", severity=Severity.WARNING, message=( f"Input {inp.name!r}: schema_hint={inp.schema_hint!r}" + ("" if has_hint else " — no schema hint") ), source_elements=[inp.name], passed=has_hint, ) ) return findings ``` ## `check_t005_unused_inputs(pattern)` T-005: Flag inputs with no outgoing flows. 
Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t005_unused_inputs(pattern: PatternIR) -> list[Finding]: """T-005: Flag inputs with no outgoing flows.""" findings = [] flow_sources = {f.source for f in pattern.flows} for inp in pattern.inputs: used = inp.name in flow_sources findings.append( Finding( check_id="T-005", severity=Severity.INFO, message=( f"Input {inp.name!r}" + ("" if used else " — unused (no outgoing flows)") ), source_elements=[inp.name], passed=used, ) ) return findings ``` ## `check_t006_dangling_flows(pattern)` T-006: Flag flows whose source or target is not in the pattern. Source code in `packages/gds-domains/gds_domains/games/verification/type_checks.py` ``` def check_t006_dangling_flows(pattern: PatternIR) -> list[Finding]: """T-006: Flag flows whose source or target is not in the pattern.""" findings = [] known_names = {g.name for g in pattern.games} | {i.name for i in pattern.inputs} for flow in pattern.flows: src_ok = flow.source in known_names tgt_ok = flow.target in known_names ok = src_ok and tgt_ok issues = [] if not src_ok: issues.append(f"source {flow.source!r} unknown") if not tgt_ok: issues.append(f"target {flow.target!r} unknown") findings.append( Finding( check_id="T-006", severity=Severity.ERROR, message=( f"Flow {flow.label!r} ({flow.source} -> {flow.target})" + (f" — {', '.join(issues)}" if issues else "") ), source_elements=[flow.source, flow.target], passed=ok, ) ) return findings ``` ## Structural Checks Structural composition checks S-001 through S-007. These checks verify that the composition structure of a pattern is well-formed — that games are wired correctly, flows respect their direction, and the overall graph has valid topology. They operate on the flat IR representation (`PatternIR`) after compilation. 
Unlike type checks (T-series), which verify individual flow labels against game signatures, structural checks verify global properties like acyclicity (S-004) and composition-specific invariants. ## `check_s001_sequential_type_compatibility(pattern)` S-001: In sequential composition G1;G2, verify Y1 = X2. For each covariant game-to-game flow, the flow label tokens must be a subset of BOTH the source Y and target X tokens. This verifies the structural composition requirement. Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s001_sequential_type_compatibility(pattern: PatternIR) -> list[Finding]: """S-001: In sequential composition G1;G2, verify Y1 = X2. For each covariant game-to-game flow, the flow label tokens must be a subset of BOTH the source Y and target X tokens. This verifies the structural composition requirement. """ findings = [] game_sigs = {g.name: g.signature for g in pattern.games} game_names = set(game_sigs.keys()) for flow in pattern.flows: if flow.direction != FlowDirection.COVARIANT: continue if flow.is_corecursive: continue # Corecursive flows are temporal Y→X, not within-step if flow.source not in game_names or flow.target not in game_names: continue src_y = game_sigs[flow.source][1] # Y (codomain) tgt_x = game_sigs[flow.target][0] # X (domain) if not src_y or not tgt_x: continue # T-002 handles missing signatures label_in_y = tokens_subset(flow.label, src_y) label_in_x = tokens_subset(flow.label, tgt_x) compatible = label_in_y and label_in_x findings.append( Finding( check_id="S-001", severity=Severity.ERROR, message=( f"Sequential {flow.source} ; {flow.target}: " f"Y={src_y!r}, X={tgt_x!r}, flow={flow.label!r}" + ("" if compatible else " — type mismatch") ), source_elements=[flow.source, flow.target], passed=compatible, ) ) return findings ``` ## `check_s002_parallel_independence(pattern)` S-002: Games in parallel composition should share no direct flows. 
Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s002_parallel_independence(pattern: PatternIR) -> list[Finding]: """S-002: Games in parallel composition should share no direct flows.""" findings = [] if pattern.composition_type != CompositionType.PARALLEL: findings.append( Finding( check_id="S-002", severity=Severity.WARNING, message="Pattern is not parallel composition — S-002 not applicable", source_elements=[], passed=True, ) ) return findings game_names = {g.name for g in pattern.games} violations = [ f for f in pattern.flows if f.source in game_names and f.target in game_names ] if violations: for flow in violations: findings.append( Finding( check_id="S-002", severity=Severity.WARNING, message=( f"Parallel independence violation: direct flow {flow.label!r} " f"from {flow.source} to {flow.target}" ), source_elements=[flow.source, flow.target], passed=False, ) ) else: findings.append( Finding( check_id="S-002", severity=Severity.WARNING, message=( "Parallel composition: no direct game-to-game flows (independent)" ), source_elements=[], passed=True, ) ) return findings ``` ## `check_s003_feedback_type_compatibility(pattern)` S-003: In feedback composition, verify the feedback flow label is consistent with the source's S (coutility) slot. For each feedback flow, the flow label tokens must be a subset of the source S tokens. Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s003_feedback_type_compatibility(pattern: PatternIR) -> list[Finding]: """S-003: In feedback composition, verify the feedback flow label is consistent with the source's S (coutility) slot. For each feedback flow, the flow label tokens must be a subset of the source S tokens. 
""" findings = [] game_sigs = {g.name: g.signature for g in pattern.games} for flow in pattern.flows: if not flow.is_feedback: continue if flow.source not in game_sigs or flow.target not in game_sigs: continue src_s = game_sigs[flow.source][3] # S (coutility output) tgt_x = game_sigs[flow.target][0] # X (observation input) if not src_s and not tgt_x: findings.append( Finding( check_id="S-003", severity=Severity.WARNING, message=( f"Feedback {flow.source} -> " f"{flow.target}: both S and X are empty" ), source_elements=[flow.source, flow.target], passed=True, ) ) continue compatible = tokens_subset(flow.label, src_s) findings.append( Finding( check_id="S-003", severity=Severity.ERROR, message=( f"Feedback {flow.source} -> {flow.target}: " f"S={src_s!r}, X={tgt_x!r}, flow={flow.label!r}" + ("" if compatible else " — type mismatch") ), source_elements=[flow.source, flow.target], passed=compatible, ) ) return findings ``` ## `check_s004_covariant_acyclicity(pattern)` S-004: Covariant flow graph must be a DAG (no cycles). Within a single timestep, covariant data must flow in one direction — cycles would create infinite loops. Corecursive flows (Y→X across timesteps) and feedback flows (S→R, contravariant) are excluded from this check since they represent legitimate temporal or backward links. Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s004_covariant_acyclicity(pattern: PatternIR) -> list[Finding]: """S-004: Covariant flow graph must be a DAG (no cycles). Within a single timestep, covariant data must flow in one direction — cycles would create infinite loops. Corecursive flows (Y→X across timesteps) and feedback flows (S→R, contravariant) are excluded from this check since they represent legitimate temporal or backward links. 
""" # Build adjacency list from covariant game-to-game flows game_names = {g.name for g in pattern.games} adj: dict[str, list[str]] = {name: [] for name in game_names} for flow in pattern.flows: if flow.direction != FlowDirection.COVARIANT: continue if flow.is_corecursive: continue # Corecursive Y→X flows are temporal, not within-step if flow.source in game_names and flow.target in game_names: adj[flow.source].append(flow.target) # DFS cycle detection with coloring WHITE, GRAY, BLACK = 0, 1, 2 color = {name: WHITE for name in game_names} cycle_path: list[str] = [] has_cycle = False def dfs(node: str) -> bool: nonlocal has_cycle color[node] = GRAY cycle_path.append(node) for neighbor in adj[node]: if color[neighbor] == GRAY: # Found cycle — trim path to show only the cycle idx = cycle_path.index(neighbor) cycle_path[:] = cycle_path[idx:] has_cycle = True return True if color[neighbor] == WHITE and dfs(neighbor): return True cycle_path.pop() color[node] = BLACK return False for node in game_names: if color[node] == WHITE and dfs(node): break if has_cycle: return [ Finding( check_id="S-004", severity=Severity.ERROR, message=( f"Covariant flow graph contains a cycle: {' -> '.join(cycle_path)}" ), source_elements=cycle_path, passed=False, ) ] return [ Finding( check_id="S-004", severity=Severity.ERROR, message="Covariant flow graph is acyclic (DAG)", source_elements=[], passed=True, ) ] ``` ## `check_s005_decision_space_validation(pattern)` S-005: Every decision game must have a non-empty Y (decision output) and at least one incoming contravariant flow (utility feedback). Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s005_decision_space_validation(pattern: PatternIR) -> list[Finding]: """S-005: Every decision game must have a non-empty Y (decision output) and at least one incoming contravariant flow (utility feedback). 
""" findings = [] contra_targets = set() for flow in pattern.flows: if flow.direction == FlowDirection.CONTRAVARIANT: contra_targets.add(flow.target) for game in pattern.games: if game.game_type != GameType.DECISION: continue y_slot = game.signature[1] has_y = bool(y_slot) has_contra = game.name in contra_targets issues = [] if not has_y: issues.append("empty Y slot (no decision output)") if not has_contra: issues.append("no incoming contravariant flow (no utility feedback)") passed = has_y and has_contra findings.append( Finding( check_id="S-005", severity=Severity.WARNING, message=( f"Decision game {game.name!r}: Y={y_slot!r}" + (f" — {'; '.join(issues)}" if issues else "") ), source_elements=[game.name], passed=passed, ) ) return findings ``` ## `check_s006_corecursive_wiring(pattern)` S-006: Validate corecursive (temporal Y→X) flow wiring. Corecursive flows must satisfy two invariants: 1. Direction must be covariant (they carry forward data across timesteps, not backward utility). 1. The flow label tokens must be a subset of the source game's Y tokens (the data being forwarded must actually exist in the source's output). Only runs when corecursive flows are present in the pattern. Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s006_corecursive_wiring(pattern: PatternIR) -> list[Finding]: """S-006: Validate corecursive (temporal Y→X) flow wiring. Corecursive flows must satisfy two invariants: 1. Direction must be covariant (they carry forward data across timesteps, not backward utility). 2. The flow label tokens must be a subset of the source game's Y tokens (the data being forwarded must actually exist in the source's output). Only runs when corecursive flows are present in the pattern. 
""" findings: list[Finding] = [] game_sigs = {g.name: g.signature for g in pattern.games} corecursive_flows = [f for f in pattern.flows if f.is_corecursive] if not corecursive_flows: return findings for flow in corecursive_flows: # Must be covariant direction if flow.direction != FlowDirection.COVARIANT: findings.append( Finding( check_id="S-006", severity=Severity.ERROR, message=( f"Corecursive flow {flow.source} → {flow.target}: " f"must be covariant (got {flow.direction.value})" ), source_elements=[flow.source, flow.target], passed=False, ) ) continue # Source Y tokens should overlap with target X tokens if flow.source not in game_sigs or flow.target not in game_sigs: continue src_y = game_sigs[flow.source][1] # Y tgt_x = game_sigs[flow.target][0] # X if not src_y or not tgt_x: findings.append( Finding( check_id="S-006", severity=Severity.WARNING, message=( f"Corecursive flow {flow.source} → {flow.target}: " f"Y={src_y!r}, X={tgt_x!r} — cannot verify token overlap" ), source_elements=[flow.source, flow.target], passed=True, ) ) continue compatible = tokens_subset(flow.label, src_y) findings.append( Finding( check_id="S-006", severity=Severity.ERROR, message=( f"Corecursive flow {flow.source} → {flow.target}: " f"Y={src_y!r}, X={tgt_x!r}, flow={flow.label!r}" + ("" if compatible else " — label not in source Y") ), source_elements=[flow.source, flow.target], passed=compatible, ) ) return findings ``` ## `check_s007_initialization_completeness(pattern)` S-007: Every initialization input must have at least one outgoing flow. 
Source code in `packages/gds-domains/gds_domains/games/verification/structural_checks.py` ``` def check_s007_initialization_completeness(pattern: PatternIR) -> list[Finding]: """S-007: Every initialization input must have at least one outgoing flow.""" findings = [] flow_sources = {f.source for f in pattern.flows} init_inputs = [ i for i in pattern.inputs if i.input_type == InputType.INITIALIZATION ] if not init_inputs: findings.append( Finding( check_id="S-007", severity=Severity.WARNING, message="No initialization inputs found", source_elements=[], passed=True, ) ) return findings for inp in init_inputs: connected = inp.name in flow_sources findings.append( Finding( check_id="S-007", severity=Severity.WARNING, message=( f"Initialization input {inp.name!r}" + ("" if connected else " — not connected to any game") ), source_elements=[inp.name], passed=connected, ) ) return findings ``` # gds_domains.games.viz Mermaid visualization generators for OGS patterns. Inspired by gds-viz, but adapted for game-theory-specific PatternIR. Views: 1. Structural - block topology with composition operators 1. Architecture by Role - games grouped by game_type 1. Architecture by Domain - games grouped by tags 1. Game Hierarchy - nested composition tree 1. Flow Topology - covariant flow graph 1. Terminal Conditions - state transitions ## `structural_to_mermaid(pattern)` View 1: Structural - compiled game graph with composition topology. Shows all games as nodes with their types, and all flows as edges. Role-based styling: decision games are rectangles, functions are stadiums. Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def structural_to_mermaid(pattern: PatternIR) -> str: """View 1: Structural - compiled game graph with composition topology. Shows all games as nodes with their types, and all flows as edges. Role-based styling: decision games are rectangles, functions are stadiums. 
""" lines = ["%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60}}}%%"] lines.append("flowchart TD") # Define nodes with shapes based on game type for game in pattern.games: node_id = _sanitize_id(game.name) if game.game_type == GameType.DECISION: # Rectangle for decision games lines.append(f' {node_id}["{game.name}"]') elif game.game_type == GameType.FUNCTION_COVARIANT: # Stadium for covariant functions lines.append(f' {node_id}(["{game.name}"])') elif game.game_type == GameType.FUNCTION_CONTRAVARIANT: # Cylinder for contravariant functions lines.append(f" {node_id}[({game.name})]") else: # Default rectangle lines.append(f' {node_id}["{game.name}"]') # Add flows as edges for flow in pattern.flows: source_id = _sanitize_id(flow.source) target_id = _sanitize_id(flow.target) # Style edges based on flow type if ( flow.is_feedback or flow.is_corecursive or flow.direction == FlowDirection.CONTRAVARIANT ): lines.append(f' {source_id} -.->|"{flow.label}"| {target_id}') else: lines.append(f' {source_id} -->|"{flow.label}"| {target_id}') return "\n".join(lines) ``` ## `architecture_by_role_to_mermaid(pattern)` View 2: Architecture by Role - games grouped by game_type. Groups games by their GameType (decision, function_covariant, etc.). Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def architecture_by_role_to_mermaid(pattern: PatternIR) -> str: """View 2: Architecture by Role - games grouped by game_type. Groups games by their GameType (decision, function_covariant, etc.). 
""" lines = ["%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 80}}}%%"] lines.append("flowchart TD") # Group games by type by_type: dict[GameType, list] = {} for game in pattern.games: by_type.setdefault(game.game_type, []).append(game) # Create subgraphs for each type for game_type, games in sorted(by_type.items(), key=lambda x: x[0].value): type_name = game_type.value.replace("_", " ").title() lines.append(f" subgraph {game_type.value} [{type_name}]") for game in games: node_id = _sanitize_id(game.name) lines.append(f' {node_id}["{game.name}"]') lines.append(" end") # Add flows between subgraphs for flow in pattern.flows: source_id = _sanitize_id(flow.source) target_id = _sanitize_id(flow.target) lines.append(f' {source_id} -->|"{flow.label}"| {target_id}') return "\n".join(lines) ``` ## `architecture_by_domain_to_mermaid(pattern, tag_key='domain')` View 3: Architecture by Domain - games grouped by tag. Groups games by a tag key (default: "domain"). Games without the tag go to "ungrouped". Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def architecture_by_domain_to_mermaid( pattern: PatternIR, tag_key: str = "domain" ) -> str: """View 3: Architecture by Domain - games grouped by tag. Groups games by a tag key (default: "domain"). Games without the tag go to "ungrouped". 
""" lines = ["%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 80}}}%%"] lines.append("flowchart TD") # Group games by tag value by_domain: dict[str, list] = {} ungrouped = [] for game in pattern.games: tag_value = game.tags.get(tag_key) if game.tags else None if tag_value: by_domain.setdefault(tag_value, []).append(game) else: ungrouped.append(game) # Create subgraphs for each domain # (prefix with "dom_" to avoid ID collisions with game nodes) for domain, games in sorted(by_domain.items()): safe_domain = "dom_" + _sanitize_id(domain) lines.append(f' subgraph {safe_domain} ["{domain}"]') for game in games: node_id = _sanitize_id(game.name) lines.append(f' {node_id}["{game.name}"]') lines.append(" end") # Ungrouped games if ungrouped: lines.append(' subgraph ungrouped ["Ungrouped"]') for game in ungrouped: node_id = _sanitize_id(game.name) lines.append(f' {node_id}["{game.name}"]') lines.append(" end") # Add flows for flow in pattern.flows: source_id = _sanitize_id(flow.source) target_id = _sanitize_id(flow.target) lines.append(f' {source_id} -->|"{flow.label}"| {target_id}') return "\n".join(lines) ``` ## `hierarchy_to_mermaid(pattern)` View 4: Game Hierarchy - nested composition tree. Shows the hierarchical composition structure (sequential, parallel, feedback, corecursive). Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def hierarchy_to_mermaid(pattern: PatternIR) -> str: """View 4: Game Hierarchy - nested composition tree. Shows the hierarchical composition structure (sequential, parallel, feedback, corecursive). 
""" lines = ["%%{init: {'flowchart': {'nodeSpacing': 40, 'rankSpacing': 50}}}%%"] lines.append("flowchart TD") if not pattern.hierarchy: return "\n".join([*lines, " No hierarchy information available"]) def render_node(node, parent_id: str | None = None, depth: int = 0) -> list[str]: node_lines = [] node_id = _sanitize_id(node.id) if node.composition_type: # Composite node type_label = ( node.composition_type.value if node.composition_type else "group" ) label = f"{node.name} ({type_label})" if node.exit_condition: label += f"<br/>exit: {node.exit_condition[:30]}..." node_lines.append(f' {node_id}["{label}"]') if parent_id: node_lines.append(f" {parent_id} --> {node_id}") for child in node.children: node_lines.extend(render_node(child, node_id, depth + 1)) else: # Leaf node (atomic game) game_name = node.block_name or node.name node_lines.append(f' {node_id}["{game_name}"]') if parent_id: node_lines.append(f" {parent_id} --> {node_id}") return node_lines lines.extend(render_node(pattern.hierarchy)) return "\n".join(lines) ```
## `flow_topology_to_mermaid(pattern)` View 5: Flow Topology - covariant flow graph. Shows only covariant (forward) flows, useful for understanding data flow. Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def flow_topology_to_mermaid(pattern: PatternIR) -> str: """View 5: Flow Topology - covariant flow graph. Shows only covariant (forward) flows, useful for understanding data flow. """ lines = ["%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60}}}%%"] lines.append("flowchart LR") # Only covariant flows covariant_flows = [ f for f in pattern.flows if f.direction == FlowDirection.COVARIANT ] # Collect all nodes that appear in covariant flows node_names = set() for flow in covariant_flows: node_names.add(flow.source) node_names.add(flow.target) # Define nodes for name in sorted(node_names): node_id = _sanitize_id(name) lines.append(f' {node_id}["{name}"]') # Add covariant flows only for flow in covariant_flows: source_id = _sanitize_id(flow.source) target_id = _sanitize_id(flow.target) style = "-.->" if flow.is_corecursive else "-->" lines.append(f' {source_id} {style}|"{flow.label}"| {target_id}') return "\n".join(lines) ``` ## `terminal_conditions_to_mermaid(pattern)` View 6: Terminal Conditions - state transitions. Shows terminal conditions as state transitions.
Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def terminal_conditions_to_mermaid(pattern: PatternIR) -> str: """View 6: Terminal Conditions - state transitions. Shows terminal conditions as state transitions. """ lines = ["%%{init: {'flowchart': {'nodeSpacing': 50, 'rankSpacing': 60}}}%%"] lines.append("stateDiagram-v2") lines.append(" [*] --> Running") if not pattern.terminal_conditions: lines.append(" Running --> [*]") return "\n".join(lines) # Add terminal condition states for tc in pattern.terminal_conditions: tc_id = _sanitize_id(tc.name) lines.append(f" Running --> {tc_id} : {tc.outcome}") lines.append(f" {tc_id} : {tc.name}") if tc.description: lines.append(f" note right of {tc_id}") lines.append(f" {tc.description[:60]}") lines.append(" end note") return "\n".join(lines) ``` ## `generate_all_views(pattern)` Generate all 6 views and return as a dictionary. Source code in `packages/gds-domains/gds_domains/games/viz.py` ``` def generate_all_views(pattern: PatternIR) -> dict[str, str]: """Generate all 6 views and return as a dictionary.""" return { "structural": structural_to_mermaid(pattern), "architecture_by_role": architecture_by_role_to_mermaid(pattern), "architecture_by_domain": architecture_by_domain_to_mermaid(pattern), "hierarchy": hierarchy_to_mermaid(pattern), "flow_topology": flow_topology_to_mermaid(pattern), "terminal_conditions": terminal_conditions_to_mermaid(pattern), } ``` # Business (gds-domains) # gds-business **Business dynamics DSL over GDS semantics** — causal loop diagrams, supply chain networks, and value stream maps with formal verification. ## What is this? `gds-business` extends the GDS framework with business dynamics vocabulary — system dynamics diagrams, supply chain modeling, and lean manufacturing analysis. 
It provides: - **3 diagram types** — Causal Loop Diagrams (CLD), Supply Chain Networks (SCN), Value Stream Maps (VSM) - **Typed compilation** — Each diagram compiles to GDS role blocks, entities, and composition trees - **11 verification checks** — Domain-specific structural validation (CLD-001..003, SCN-001..004, VSM-001..004) - **Canonical decomposition** — Validated h = f ∘ g projection across all three diagram types - **Full GDS integration** — All downstream tooling works immediately (canonical projection, semantic checks, gds-viz) ## Architecture ``` gds-framework (pip install gds-framework) │ │ Domain-neutral composition algebra, typed spaces, │ state model, verification engine, flat IR compiler. │ └── gds-business (pip install gds-domains) │ │ Business dynamics DSL: CLD, SCN, VSM elements, │ compile_*(), domain verification, verify() dispatch. │ └── Your application │ │ Concrete business models, analysis notebooks, │ verification runners. ``` ## Quick Start ``` uv add gds-business # or: pip install gds-domains ``` ``` from gds_domains.business import ( # CLD Variable, CausalLink, CausalLoopModel, # Supply Chain SupplyNode, Shipment, DemandSource, OrderPolicy, SupplyChainModel, # VSM ProcessStep, InventoryBuffer, Supplier, Customer, MaterialFlow, ValueStreamModel, # Verification verify, ) # ── Causal Loop Diagram ───────────────────────────────── cld = CausalLoopModel( name="Population Dynamics", variables=[ Variable(name="Population"), Variable(name="Births"), Variable(name="Deaths"), ], links=[ CausalLink(source="Population", target="Births", polarity="+"), CausalLink(source="Births", target="Population", polarity="+"), CausalLink(source="Population", target="Deaths", polarity="+"), CausalLink(source="Deaths", target="Population", polarity="-"), ], ) # ── Supply Chain Network ──────────────────────────────── scn = SupplyChainModel( name="Beer Game", nodes=[ SupplyNode(name="Factory", initial_inventory=100), SupplyNode(name="Retailer", 
initial_inventory=100), ], shipments=[ Shipment(name="F->R", source="Factory", target="Retailer"), ], demand_sources=[ DemandSource(name="Customer", target="Retailer"), ], order_policies=[ OrderPolicy(name="Reorder", node="Retailer", inputs=["Retailer"]), ], ) # ── Compile & Verify ──────────────────────────────────── spec = cld.compile() # → GDSSpec ir = scn.compile_system() # → SystemIR report = verify(cld) # → VerificationReport ``` ## Canonical Spectrum All three diagram types map cleanly onto the GDS canonical form h = f ∘ g: | Diagram | |X| | |f| | Form | Character | |---------|-----|-----|------|-----------| | CLD | 0 | 0 | h = g | Stateless — pure signal relay | | SCN | n | n | h = f ∘ g | Full dynamical — inventory state | | VSM (no buffers) | 0 | 0 | h = g | Stateless process chain | | VSM (with buffers) | m | m | h = f ∘ g | Partially stateful | ## Diagram Types ### Causal Loop Diagrams (CLD) Model feedback structure in complex systems. Variables connected by causal links with polarity (reinforcing/balancing). Stateless — all variables map to Policy blocks. ### Supply Chain Networks (SCN) Model multi-echelon supply chains with inventory dynamics. Demand sources drive order policies that update inventory at supply nodes. Stateful — nodes carry inventory state via Mechanism + Entity. ### Value Stream Maps (VSM) Model lean manufacturing value streams with process steps, inventory buffers, and material/information flows. Partially stateful — buffers add state when present. ## Credits Built on [gds-framework](https://blockscience.github.io/gds-core/framework/index.md) by [BlockScience](https://block.science). 
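The reinforcing/balancing distinction used by CLDs above follows the standard causal-loop parity rule: count the negative links around a loop — an even count reinforces, an odd count balances. A standalone sketch of that rule (illustration only, not part of the gds-business API):

```python
# Classify a causal loop from its link polarities ("+" / "-"):
# even number of negative links -> Reinforcing (R), odd -> Balancing (B).
# Self-contained illustration; not a gds-business function.

def classify_loop(polarities: list[str]) -> str:
    negatives = sum(1 for p in polarities if p == "-")
    return "R" if negatives % 2 == 0 else "B"

# In the Population Dynamics quick-start model:
# Population -> Births -> Population is ("+", "+"): reinforcing.
# Population -> Deaths -> Population is ("+", "-"): balancing.
```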
# Getting Started ## Installation ``` uv add gds-business # or: pip install gds-domains ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First CLD A Causal Loop Diagram models feedback structure using variables and causal links: ``` from gds_domains.business import ( Variable, CausalLink, CausalLoopModel, verify ) model = CausalLoopModel( name="Population Dynamics", variables=[ Variable(name="Population"), Variable(name="Births"), Variable(name="Deaths"), ], links=[ CausalLink(source="Population", target="Births", polarity="+"), CausalLink(source="Births", target="Population", polarity="+"), CausalLink(source="Population", target="Deaths", polarity="+"), CausalLink(source="Deaths", target="Population", polarity="-"), ], ) # Compile to GDS spec = model.compile() print(f"Blocks: {len(spec.blocks)}") # 3 Policy blocks print(f"Entities: {len(spec.entities)}") # 0 (stateless) # Verify report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'✓' if f.passed else '✗'} {f.message}") ``` ## Your First Supply Chain A Supply Chain Network models inventory dynamics across nodes: ``` from gds_domains.business import ( SupplyNode, Shipment, DemandSource, OrderPolicy, SupplyChainModel, verify, ) model = SupplyChainModel( name="Beer Game", nodes=[ SupplyNode(name="Factory", initial_inventory=100), SupplyNode(name="Distributor", initial_inventory=100), SupplyNode(name="Retailer", initial_inventory=100), ], shipments=[ Shipment(name="F->D", source="Factory", target="Distributor"), Shipment(name="D->R", source="Distributor", target="Retailer"), ], demand_sources=[ DemandSource(name="Customer", target="Retailer"), ], order_policies=[ OrderPolicy(name="Retailer Policy", node="Retailer", inputs=["Retailer"]), OrderPolicy(name="Distributor Policy", node="Distributor", inputs=["Distributor"]), OrderPolicy(name="Factory Policy", node="Factory", 
inputs=["Factory"]), ], ) # Compile — stateful, with inventory entities spec = model.compile() print(f"Entities: {len(spec.entities)}") # 3 (one per node) # Verify report = verify(model, include_gds_checks=False) for f in report.findings: if not f.passed: print(f" [{f.check_id}] ✗ {f.message}") ``` ## Your First Value Stream Map A Value Stream Map models lean manufacturing flows: ``` from gds_domains.business import ( ProcessStep, InventoryBuffer, Supplier, Customer, MaterialFlow, ValueStreamModel, verify, ) model = ValueStreamModel( name="Assembly Line", steps=[ ProcessStep(name="Cutting", cycle_time=30.0, uptime=0.95), ProcessStep(name="Welding", cycle_time=45.0, uptime=0.90), ProcessStep(name="Assembly", cycle_time=25.0), ], buffers=[ InventoryBuffer(name="Cut WIP", between=("Cutting", "Welding"), quantity=10), InventoryBuffer(name="Weld WIP", between=("Welding", "Assembly"), quantity=5), ], suppliers=[Supplier(name="Steel Supplier")], customers=[Customer(name="End Customer", takt_time=50.0)], material_flows=[ MaterialFlow(source="Steel Supplier", target="Cutting"), MaterialFlow(source="Cutting", target="Cut WIP"), MaterialFlow(source="Cut WIP", target="Welding"), MaterialFlow(source="Welding", target="Weld WIP"), MaterialFlow(source="Weld WIP", target="Assembly"), MaterialFlow(source="Assembly", target="End Customer"), ], ) # With buffers → stateful (Mechanism + Entity) spec = model.compile() print(f"Entities: {len(spec.entities)}") # 2 buffers # Verify — check bottleneck vs takt time report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'✓' if f.passed else '✗'} {f.message}") ``` ## Next Steps - [Diagram Types Guide](https://blockscience.github.io/gds-core/business/guide/diagram-types/index.md) — detailed element reference and GDS mapping - [Verification Guide](https://blockscience.github.io/gds-core/business/guide/verification/index.md) — all 11 domain checks explained - [API 
Reference](https://blockscience.github.io/gds-core/business/api/index.md) — complete auto-generated API docs # Diagram Types `gds-business` supports three business dynamics diagram types, each with its own element vocabulary, GDS mapping, and composition structure. ## Causal Loop Diagram (CLD) Causal loop diagrams model **feedback structure** in complex systems using variables and directed causal links. ### Elements | Element | Description | GDS Role | | ------------ | --------------------------------------------------------- | -------- | | `Variable` | A system variable (e.g., Population, Revenue) | Policy | | `CausalLink` | Directed influence with polarity (+/-) and optional delay | Wiring | ### Polarity - **Positive (+)**: Source increases → target increases (same direction) - **Negative (-)**: Source increases → target decreases (opposite direction) ### Loop Classification Loops are classified by counting negative links: - **Even negatives** → **Reinforcing (R)** — amplifies change - **Odd negatives** → **Balancing (B)** — counteracts change ### GDS Mapping All variables map to `Policy` blocks (signal relays). No state, no entities. Single parallel tier composition. ``` Composition: (all_variables |) Canonical: h = g (stateless) ``` ### Example ``` from gds_domains.business import Variable, CausalLink, CausalLoopModel model = CausalLoopModel( name="Market Dynamics", variables=[ Variable(name="Price"), Variable(name="Demand"), Variable(name="Supply"), ], links=[ CausalLink(source="Price", target="Demand", polarity="-"), CausalLink(source="Price", target="Supply", polarity="+"), CausalLink(source="Demand", target="Price", polarity="+"), CausalLink(source="Supply", target="Price", polarity="-"), ], ) ``` ______________________________________________________________________ ## Supply Chain Network (SCN) Supply chain networks model **multi-echelon inventory dynamics** with demand signals, order policies, and material flows. 
### Elements | Element | Description | GDS Role | | -------------- | -------------------------------------- | ------------------ | | `SupplyNode` | Warehouse/factory with inventory state | Mechanism + Entity | | `Shipment` | Directed flow link between nodes | Wiring | | `DemandSource` | Exogenous demand signal | BoundaryAction | | `OrderPolicy` | Reorder decision logic | Policy | ### GDS Mapping Three-tier composition with temporal feedback loop: ``` Composition: (demands |) >> (policies |) >> (node_mechanisms |) .loop([inventory → policies]) Canonical: h = f ∘ g (stateful — inventory stocks are state X) ``` ### Semantic Types | Type | Space | Description | | ------------------ | ------------------- | ------------------------- | | `InventoryType` | `InventorySpace` | Inventory level at a node | | `ShipmentRateType` | `ShipmentRateSpace` | Rate of material flow | | `DemandType` | `DemandSpace` | Exogenous demand signal | ### Example ``` from gds_domains.business import ( SupplyNode, Shipment, DemandSource, OrderPolicy, SupplyChainModel, ) model = SupplyChainModel( name="Two-Echelon Chain", nodes=[ SupplyNode(name="Warehouse", initial_inventory=200, capacity=500), SupplyNode(name="Retail", initial_inventory=50), ], shipments=[ Shipment(name="W->R", source="Warehouse", target="Retail", lead_time=2.0), ], demand_sources=[ DemandSource(name="Customer Demand", target="Retail"), ], order_policies=[ OrderPolicy(name="Retail Reorder", node="Retail", inputs=["Retail"]), ], ) ``` ______________________________________________________________________ ## Value Stream Map (VSM) Value stream maps model **lean manufacturing process flows** with process steps, inventory buffers, suppliers, customers, and material/information flows. ### Elements | Element | Description | GDS Role | | ----------------- | ---------------------------------------------- | ------------------ | | `ProcessStep` | Processing stage with cycle time, uptime, etc. 
| Policy | | `InventoryBuffer` | WIP buffer between stages | Mechanism + Entity | | `Supplier` | External material source | BoundaryAction | | `Customer` | External demand sink with takt time | BoundaryAction | | `MaterialFlow` | Material movement (push or pull) | Wiring | | `InformationFlow` | Signal/kanban flow | Wiring | ### GDS Mapping Three-tier composition with optional temporal loop: ``` Composition: (suppliers | customers) >> (steps |) >> (buffers |) .loop([buffer content → steps]) # if buffers exist Canonical: h = g (no buffers) or h = f ∘ g (with buffers) ``` ### Semantic Types | Type | Space | Description | | ------------------- | -------------------- | -------------------------- | | `MaterialType` | `MaterialSpace` | Material flow payload | | `ProcessSignalType` | `ProcessSignalSpace` | Process step signal/kanban | ### Push vs Pull Material flows carry a `flow_type` attribute: - **push** — material is pushed downstream based on production schedule - **pull** — material is pulled by downstream demand (kanban) VSM-002 identifies where the flow type transitions, marking the **push/pull boundary**. 
### Example ``` from gds_domains.business import ( ProcessStep, InventoryBuffer, Supplier, Customer, MaterialFlow, InformationFlow, ValueStreamModel, ) model = ValueStreamModel( name="Assembly Line", steps=[ ProcessStep(name="Stamping", cycle_time=10.0, changeover_time=30.0, uptime=0.85), ProcessStep(name="Welding", cycle_time=45.0, uptime=0.90), ProcessStep(name="Assembly", cycle_time=25.0, operators=3), ], buffers=[ InventoryBuffer(name="Stamped Parts", between=("Stamping", "Welding"), quantity=100), ], suppliers=[Supplier(name="Coil Supplier")], customers=[Customer(name="Shipping", takt_time=60.0)], material_flows=[ MaterialFlow(source="Coil Supplier", target="Stamping"), MaterialFlow(source="Stamping", target="Stamped Parts"), MaterialFlow(source="Stamped Parts", target="Welding", flow_type="push"), MaterialFlow(source="Welding", target="Assembly", flow_type="pull"), MaterialFlow(source="Assembly", target="Shipping"), ], information_flows=[ InformationFlow(source="Shipping", target="Assembly"), ], ) ``` # Verification `gds-business` provides 11 domain-specific verification checks across three diagram types, plus access to the 6 GDS generic checks (G-001..G-006) via the unified `verify()` function. 
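A `VerificationReport` is plain data, so triaging findings reduces to ordinary list filtering. The sketch below uses a stand-in dataclass rather than the real `Finding` type — field names follow the ones this section documents, but the types and values are illustrative:

```python
# Sketch: filtering a report's findings for blocking errors.
# `Finding` here is a hypothetical stand-in, not the gds-business class.
from dataclasses import dataclass

@dataclass
class Finding:
    check_id: str
    severity: str  # "ERROR" | "WARNING" | "INFO"
    message: str
    passed: bool

findings = [
    Finding("CLD-001", "INFO", "Loop A -> B -> A: Reinforcing (R)", True),
    Finding("CLD-002", "WARNING", "Variable 'Unused' does NOT appear in any link", False),
    Finding("CLD-003", "ERROR", "Self-loop detected: 'X' -> 'X'", False),
]

# Keep only failed ERROR-severity findings — the ones worth blocking on.
blocking = [f for f in findings if not f.passed and f.severity == "ERROR"]
print([f.check_id for f in blocking])  # ['CLD-003']
```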
## Using verify() The `verify()` function auto-dispatches to the correct domain checks based on model type: ``` from gds_domains.business import verify report = verify(model) # Domain + GDS checks report = verify(model, include_gds_checks=False) # Domain checks only report = verify(model, domain_checks=[my_check]) # Custom checks ``` The returned `VerificationReport` contains a list of `Finding` objects with: - `check_id` — e.g., "CLD-001", "SCN-002", "G-003" - `severity` — ERROR, WARNING, or INFO - `message` — human-readable description - `passed` — whether the check passed - `source_elements` — elements involved ## CLD Checks | ID | Name | Severity | What it checks | | ------- | ---------------------------- | -------- | ------------------------------------------------------------------------------------------- | | CLD-001 | Loop polarity classification | INFO | Finds all cycles, classifies as Reinforcing (R) or Balancing (B) by counting negative links | | CLD-002 | Variable reachability | WARNING | Every variable appears in at least one link | | CLD-003 | No self-loops | ERROR | No link has source == target | ### CLD-001: Loop Polarity This is an **informational** check — it doesn't fail, but reports the structure of feedback loops: ``` [CLD-001] ✓ Loop Population -> Births -> Population: Reinforcing (R) (0 negative link(s)) [CLD-001] ✓ Loop Population -> Deaths -> Population: Balancing (B) (1 negative link(s)) ``` ### CLD-002: Variable Reachability Flags isolated variables that don't participate in any causal relationship: ``` [CLD-002] ✗ Variable 'Unused' does NOT appear in any link ``` ### CLD-003: No Self-Loops Self-loops (a variable causing itself) are structurally invalid: ``` [CLD-003] ✗ Self-loop detected: 'X' -> 'X' ``` Note CLD-003 is also enforced at construction time by the `CausalLoopModel` validator. The check exists for completeness when running `verify()`. 
## SCN Checks | ID | Name | Severity | What it checks | | ------- | ---------------------- | -------- | ---------------------------------------------------- | | SCN-001 | Network connectivity | WARNING | All nodes reachable via BFS from demand/supply paths | | SCN-002 | Shipment node validity | ERROR | source and target exist | | SCN-003 | Demand target validity | ERROR | target exists | | SCN-004 | No orphan nodes | WARNING | Every node in at least one shipment or demand | ### SCN-001: Network Connectivity Uses BFS from demand targets to check reachability: ``` [SCN-001] ✗ Node 'Isolated Warehouse' is NOT reachable in the supply network ``` ### SCN-004: No Orphan Nodes Nodes not connected to any shipment or demand are flagged: ``` [SCN-004] ✗ Node 'Unused DC' is NOT connected ``` ## VSM Checks | ID | Name | Severity | What it checks | | ------- | ----------------------- | -------- | -------------------------------------------------------- | | VSM-001 | Linear process flow | WARNING | Each step has ≤1 incoming, ≤1 outgoing material flow | | VSM-002 | Push/pull boundary | INFO | Identifies where flow_type transitions from push to pull | | VSM-003 | Flow reference validity | ERROR | All flow source/target are declared elements | | VSM-004 | Bottleneck vs takt | WARNING | Max cycle_time ≤ customer takt_time | ### VSM-001: Linear Process Flow Value streams are typically linear. Branching may indicate modeling issues: ``` [VSM-001] ✗ Step 'Sorting': 1 incoming, 2 outgoing material flow(s) — non-linear (branching detected) ``` ### VSM-002: Push/Pull Boundary Lean manufacturing distinguishes push (schedule-driven) from pull (demand-driven) flows. 
This check identifies transition points: ``` [VSM-002] ✓ Push/pull boundary at 'Welding': push->pull ``` ### VSM-004: Bottleneck vs Takt The slowest process step (bottleneck) must not exceed customer takt time: ``` [VSM-004] ✗ Bottleneck 'Welding' (cycle_time=45.0) > customer 'End User' takt_time=40.0 ``` ## GDS Generic Checks When `include_gds_checks=True` (default), the model is compiled to `SystemIR` and the 6 GDS generic checks run: | ID | Name | What it checks | | ----- | ----------------------------- | ---------------------------------- | | G-001 | Domain/codomain compatibility | Wiring type tokens match | | G-002 | Signature completeness | Every block has inputs and outputs | | G-003 | Unique block naming | No duplicate block names | | G-004 | Wiring source existence | Wired blocks exist | | G-005 | Wiring target existence | Wired blocks exist | | G-006 | Hierarchy consistency | Block tree is well-formed | Note G-002 will flag `BoundaryAction` blocks as having "no inputs" — this is expected since they are exogenous sources by design. # API Reference Complete API documentation for `gds-domains` (business), auto-generated from source docstrings. 
## Common | Module | Description | | --------------------------------------------------------------------------------------------------- | ------------------------------------------- | | [gds_domains.business](https://blockscience.github.io/gds-core/business/api/init/index.md) | Package root — version, top-level imports | | [gds_domains.business.common](https://blockscience.github.io/gds-core/business/api/common/index.md) | Shared types, errors, compilation utilities | ## Causal Loop Diagrams | Module | Description | | --------------------------------------------------------------------------------------------------------------- | ------------------------------------ | | [gds_domains.business.cld.elements](https://blockscience.github.io/gds-core/business/api/cld-elements/index.md) | Variable, CausalLink declarations | | [gds_domains.business.cld.model](https://blockscience.github.io/gds-core/business/api/cld-model/index.md) | CausalLoopModel container | | [gds_domains.business.cld.compile](https://blockscience.github.io/gds-core/business/api/cld-compile/index.md) | CLD → GDSSpec / SystemIR compiler | | [gds_domains.business.cld.checks](https://blockscience.github.io/gds-core/business/api/cld-checks/index.md) | CLD-001..CLD-003 verification checks | ## Supply Chain Networks | Module | Description | | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- | | [gds_domains.business.supplychain.elements](https://blockscience.github.io/gds-core/business/api/scn-elements/index.md) | SupplyNode, Shipment, DemandSource, OrderPolicy | | [gds_domains.business.supplychain.model](https://blockscience.github.io/gds-core/business/api/scn-model/index.md) | SupplyChainModel container | | [gds_domains.business.supplychain.compile](https://blockscience.github.io/gds-core/business/api/scn-compile/index.md) | SCN → GDSSpec / SystemIR compiler | | 
[gds_domains.business.supplychain.checks](https://blockscience.github.io/gds-core/business/api/scn-checks/index.md) | SCN-001..SCN-004 verification checks | ## Value Stream Maps | Module | Description | | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------- | | [gds_domains.business.vsm.elements](https://blockscience.github.io/gds-core/business/api/vsm-elements/index.md) | ProcessStep, InventoryBuffer, Supplier, Customer, flows | | [gds_domains.business.vsm.model](https://blockscience.github.io/gds-core/business/api/vsm-model/index.md) | ValueStreamModel container | | [gds_domains.business.vsm.compile](https://blockscience.github.io/gds-core/business/api/vsm-compile/index.md) | VSM → GDSSpec / SystemIR compiler | | [gds_domains.business.vsm.checks](https://blockscience.github.io/gds-core/business/api/vsm-checks/index.md) | VSM-001..VSM-004 verification checks | ## Verification | Module | Description | | --------------------------------------------------------------------------------------------------------------- | ------------------------------ | | [gds_domains.business.verification](https://blockscience.github.io/gds-core/business/api/verification/index.md) | Union dispatch verify() engine | # gds_domains.business.cld.checks CLD verification checks (CLD-001..CLD-003). CLD-001: Loop polarity classification. Find all cycles in the CLD and classify them: - Even number of negative links = Reinforcing (R) - Odd number of negative links = Balancing (B) Source code in `packages/gds-domains/gds_domains/business/cld/checks.py` ``` def check_cld001_loop_polarity(model: CausalLoopModel) -> list[Finding]: """CLD-001: Loop polarity classification. 
Find all cycles in the CLD and classify them: - Even number of negative links = Reinforcing (R) - Odd number of negative links = Balancing (B) """ findings: list[Finding] = [] # Build adjacency list with polarity adj: dict[str, list[tuple[str, str]]] = {v.name: [] for v in model.variables} for link in model.links: if link.source in adj: adj[link.source].append((link.target, link.polarity)) # Find all simple cycles using DFS cycles: list[list[tuple[str, str]]] = [] visited: set[str] = set() def dfs( node: str, start: str, path: list[tuple[str, str]], in_path: set[str], ) -> None: for neighbor, polarity in adj[node]: if neighbor == start and len(path) > 1: cycles.append(path + [(neighbor, polarity)]) elif neighbor not in in_path and neighbor not in visited: dfs( neighbor, start, path + [(neighbor, polarity)], in_path | {neighbor} ) for var in model.variables: dfs(var.name, var.name, [(var.name, "")], {var.name}) visited.add(var.name) if not cycles: findings.append( Finding( check_id="CLD-001", severity=Severity.INFO, message="No feedback loops detected in the CLD", source_elements=[], passed=True, ) ) else: for cycle in cycles: nodes = [n for n, _ in cycle[:-1]] # Count negative links in the cycle (skip first tuple which has empty polarity) neg_count = sum(1 for _, p in cycle[1:] if p == "-") loop_type = "Balancing (B)" if neg_count % 2 == 1 else "Reinforcing (R)" findings.append( Finding( check_id="CLD-001", severity=Severity.INFO, message=( f"Loop {' -> '.join(nodes)} -> {nodes[0]}: " f"{loop_type} ({neg_count} negative link(s))" ), source_elements=nodes, passed=True, ) ) return findings ``` CLD-002: Every variable appears in at least one link. 
Source code in `packages/gds-domains/gds_domains/business/cld/checks.py` ``` def check_cld002_variable_reachability(model: CausalLoopModel) -> list[Finding]: """CLD-002: Every variable appears in at least one link.""" findings: list[Finding] = [] linked_vars: set[str] = set() for link in model.links: linked_vars.add(link.source) linked_vars.add(link.target) for var in model.variables: reachable = var.name in linked_vars findings.append( Finding( check_id="CLD-002", severity=Severity.WARNING, message=( f"Variable {var.name!r} " f"{'appears' if reachable else 'does NOT appear'} in any link" ), source_elements=[var.name], passed=reachable, ) ) return findings ``` CLD-003: No self-loops (source != target on all links). Source code in `packages/gds-domains/gds_domains/business/cld/checks.py` ``` def check_cld003_no_self_loops(model: CausalLoopModel) -> list[Finding]: """CLD-003: No self-loops (source != target on all links).""" findings: list[Finding] = [] for link in model.links: is_self_loop = link.source == link.target findings.append( Finding( check_id="CLD-003", severity=Severity.ERROR, message=( f"Self-loop detected: {link.source!r} -> {link.target!r}" if is_self_loop else f"Link {link.source!r} -> {link.target!r} is not a self-loop" ), source_elements=[link.source, link.target], passed=not is_self_loop, ) ) return findings ``` # gds_domains.business.cld.compile Compiler: CausalLoopModel → GDSSpec / SystemIR. ## Semantic Types ## Public Functions Compile a CausalLoopModel into a GDSSpec. Registers: types, spaces, blocks, wirings. No entities (stateless). Source code in `packages/gds-domains/gds_domains/business/cld/compile.py` ``` def compile_cld(model: CausalLoopModel) -> GDSSpec: """Compile a CausalLoopModel into a GDSSpec. Registers: types, spaces, blocks, wirings. No entities (stateless). """ spec = GDSSpec(name=model.name, description=model.description) # 1. Register types spec.collect(SignalType) # 2. Register spaces spec.collect(SignalSpace) # 3. 
Register blocks (all Policy) for var in model.variables: spec.register_block(_build_variable_block(var, model)) # 4. Register spec wirings all_block_names = [b.name for b in spec.blocks.values()] wires: list[Wire] = [] for link in model.links: wires.append( Wire(source=link.source, target=link.target, space="CLD SignalSpace") ) if wires: spec.register_wiring( SpecWiring( name=f"{model.name} Wiring", block_names=all_block_names, wires=wires, description=f"Auto-generated wiring for CLD {model.name!r}", ) ) # CLDs model causal feedback — discrete/synchronous/Moore spec.execution_contract = ExecutionContract(time_domain="discrete") return spec ``` Compile a CausalLoopModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). Source code in `packages/gds-domains/gds_domains/business/cld/compile.py` ``` def compile_cld_to_system(model: CausalLoopModel) -> SystemIR: """Compile a CausalLoopModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). """ root = _build_composition_tree(model) return compile_system(model.name, root) ``` # gds_domains.business.cld.elements CLD element declarations — frozen Pydantic models for user-facing declarations. Bases: `BaseModel` A system variable in a causal loop diagram. Maps to: GDS Policy (signal relay). Source code in `packages/gds-domains/gds_domains/business/cld/elements.py` ``` class Variable(BaseModel, frozen=True): """A system variable in a causal loop diagram. Maps to: GDS Policy (signal relay). """ name: str description: str = "" ``` Bases: `BaseModel` A directed causal influence between variables. Maps to: GDS Wiring. Polarity "+" means same-direction influence (reinforcing), "-" means opposite-direction influence (balancing). Source code in `packages/gds-domains/gds_domains/business/cld/elements.py` ``` class CausalLink(BaseModel, frozen=True): """A directed causal influence between variables. Maps to: GDS Wiring. 
Polarity "+" means same-direction influence (reinforcing), "-" means opposite-direction influence (balancing). """ source: str target: str polarity: Literal["+", "-"] delay: bool = False ``` # gds_domains.business.cld.model CausalLoopModel — declarative container for causal loop diagrams. Bases: `BaseModel` A complete causal loop diagram declaration. Validates at construction: 1. At least one variable 1. No duplicate variable names 1. Link source/target reference declared variables 1. No self-loops Source code in `packages/gds-domains/gds_domains/business/cld/model.py` ``` class CausalLoopModel(BaseModel): """A complete causal loop diagram declaration. Validates at construction: 1. At least one variable 2. No duplicate variable names 3. Link source/target reference declared variables 4. No self-loops """ name: str variables: list[Variable] links: list[CausalLink] = Field(default_factory=list) description: str = "" @model_validator(mode="after") def _validate_structure(self) -> Self: errors: list[str] = [] # 1. At least one variable if not self.variables: errors.append("CLD must have at least one variable") # 2. No duplicate names names: list[str] = [v.name for v in self.variables] seen: set[str] = set() for n in names: if n in seen: errors.append(f"Duplicate variable name: {n!r}") seen.add(n) var_names = set(names) # 3. Link source/target reference declared variables for link in self.links: if link.source not in var_names: errors.append(f"Link source {link.source!r} is not a declared variable") if link.target not in var_names: errors.append(f"Link target {link.target!r} is not a declared variable") # 4. 
No self-loops for link in self.links: if link.source == link.target: errors.append(f"Self-loop detected: {link.source!r} -> {link.target!r}") if errors: raise BizValidationError( f"CausalLoopModel {self.name!r} validation failed:\n" + "\n".join(f" - {e}" for e in errors) ) return self # ── Convenience properties ────────────────────────────── @property def variable_names(self) -> set[str]: return {v.name for v in self.variables} # ── Compilation ───────────────────────────────────────── def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.business.cld.compile import compile_cld return compile_cld(self) def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.business.cld.compile import compile_cld_to_system return compile_cld_to_system(self) ``` ## `compile()` Compile this model to a GDS specification. Source code in `packages/gds-domains/gds_domains/business/cld/model.py` ``` def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.business.cld.compile import compile_cld return compile_cld(self) ``` ## `compile_system()` Compile this model to a flat SystemIR for verification + visualization. Source code in `packages/gds-domains/gds_domains/business/cld/model.py` ``` def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.business.cld.compile import compile_cld_to_system return compile_cld_to_system(self) ``` # gds_domains.business.common Shared types, errors, and compilation utilities. ## Diagram Kinds Bases: `StrEnum` The three business dynamics diagram types. 
Source code in `packages/gds-domains/gds_domains/business/common/types.py` ``` class BusinessDiagramKind(StrEnum): """The three business dynamics diagram types.""" CLD = "cld" SUPPLY_CHAIN = "supply_chain" VSM = "vsm" ``` ## Errors Bases: `GDSError` Base exception for business dynamics DSL errors. Source code in `packages/gds-domains/gds_domains/business/common/errors.py` ``` class BizError(GDSError): """Base exception for business dynamics DSL errors.""" ``` Bases: `BizError` Raised when a business dynamics model fails structural validation. Source code in `packages/gds-domains/gds_domains/business/common/errors.py` ``` class BizValidationError(BizError): """Raised when a business dynamics model fails structural validation.""" ``` Bases: `BizError` Raised when compilation of a business dynamics model fails. Source code in `packages/gds-domains/gds_domains/business/common/errors.py` ``` class BizCompilationError(BizError): """Raised when compilation of a business dynamics model fails.""" ``` ## Compilation Utilities Compose a list of blocks in parallel. Source code in `packages/gds-domains/gds_domains/business/common/compile_utils.py` ``` def parallel_tier(blocks: list[Block]) -> Block: """Compose a list of blocks in parallel.""" tier: Block = blocks[0] for b in blocks[1:]: tier = tier | b return tier ``` Build explicit wirings between two tiers based on port token overlap. For each output port in the first tier, find matching input ports in the second tier (by token intersection). This replaces auto-wiring so we can use explicit StackComposition and bypass the token overlap validator. Source code in `packages/gds-domains/gds_domains/business/common/compile_utils.py` ``` def build_inter_tier_wirings( first_tier_blocks: list[Block], second_tier_blocks: list[Block], ) -> list[Wiring]: """Build explicit wirings between two tiers based on port token overlap. For each output port in the first tier, find matching input ports in the second tier (by token intersection). 
This replaces auto-wiring so we can use explicit StackComposition and bypass the token overlap validator. """ wirings: list[Wiring] = [] for first_block in first_tier_blocks: for out_port in first_block.interface.forward_out: for second_block in second_tier_blocks: for in_port in second_block.interface.forward_in: if out_port.type_tokens & in_port.type_tokens: wirings.append( Wiring( source_block=first_block.name, source_port=out_port.name, target_block=second_block.name, target_port=in_port.name, ) ) return wirings ``` Compose two tiers sequentially with explicit wiring. Uses StackComposition directly to bypass the auto-wire token overlap check. If no wirings found, falls back to auto-wiring via >>. Source code in `packages/gds-domains/gds_domains/business/common/compile_utils.py` ``` def sequential_with_explicit_wiring( first: Block, second: Block, wiring: list[Wiring], ) -> Block: """Compose two tiers sequentially with explicit wiring. Uses StackComposition directly to bypass the auto-wire token overlap check. If no wirings found, falls back to auto-wiring via >>. """ if wiring: return StackComposition( name=f"{first.name} >> {second.name}", first=first, second=second, wiring=wiring, ) return first >> second ``` # gds_domains.business Public API — top-level exports. Business dynamics DSL over GDS semantics. Declare business dynamics diagrams — causal loop diagrams, supply chain networks, and value stream maps — as typed compositional specifications. The compiler maps them to GDS role blocks, entities, and composition trees. All downstream GDS tooling works immediately — canonical projection, semantic checks, SpecQuery, serialization, gds-viz. # gds_domains.business.supplychain.checks Supply chain verification checks (SCN-001..SCN-004). SCN-001: All nodes reachable via BFS from demand/supply paths. 
Source code in `packages/gds-domains/gds_domains/business/supplychain/checks.py` ``` def check_scn001_network_connectivity(model: SupplyChainModel) -> list[Finding]: """SCN-001: All nodes reachable via BFS from demand/supply paths.""" findings: list[Finding] = [] # Build undirected adjacency from shipments adj: dict[str, set[str]] = {n.name: set() for n in model.nodes} for s in model.shipments: if s.source in adj and s.target in adj: adj[s.source].add(s.target) adj[s.target].add(s.source) # Also connect demand source targets for d in model.demand_sources: if d.target in adj: adj[d.target].add(f"__demand_{d.name}") # BFS from each demand target reachable: set[str] = set() for d in model.demand_sources: if d.target not in adj: continue queue = [d.target] visited: set[str] = set() while queue: node = queue.pop(0) if node in visited or node not in adj: continue visited.add(node) reachable.add(node) queue.extend(adj[node] - visited) # If no demands, try from first node if not model.demand_sources and model.nodes: queue = [model.nodes[0].name] visited = set() while queue: node = queue.pop(0) if node in visited or node not in adj: continue visited.add(node) reachable.add(node) queue.extend(adj[node] - visited) for node in model.nodes: is_reachable = node.name in reachable findings.append( Finding( check_id="SCN-001", severity=Severity.WARNING, message=( f"Node {node.name!r} " f"{'is' if is_reachable else 'is NOT'} reachable " f"in the supply network" ), source_elements=[node.name], passed=is_reachable, ) ) return findings ``` SCN-002: Shipment source and target exist. 
Source code in `packages/gds-domains/gds_domains/business/supplychain/checks.py` ``` def check_scn002_shipment_node_validity(model: SupplyChainModel) -> list[Finding]: """SCN-002: Shipment source and target exist.""" findings: list[Finding] = [] for s in model.shipments: src_valid = s.source in model.node_names findings.append( Finding( check_id="SCN-002", severity=Severity.ERROR, message=( f"Shipment {s.name!r} source {s.source!r} " f"{'is' if src_valid else 'is NOT'} a declared node" ), source_elements=[s.name, s.source], passed=src_valid, ) ) tgt_valid = s.target in model.node_names findings.append( Finding( check_id="SCN-002", severity=Severity.ERROR, message=( f"Shipment {s.name!r} target {s.target!r} " f"{'is' if tgt_valid else 'is NOT'} a declared node" ), source_elements=[s.name, s.target], passed=tgt_valid, ) ) return findings ``` SCN-003: Demand target exists. Source code in `packages/gds-domains/gds_domains/business/supplychain/checks.py` ``` def check_scn003_demand_target_validity(model: SupplyChainModel) -> list[Finding]: """SCN-003: Demand target exists.""" findings: list[Finding] = [] for d in model.demand_sources: valid = d.target in model.node_names findings.append( Finding( check_id="SCN-003", severity=Severity.ERROR, message=( f"DemandSource {d.name!r} target {d.target!r} " f"{'is' if valid else 'is NOT'} a declared node" ), source_elements=[d.name, d.target], passed=valid, ) ) return findings ``` SCN-004: Every node appears in at least one shipment or demand. 
Source code in `packages/gds-domains/gds_domains/business/supplychain/checks.py` ``` def check_scn004_no_orphan_nodes(model: SupplyChainModel) -> list[Finding]: """SCN-004: Every node appears in at least one shipment or demand.""" findings: list[Finding] = [] connected: set[str] = set() for s in model.shipments: connected.add(s.source) connected.add(s.target) for d in model.demand_sources: connected.add(d.target) for node in model.nodes: is_connected = node.name in connected findings.append( Finding( check_id="SCN-004", severity=Severity.WARNING, message=( f"Node {node.name!r} {'is' if is_connected else 'is NOT'} connected" ), source_elements=[node.name], passed=is_connected, ) ) return findings ``` # gds_domains.business.supplychain.compile Compiler: SupplyChainModel → GDSSpec / SystemIR. ## Semantic Types ## Public Functions Compile a SupplyChainModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings. Source code in `packages/gds-domains/gds_domains/business/supplychain/compile.py` ``` def compile_scn(model: SupplyChainModel) -> GDSSpec: """Compile a SupplyChainModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings. """ spec = GDSSpec(name=model.name, description=model.description) # 1. Register types spec.collect(InventoryType, ShipmentRateType, DemandType) # 2. Register spaces spec.collect(InventorySpace, ShipmentRateSpace, DemandSpace) # 3. Register entities (one per supply node) for node in model.nodes: spec.register_entity(_build_node_entity(node)) # 4. Register blocks for d in model.demand_sources: spec.register_block(_build_demand_block(d)) for p in model.order_policies: spec.register_block(_build_policy_block(p, model)) for node in model.nodes: spec.register_block(_build_node_mechanism(node, model)) # 5. 
Register spec wirings all_block_names = [b.name for b in spec.blocks.values()] wires: list[Wire] = [] # Demand -> Policy wires for d in model.demand_sources: for p in model.order_policies: if p.node == d.target: wires.append( Wire(source=d.name, target=p.name, space="SCN DemandSpace") ) # Policy -> Mechanism wires for p in model.order_policies: wires.append( Wire( source=p.name, target=_mechanism_block_name(p.node), space="SCN ShipmentRateSpace", ) ) # Mechanism -> Policy temporal wires (inventory feedback) for node in model.nodes: for p in model.order_policies: if node.name in p.inputs: wires.append( Wire( source=_mechanism_block_name(node.name), target=p.name, space="SCN InventorySpace", ) ) if wires: spec.register_wiring( SpecWiring( name=f"{model.name} Wiring", block_names=all_block_names, wires=wires, description=f"Auto-generated wiring for SCN {model.name!r}", ) ) # Supply chain networks are discrete/synchronous/Moore spec.execution_contract = ExecutionContract(time_domain="discrete") return spec ``` Compile a SupplyChainModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). Source code in `packages/gds-domains/gds_domains/business/supplychain/compile.py` ``` def compile_scn_to_system(model: SupplyChainModel) -> SystemIR: """Compile a SupplyChainModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). """ root = _build_composition_tree(model) return compile_system(model.name, root) ``` # gds_domains.business.supplychain.elements Supply chain element declarations — frozen Pydantic models. Bases: `BaseModel` A warehouse, factory, or distribution center node. Maps to: GDS Mechanism (state update f) + Entity (inventory state X). Source code in `packages/gds-domains/gds_domains/business/supplychain/elements.py` ``` class SupplyNode(BaseModel, frozen=True): """A warehouse, factory, or distribution center node. Maps to: GDS Mechanism (state update f) + Entity (inventory state X). 
""" name: str initial_inventory: float = 0.0 capacity: float = Field(default=float("inf"), description="Max inventory capacity") description: str = "" ``` Bases: `BaseModel` A directed flow link between supply nodes. Maps to: GDS Wiring. Source code in `packages/gds-domains/gds_domains/business/supplychain/elements.py` ``` class Shipment(BaseModel, frozen=True): """A directed flow link between supply nodes. Maps to: GDS Wiring. """ name: str source: str target: str lead_time: float = 1.0 ``` Bases: `BaseModel` An exogenous demand signal entering the network. Maps to: GDS BoundaryAction (exogenous input U). Source code in `packages/gds-domains/gds_domains/business/supplychain/elements.py` ``` class DemandSource(BaseModel, frozen=True): """An exogenous demand signal entering the network. Maps to: GDS BoundaryAction (exogenous input U). """ name: str target: str description: str = "" ``` Bases: `BaseModel` A reorder decision logic node. Maps to: GDS Policy (decision logic g). Observes inventory and demand signals, emits order decisions. Source code in `packages/gds-domains/gds_domains/business/supplychain/elements.py` ``` class OrderPolicy(BaseModel, frozen=True): """A reorder decision logic node. Maps to: GDS Policy (decision logic g). Observes inventory and demand signals, emits order decisions. """ name: str node: str inputs: list[str] = Field( default_factory=list, description="Names of nodes whose inventory this policy observes", ) ``` # gds_domains.business.supplychain.model SupplyChainModel — declarative container for supply chain networks. Bases: `BaseModel` A complete supply chain network declaration. Validates at construction: 1. At least one node 1. No duplicate node names 1. Shipment source/target reference declared nodes 1. Demand target references a declared node 1. OrderPolicy node references a declared node 1. 
OrderPolicy inputs reference declared nodes

Source code in `packages/gds-domains/gds_domains/business/supplychain/model.py`

```
class SupplyChainModel(BaseModel):
    """A complete supply chain network declaration.

    Validates at construction:

    1. At least one node
    2. No duplicate node names
    3. Shipment source/target reference declared nodes
    4. Demand target references a declared node
    5. OrderPolicy node references a declared node
    6. OrderPolicy inputs reference declared nodes
    """

    name: str
    nodes: list[SupplyNode]
    shipments: list[Shipment] = Field(default_factory=list)
    demand_sources: list[DemandSource] = Field(default_factory=list)
    order_policies: list[OrderPolicy] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one node
        if not self.nodes:
            errors.append("Supply chain must have at least one node")

        # 2. No duplicate node names
        names: list[str] = [n.name for n in self.nodes]
        seen: set[str] = set()
        for n in names:
            if n in seen:
                errors.append(f"Duplicate node name: {n!r}")
            seen.add(n)
        node_names = set(names)

        # 3. Shipment source/target reference declared nodes
        for s in self.shipments:
            if s.source not in node_names:
                errors.append(
                    f"Shipment {s.name!r} source {s.source!r} is not a declared node"
                )
            if s.target not in node_names:
                errors.append(
                    f"Shipment {s.name!r} target {s.target!r} is not a declared node"
                )

        # 4. Demand target references a declared node
        for d in self.demand_sources:
            if d.target not in node_names:
                errors.append(
                    f"DemandSource {d.name!r} target {d.target!r} "
                    f"is not a declared node"
                )

        # 5. OrderPolicy node references a declared node
        for op in self.order_policies:
            if op.node not in node_names:
                errors.append(
                    f"OrderPolicy {op.name!r} node {op.node!r} is not a declared node"
                )

        # 6. OrderPolicy inputs reference declared nodes
        for op in self.order_policies:
            for inp in op.inputs:
                if inp not in node_names:
                    errors.append(
                        f"OrderPolicy {op.name!r} input {inp!r} is not a declared node"
                    )

        if errors:
            raise BizValidationError(
                f"SupplyChainModel {self.name!r} validation failed:\n"
                + "\n".join(f" - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────
    @property
    def node_names(self) -> set[str]:
        return {n.name for n in self.nodes}

    # ── Compilation ─────────────────────────────────────────
    def compile(self) -> GDSSpec:
        """Compile this model to a GDS specification."""
        from gds_domains.business.supplychain.compile import compile_scn

        return compile_scn(self)

    def compile_system(self) -> SystemIR:
        """Compile this model to a flat SystemIR for verification + visualization."""
        from gds_domains.business.supplychain.compile import compile_scn_to_system

        return compile_scn_to_system(self)
```

## `compile()`

Compile this model to a GDS specification.

Source code in `packages/gds-domains/gds_domains/business/supplychain/model.py`

```
def compile(self) -> GDSSpec:
    """Compile this model to a GDS specification."""
    from gds_domains.business.supplychain.compile import compile_scn

    return compile_scn(self)
```

## `compile_system()`

Compile this model to a flat SystemIR for verification + visualization.

Source code in `packages/gds-domains/gds_domains/business/supplychain/model.py`

```
def compile_system(self) -> SystemIR:
    """Compile this model to a flat SystemIR for verification + visualization."""
    from gds_domains.business.supplychain.compile import compile_scn_to_system

    return compile_scn_to_system(self)
```

# gds_domains.business.verification

Verification engine — union dispatch across all business diagram types.

Run verification checks on any business dynamics model. Dispatches to the appropriate domain checks based on model type, then optionally compiles to SystemIR and runs GDS generic checks.
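The type-based dispatch that `verify()` performs can be sketched standalone. The toy model classes, check functions, and the `CHECKS` registry below are illustrative stand-ins, not the library's actual API — only the isinstance-dispatch pattern itself comes from the source above:

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    check_id: str
    passed: bool


@dataclass
class ToyCLDModel:  # stand-in for CausalLoopModel
    name: str


@dataclass
class ToySCNModel:  # stand-in for SupplyChainModel
    name: str
    nodes: list = field(default_factory=list)


def cld_checks(model):
    return [Finding("CLD-001", True)]


def scn_checks(model):
    # e.g. "at least one node", in the spirit of the SCN checks
    return [Finding("SCN-001", bool(model.nodes))]


# Registry keyed by model type mirrors the isinstance chain in verify()
CHECKS = {ToyCLDModel: [cld_checks], ToySCNModel: [scn_checks]}


def verify(model):
    for model_type, checks in CHECKS.items():
        if isinstance(model, model_type):
            findings = []
            for check in checks:
                findings.extend(check(model))
            return findings
    raise TypeError(f"Unknown model type: {type(model).__name__}")


findings = verify(ToySCNModel(name="demo", nodes=["Warehouse"]))
```

Unknown model types fail loudly with `TypeError`, matching the fall-through branch in the real engine.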
Source code in `packages/gds-domains/gds_domains/business/verification/engine.py` ``` def verify( model: Any, domain_checks: list[Callable[..., list[Finding]]] | None = None, include_gds_checks: bool = True, ) -> VerificationReport: """Run verification checks on any business dynamics model. Dispatches to the appropriate domain checks based on model type, then optionally compiles to SystemIR and runs GDS generic checks. """ from gds_domains.business.cld.checks import ALL_CLD_CHECKS from gds_domains.business.cld.model import CausalLoopModel from gds_domains.business.supplychain.checks import ALL_SCN_CHECKS from gds_domains.business.supplychain.model import SupplyChainModel from gds_domains.business.vsm.checks import ALL_VSM_CHECKS from gds_domains.business.vsm.model import ValueStreamModel # Dispatch to appropriate checks if domain_checks is not None: checks = domain_checks elif isinstance(model, CausalLoopModel): checks = ALL_CLD_CHECKS elif isinstance(model, SupplyChainModel): checks = ALL_SCN_CHECKS elif isinstance(model, ValueStreamModel): checks = ALL_VSM_CHECKS else: raise TypeError(f"Unknown model type: {type(model).__name__}") findings: list[Finding] = [] # Phase 1: Domain checks on model for check_fn in checks: findings.extend(check_fn(model)) # Phase 2: GDS generic checks on compiled SystemIR if include_gds_checks: from gds.verification.engine import ALL_CHECKS as GDS_ALL_CHECKS system_ir = model.compile_system() for gds_check in GDS_ALL_CHECKS: findings.extend(gds_check(system_ir)) return VerificationReport(system_name=model.name, findings=findings) ``` # gds_domains.business.vsm.checks VSM verification checks (VSM-001..VSM-004). VSM-001: Each step has at most 1 incoming and 1 outgoing material flow. 
Source code in `packages/gds-domains/gds_domains/business/vsm/checks.py` ``` def check_vsm001_linear_process_flow(model: ValueStreamModel) -> list[Finding]: """VSM-001: Each step has at most 1 incoming and 1 outgoing material flow.""" findings: list[Finding] = [] for step in model.steps: incoming = sum(1 for f in model.material_flows if f.target == step.name) outgoing = sum(1 for f in model.material_flows if f.source == step.name) linear = incoming <= 1 and outgoing <= 1 findings.append( Finding( check_id="VSM-001", severity=Severity.WARNING, message=( f"Step {step.name!r}: {incoming} incoming, {outgoing} outgoing " f"material flow(s) — " f"{'linear' if linear else 'non-linear (branching detected)'}" ), source_elements=[step.name], passed=linear, ) ) return findings ``` VSM-002: Identifies where flow_type transitions from push to pull. Source code in `packages/gds-domains/gds_domains/business/vsm/checks.py` ``` def check_vsm002_push_pull_boundary(model: ValueStreamModel) -> list[Finding]: """VSM-002: Identifies where flow_type transitions from push to pull.""" findings: list[Finding] = [] transitions: list[tuple[str, str]] = [] # Look for adjacent flows where type changes for i, flow in enumerate(model.material_flows): for other in model.material_flows[i + 1 :]: # Adjacent if one's target is the other's source if flow.target == other.source and flow.flow_type != other.flow_type: transitions.append( (flow.target, f"{flow.flow_type}->{other.flow_type}") ) elif other.target == flow.source and flow.flow_type != other.flow_type: transitions.append( (flow.source, f"{other.flow_type}->{flow.flow_type}") ) if transitions: for boundary_element, transition in transitions: findings.append( Finding( check_id="VSM-002", severity=Severity.INFO, message=( f"Push/pull boundary at {boundary_element!r}: {transition}" ), source_elements=[boundary_element], passed=True, ) ) else: # Check if all flows are same type flow_types = {f.flow_type for f in model.material_flows} if 
len(flow_types) <= 1: ftype = next(iter(flow_types), "push") findings.append( Finding( check_id="VSM-002", severity=Severity.INFO, message=f"All material flows are {ftype} — no push/pull boundary", source_elements=[], passed=True, ) ) else: findings.append( Finding( check_id="VSM-002", severity=Severity.INFO, message="Mixed push/pull flows but no clear boundary detected", source_elements=[], passed=True, ) ) return findings ``` VSM-003: All flow source/target are declared elements. Source code in `packages/gds-domains/gds_domains/business/vsm/checks.py` ``` def check_vsm003_flow_reference_validity(model: ValueStreamModel) -> list[Finding]: """VSM-003: All flow source/target are declared elements.""" findings: list[Finding] = [] all_names = model.element_names for flow in model.material_flows: src_valid = flow.source in all_names findings.append( Finding( check_id="VSM-003", severity=Severity.ERROR, message=( f"MaterialFlow source {flow.source!r} " f"{'is' if src_valid else 'is NOT'} a declared element" ), source_elements=[flow.source], passed=src_valid, ) ) tgt_valid = flow.target in all_names findings.append( Finding( check_id="VSM-003", severity=Severity.ERROR, message=( f"MaterialFlow target {flow.target!r} " f"{'is' if tgt_valid else 'is NOT'} a declared element" ), source_elements=[flow.target], passed=tgt_valid, ) ) for flow in model.information_flows: src_valid = flow.source in all_names findings.append( Finding( check_id="VSM-003", severity=Severity.ERROR, message=( f"InformationFlow source {flow.source!r} " f"{'is' if src_valid else 'is NOT'} a declared element" ), source_elements=[flow.source], passed=src_valid, ) ) tgt_valid = flow.target in all_names findings.append( Finding( check_id="VSM-003", severity=Severity.ERROR, message=( f"InformationFlow target {flow.target!r} " f"{'is' if tgt_valid else 'is NOT'} a declared element" ), source_elements=[flow.target], passed=tgt_valid, ) ) return findings ``` VSM-004: Max cycle_time should not exceed customer 
takt_time. Source code in `packages/gds-domains/gds_domains/business/vsm/checks.py` ``` def check_vsm004_bottleneck_vs_takt(model: ValueStreamModel) -> list[Finding]: """VSM-004: Max cycle_time should not exceed customer takt_time.""" findings: list[Finding] = [] if not model.steps or not model.customers: findings.append( Finding( check_id="VSM-004", severity=Severity.WARNING, message="No steps or customers to check bottleneck vs takt", source_elements=[], passed=True, ) ) return findings max_cycle = max(s.cycle_time for s in model.steps) bottleneck = next(s for s in model.steps if s.cycle_time == max_cycle) for customer in model.customers: within_takt = max_cycle <= customer.takt_time findings.append( Finding( check_id="VSM-004", severity=Severity.WARNING, message=( f"Bottleneck {bottleneck.name!r} (cycle_time={max_cycle}) " f"{'<=' if within_takt else '>'} " f"customer {customer.name!r} takt_time={customer.takt_time}" ), source_elements=[bottleneck.name, customer.name], passed=within_takt, ) ) return findings ``` # gds_domains.business.vsm.compile Compiler: ValueStreamModel → GDSSpec / SystemIR. ## Semantic Types ## Public Functions Compile a ValueStreamModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings. Source code in `packages/gds-domains/gds_domains/business/vsm/compile.py` ``` def compile_vsm(model: ValueStreamModel) -> GDSSpec: """Compile a ValueStreamModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings. """ spec = GDSSpec(name=model.name, description=model.description) # 1. Register types spec.collect(MaterialType, ProcessSignalType) # 2. Register spaces spec.collect(MaterialSpace, ProcessSignalSpace) # 3. Register entities (one per buffer) for buf in model.buffers: spec.register_entity(_build_buffer_entity(buf)) # 4. 
Register blocks for s in model.suppliers: spec.register_block(_build_supplier_block(s)) for c in model.customers: spec.register_block(_build_customer_block(c)) for step in model.steps: spec.register_block(_build_step_block(step, model)) for buf in model.buffers: spec.register_block(_build_buffer_mechanism(buf, model)) # 5. Register spec wirings all_block_names = [b.name for b in spec.blocks.values()] wires: list[Wire] = [] for flow in model.material_flows: source = flow.source target = flow.target if source in model.buffer_names: source = _buffer_block_name(source) if target in model.buffer_names: target = _buffer_block_name(target) wires.append(Wire(source=source, target=target, space="VSM MaterialSpace")) for flow in model.information_flows: source = flow.source target = flow.target if source in model.buffer_names: source = _buffer_block_name(source) if target in model.buffer_names: target = _buffer_block_name(target) wires.append(Wire(source=source, target=target, space="VSM ProcessSignalSpace")) if wires: spec.register_wiring( SpecWiring( name=f"{model.name} Wiring", block_names=all_block_names, wires=wires, description=f"Auto-generated wiring for VSM {model.name!r}", ) ) # Value stream maps are atemporal — process flow structure spec.execution_contract = ExecutionContract(time_domain="atemporal") return spec ``` Compile a ValueStreamModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). Source code in `packages/gds-domains/gds_domains/business/vsm/compile.py` ``` def compile_vsm_to_system(model: ValueStreamModel) -> SystemIR: """Compile a ValueStreamModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). """ root = _build_composition_tree(model) return compile_system(model.name, root) ``` # gds_domains.business.vsm.elements VSM element declarations — frozen Pydantic models. Bases: `BaseModel` A processing stage in the value stream. Maps to: GDS Policy (decision logic g). 
Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class ProcessStep(BaseModel, frozen=True): """A processing stage in the value stream. Maps to: GDS Policy (decision logic g). """ name: str cycle_time: float = Field(description="Time to process one unit") changeover_time: float = 0.0 uptime: float = Field(default=1.0, ge=0.0, le=1.0) batch_size: int = 1 operators: int = 1 description: str = "" ``` Bases: `BaseModel` A WIP buffer between processing stages. Maps to: GDS Mechanism (state update f) + Entity (buffer state X). Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class InventoryBuffer(BaseModel, frozen=True): """A WIP buffer between processing stages. Maps to: GDS Mechanism (state update f) + Entity (buffer state X). """ name: str quantity: float = 0.0 between: tuple[str, str] = Field( description="(upstream_step, downstream_step) this buffer sits between" ) description: str = "" ``` Bases: `BaseModel` An external material source. Maps to: GDS BoundaryAction (exogenous input U). Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class Supplier(BaseModel, frozen=True): """An external material source. Maps to: GDS BoundaryAction (exogenous input U). """ name: str description: str = "" ``` Bases: `BaseModel` An external demand sink. Maps to: GDS BoundaryAction (exogenous input U). Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class Customer(BaseModel, frozen=True): """An external demand sink. Maps to: GDS BoundaryAction (exogenous input U). """ name: str takt_time: float = Field(description="Required pace of production") description: str = "" ``` Bases: `BaseModel` A material movement between elements. Maps to: GDS Wiring. Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class MaterialFlow(BaseModel, frozen=True): """A material movement between elements. Maps to: GDS Wiring. 
""" source: str target: str flow_type: Literal["push", "pull"] = "push" ``` Bases: `BaseModel` A signal or kanban flow between elements. Maps to: GDS Wiring (signal channel). Source code in `packages/gds-domains/gds_domains/business/vsm/elements.py` ``` class InformationFlow(BaseModel, frozen=True): """A signal or kanban flow between elements. Maps to: GDS Wiring (signal channel). """ source: str target: str ``` # gds_domains.business.vsm.model ValueStreamModel — declarative container for value stream maps. Bases: `BaseModel` A complete value stream map declaration. Validates at construction: 1. At least one process step 1. No duplicate element names 1. Flow source/target reference declared elements 1. Buffer between references declared steps Source code in `packages/gds-domains/gds_domains/business/vsm/model.py` ``` class ValueStreamModel(BaseModel): """A complete value stream map declaration. Validates at construction: 1. At least one process step 2. No duplicate element names 3. Flow source/target reference declared elements 4. Buffer between references declared steps """ name: str steps: list[ProcessStep] buffers: list[InventoryBuffer] = Field(default_factory=list) suppliers: list[Supplier] = Field(default_factory=list) customers: list[Customer] = Field(default_factory=list) material_flows: list[MaterialFlow] = Field(default_factory=list) information_flows: list[InformationFlow] = Field(default_factory=list) description: str = "" @model_validator(mode="after") def _validate_structure(self) -> Self: errors: list[str] = [] # 1. At least one process step if not self.steps: errors.append("VSM must have at least one process step") # 2. 
No duplicate element names all_names: list[str] = [] for s in self.steps: all_names.append(s.name) for b in self.buffers: all_names.append(b.name) for s in self.suppliers: all_names.append(s.name) for c in self.customers: all_names.append(c.name) seen: set[str] = set() for n in all_names: if n in seen: errors.append(f"Duplicate element name: {n!r}") seen.add(n) all_element_names = set(all_names) # 3. Flow source/target reference declared elements for flow in self.material_flows: if flow.source not in all_element_names: errors.append( f"MaterialFlow source {flow.source!r} is not a declared element" ) if flow.target not in all_element_names: errors.append( f"MaterialFlow target {flow.target!r} is not a declared element" ) for flow in self.information_flows: if flow.source not in all_element_names: errors.append( f"InformationFlow source {flow.source!r} is not a declared element" ) if flow.target not in all_element_names: errors.append( f"InformationFlow target {flow.target!r} is not a declared element" ) # 4. 
Buffer between references declared steps step_names = {s.name for s in self.steps} for buf in self.buffers: upstream, downstream = buf.between if upstream not in step_names: errors.append( f"Buffer {buf.name!r} upstream step {upstream!r} " f"is not a declared step" ) if downstream not in step_names: errors.append( f"Buffer {buf.name!r} downstream step {downstream!r} " f"is not a declared step" ) if errors: raise BizValidationError( f"ValueStreamModel {self.name!r} validation failed:\n" + "\n".join(f" - {e}" for e in errors) ) return self # ── Convenience properties ────────────────────────────── @property def element_names(self) -> set[str]: names: set[str] = set() for s in self.steps: names.add(s.name) for b in self.buffers: names.add(b.name) for s in self.suppliers: names.add(s.name) for c in self.customers: names.add(c.name) return names @property def step_names(self) -> set[str]: return {s.name for s in self.steps} @property def buffer_names(self) -> set[str]: return {b.name for b in self.buffers} @property def supplier_names(self) -> set[str]: return {s.name for s in self.suppliers} @property def customer_names(self) -> set[str]: return {c.name for c in self.customers} # ── Compilation ───────────────────────────────────────── def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.business.vsm.compile import compile_vsm return compile_vsm(self) def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.business.vsm.compile import compile_vsm_to_system return compile_vsm_to_system(self) ``` ## `compile()` Compile this model to a GDS specification. 
Source code in `packages/gds-domains/gds_domains/business/vsm/model.py`

```
def compile(self) -> GDSSpec:
    """Compile this model to a GDS specification."""
    from gds_domains.business.vsm.compile import compile_vsm

    return compile_vsm(self)
```

## `compile_system()`

Compile this model to a flat SystemIR for verification + visualization.

Source code in `packages/gds-domains/gds_domains/business/vsm/model.py`

```
def compile_system(self) -> SystemIR:
    """Compile this model to a flat SystemIR for verification + visualization."""
    from gds_domains.business.vsm.compile import compile_vsm_to_system

    return compile_vsm_to_system(self)
```

# Stock-Flow (gds-domains)

# gds-stockflow

**Declarative stock-flow DSL over GDS semantics** — system dynamics with formal verification.

## What is this?

`gds-stockflow` extends the GDS framework with system dynamics vocabulary — stocks, flows, auxiliaries, and converters. It provides:

- **4 element types** — Stock, Flow, Auxiliary, Converter
- **Typed compilation** — Each element compiles to GDS role blocks, entities, and composition trees
- **5 verification checks** — Domain-specific structural validation (SF-001..SF-005)
- **Canonical decomposition** — Validated h = f ∘ g projection with state-dominant accumulation
- **Full GDS integration** — All downstream tooling works immediately (canonical projection, semantic checks, gds-viz)

## Architecture

```
gds-framework            (pip install gds-framework)
 |
 |  Domain-neutral composition algebra, typed spaces,
 |  state model, verification engine, flat IR compiler.
 |
 +-- gds-stockflow       (pip install gds-domains)
      |
      |  Stock-flow DSL: Stock, Flow, Auxiliary, Converter elements,
      |  compile_model(), domain verification, verify() dispatch.
      |
      +-- Your application
           |
           |  Concrete stock-flow models, analysis notebooks,
           |  verification runners.
```

## GDS Mapping

```
Your declaration              What the compiler produces
----------------              --------------------------
Stock("Population")        -> Mechanism + Entity (state update f + state X)
Flow("Births", target=...) -> Policy (rate computation g)
Auxiliary("Birth Rate")    -> Policy (decision logic g)
Converter("Fertility")     -> BoundaryAction (exogenous input U)
StockFlowModel(...)        -> GDSSpec + SystemIR (full GDS specification)
```

## Composition Tree

The compiler builds a tiered composition tree:

```
(converters |) >> (auxiliaries |) >> (flows |) >> (stock mechanisms |)
    .loop([stock forward_out -> auxiliary forward_in])
```

- **Within each tier:** parallel composition (`|`) -- independent elements run side-by-side
- **Across tiers:** sequential composition (`>>`) -- converters feed auxiliaries, auxiliaries feed flows, flows feed stock mechanisms
- **Temporal recurrence:** `.loop()` -- stock levels at timestep *t* feed back to auxiliaries at timestep *t+1*

## Canonical Form

Stock-flow models produce the full dynamical form:

| \|X\| | \|f\| | Form | Character |
| ----- | ----- | --------- | --------------------------- |
| n | n | h = f ∘ g | State-dominant accumulation |

Stocks carry state (X), mechanisms provide f, and all other elements contribute to g.

## Quick Start

```
uv add gds-stockflow # or: pip install gds-domains
```

See [Getting Started](https://blockscience.github.io/gds-core/stockflow/getting-started/index.md) for a full walkthrough.

## Credits

Built on [gds-framework](https://blockscience.github.io/gds-core/framework/index.md) by [BlockScience](https://block.science).

# Getting Started

## Installation

```
uv add gds-stockflow # or: pip install gds-domains
```

For development (monorepo):

```
git clone https://github.com/BlockScience/gds-core.git
cd gds-core
uv sync --all-packages
```

## Your First Stock-Flow Model

A stock-flow model describes accumulation dynamics: stocks hold value, flows transfer it, auxiliaries compute intermediate values, and converters inject exogenous inputs.
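In plain Python, that accumulation pattern is just net-flow integration per timestep. The sketch below is a toy to fix intuition, not code the compiler generates; the `step` helper and tuple encoding of flows are assumptions of this example only:

```python
def step(levels: dict[str, float],
         flows: list[tuple[str, str, float]],
         dt: float = 1.0) -> dict[str, float]:
    """One discrete update: each (source, target, rate) flow moves
    rate * dt out of its source stock and into its target stock.

    An empty source or target name models a "cloud" (external
    source or sink), so the value simply enters or leaves the system.
    """
    new = dict(levels)
    for source, target, rate in flows:
        if source:
            new[source] -= rate * dt
        if target:
            new[target] += rate * dt
    return new


# Births flow in from a cloud, deaths flow out to a cloud
levels = {"Population": 1000.0}
flows = [("", "Population", 50.0), ("Population", "", 20.0)]
levels = step(levels, flows)
# net change per step: +50 - 20 = +30
```

The declarative version of this model, written in the actual DSL, appears next.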
```
from gds_domains.stockflow import (
    Stock,
    Flow,
    Auxiliary,
    Converter,
    StockFlowModel,
    compile_model,
    compile_to_system,
    verify,
)

# Declare a simple population model
model = StockFlowModel(
    name="Population",
    stocks=[Stock(name="Population", initial=1000.0)],
    flows=[
        Flow(name="Births", target="Population"),
        Flow(name="Deaths", source="Population"),
    ],
    auxiliaries=[
        Auxiliary(name="Birth Rate", inputs=["Population", "Fertility"]),
        Auxiliary(name="Death Rate", inputs=["Population"]),
    ],
    converters=[Converter(name="Fertility")],
)

# Compile to GDS
spec = compile_model(model)
print(f"Blocks: {len(spec.blocks)}")      # 6 blocks
print(f"Entities: {len(spec.entities)}")  # 1 (Population stock)

# Compile to SystemIR for verification
ir = compile_to_system(model)
print(f"{len(ir.blocks)} blocks, {len(ir.wirings)} wirings")

# Verify — domain checks + GDS structural checks
report = verify(model, include_gds_checks=True)
print(f"{report.checks_passed}/{report.checks_total} checks passed")
```

## A Multi-Stock Model

Stock-flow models shine with multiple interacting stocks:

```
from gds_domains.stockflow import (
    Stock,
    Flow,
    Auxiliary,
    StockFlowModel,
    compile_model,
    verify,
)

model = StockFlowModel(
    name="SIR Epidemic",
    stocks=[
        Stock(name="Susceptible", initial=990.0),
        Stock(name="Infected", initial=10.0),
        Stock(name="Recovered", initial=0.0),
    ],
    flows=[
        Flow(name="Infection", source="Susceptible", target="Infected"),
        Flow(name="Recovery", source="Infected", target="Recovered"),
    ],
    auxiliaries=[
        Auxiliary(name="Infection Rate", inputs=["Susceptible", "Infected"]),
        Auxiliary(name="Recovery Rate", inputs=["Infected"]),
    ],
)

# Compile and verify
spec = model.compile() if hasattr(model, 'compile') else compile_model(model)
report = verify(model, include_gds_checks=False)
for f in report.findings:
    print(f"  [{f.check_id}] {'PASS' if f.passed else 'FAIL'} {f.message}")
```

## Next Steps

- [Elements & GDS Mapping](https://blockscience.github.io/gds-core/stockflow/guide/elements/index.md) --
detailed element reference and how each maps to GDS
- [Verification Guide](https://blockscience.github.io/gds-core/stockflow/guide/verification/index.md) -- all 5 domain checks explained
- [API Reference](https://blockscience.github.io/gds-core/stockflow/api/init/index.md) -- complete auto-generated API docs

# Elements & GDS Mapping

`gds-stockflow` provides four element types, each mapping to a specific GDS role.

## Stock

Stocks accumulate value over time. Each stock becomes a GDS entity with a `level` state variable, and a mechanism block that applies incoming flow rates.

```
Stock(name="Population", initial=1000.0, non_negative=True)
```

**GDS mapping:** `Mechanism` (state update *f*) + `Entity` (state *X*)

| Field | Type | Default | Description |
| -------------- | ----- | -------- | -------------------------------- |
| `name` | str | required | Stock name (becomes entity name) |
| `initial` | float | None | Initial level |
| `units` | str | "" | Unit label |
| `non_negative` | bool | True | Constrain level >= 0 |

### Port Convention

- Output: `"{Name} Level"` (temporal feedback to auxiliaries)
- Input: `"{FlowName} Rate"` (incoming flow rates)

______________________________________________________________________

## Flow

Flows transfer value between stocks (or from/to "clouds" -- external sources/sinks).
```
Flow(name="Births", target="Population")        # inflow from cloud
Flow(name="Deaths", source="Population")        # outflow to cloud
Flow(name="Migration", source="A", target="B")  # transfer between stocks
```

**GDS mapping:** `Policy` (rate computation *g*)

| Field | Type | Default | Description |
| -------- | ---- | -------- | ------------------------------------ |
| `name` | str | required | Flow name |
| `source` | str | "" | Source stock (empty = cloud inflow) |
| `target` | str | "" | Target stock (empty = cloud outflow) |

### Port Convention

- Output: `"{Name} Rate"`

______________________________________________________________________

## Auxiliary

Auxiliaries compute intermediate values from stocks, converters, or other auxiliaries. They form an acyclic dependency graph -- the compiler validates this at construction time.

```
Auxiliary(name="Birth Rate", inputs=["Population", "Fertility"])
```

**GDS mapping:** `Policy` (decision logic *g*)

| Field | Type | Default | Description |
| -------- | --------- | -------- | ----------------------------------------------------------- |
| `name` | str | required | Auxiliary name |
| `inputs` | list[str] | [] | Names of stocks, converters, or auxiliaries this depends on |

### Port Convention

- Input: `"{InputName} Level"` or `"{InputName} Signal"`
- Output: `"{Name} Signal"`

______________________________________________________________________

## Converter

Converters represent exogenous constants or parameters -- values that enter the system from outside. Converters have no internal inputs.
``` Converter(name="Fertility", units="births/person/year") ``` **GDS mapping:** `BoundaryAction` (exogenous input *U*) | Field | Type | Default | Description | | ------- | ---- | -------- | -------------- | | `name` | str | required | Converter name | | `units` | str | "" | Unit label | ### Port Convention - Output: `"{Name} Signal"` ______________________________________________________________________ ## Semantic Type System Four distinct semantic spaces, all `float`-backed but structurally separate -- this prevents accidentally wiring a rate where a level is expected: | Type | Space | Used By | Constraint | | ------------------------ | ------------------------- | -------------------------------- | ---------------------------- | | `LevelType` | `LevelSpace` | Stocks | >= 0 (by default) | | `UnconstrainedLevelType` | `UnconstrainedLevelSpace` | Stocks with `non_negative=False` | None | | `RateType` | `RateSpace` | Flows | None (rates can be negative) | | `SignalType` | `SignalSpace` | Auxiliaries, Converters | None | ## Composition Structure The compiler builds a tiered composition tree: ``` (converters |) >> (auxiliaries |) >> (flows |) >> (stock mechanisms |) .loop([stock forward_out -> auxiliary forward_in]) ``` This maps to the GDS canonical form `h = f . g` where stocks carry state (X), mechanisms provide f, and all other elements contribute to g. # Verification `gds-stockflow` provides 5 domain-specific verification checks, plus access to the 6 GDS generic checks (G-001..G-006) via the unified `verify()` function.
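Each SF check follows the same shape: scan one element list, emit a `Finding` per element, and mark it passed or failed. Here is a standalone sketch of that shape for the orphan-stock rule (SF-001), using plain Python with a stand-in `Finding` class rather than the real `gds_domains` one:

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    # Stand-in for the gds_domains Finding object, mirroring its documented fields
    check_id: str
    severity: str
    message: str
    passed: bool
    source_elements: list[str] = field(default_factory=list)


def orphan_stock_check(stocks: list[str], flows: list[tuple[str, str]]) -> list[Finding]:
    """SF-001 style: flag stocks that no flow touches as source or target."""
    findings = []
    for stock in stocks:
        connected = any(stock in (src, tgt) for src, tgt in flows)
        findings.append(Finding(
            check_id="SF-001",
            severity="WARNING",
            message=f"Stock {stock!r} has {'some' if connected else 'no'} connected flows",
            passed=connected,
            source_elements=[stock],
        ))
    return findings


# One wired stock and one orphan ("" marks a cloud endpoint)
report = orphan_stock_check(
    stocks=["Population", "Unused"],
    flows=[("", "Population"), ("Population", "")],
)
print([(f.check_id, f.passed) for f in report])
```

Note that a WARNING-level check emits a finding for every element, passing or failing, rather than only reporting problems -- the same convention the real checks follow.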
## Using verify() The `verify()` function runs domain checks on the stock-flow model: ``` from gds_domains.stockflow import verify report = verify(model) # Domain checks only report = verify(model, include_gds_checks=True) # Domain + GDS checks ``` The returned `VerificationReport` contains a list of `Finding` objects with: - `check_id` -- e.g., "SF-001", "G-003" - `severity` -- ERROR, WARNING, or INFO - `message` -- human-readable description - `passed` -- whether the check passed - `source_elements` -- elements involved ## Domain Checks | ID | Name | Severity | What it checks | | ------ | ---------------------- | -------- | ----------------------------------------------- | | SF-001 | Orphan stocks | WARNING | Every stock has >= 1 connected flow | | SF-002 | Flow-stock validity | ERROR | Flow source/target reference declared stocks | | SF-003 | Auxiliary acyclicity | ERROR | No cycles in auxiliary dependency graph | | SF-004 | Converter connectivity | WARNING | Every converter referenced by >= 1 auxiliary | | SF-005 | Flow completeness | ERROR | Every flow has at least one of source or target | ### SF-001: Orphan Stocks Stocks not connected to any flow are flagged -- they accumulate nothing: ``` [SF-001] WARNING: Stock 'Unused' has no connected flows ``` ### SF-002: Flow-Stock Validity Flow source and target must reference declared stock names: ``` [SF-002] ERROR: Flow 'Transfer' references undeclared stock 'Missing' ``` ### SF-003: Auxiliary Acyclicity Auxiliaries form a dependency graph. 
Cycles would create infinite recursion: ``` [SF-003] ERROR: Cycle detected in auxiliary dependencies: A -> B -> A ``` ### SF-004: Converter Connectivity Converters not referenced by any auxiliary are flagged as unused: ``` [SF-004] WARNING: Converter 'Unused Param' is not referenced by any auxiliary ``` ### SF-005: Flow Completeness Every flow must have at least one of `source` or `target`: ``` [SF-005] ERROR: Flow 'Broken' has neither source nor target ``` ## GDS Generic Checks When `include_gds_checks=True`, the model is compiled to `SystemIR` and the 6 GDS generic checks run: | ID | Name | What it checks | | ----- | ----------------------------- | ---------------------------------- | | G-001 | Domain/codomain compatibility | Wiring type tokens match | | G-002 | Signature completeness | Every block has inputs and outputs | | G-003 | Unique block naming | No duplicate block names | | G-004 | Wiring source existence | Wired blocks exist | | G-005 | Wiring target existence | Wired blocks exist | | G-006 | Hierarchy consistency | Block tree is well-formed | Note G-002 will flag `BoundaryAction` blocks (Converters) as having "no inputs" -- this is expected since they are exogenous sources by design. # gds_domains.stockflow.verification.checks Stock-flow verification checks (SF-001..SF-005). SF-001: Every stock has at least one flow with it as source or target. 
Source code in `packages/gds-domains/gds_domains/stockflow/verification/checks.py` ``` def check_sf001_orphan_stocks(model: StockFlowModel) -> list[Finding]: """SF-001: Every stock has at least one flow with it as source or target.""" findings: list[Finding] = [] for stock in model.stocks: connected = any( f.source == stock.name or f.target == stock.name for f in model.flows ) findings.append( Finding( check_id="SF-001", severity=Severity.WARNING, message=( f"Stock {stock.name!r} has no connected flows" if not connected else f"Stock {stock.name!r} has connected flows" ), source_elements=[stock.name], passed=connected, ) ) return findings ``` SF-002: Flow source/target are declared stocks. This is also enforced at model construction time, but the check provides a formal Finding for verification reports. Source code in `packages/gds-domains/gds_domains/stockflow/verification/checks.py` ``` def check_sf002_flow_stock_validity(model: StockFlowModel) -> list[Finding]: """SF-002: Flow source/target are declared stocks. This is also enforced at model construction time, but the check provides a formal Finding for verification reports. """ findings: list[Finding] = [] stock_names = model.stock_names for flow in model.flows: if flow.source: valid = flow.source in stock_names findings.append( Finding( check_id="SF-002", severity=Severity.ERROR, message=( f"Flow {flow.name!r} source {flow.source!r} " f"{'is' if valid else 'is NOT'} a declared stock" ), source_elements=[flow.name, flow.source], passed=valid, ) ) if flow.target: valid = flow.target in stock_names findings.append( Finding( check_id="SF-002", severity=Severity.ERROR, message=( f"Flow {flow.name!r} target {flow.target!r} " f"{'is' if valid else 'is NOT'} a declared stock" ), source_elements=[flow.name, flow.target], passed=valid, ) ) return findings ``` SF-003: No cycles in auxiliary dependency graph. Builds a directed graph of auxiliary → auxiliary dependencies and checks for cycles via DFS. 
Source code in `packages/gds-domains/gds_domains/stockflow/verification/checks.py` ``` def check_sf003_auxiliary_acyclicity(model: StockFlowModel) -> list[Finding]: """SF-003: No cycles in auxiliary dependency graph. Builds a directed graph of auxiliary → auxiliary dependencies and checks for cycles via DFS. """ # Build adjacency list: aux name → list of aux names it depends on aux_names = {a.name for a in model.auxiliaries} adj: dict[str, list[str]] = {a.name: [] for a in model.auxiliaries} for aux in model.auxiliaries: for inp in aux.inputs: if inp in aux_names: adj[aux.name].append(inp) # DFS cycle detection WHITE, GRAY, BLACK = 0, 1, 2 color: dict[str, int] = {name: WHITE for name in aux_names} cycle_members: list[str] = [] def dfs(node: str) -> bool: color[node] = GRAY for neighbor in adj[node]: if color[neighbor] == GRAY: cycle_members.append(node) cycle_members.append(neighbor) return True if color[neighbor] == WHITE and dfs(neighbor): return True color[node] = BLACK return False has_cycle = any(dfs(name) for name in aux_names if color[name] == WHITE) if has_cycle: return [ Finding( check_id="SF-003", severity=Severity.ERROR, message=f"Cycle detected in auxiliary dependency graph: {cycle_members}", source_elements=list(set(cycle_members)), passed=False, ) ] return [ Finding( check_id="SF-003", severity=Severity.ERROR, message="Auxiliary dependency graph is acyclic", source_elements=list(aux_names), passed=True, ) ] ``` SF-004: Every converter is referenced by at least one auxiliary. 
Source code in `packages/gds-domains/gds_domains/stockflow/verification/checks.py` ``` def check_sf004_converter_connectivity(model: StockFlowModel) -> list[Finding]: """SF-004: Every converter is referenced by at least one auxiliary.""" findings: list[Finding] = [] # Collect all input references from auxiliaries referenced: set[str] = set() for aux in model.auxiliaries: referenced.update(aux.inputs) for conv in model.converters: connected = conv.name in referenced findings.append( Finding( check_id="SF-004", severity=Severity.WARNING, message=( f"Converter {conv.name!r} " f"{'is' if connected else 'is NOT'} referenced by any auxiliary" ), source_elements=[conv.name], passed=connected, ) ) return findings ``` SF-005: Every flow has at least one of source or target. This is enforced at model construction, but provides a formal Finding. Source code in `packages/gds-domains/gds_domains/stockflow/verification/checks.py` ``` def check_sf005_flow_completeness(model: StockFlowModel) -> list[Finding]: """SF-005: Every flow has at least one of source or target. This is enforced at model construction, but provides a formal Finding. """ findings: list[Finding] = [] for flow in model.flows: has_endpoint = bool(flow.source or flow.target) findings.append( Finding( check_id="SF-005", severity=Severity.ERROR, message=( f"Flow {flow.name!r} " f"{'has' if has_endpoint else 'has neither'} source or target" ), source_elements=[flow.name], passed=has_endpoint, ) ) return findings ``` # gds_domains.stockflow.dsl.compile Compiler: StockFlowModel -> GDSSpec / SystemIR. ## Semantic Types ## Public Functions Compile a StockFlowModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings, and parameters. Source code in `packages/gds-domains/gds_domains/stockflow/dsl/compile.py` ``` def compile_model(model: StockFlowModel) -> GDSSpec: """Compile a StockFlowModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings, and parameters. 
""" spec = GDSSpec(name=model.name, description=model.description) # 1. Register types spec.collect(LevelType, UnconstrainedLevelType, RateType, SignalType) # 2. Register spaces spec.collect(LevelSpace, UnconstrainedLevelSpace, RateSpace, SignalSpace) # 3. Register entities (one per stock) for stock in model.stocks: spec.register_entity(_build_stock_entity(stock)) # 4. Register blocks for conv in model.converters: spec.register_block(_build_converter_block(conv)) for aux in model.auxiliaries: spec.register_block(_build_auxiliary_block(aux, model)) for flow in model.flows: spec.register_block(_build_flow_block(flow, model)) for stock in model.stocks: spec.register_block(_build_stock_mechanism(stock, model)) # 5. Register spec wirings (document the composition structure) all_block_names = [b.name for b in spec.blocks.values()] wires: list[Wire] = [] # Flow → Stock mechanism wirings for flow in model.flows: if flow.target: wires.append( Wire( source=flow.name, target=_accumulation_block_name(flow.target), space="RateSpace", ) ) if flow.source: wires.append( Wire( source=flow.name, target=_accumulation_block_name(flow.source), space="RateSpace", ) ) spec.register_wiring( SpecWiring( name=f"{model.name} Wiring", block_names=all_block_names, wires=wires, description=f"Auto-generated wiring for stock-flow model {model.name!r}", ) ) # 6. Register converters as parameters for conv in model.converters: spec.register_parameter( ParameterDef( name=conv.name, typedef=SignalType, description=f"Exogenous parameter: {conv.name}", ) ) # 7. Register transition signatures (mechanism read dependencies) from gds.constraints import TransitionSignature for stock in model.stocks: connected_flows = [ flow.name for flow in model.flows if flow.target == stock.name or flow.source == stock.name ] spec.register_transition_signature( TransitionSignature( mechanism=_accumulation_block_name(stock.name), reads=[(stock.name, "level")], depends_on_blocks=connected_flows, ) ) # 8. 
Declare execution contract — stock-flow is discrete/synchronous/Moore spec.execution_contract = ExecutionContract(time_domain="discrete") return spec ``` Compile a StockFlowModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). Source code in `packages/gds-domains/gds_domains/stockflow/dsl/compile.py` ``` def compile_to_system(model: StockFlowModel) -> SystemIR: """Compile a StockFlowModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). """ root = _build_composition_tree(model) return compile_system(model.name, root) ``` # gds_domains.stockflow.dsl.elements Stock-flow element declarations -- frozen Pydantic models for user-facing declarations. Bases: `BaseModel` A state accumulator in a stock-flow diagram. Maps to: GDS Mechanism (state update f) + Entity (state X). Emits a Level port; receives Rate ports from connected flows. Source code in `packages/gds-domains/gds_domains/stockflow/dsl/elements.py` ``` class Stock(BaseModel, frozen=True): """A state accumulator in a stock-flow diagram. Maps to: GDS Mechanism (state update f) + Entity (state X). Emits a Level port; receives Rate ports from connected flows. """ name: str initial: float | None = None units: str = "" non_negative: bool = True ``` Bases: `BaseModel` A rate of change between stocks (or from/to clouds). Maps to: GDS Policy (rate computation g). Emits a Rate port; drains from source stock, fills target stock. Source code in `packages/gds-domains/gds_domains/stockflow/dsl/elements.py` ``` class Flow(BaseModel, frozen=True): """A rate of change between stocks (or from/to clouds). Maps to: GDS Policy (rate computation g). Emits a Rate port; drains from source stock, fills target stock. """ name: str source: str = "" target: str = "" ``` Bases: `BaseModel` An intermediate computation depending on other elements. Maps to: GDS Policy (decision logic g). Emits a Signal port; receives Level/Signal ports from inputs. 
Source code in `packages/gds-domains/gds_domains/stockflow/dsl/elements.py` ``` class Auxiliary(BaseModel, frozen=True): """An intermediate computation depending on other elements. Maps to: GDS Policy (decision logic g). Emits a Signal port; receives Level/Signal ports from inputs. """ name: str inputs: list[str] = Field(default_factory=list) ``` Bases: `BaseModel` An exogenous constant or parameter. Maps to: GDS BoundaryAction (exogenous input U). Emits a Signal port; has no internal inputs. Source code in `packages/gds-domains/gds_domains/stockflow/dsl/elements.py` ``` class Converter(BaseModel, frozen=True): """An exogenous constant or parameter. Maps to: GDS BoundaryAction (exogenous input U). Emits a Signal port; has no internal inputs. """ name: str units: str = "" ``` # gds_domains.stockflow Public API -- top-level exports. Stock-flow DSL over GDS semantics — system dynamics with formal guarantees. Declare stocks, flows, auxiliaries, and converters as plain data models. The compiler maps them to GDS role blocks, entities, and composition trees. All downstream GDS tooling works immediately — canonical projection, semantic checks, SpecQuery, serialization, gds-viz. # gds_domains.stockflow.dsl.model StockFlowModel -- declarative container for stock-flow diagrams. Bases: `BaseModel` A complete stock-flow diagram declaration. Validates at construction: 1. No duplicate element names across all lists 1. Flow source/target reference declared stock names (or empty for cloud) 1. Every flow has at least one of source or target 1. Auxiliary inputs reference declared element names 1. At least one stock exists Source code in `packages/gds-domains/gds_domains/stockflow/dsl/model.py` ``` class StockFlowModel(BaseModel): """A complete stock-flow diagram declaration. Validates at construction: 1. No duplicate element names across all lists 2. Flow source/target reference declared stock names (or empty for cloud) 3. Every flow has at least one of source or target 4. 
Auxiliary inputs reference declared element names 5. At least one stock exists """ name: str stocks: list[Stock] flows: list[Flow] = Field(default_factory=list) auxiliaries: list[Auxiliary] = Field(default_factory=list) converters: list[Converter] = Field(default_factory=list) description: str = "" @model_validator(mode="after") def _validate_structure(self) -> Self: errors: list[str] = [] # 5. At least one stock if not self.stocks: errors.append("Model must have at least one stock") # 1. No duplicate names all_names: list[str] = [] for s in self.stocks: all_names.append(s.name) for f in self.flows: all_names.append(f.name) for a in self.auxiliaries: all_names.append(a.name) for c in self.converters: all_names.append(c.name) seen: set[str] = set() for n in all_names: if n in seen: errors.append(f"Duplicate element name: {n!r}") seen.add(n) stock_names = {s.name for s in self.stocks} # 2 & 3. Flow source/target validation for f in self.flows: if not f.source and not f.target: errors.append( f"Flow {f.name!r} must have at least one of source or target" ) if f.source and f.source not in stock_names: errors.append( f"Flow {f.name!r} source {f.source!r} is not a declared stock" ) if f.target and f.target not in stock_names: errors.append( f"Flow {f.name!r} target {f.target!r} is not a declared stock" ) # 4. 
Auxiliary inputs reference declared elements all_element_names = set(all_names) for a in self.auxiliaries: for inp in a.inputs: if inp not in all_element_names: errors.append( f"Auxiliary {a.name!r} input {inp!r} is not a declared element" ) if errors: raise SFValidationError( f"StockFlowModel {self.name!r} validation failed:\n" + "\n".join(f" - {e}" for e in errors) ) return self # ── Convenience properties ────────────────────────────── @property def element_names(self) -> set[str]: """All element names in the model.""" names: set[str] = set() for s in self.stocks: names.add(s.name) for f in self.flows: names.add(f.name) for a in self.auxiliaries: names.add(a.name) for c in self.converters: names.add(c.name) return names @property def stock_names(self) -> set[str]: return {s.name for s in self.stocks} # ── Compilation ───────────────────────────────────────── def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.stockflow.dsl.compile import compile_model return compile_model(self) def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.stockflow.dsl.compile import compile_to_system return compile_to_system(self) ``` ## `element_names` All element names in the model. ## `compile()` Compile this model to a GDS specification. Source code in `packages/gds-domains/gds_domains/stockflow/dsl/model.py` ``` def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.stockflow.dsl.compile import compile_model return compile_model(self) ``` ## `compile_system()` Compile this model to a flat SystemIR for verification + visualization. 
Source code in `packages/gds-domains/gds_domains/stockflow/dsl/model.py` ``` def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.stockflow.dsl.compile import compile_to_system return compile_to_system(self) ``` # gds_domains.stockflow.verification Verification engine -- runs domain checks with optional GDS structural checks. Run verification checks on a StockFlowModel. 1. SF-001..SF-005 on the model (pre-compilation) 1. If include_gds_checks: compile to SystemIR and run G-001..G-006 Parameters: | Name | Type | Description | Default | | -------------------- | ----------------------------------------------------- | ---------------------------------------------- | ---------------------------------------------- | | `model` | `StockFlowModel` | The stock-flow model to verify. | *required* | | `domain_checks` | `list[Callable[[StockFlowModel], list[Finding]]] \| None` | Optional subset of SF checks. Defaults to all. | `None` | | `include_gds_checks` | `bool` | Whether to compile and run GDS generic checks. | `True` | Source code in `packages/gds-domains/gds_domains/stockflow/verification/engine.py` ``` def verify( model: StockFlowModel, domain_checks: list[Callable[[StockFlowModel], list[Finding]]] | None = None, include_gds_checks: bool = True, ) -> VerificationReport: """Run verification checks on a StockFlowModel. 1. SF-001..SF-005 on the model (pre-compilation) 2. If include_gds_checks: compile to SystemIR and run G-001..G-006 Args: model: The stock-flow model to verify. domain_checks: Optional subset of SF checks. Defaults to all. include_gds_checks: Whether to compile and run GDS generic checks.
""" checks = domain_checks or ALL_SF_CHECKS findings: list[Finding] = [] # Phase 1: SF checks on model for check_fn in checks: findings.extend(check_fn(model)) # Phase 2: GDS generic checks on compiled SystemIR if include_gds_checks: from gds.verification.engine import ALL_CHECKS as GDS_ALL_CHECKS system_ir = model.compile_system() for gds_check in GDS_ALL_CHECKS: findings.extend(gds_check(system_ir)) return VerificationReport(system_name=model.name, findings=findings) ``` # Control (gds-domains) # gds-control **State-space control DSL over GDS semantics** -- control theory with formal verification. ## What is this? `gds-control` extends the GDS framework with control systems vocabulary -- states, inputs, sensors, and controllers. It provides: - **4 element types** -- State, Input, Sensor, Controller - **Typed compilation** -- Each element compiles to GDS role blocks, entities, and composition trees - **6 verification checks** -- Domain-specific structural validation (CS-001..CS-006) - **Canonical decomposition** -- Validated h = f ∘ g projection mapping directly to state-space representation - **Full GDS integration** -- All downstream tooling works immediately (canonical projection, semantic checks, gds-viz) ## Architecture ``` gds-framework (pip install gds-framework) | | Domain-neutral composition algebra, typed spaces, | state model, verification engine, flat IR compiler. | +-- gds-control (pip install gds-domains) | | Control DSL: State, Input, Sensor, Controller elements, | compile_model(), domain verification, verify() dispatch. | +-- Your application | | Concrete control models, analysis notebooks, | verification runners. 
``` ## GDS Mapping The DSL maps directly to the standard state-space representation: ``` x' = Ax + Bu (state dynamics -> Mechanism) y = Cx + Du (sensor output -> Policy) u = K(y, r) (control law -> Policy) r (reference -> BoundaryAction) ``` ``` Your declaration What the compiler produces ---------------- ------------------------- State("temperature") -> Mechanism + Entity (state update f + state X) Input("setpoint") -> BoundaryAction (exogenous input U) Sensor("thermometer") -> Policy (observation g) Controller("PID") -> Policy (decision logic g) ControlModel(...) -> GDSSpec + SystemIR (full GDS specification) ``` ## Composition Tree The compiler builds a tiered composition tree: ``` (inputs | sensors) >> (controllers) >> (state dynamics) .loop([state dynamics forward_out -> sensor forward_in]) ``` - **Within each tier:** parallel composition (`|`) -- independent inputs and sensors run side-by-side - **Across tiers:** sequential composition (`>>`) -- sensors feed controllers, controllers feed state dynamics - **Temporal recurrence:** `.loop()` -- state outputs at timestep *t* feed back to sensors at timestep *t+1* **Design decision:** All non-state-updating blocks use `Policy`. gds-control deliberately maps everything to the `(A, B, C, D) -> (X, Z, g, f)` state-space decomposition, where sensors and controllers both contribute to the input map `g`. The `ControlAction` role (output map `y = C(x, d)`) is orthogonal to this mapping -- it models inter-system output, not internal control flow. See [Controller-Plant Duality](https://blockscience.github.io/gds-core/framework/design/controller-plant-duality/index.md). ## Canonical Form Control models produce the full dynamical form: | \|X\| | \|f\| | Form | Character | |-----|-----|------|-----------| | n | n | h = f ∘ g | Full dynamical system | States carry state (X), dynamics blocks provide f, and sensors + controllers contribute to g.
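To see the mapping numerically, here is a scalar thermostat stepped in plain Python -- the same x' = Ax + Bu, y = Cx, u = K(y, r) structure the compiler targets. This is a standalone numeric sketch, not gds-control code, and every constant (plant coefficients, gain, ambient, setpoint) is invented for illustration:

```python
# Scalar thermostat with a proportional control law. Constants are illustrative only.
a, b = -0.1, 0.5        # plant: heat-loss rate, heater effectiveness
k = 2.0                 # controller gain in u = k * (r - y)
ambient, r = 10.0, 22.0 # ambient temperature, setpoint (reference -> BoundaryAction)
dt = 0.1                # discrete timestep

x = 15.0  # initial temperature (state X)
for _ in range(200):
    y = x                                     # y = Cx       (Sensor -> Policy)
    u = k * (r - y)                           # u = K(y, r)  (Controller -> Policy)
    x = x + dt * (a * (x - ambient) + b * u)  # x' = Ax + Bu (State -> Mechanism)

print(round(x, 3))  # settles near 20.909 -- proportional control leaves steady-state error
```

The loop body is exactly the canonical `h = f . g` step: the sensor and controller lines compute g, the last line applies f.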
## Quick Start ``` uv add gds-control # or: pip install gds-domains ``` See [Getting Started](https://blockscience.github.io/gds-core/control/getting-started/index.md) for a full walkthrough. ## Credits Built on [gds-framework](https://blockscience.github.io/gds-core/framework/index.md) by [BlockScience](https://block.science). # Getting Started ## Installation ``` uv add gds-control # or: pip install gds-domains ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First Control Model A control model describes a feedback control system: states represent the plant, inputs provide reference signals, sensors observe state, and controllers compute control actions. ``` from gds_domains.control import ( State, Input, Sensor, Controller, ControlModel, compile_model, compile_to_system, verify, ) # Declare a thermostat control system model = ControlModel( name="Thermostat", states=[State(name="temperature", initial=20.0)], inputs=[Input(name="setpoint")], sensors=[Sensor(name="thermometer", observes=["temperature"])], controllers=[ Controller( name="PID", reads=["thermometer", "setpoint"], drives=["temperature"], ) ], ) # Compile to GDS spec = compile_model(model) print(f"Blocks: {len(spec.blocks)}") # 4 blocks print(f"Entities: {len(spec.entities)}") # 1 (temperature state) # Compile to SystemIR for verification ir = compile_to_system(model) print(f"{len(ir.blocks)} blocks, {len(ir.wirings)} wirings") # Verify — domain checks + GDS structural checks report = verify(model, include_gds_checks=True) print(f"{report.checks_passed}/{report.checks_total} checks passed") ``` ## A Multi-State Model Control models support multiple states with multiple sensors and controllers: ``` from gds_domains.control import ( State, Input, Sensor, Controller, ControlModel, verify, ) model = ControlModel( name="HVAC System", states=[ State(name="temperature", initial=22.0), State(name="humidity", initial=45.0), 
], inputs=[ Input(name="temp_setpoint"), Input(name="humidity_setpoint"), ], sensors=[ Sensor(name="temp_sensor", observes=["temperature"]), Sensor(name="humidity_sensor", observes=["humidity"]), ], controllers=[ Controller( name="heater", reads=["temp_sensor", "temp_setpoint"], drives=["temperature"], ), Controller( name="humidifier", reads=["humidity_sensor", "humidity_setpoint"], drives=["humidity"], ), ], ) # Verify report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'PASS' if f.passed else 'FAIL'} {f.message}") ``` ## Next Steps - [Elements & GDS Mapping](https://blockscience.github.io/gds-core/control/guide/elements/index.md) -- detailed element reference and how each maps to GDS - [Verification Guide](https://blockscience.github.io/gds-core/control/guide/verification/index.md) -- all 6 domain checks explained - [API Reference](https://blockscience.github.io/gds-core/control/api/init/index.md) -- complete auto-generated API docs # Elements & GDS Mapping `gds-control` provides four element types, each mapping to a specific GDS role and corresponding to the standard state-space representation. ## State A state variable in the plant. Each state becomes a GDS entity with a `value` state variable, and a dynamics block that applies incoming control signals. ``` State(name="temperature", initial=20.0) ``` **GDS mapping:** `Mechanism` (state update *f*) + `Entity` (state *X*) **State-space:** x (state vector) | Field | Type | Default | Description | | --------- | ----- | -------- | -------------------------------- | | `name` | str | required | State name (becomes entity name) | | `initial` | float | None | Initial state value | ### Port Convention - Output: `"{Name} State"` (temporal feedback to sensors) - Input: `"{ControllerName} Control"` (incoming control signals) ______________________________________________________________________ ## Input An exogenous reference signal or disturbance entering the system from outside.
Inputs have no internal sources -- they represent the boundary between the system and its environment. ``` Input(name="setpoint") ``` **GDS mapping:** `BoundaryAction` (exogenous input *U*) **State-space:** r (reference signal) | Field | Type | Default | Description | | ------ | ---- | -------- | ----------- | | `name` | str | required | Input name | ### Port Convention - Output: `"{Name} Reference"` ______________________________________________________________________ ## Sensor A sensor reads state variables and emits a measurement signal. The `observes` list declares which states the sensor can read -- validated at model construction time. ``` Sensor(name="thermometer", observes=["temperature"]) ``` **GDS mapping:** `Policy` (observation *g*) **State-space:** y = Cx + Du (sensor output) | Field | Type | Default | Description | | ---------- | --------- | -------- | --------------------------------- | | `name` | str | required | Sensor name | | `observes` | list[str] | [] | Names of states this sensor reads | ### Port Convention - Input: `"{StateName} State"` - Output: `"{Name} Measurement"` ______________________________________________________________________ ## Controller A controller reads sensor measurements and/or reference inputs, then emits control signals to drive state variables. 
``` Controller(name="PID", reads=["thermometer", "setpoint"], drives=["temperature"]) ``` **GDS mapping:** `Policy` (decision logic *g*) **State-space:** u = K(y, r) (control law) | Field | Type | Default | Description | | -------- | --------- | -------- | --------------------------------------------- | | `name` | str | required | Controller name | | `reads` | list[str] | [] | Names of sensors/inputs this controller reads | | `drives` | list[str] | [] | Names of states this controller drives | ### Port Convention - Input: `"{ReadName} Measurement"` or `"{ReadName} Reference"` - Output: `"{Name} Control"` ______________________________________________________________________ ## Semantic Type System Four distinct semantic spaces, all `float`-backed but structurally separate -- this prevents accidentally wiring a measurement where a control signal is expected: | Type | Space | Used By | Description | | ----------------- | ------------------ | ----------- | --------------------------------------- | | `StateType` | `StateSpace` | States | Plant state variables | | `ReferenceType` | `ReferenceSpace` | Inputs | Exogenous reference/disturbance signals | | `MeasurementType` | `MeasurementSpace` | Sensors | Sensor output measurements | | `ControlType` | `ControlSpace` | Controllers | Controller output signals | ## Composition Structure The compiler builds a tiered composition tree: ``` (inputs | sensors) >> (controllers) >> (state dynamics) .loop([state dynamics forward_out -> sensor forward_in]) ``` This maps to the GDS canonical form `h = f . g` where states carry state (X), dynamics provide f, and sensors + controllers contribute to g. # Verification `gds-control` provides 6 domain-specific verification checks, plus access to the 6 GDS generic checks (G-001..G-006) via the unified `verify()` function. 
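Three of the domain checks (CS-001..CS-003) are reachability questions over the model's declarations. Here is a standalone sketch of that logic over plain dicts -- illustrative only, since the real checks operate on a `ControlModel` and emit `Finding` objects:

```python
def connectivity_gaps(
    states: list[str],
    inputs: list[str],
    sensors: dict[str, list[str]],                        # sensor name -> states it observes
    controllers: dict[str, tuple[list[str], list[str]]],  # name -> (reads, drives)
) -> dict[str, set[str]]:
    """CS-001..CS-003 style: find undriven/unobserved states and unused inputs."""
    driven = {s for _, drives in controllers.values() for s in drives}
    observed = {s for obs in sensors.values() for s in obs}
    read = {r for reads, _ in controllers.values() for r in reads}
    return {
        "undriven_states": set(states) - driven,      # CS-001
        "unobserved_states": set(states) - observed,  # CS-002
        "unused_inputs": set(inputs) - read,          # CS-003
    }


gaps = connectivity_gaps(
    states=["temperature", "pressure"],
    inputs=["setpoint", "disturbance"],
    sensors={"thermometer": ["temperature"]},
    controllers={"PID": (["thermometer", "setpoint"], ["temperature"])},
)
print(gaps)  # 'pressure' is neither driven nor observed; 'disturbance' is never read
```

These gaps surface as WARNINGs rather than ERRORs because a partially actuated or partially observed plant can still be a deliberate modeling choice.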
## Using verify() The `verify()` function runs domain checks on the control model: ``` from gds_domains.control import verify report = verify(model) # Domain checks only report = verify(model, include_gds_checks=True) # Domain + GDS checks ``` The returned `VerificationReport` contains a list of `Finding` objects with: - `check_id` -- e.g., "CS-001", "G-003" - `severity` -- ERROR, WARNING, or INFO - `message` -- human-readable description - `passed` -- whether the check passed - `source_elements` -- elements involved ## Domain Checks | ID | Name | Severity | What it checks | | ------ | ------------------------- | -------- | ---------------------------------------------------- | | CS-001 | Undriven states | WARNING | Every state driven by >= 1 controller | | CS-002 | Unobserved states | WARNING | Every state observed by >= 1 sensor | | CS-003 | Unused inputs | WARNING | Every input read by >= 1 controller | | CS-004 | Controller read validity | ERROR | Controller `reads` reference declared sensors/inputs | | CS-005 | Controller drive validity | ERROR | Controller `drives` reference declared states | | CS-006 | Sensor observe validity | ERROR | Sensor `observes` reference declared states | ### CS-001: Undriven States States not driven by any controller cannot be actuated: ``` [CS-001] WARNING: State 'pressure' is NOT driven by any controller ``` ### CS-002: Unobserved States States not observed by any sensor are invisible to the control loop: ``` [CS-002] WARNING: State 'pressure' is NOT observed by any sensor ``` ### CS-003: Unused Inputs Inputs not read by any controller have no effect on the system: ``` [CS-003] WARNING: Input 'disturbance' is NOT read by any controller ``` ### CS-004: Controller Read Validity Controllers must reference declared sensors or inputs in their `reads` list: ``` [CS-004] ERROR: Controller 'PID' reads 'missing_sensor' which is NOT a declared sensor or input ``` ### CS-005: Controller Drive Validity Controllers must reference declared 
states in their `drives` list: ``` [CS-005] ERROR: Controller 'PID' drives 'missing_state' which is NOT a declared state ``` ### CS-006: Sensor Observe Validity Sensors must reference declared states in their `observes` list: ``` [CS-006] ERROR: Sensor 'probe' observes 'missing_state' which is NOT a declared state ``` ## GDS Generic Checks When `include_gds_checks=True`, the model is compiled to `SystemIR` and the 6 GDS generic checks run: | ID | Name | What it checks | | ----- | ----------------------------- | ---------------------------------- | | G-001 | Domain/codomain compatibility | Wiring type tokens match | | G-002 | Signature completeness | Every block has inputs and outputs | | G-003 | Unique block naming | No duplicate block names | | G-004 | Wiring source existence | Wired blocks exist | | G-005 | Wiring target existence | Wired blocks exist | | G-006 | Hierarchy consistency | Block tree is well-formed | Note G-002 will flag `BoundaryAction` blocks (Inputs) as having "no inputs" -- this is expected since they are exogenous sources by design. # gds_domains.control.verification.checks Control system verification checks (CS-001..CS-006). CS-001: Every state is driven by at least one controller. Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs001_undriven_states(model: ControlModel) -> list[Finding]: """CS-001: Every state is driven by at least one controller.""" findings: list[Finding] = [] driven_states: set[str] = set() for ctrl in model.controllers: driven_states.update(ctrl.drives) for state in model.states: driven = state.name in driven_states findings.append( Finding( check_id="CS-001", severity=Severity.WARNING, message=( f"State {state.name!r} is not driven by any controller" if not driven else f"State {state.name!r} is driven by a controller" ), source_elements=[state.name], passed=driven, ) ) return findings ``` CS-002: Every state is observed by at least one sensor. 
Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs002_unobserved_states(model: ControlModel) -> list[Finding]: """CS-002: Every state is observed by at least one sensor.""" findings: list[Finding] = [] observed_states: set[str] = set() for sensor in model.sensors: observed_states.update(sensor.observes) for state in model.states: observed = state.name in observed_states findings.append( Finding( check_id="CS-002", severity=Severity.WARNING, message=( f"State {state.name!r} is not observed by any sensor" if not observed else f"State {state.name!r} is observed by a sensor" ), source_elements=[state.name], passed=observed, ) ) return findings ``` CS-003: Every input is read by at least one controller. Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs003_unused_inputs(model: ControlModel) -> list[Finding]: """CS-003: Every input is read by at least one controller.""" findings: list[Finding] = [] read_names: set[str] = set() for ctrl in model.controllers: read_names.update(ctrl.reads) for inp in model.inputs: used = inp.name in read_names findings.append( Finding( check_id="CS-003", severity=Severity.WARNING, message=( f"Input {inp.name!r} is not read by any controller" if not used else f"Input {inp.name!r} is read by a controller" ), source_elements=[inp.name], passed=used, ) ) return findings ``` CS-004: Controller reads reference declared sensors/inputs. 
Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs004_controller_read_validity(model: ControlModel) -> list[Finding]: """CS-004: Controller reads reference declared sensors/inputs.""" findings: list[Finding] = [] readable_names = model.sensor_names | model.input_names for ctrl in model.controllers: for read in ctrl.reads: valid = read in readable_names findings.append( Finding( check_id="CS-004", severity=Severity.ERROR, message=( f"Controller {ctrl.name!r} reads {read!r} " f"{'which is' if valid else 'which is NOT'} " f"a declared sensor or input" ), source_elements=[ctrl.name, read], passed=valid, ) ) return findings ``` CS-005: Controller drives reference declared states. Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs005_controller_drive_validity(model: ControlModel) -> list[Finding]: """CS-005: Controller drives reference declared states.""" findings: list[Finding] = [] state_names = model.state_names for ctrl in model.controllers: for drive in ctrl.drives: valid = drive in state_names findings.append( Finding( check_id="CS-005", severity=Severity.ERROR, message=( f"Controller {ctrl.name!r} drives {drive!r} " f"{'which is' if valid else 'which is NOT'} " f"a declared state" ), source_elements=[ctrl.name, drive], passed=valid, ) ) return findings ``` CS-006: Sensor observes reference declared states. 
Source code in `packages/gds-domains/gds_domains/control/verification/checks.py` ``` def check_cs006_sensor_observe_validity(model: ControlModel) -> list[Finding]: """CS-006: Sensor observes reference declared states.""" findings: list[Finding] = [] state_names = model.state_names for sensor in model.sensors: for obs in sensor.observes: valid = obs in state_names findings.append( Finding( check_id="CS-006", severity=Severity.ERROR, message=( f"Sensor {sensor.name!r} observes {obs!r} " f"{'which is' if valid else 'which is NOT'} " f"a declared state" ), source_elements=[sensor.name, obs], passed=valid, ) ) return findings ``` # gds_domains.control.dsl.compile Compiler: ControlModel -> GDSSpec / SystemIR. ## Semantic Types ## Public Functions Compile a ControlModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings, and parameters. Source code in `packages/gds-domains/gds_domains/control/dsl/compile.py` ``` def compile_model(model: ControlModel) -> GDSSpec: """Compile a ControlModel into a GDSSpec. Registers: types, spaces, entities, blocks, wirings, and parameters. """ spec = GDSSpec(name=model.name, description=model.description) # 1. Register types spec.collect(StateType, ReferenceType, MeasurementType, ControlType) # 2. Register spaces spec.collect(StateSpace, ReferenceSpace, MeasurementSpace, ControlSpace) # 3. Register entities (one per state) for state in model.states: spec.register_entity(_build_state_entity(state)) # 4. Register blocks for inp in model.inputs: spec.register_block(_build_input_block(inp)) for sensor in model.sensors: spec.register_block(_build_sensor_block(sensor)) for ctrl in model.controllers: spec.register_block(_build_controller_block(ctrl, model)) for state in model.states: spec.register_block(_build_state_mechanism(state, model)) # 5. 
Register spec wirings all_block_names = [b for b in spec.blocks] wires: list[Wire] = [] # Controller → State dynamics wirings for ctrl in model.controllers: for drive in ctrl.drives: wires.append( Wire( source=ctrl.name, target=_dynamics_block_name(drive), space="ControlSpace", ) ) spec.register_wiring( SpecWiring( name=f"{model.name} Wiring", block_names=all_block_names, wires=wires, description=(f"Auto-generated wiring for control model {model.name!r}"), ) ) # 6. Register inputs as parameters for inp in model.inputs: spec.register_parameter( ParameterDef( name=inp.name, typedef=ReferenceType, description=f"Exogenous input: {inp.name}", ) ) # 7. Register transition signatures (mechanism read dependencies) from gds.constraints import TransitionSignature for state in model.states: driving_controllers = [ ctrl.name for ctrl in model.controllers if state.name in ctrl.drives ] spec.register_transition_signature( TransitionSignature( mechanism=_dynamics_block_name(state.name), reads=[(state.name, "value")], depends_on_blocks=driving_controllers, ) ) # 8. Declare execution contract — control systems are discrete/synchronous/Moore spec.execution_contract = ExecutionContract(time_domain="discrete") return spec ``` Compile a ControlModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). Source code in `packages/gds-domains/gds_domains/control/dsl/compile.py` ``` def compile_to_system(model: ControlModel) -> SystemIR: """Compile a ControlModel directly to SystemIR. Builds the composition tree and delegates to GDS compile_system(). """ root = _build_composition_tree(model) return compile_system(model.name, root) ``` # gds_domains.control.dsl.elements Control system element declarations -- frozen Pydantic models for user-facing declarations. Bases: `BaseModel` A plant state variable. Maps to: GDS Mechanism (state update f) + Entity (state X). Receives control ports from driving controllers, emits state port. 
Source code in `packages/gds-domains/gds_domains/control/dsl/elements.py` ``` class State(BaseModel, frozen=True): """A plant state variable. Maps to: GDS Mechanism (state update f) + Entity (state X). Receives control ports from driving controllers, emits state port. """ name: str initial: float | None = None ``` Bases: `BaseModel` Exogenous signal — reference setpoint or disturbance. Maps to: GDS BoundaryAction (exogenous input U). Emits a reference port; has no internal inputs. Source code in `packages/gds-domains/gds_domains/control/dsl/elements.py` ``` class Input(BaseModel, frozen=True): """Exogenous signal — reference setpoint or disturbance. Maps to: GDS BoundaryAction (exogenous input U). Emits a reference port; has no internal inputs. """ name: str ``` Bases: `BaseModel` Observation: reads state variables, emits measurement. Maps to: GDS Policy (observation g). Receives state ports from observed states, emits measurement port. Source code in `packages/gds-domains/gds_domains/control/dsl/elements.py` ``` class Sensor(BaseModel, frozen=True): """Observation: reads state variables, emits measurement. Maps to: GDS Policy (observation g). Receives state ports from observed states, emits measurement port. """ name: str observes: list[str] = Field(default_factory=list) ``` Bases: `BaseModel` Control law: reads sensors/inputs, emits control signal. Maps to: GDS Policy (decision logic g). Receives measurement/reference ports, emits control port. Source code in `packages/gds-domains/gds_domains/control/dsl/elements.py` ``` class Controller(BaseModel, frozen=True): """Control law: reads sensors/inputs, emits control signal. Maps to: GDS Policy (decision logic g). Receives measurement/reference ports, emits control port. """ name: str reads: list[str] = Field(default_factory=list) drives: list[str] = Field(default_factory=list) ``` # gds_domains.control Public API -- top-level exports. 
State-space control DSL over GDS semantics — control theory with formal guarantees. Declare states, inputs, sensors, and controllers as plain data models. The compiler maps them to GDS role blocks, entities, and composition trees. All downstream GDS tooling works immediately — canonical projection, semantic checks, SpecQuery, serialization, gds-viz. # gds_domains.control.dsl.model ControlModel -- declarative container for control system specifications. Bases: `BaseModel` A complete state-space control system declaration. Validates at construction: 1. At least one state 1. No duplicate names across all elements 1. Sensor observes references declared state names 1. Controller reads references declared sensor/input names 1. Controller drives references declared state names Source code in `packages/gds-domains/gds_domains/control/dsl/model.py` ``` class ControlModel(BaseModel): """A complete state-space control system declaration. Validates at construction: 1. At least one state 2. No duplicate names across all elements 3. Sensor observes references declared state names 4. Controller reads references declared sensor/input names 5. Controller drives references declared state names """ name: str states: list[State] inputs: list[Input] = Field(default_factory=list) sensors: list[Sensor] = Field(default_factory=list) controllers: list[Controller] = Field(default_factory=list) description: str = "" @model_validator(mode="after") def _validate_structure(self) -> Self: errors: list[str] = [] # 1. At least one state if not self.states: errors.append("Model must have at least one state") # 2. 
No duplicate names all_names: list[str] = [] for s in self.states: all_names.append(s.name) for i in self.inputs: all_names.append(i.name) for s in self.sensors: all_names.append(s.name) for c in self.controllers: all_names.append(c.name) seen: set[str] = set() for n in all_names: if n in seen: errors.append(f"Duplicate element name: {n!r}") seen.add(n) state_names = {s.name for s in self.states} sensor_names = {s.name for s in self.sensors} input_names = {i.name for i in self.inputs} readable_names = sensor_names | input_names # 3. Sensor observes references declared state names for sensor in self.sensors: for obs in sensor.observes: if obs not in state_names: errors.append( f"Sensor {sensor.name!r} observes {obs!r} " f"which is not a declared state" ) # 4. Controller reads references declared sensor/input names for ctrl in self.controllers: for read in ctrl.reads: if read not in readable_names: errors.append( f"Controller {ctrl.name!r} reads {read!r} " f"which is not a declared sensor or input" ) # 5. 
Controller drives references declared state names for ctrl in self.controllers: for drive in ctrl.drives: if drive not in state_names: errors.append( f"Controller {ctrl.name!r} drives {drive!r} " f"which is not a declared state" ) if errors: raise CSValidationError( f"ControlModel {self.name!r} validation failed:\n" + "\n".join(f" - {e}" for e in errors) ) return self # ── Convenience properties ────────────────────────────── @property def element_names(self) -> set[str]: """All element names in the model.""" names: set[str] = set() for s in self.states: names.add(s.name) for i in self.inputs: names.add(i.name) for s in self.sensors: names.add(s.name) for c in self.controllers: names.add(c.name) return names @property def state_names(self) -> set[str]: return {s.name for s in self.states} @property def sensor_names(self) -> set[str]: return {s.name for s in self.sensors} @property def input_names(self) -> set[str]: return {i.name for i in self.inputs} # ── Compilation ───────────────────────────────────────── def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.control.dsl.compile import compile_model return compile_model(self) def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.control.dsl.compile import compile_to_system return compile_to_system(self) ``` ## `element_names` All element names in the model. ## `compile()` Compile this model to a GDS specification. Source code in `packages/gds-domains/gds_domains/control/dsl/model.py` ``` def compile(self) -> GDSSpec: """Compile this model to a GDS specification.""" from gds_domains.control.dsl.compile import compile_model return compile_model(self) ``` ## `compile_system()` Compile this model to a flat SystemIR for verification + visualization. 
Source code in `packages/gds-domains/gds_domains/control/dsl/model.py` ``` def compile_system(self) -> SystemIR: """Compile this model to a flat SystemIR for verification + visualization.""" from gds_domains.control.dsl.compile import compile_to_system return compile_to_system(self) ``` # gds_domains.control.verification Verification engine -- runs domain checks with optional GDS structural checks. Run verification checks on a ControlModel. 1. CS-001..CS-006 on the model (pre-compilation) 1. If include_gds_checks: compile to SystemIR and run G-001..G-006 Parameters: | Name | Type | Description | Default | | -------------------- | ------------------------------------------------------- | ---------------------------------------------- | ---------- | | `model` | `ControlModel` | The control system model to verify. | *required* | | `domain_checks` | `list[Callable[[ControlModel], list[Finding]]] \| None` | Optional subset of CS checks. Defaults to all. | `None` | | `include_gds_checks` | `bool` | Whether to compile and run GDS generic checks. | `True` | Source code in `packages/gds-domains/gds_domains/control/verification/engine.py` ``` def verify( model: ControlModel, domain_checks: list[Callable[[ControlModel], list[Finding]]] | None = None, include_gds_checks: bool = True, ) -> VerificationReport: """Run verification checks on a ControlModel. 1. CS-001..CS-006 on the model (pre-compilation) 2. If include_gds_checks: compile to SystemIR and run G-001..G-006 Args: model: The control system model to verify. domain_checks: Optional subset of CS checks. Defaults to all. include_gds_checks: Whether to compile and run GDS generic checks.
""" checks = domain_checks or ALL_CS_CHECKS findings: list[Finding] = [] # Phase 1: CS checks on model for check_fn in checks: findings.extend(check_fn(model)) # Phase 2: GDS generic checks on compiled SystemIR if include_gds_checks: from gds.verification.engine import ALL_CHECKS as GDS_ALL_CHECKS system_ir = model.compile_system() for gds_check in GDS_ALL_CHECKS: findings.extend(gds_check(system_ir)) return VerificationReport(system_name=model.name, findings=findings) ``` # Software (gds-domains) # gds-software **Software architecture DSL over GDS semantics** -- DFDs, state machines, component diagrams, C4 models, ERDs, and dependency graphs with formal verification. ## What is this? `gds-software` extends the GDS framework with software architecture vocabulary -- six diagram types commonly used in software engineering, each compiled to GDS specifications with structural verification. It provides: - **6 diagram types** -- Data Flow Diagram (DFD), State Machine (SM), Component Diagram (CP), C4 Model, Entity-Relationship Diagram (ERD), Dependency Graph (DG) - **Typed compilation** -- Each diagram compiles to GDS role blocks, entities, and composition trees - **27 verification checks** -- Domain-specific structural validation across all diagram types - **Canonical decomposition** -- Validated h = f ∘ g projection for all diagram types - **Full GDS integration** -- All downstream tooling works immediately (canonical projection, semantic checks, gds-viz) ## Architecture ``` gds-framework (pip install gds-framework) | | Domain-neutral composition algebra, typed spaces, | state model, verification engine, flat IR compiler. | +-- gds-software (pip install gds-domains) | | Software architecture DSL: 6 diagram types, | compile_*(), domain verification, verify() dispatch. | +-- Your application | | Concrete architecture models, analysis notebooks, | verification runners. 
``` ## Diagram Types at a Glance | Diagram | Elements | Checks | Canonical Form | | ----------------- | -------------------------------------------------------------- | ------------ | ---------------------------------- | | **DFD** | ExternalEntity, Process, DataStore, DataFlow | DFD-001..005 | Varies (stateful with data stores) | | **State Machine** | State, Event, Transition, Guard, Region | SM-001..006 | Varies (stateful with states) | | **Component** | Component, InterfaceDef, Connector | CP-001..004 | h = g (stateless) | | **C4** | Person, ExternalSystem, Container, C4Component, C4Relationship | C4-001..004 | h = g (stateless) | | **ERD** | ERDEntity, Attribute, ERDRelationship, Cardinality | ER-001..004 | h = g (stateless) | | **Dependency** | Module, Dep, Layer | DG-001..004 | h = g (stateless) | ## GDS Role Mappings All six diagram types follow a shared mapping pattern: - Exogenous inputs (ExternalEntity, Person, Event) --> `BoundaryAction` - Decision/observation logic (Process, Transition, Module, Component) --> `Policy` - State updates (DataStore, State, stateful containers) --> `Mechanism` + `Entity` - Connections (DataFlow, Connector, Dep, Relationship) --> `Wiring` ## Quick Start ``` uv add gds-software # or: pip install gds-domains ``` See [Getting Started](https://blockscience.github.io/gds-core/software/getting-started/index.md) for a full walkthrough. ## Credits Built on [gds-framework](https://blockscience.github.io/gds-core/framework/index.md) by [BlockScience](https://block.science). 
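The shared role-mapping pattern above can be summarized as a lookup table. The sketch below is illustrative only -- it is not how `gds_domains.software` represents roles internally -- but the element and role names follow the tables in this section:

```python
from enum import Enum


class GDSRole(Enum):
    BOUNDARY_ACTION = "BoundaryAction"  # exogenous inputs
    POLICY = "Policy"                   # decision/observation logic
    MECHANISM = "Mechanism"             # state updates (paired with an Entity)
    WIRING = "Wiring"                   # connections


# Hypothetical lookup following the mapping bullets above.
ROLE_OF_ELEMENT = {
    # Exogenous inputs -> BoundaryAction
    "ExternalEntity": GDSRole.BOUNDARY_ACTION,
    "Person": GDSRole.BOUNDARY_ACTION,
    "Event": GDSRole.BOUNDARY_ACTION,
    # Decision/observation logic -> Policy
    "Process": GDSRole.POLICY,
    "Transition": GDSRole.POLICY,
    "Module": GDSRole.POLICY,
    "Component": GDSRole.POLICY,
    # State updates -> Mechanism (+ Entity)
    "DataStore": GDSRole.MECHANISM,
    "State": GDSRole.MECHANISM,
    # Connections -> Wiring
    "DataFlow": GDSRole.WIRING,
    "Connector": GDSRole.WIRING,
    "Dep": GDSRole.WIRING,
    "C4Relationship": GDSRole.WIRING,
}
```

Note that the stateless diagram types (Component, C4, ERD, Dependency) draw only on the Policy and Wiring rows, which is why their canonical form reduces to h = g in the table above.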
# Getting Started ## Installation ``` uv add gds-software # or: pip install gds-domains ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First DFD A Data Flow Diagram models processes, external entities, data stores, and the flows between them: ``` from gds_domains.software import ( ExternalEntity, Process, DataStore, DataFlow, DFDModel, compile_dfd, compile_dfd_to_system, verify, ) model = DFDModel( name="Order System", entities=[ExternalEntity(name="Customer")], processes=[ Process(name="Validate Order"), Process(name="Process Payment"), ], stores=[DataStore(name="Order DB")], flows=[ DataFlow(name="Order", source="Customer", target="Validate Order"), DataFlow(name="Valid Order", source="Validate Order", target="Process Payment"), DataFlow(name="Record", source="Process Payment", target="Order DB"), DataFlow(name="Confirmation", source="Process Payment", target="Customer"), ], ) # Compile to GDS spec = compile_dfd(model) ir = compile_dfd_to_system(model) print(f"{len(ir.blocks)} blocks, {len(ir.wirings)} wirings") # Verify report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'PASS' if f.passed else 'FAIL'} {f.message}") ``` ## Your First State Machine A State Machine models states, events, and transitions: ``` from gds_domains.software import ( State, Event, Transition, StateMachineModel, compile_sm, verify, ) model = StateMachineModel( name="Traffic Light", states=[ State(name="Red", is_initial=True), State(name="Green"), State(name="Yellow"), ], events=[ Event(name="timer"), ], transitions=[ Transition(source="Red", target="Green", event="timer"), Transition(source="Green", target="Yellow", event="timer"), Transition(source="Yellow", target="Red", event="timer"), ], ) # Compile and verify spec = compile_sm(model) report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'PASS' if 
f.passed else 'FAIL'} {f.message}") ``` ## Your First Component Diagram A Component Diagram models software components with provided and required interfaces: ``` from gds_domains.software import ( Component, InterfaceDef, Connector, ComponentModel, compile_component, verify, ) model = ComponentModel( name="Web App", components=[ Component(name="Frontend", provides=["UI"], requires=["API"]), Component(name="Backend", provides=["API"], requires=["DB"]), Component(name="Database", provides=["DB"]), ], interfaces=[ InterfaceDef(name="UI"), InterfaceDef(name="API"), InterfaceDef(name="DB"), ], connectors=[ Connector(name="F->B", source="Frontend", target="Backend", interface="API"), Connector(name="B->D", source="Backend", target="Database", interface="DB"), ], ) spec = compile_component(model) report = verify(model, include_gds_checks=False) for f in report.findings: print(f" [{f.check_id}] {'PASS' if f.passed else 'FAIL'} {f.message}") ``` ## Next Steps - [Diagram Types Guide](https://blockscience.github.io/gds-core/software/guide/diagram-types/index.md) -- all 6 diagram types with elements and GDS mapping - [Verification Guide](https://blockscience.github.io/gds-core/software/guide/verification/index.md) -- all 27 domain checks explained - [API Reference](https://blockscience.github.io/gds-core/software/api/init/index.md) -- complete auto-generated API docs # Diagram Types `gds-software` supports six software architecture diagram types, each with its own element vocabulary, GDS mapping, and composition structure. ## Data Flow Diagram (DFD) Data flow diagrams model **data movement** through a system -- processes transform data, external entities provide/consume it, and data stores persist it. 
### Elements | Element | Description | GDS Role | | ---------------- | --------------------------------- | ------------------ | | `ExternalEntity` | Actor outside the system boundary | BoundaryAction | | `Process` | Data transformation step | Policy | | `DataStore` | Persistent data repository | Mechanism + Entity | | `DataFlow` | Directed data movement | Wiring | ### GDS Mapping Three-tier composition with optional temporal loop: ``` Composition: (entities |) >> (processes |) >> (stores |) .loop([stores -> processes]) # if stores exist Canonical: h = g (no stores) or h = f . g (with stores) ``` ### Checks DFD-001..DFD-005 (5 checks). See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#dfd-checks). ### Example ``` from gds_domains.software import ( ExternalEntity, Process, DataStore, DataFlow, DFDModel, ) model = DFDModel( name="Order Processing", entities=[ExternalEntity(name="Customer")], processes=[Process(name="Validate"), Process(name="Ship")], stores=[DataStore(name="Orders")], flows=[ DataFlow(name="Request", source="Customer", target="Validate"), DataFlow(name="Save", source="Validate", target="Orders"), DataFlow(name="Lookup", source="Orders", target="Ship"), DataFlow(name="Notify", source="Ship", target="Customer"), ], ) ``` ______________________________________________________________________ ## State Machine (SM) State machines model **behavioral transitions** -- states connected by event-triggered transitions with optional guards. 
### Elements | Element | Description | GDS Role | | ------------ | ------------------------------- | ------------------------ | | `State` | A discrete system state | Mechanism + Entity | | `Event` | External trigger | BoundaryAction | | `Transition` | State change on event | Policy | | `Guard` | Boolean condition on transition | (embedded in Transition) | | `Region` | Orthogonal concurrent partition | ParallelComposition | ### GDS Mapping Three-tier composition: ``` Composition: (events |) >> (transitions |) >> (states |) .loop([states -> transitions]) Canonical: h = f . g (stateful) ``` ### Checks SM-001..SM-006 (6 checks). See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#state-machine-checks). ### Example ``` from gds_domains.software import ( State, Event, Transition, StateMachineModel, ) model = StateMachineModel( name="Door", states=[ State(name="Closed", is_initial=True), State(name="Open"), State(name="Locked"), ], events=[Event(name="open"), Event(name="close"), Event(name="lock"), Event(name="unlock")], transitions=[ Transition(source="Closed", target="Open", event="open"), Transition(source="Open", target="Closed", event="close"), Transition(source="Closed", target="Locked", event="lock"), Transition(source="Locked", target="Closed", event="unlock"), ], ) ``` ______________________________________________________________________ ## Component Diagram (CP) Component diagrams model **software structure** -- components with provided/required interfaces connected by connectors. 
### Elements | Element | Description | GDS Role | | -------------- | --------------------------------------- | ---------- | | `Component` | Software module with interfaces | Policy | | `InterfaceDef` | Named interface contract | (metadata) | | `Connector` | Wiring between components via interface | Wiring | ### GDS Mapping Single-tier parallel composition: ``` Composition: (components |) Canonical: h = g (stateless) ``` ### Port Naming Uses `+` delimiter for token overlap: `"{Interface} + Provided"`, `"{Interface} + Required"`. ### Checks CP-001..CP-004 (4 checks). See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#component-checks). ### Example ``` from gds_domains.software import ( Component, InterfaceDef, Connector, ComponentModel, ) model = ComponentModel( name="Microservices", components=[ Component(name="AuthService", provides=["Auth"], requires=["UserDB"]), Component(name="UserStore", provides=["UserDB"]), ], interfaces=[InterfaceDef(name="Auth"), InterfaceDef(name="UserDB")], connectors=[ Connector(name="Auth->Users", source="AuthService", target="UserStore", interface="UserDB"), ], ) ``` ______________________________________________________________________ ## C4 Model C4 models describe **system context and containers** -- people, systems, containers, and components with relationships. ### Elements | Element | Description | GDS Role | | ---------------- | ------------------------------------- | -------------- | | `Person` | Human user/actor | BoundaryAction | | `ExternalSystem` | System outside the boundary | BoundaryAction | | `Container` | Deployable unit (app, database, etc.) | Policy | | `C4Component` | Component within a container | Policy | | `C4Relationship` | Directed dependency | Wiring | ### GDS Mapping Two-tier composition: ``` Composition: (persons | externals) >> (containers | components) Canonical: h = g (stateless) ``` ### Checks C4-001..C4-004 (4 checks). 
See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#c4-checks). ### Example ``` from gds_domains.software import ( Person, ExternalSystem, Container, C4Relationship, C4Model, ) model = C4Model( name="E-Commerce", persons=[Person(name="Shopper")], external_systems=[ExternalSystem(name="Payment Gateway")], containers=[ Container(name="Web App", technology="React"), Container(name="API", technology="FastAPI"), Container(name="Database", technology="PostgreSQL"), ], relationships=[ C4Relationship(source="Shopper", target="Web App", description="Browses"), C4Relationship(source="Web App", target="API", description="API calls"), C4Relationship(source="API", target="Database", description="Reads/writes"), C4Relationship(source="API", target="Payment Gateway", description="Charges"), ], ) ``` ______________________________________________________________________ ## Entity-Relationship Diagram (ERD) ERDs model **data structure** -- entities with attributes connected by relationships with cardinality. ### Elements | Element | Description | GDS Role | | ----------------- | -------------------------------------- | ----------------------------- | | `ERDEntity` | Data entity with attributes | Policy | | `Attribute` | Entity field (name, type, PK/FK flags) | (embedded in ERDEntity) | | `ERDRelationship` | Association between entities | Wiring | | `Cardinality` | Relationship multiplicity (ONE, MANY) | (embedded in ERDRelationship) | ### GDS Mapping Single-tier parallel composition: ``` Composition: (entities |) Canonical: h = g (stateless) ``` ### Checks ER-001..ER-004 (4 checks). See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#erd-checks). 
### Example ``` from gds_domains.software import ( ERDEntity, Attribute, ERDRelationship, Cardinality, ERDModel, ) model = ERDModel( name="Blog", entities=[ ERDEntity( name="User", attributes=[ Attribute(name="id", type="int", is_pk=True), Attribute(name="email", type="str"), ], ), ERDEntity( name="Post", attributes=[ Attribute(name="id", type="int", is_pk=True), Attribute(name="author_id", type="int", is_fk=True), Attribute(name="title", type="str"), ], ), ], relationships=[ ERDRelationship( name="writes", source="User", target="Post", source_cardinality=Cardinality.ONE, target_cardinality=Cardinality.MANY, ), ], ) ``` ______________________________________________________________________ ## Dependency Graph (DG) Dependency graphs model **module dependencies** with optional layered architecture constraints. ### Elements | Element | Description | GDS Role | | -------- | ---------------------------------------------- | ---------- | | `Module` | Software module/package | Policy | | `Dep` | Directed dependency between modules | Wiring | | `Layer` | Architectural layer (for ordering constraints) | (metadata) | ### GDS Mapping Single-tier parallel composition: ``` Composition: (modules |) Canonical: h = g (stateless) ``` ### Checks DG-001..DG-004 (4 checks). See [Verification](https://blockscience.github.io/gds-core/software/guide/verification/#dependency-checks). 
### Example

```
from gds_domains.software import (
    Module,
    Dep,
    Layer,
    DependencyModel,
)

model = DependencyModel(
    name="Clean Architecture",
    modules=[
        Module(name="handlers", layer="application"),
        Module(name="services", layer="domain"),
        Module(name="repository", layer="infrastructure"),
    ],
    deps=[
        Dep(source="handlers", target="services"),
        Dep(source="repository", target="services"),
    ],
    layers=[
        Layer(name="application", order=0),
        Layer(name="domain", order=1),
        Layer(name="infrastructure", order=2),
    ],
)
```

# Verification

`gds-software` provides 27 domain-specific verification checks across six diagram types, plus access to the 6 GDS generic checks (G-001..G-006) via the unified `verify()` function.

## Using verify()

The `verify()` function auto-dispatches to the correct domain checks based on model type:

```
from gds_domains.software import verify

report = verify(model)                            # Domain + GDS checks
report = verify(model, include_gds_checks=False)  # Domain checks only
```

The returned `VerificationReport` contains a list of `Finding` objects with:

- `check_id` -- e.g., "DFD-001", "SM-003", "G-003"
- `severity` -- ERROR, WARNING, or INFO
- `message` -- human-readable description
- `passed` -- whether the check passed
- `source_elements` -- elements involved

## DFD Checks

| ID      | Name                    | Severity | What it checks                                         |
| ------- | ----------------------- | -------- | ------------------------------------------------------ |
| DFD-001 | Process connectivity    | WARNING  | Every process has >= 1 incoming and >= 1 outgoing flow |
| DFD-002 | Flow validity           | ERROR    | Flow source/target reference declared elements         |
| DFD-003 | No external-to-external | ERROR    | No direct flow between two external entities           |
| DFD-004 | Store connectivity      | WARNING  | Every data store has >= 1 connected flow               |
| DFD-005 | Process output          | ERROR    | Every process has at least one outgoing flow           |

## State Machine Checks

| ID     | Name                | Severity | What it checks                                                                     |
| ------ | ------------------- | -------- | ---------------------------------------------------------------------------------- |
| SM-001 | Initial state       | ERROR    | Exactly one initial state exists (per region)                                      |
| SM-002 | Reachability        | WARNING  | All states reachable from initial state via transitions                            |
| SM-003 | Determinism         | WARNING  | No two transitions from the same state on the same event (without distinct guards) |
| SM-004 | Guard completeness  | INFO     | For guarded transitions, checks if guards cover all cases                          |
| SM-005 | Region partition    | ERROR    | If regions declared, every state belongs to exactly one region                     |
| SM-006 | Transition validity | ERROR    | Transition source/target/event reference declared elements                         |

## Component Checks

| ID     | Name                   | Severity | What it checks                                                               |
| ------ | ---------------------- | -------- | ---------------------------------------------------------------------------- |
| CP-001 | Interface satisfaction | ERROR    | Every required interface is satisfied by some component's provided interface |
| CP-002 | Connector validity     | ERROR    | Connector source/target reference declared components                        |
| CP-003 | Dangling interfaces    | WARNING  | Every provided interface is consumed by some connector                       |
| CP-004 | Component naming       | ERROR    | No duplicate component names                                                 |

## C4 Checks

| ID     | Name                  | Severity | What it checks                                          |
| ------ | --------------------- | -------- | ------------------------------------------------------- |
| C4-001 | Relationship validity | ERROR    | Relationship source/target reference declared elements  |
| C4-002 | Container hierarchy   | ERROR    | Components reference valid parent containers            |
| C4-003 | External connectivity | WARNING  | Every person and external system has >= 1 relationship  |
| C4-004 | Level consistency     | WARNING  | Relationships connect elements at appropriate C4 levels |

## ERD Checks

| ID     | Name                  | Severity | What it checks                                         |
| ------ | --------------------- | -------- | ------------------------------------------------------ |
| ER-001 | Relationship validity | ERROR    | Relationship source/target reference declared entities |
| ER-002 | Primary key existence | WARNING  | Every entity has at least one PK attribute             |
| ER-003 | Attribute uniqueness  | ERROR    | No duplicate attribute names within an entity          |
| ER-004 | Relationship naming   | ERROR    | No duplicate relationship names                        |

## Dependency Checks

| ID     | Name                | Severity | What it checks                                                      |
| ------ | ------------------- | -------- | ------------------------------------------------------------------- |
| DG-001 | Dependency validity | ERROR    | Dep source/target reference declared modules                        |
| DG-002 | Acyclicity          | ERROR    | No circular dependencies in the module graph                        |
| DG-003 | Layer ordering      | WARNING  | Dependencies respect layer ordering (higher layers depend on lower) |
| DG-004 | Module connectivity | WARNING  | Every module is part of at least one dependency                     |

## GDS Generic Checks

When `include_gds_checks=True` (default), the model is compiled to `SystemIR` and the 6 GDS generic checks run:

| ID    | Name                          | What it checks                           |
| ----- | ----------------------------- | ---------------------------------------- |
| G-001 | Domain/codomain compatibility | Wiring type tokens match                 |
| G-002 | Signature completeness        | Every block has inputs and outputs       |
| G-003 | Unique block naming           | No duplicate block names                 |
| G-004 | Wiring source existence       | Wiring sources reference existing blocks |
| G-005 | Wiring target existence       | Wiring targets reference existing blocks |
| G-006 | Hierarchy consistency         | Block tree is well-formed                |

Note: G-002 will flag `BoundaryAction` blocks (ExternalEntity, Person, Event) as having "no inputs" -- this is expected since they are exogenous sources by design.

# API Reference

Complete API documentation for `gds-domains` (software), auto-generated from source docstrings.
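As a bridge from the check tables above to the module reference below, here is a short, self-contained illustration of post-processing findings (for example, to gate CI only on ERROR-severity failures). The `Finding` dataclass is a stand-in that mirrors the fields documented on this page, not the library's own class:

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    # Stand-in mirroring the documented Finding fields; the real class
    # lives in gds_domains.software.
    check_id: str
    severity: str  # "ERROR", "WARNING", or "INFO"
    message: str
    passed: bool
    source_elements: list[str] = field(default_factory=list)


def blocking_failures(findings: list[Finding]) -> list[Finding]:
    """Failures that should gate a build: ERROR-severity checks that did not pass."""
    return [f for f in findings if f.severity == "ERROR" and not f.passed]


findings = [
    Finding("DG-001", "ERROR", "Dep source 'app' is a declared module", True),
    Finding("DG-002", "ERROR", "Cycle detected in dependency graph", False),
    Finding("DG-004", "WARNING", "Module 'utils' is NOT part of any dependency", False),
]
print([f.check_id for f in blocking_failures(findings)])  # -> ['DG-002']
```

Note that passing findings and WARNING-severity failures are both filtered out; only the failed ERROR check survives.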
## Common

| Module | Description |
| ------ | ----------- |
| [gds_domains.software](https://blockscience.github.io/gds-core/software/api/init/index.md) | Package root -- version, top-level imports |
| [gds_domains.software.common](https://blockscience.github.io/gds-core/software/api/common/index.md) | Shared types, errors, compilation utilities |

## Data Flow Diagrams

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.dfd.elements](https://blockscience.github.io/gds-core/software/api/dfd-elements/index.md) | ExternalEntity, Process, DataStore, DataFlow declarations |
| [gds_domains.software.dfd.model](https://blockscience.github.io/gds-core/software/api/dfd-model/index.md) | DFDModel container |
| [gds_domains.software.dfd.compile](https://blockscience.github.io/gds-core/software/api/dfd-compile/index.md) | DFD -> GDSSpec / SystemIR compiler |
| [gds_domains.software.dfd.checks](https://blockscience.github.io/gds-core/software/api/dfd-checks/index.md) | DFD-001..DFD-005 verification checks |

## State Machines

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.statemachine.elements](https://blockscience.github.io/gds-core/software/api/sm-elements/index.md) | State, Event, Transition, Guard, Region |
| [gds_domains.software.statemachine.model](https://blockscience.github.io/gds-core/software/api/sm-model/index.md) | StateMachineModel container |
| [gds_domains.software.statemachine.compile](https://blockscience.github.io/gds-core/software/api/sm-compile/index.md) | SM -> GDSSpec / SystemIR compiler |
| [gds_domains.software.statemachine.checks](https://blockscience.github.io/gds-core/software/api/sm-checks/index.md) | SM-001..SM-006 verification checks |

## Component Diagrams

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.component.elements](https://blockscience.github.io/gds-core/software/api/cp-elements/index.md) | Component, InterfaceDef, Connector |
| [gds_domains.software.component.model](https://blockscience.github.io/gds-core/software/api/cp-model/index.md) | ComponentModel container |
| [gds_domains.software.component.compile](https://blockscience.github.io/gds-core/software/api/cp-compile/index.md) | Component -> GDSSpec / SystemIR compiler |
| [gds_domains.software.component.checks](https://blockscience.github.io/gds-core/software/api/cp-checks/index.md) | CP-001..CP-004 verification checks |

## C4 Models

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.c4.elements](https://blockscience.github.io/gds-core/software/api/c4-elements/index.md) | Person, ExternalSystem, Container, C4Component, C4Relationship |
| [gds_domains.software.c4.model](https://blockscience.github.io/gds-core/software/api/c4-model/index.md) | C4Model container |
| [gds_domains.software.c4.compile](https://blockscience.github.io/gds-core/software/api/c4-compile/index.md) | C4 -> GDSSpec / SystemIR compiler |
| [gds_domains.software.c4.checks](https://blockscience.github.io/gds-core/software/api/c4-checks/index.md) | C4-001..C4-004 verification checks |

## Entity-Relationship Diagrams

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.erd.elements](https://blockscience.github.io/gds-core/software/api/erd-elements/index.md) | ERDEntity, Attribute, ERDRelationship, Cardinality |
| [gds_domains.software.erd.model](https://blockscience.github.io/gds-core/software/api/erd-model/index.md) | ERDModel container |
| [gds_domains.software.erd.compile](https://blockscience.github.io/gds-core/software/api/erd-compile/index.md) | ERD -> GDSSpec / SystemIR compiler |
| [gds_domains.software.erd.checks](https://blockscience.github.io/gds-core/software/api/erd-checks/index.md) | ER-001..ER-004 verification checks |

## Dependency Graphs

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.dependency.elements](https://blockscience.github.io/gds-core/software/api/dep-elements/index.md) | Module, Dep, Layer |
| [gds_domains.software.dependency.model](https://blockscience.github.io/gds-core/software/api/dep-model/index.md) | DependencyModel container |
| [gds_domains.software.dependency.compile](https://blockscience.github.io/gds-core/software/api/dep-compile/index.md) | Dependency -> GDSSpec / SystemIR compiler |
| [gds_domains.software.dependency.checks](https://blockscience.github.io/gds-core/software/api/dep-checks/index.md) | DG-001..DG-004 verification checks |

## Verification

| Module | Description |
| ------ | ----------- |
| [gds_domains.software.verification](https://blockscience.github.io/gds-core/software/api/verification/index.md) | Union dispatch verify() engine |

# gds_domains.software.c4.checks

C4 model verification checks (C4-001..C4-004).

C4-001: Relationship source/target are declared elements.
Source code in `packages/gds-domains/gds_domains/software/c4/checks.py`

```
def check_c4001_relationship_validity(model: C4Model) -> list[Finding]:
    """C4-001: Relationship source/target are declared elements."""
    findings: list[Finding] = []
    for rel in model.relationships:
        src_valid = rel.source in model.element_names
        findings.append(
            Finding(
                check_id="C4-001",
                severity=Severity.ERROR,
                message=(
                    f"Relationship {rel.name!r} source {rel.source!r} "
                    f"{'is' if src_valid else 'is NOT'} a declared element"
                ),
                source_elements=[rel.name, rel.source],
                passed=src_valid,
            )
        )
        tgt_valid = rel.target in model.element_names
        findings.append(
            Finding(
                check_id="C4-001",
                severity=Severity.ERROR,
                message=(
                    f"Relationship {rel.name!r} target {rel.target!r} "
                    f"{'is' if tgt_valid else 'is NOT'} a declared element"
                ),
                source_elements=[rel.name, rel.target],
                passed=tgt_valid,
            )
        )
    return findings
```

C4-002: Component containers reference declared containers.

Source code in `packages/gds-domains/gds_domains/software/c4/checks.py`

```
def check_c4002_container_hierarchy(model: C4Model) -> list[Finding]:
    """C4-002: Component containers reference declared containers."""
    findings: list[Finding] = []
    for comp in model.components:
        valid = comp.container in model.container_names
        findings.append(
            Finding(
                check_id="C4-002",
                severity=Severity.ERROR,
                message=(
                    f"Component {comp.name!r} container {comp.container!r} "
                    f"{'is' if valid else 'is NOT'} a declared container"
                ),
                source_elements=[comp.name, comp.container],
                passed=valid,
            )
        )
    return findings
```

C4-003: External actors have at least one relationship.

Source code in `packages/gds-domains/gds_domains/software/c4/checks.py`

```
def check_c4003_external_connectivity(model: C4Model) -> list[Finding]:
    """C4-003: External actors have at least one relationship."""
    findings: list[Finding] = []
    for p in model.persons:
        connected = any(
            r.source == p.name or r.target == p.name for r in model.relationships
        )
        findings.append(
            Finding(
                check_id="C4-003",
                severity=Severity.WARNING,
                message=(
                    f"Person {p.name!r} {'is' if connected else 'is NOT'} connected"
                ),
                source_elements=[p.name],
                passed=connected,
            )
        )
    for e in model.external_systems:
        connected = any(
            r.source == e.name or r.target == e.name for r in model.relationships
        )
        findings.append(
            Finding(
                check_id="C4-003",
                severity=Severity.WARNING,
                message=(
                    f"External system {e.name!r} "
                    f"{'is' if connected else 'is NOT'} connected"
                ),
                source_elements=[e.name],
                passed=connected,
            )
        )
    return findings
```

C4-004: Relationships between elements at appropriate levels.

Source code in `packages/gds-domains/gds_domains/software/c4/checks.py`

```
def check_c4004_level_consistency(model: C4Model) -> list[Finding]:
    """C4-004: Relationships between elements at appropriate levels."""
    findings: list[Finding] = []
    comp_names = {c.name for c in model.components}
    for rel in model.relationships:
        # Components should primarily relate to their parent container
        # or to other components, not directly to persons
        if rel.source in comp_names and rel.target in model.person_names:
            findings.append(
                Finding(
                    check_id="C4-004",
                    severity=Severity.WARNING,
                    message=(
                        f"Component {rel.source!r} directly relates to "
                        f"Person {rel.target!r} — consider routing through container"
                    ),
                    source_elements=[rel.name],
                    passed=False,
                )
            )
    if not findings:
        findings.append(
            Finding(
                check_id="C4-004",
                severity=Severity.WARNING,
                message="Level consistency is satisfied",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

# gds_domains.software.c4.compile

Compiler: C4Model -> GDSSpec / SystemIR.
## Public Functions

Compile a C4Model into a GDSSpec.

Source code in `packages/gds-domains/gds_domains/software/c4/compile.py`

```
def compile_c4(model: C4Model) -> GDSSpec:
    """Compile a C4Model into a GDSSpec."""
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(C4RequestType, C4StateType)

    # 2. Register spaces
    spec.collect(C4RequestSpace, C4StateSpace)

    # 3. Register entities for stateful containers/components
    for c in model.containers:
        if c.stateful:
            spec.register_entity(_build_entity(c.name))
    for c in model.components:
        if c.stateful:
            spec.register_entity(_build_entity(c.name))

    # 4. Register blocks
    for p in model.persons:
        spec.register_block(_build_person_block(p))
    for e in model.external_systems:
        spec.register_block(_build_external_system_block(e))
    for c in model.containers:
        spec.register_block(_build_container_block(c, model))
    for c in model.components:
        spec.register_block(_build_component_block(c, model))

    # 5. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for rel in model.relationships:
        wires.append(
            Wire(source=rel.source, target=rel.target, space="C4 RequestSpace")
        )
    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for C4 model {model.name!r}",
        )
    )

    # C4 diagrams are atemporal — architectural structure
    spec.execution_contract = ExecutionContract(time_domain="atemporal")
    return spec
```

Compile a C4Model directly to SystemIR.

Source code in `packages/gds-domains/gds_domains/software/c4/compile.py`

```
def compile_c4_to_system(model: C4Model) -> SystemIR:
    """Compile a C4Model directly to SystemIR."""
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.c4.elements

C4 model element declarations -- frozen Pydantic models for user-facing declarations.

Bases: `BaseModel`

A person that interacts with the system.
Maps to: GDS BoundaryAction (exogenous input).

Source code in `packages/gds-domains/gds_domains/software/c4/elements.py`

```
class Person(BaseModel, frozen=True):
    """A person that interacts with the system.

    Maps to: GDS BoundaryAction (exogenous input).
    """

    name: str
    description: str = ""
```

Bases: `BaseModel`

An external system that the system interacts with.

Maps to: GDS BoundaryAction (exogenous input).

Source code in `packages/gds-domains/gds_domains/software/c4/elements.py`

```
class ExternalSystem(BaseModel, frozen=True):
    """An external system that the system interacts with.

    Maps to: GDS BoundaryAction (exogenous input).
    """

    name: str
    description: str = ""
```

Bases: `BaseModel`

A deployable unit (API, database, web app, etc).

Maps to: GDS Policy (if stateless) or Mechanism (if stateful/database).

Source code in `packages/gds-domains/gds_domains/software/c4/elements.py`

```
class Container(BaseModel, frozen=True):
    """A deployable unit (API, database, web app, etc).

    Maps to: GDS Policy (if stateless) or Mechanism (if stateful/database).
    """

    name: str
    technology: str = ""
    stateful: bool = False
    description: str = ""
```

Bases: `BaseModel`

A component within a container.

Maps to: GDS Policy or Mechanism based on stateful flag.

Source code in `packages/gds-domains/gds_domains/software/c4/elements.py`

```
class C4Component(BaseModel, frozen=True):
    """A component within a container.

    Maps to: GDS Policy or Mechanism based on stateful flag.
    """

    name: str
    container: str = Field(description="Parent container name")
    technology: str = ""
    stateful: bool = False
    description: str = ""
```

Bases: `BaseModel`

A directed relationship between C4 elements.

Maps to: GDS Wiring.

Source code in `packages/gds-domains/gds_domains/software/c4/elements.py`

```
class C4Relationship(BaseModel, frozen=True):
    """A directed relationship between C4 elements.

    Maps to: GDS Wiring.
    """

    name: str
    source: str
    target: str
    technology: str = ""
    description: str = ""
```

# gds_domains.software.c4.model

C4Model -- declarative container for C4 architecture models.

Bases: `BaseModel`

A complete C4 architecture model declaration.

Validates at construction:

1. No duplicate element names
2. Relationship source/target reference declared elements
3. Component containers reference declared containers

Source code in `packages/gds-domains/gds_domains/software/c4/model.py`

```
class C4Model(BaseModel):
    """A complete C4 architecture model declaration.

    Validates at construction:
    1. No duplicate element names
    2. Relationship source/target reference declared elements
    3. Component containers reference declared containers
    """

    name: str
    persons: list[Person] = Field(default_factory=list)
    external_systems: list[ExternalSystem] = Field(default_factory=list)
    containers: list[Container] = Field(default_factory=list)
    components: list[C4Component] = Field(default_factory=list)
    relationships: list[C4Relationship] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. No duplicate names
        all_names: list[str] = []
        for p in self.persons:
            all_names.append(p.name)
        for e in self.external_systems:
            all_names.append(e.name)
        for c in self.containers:
            all_names.append(c.name)
        for c in self.components:
            all_names.append(c.name)
        seen: set[str] = set()
        for n in all_names:
            if n in seen:
                errors.append(f"Duplicate element name: {n!r}")
            seen.add(n)

        all_element_names = set(all_names)
        container_names = {c.name for c in self.containers}

        # 2. Relationship source/target reference declared elements
        for rel in self.relationships:
            if rel.source not in all_element_names:
                errors.append(
                    f"Relationship {rel.name!r} source {rel.source!r} "
                    f"is not a declared element"
                )
            if rel.target not in all_element_names:
                errors.append(
                    f"Relationship {rel.name!r} target {rel.target!r} "
                    f"is not a declared element"
                )

        # 3. Component containers reference declared containers
        for comp in self.components:
            if comp.container not in container_names:
                errors.append(
                    f"Component {comp.name!r} container {comp.container!r} "
                    f"is not a declared container"
                )

        if errors:
            raise SWValidationError(
                f"C4Model {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────

    @property
    def element_names(self) -> set[str]:
        names: set[str] = set()
        for p in self.persons:
            names.add(p.name)
        for e in self.external_systems:
            names.add(e.name)
        for c in self.containers:
            names.add(c.name)
        for c in self.components:
            names.add(c.name)
        return names

    @property
    def person_names(self) -> set[str]:
        return {p.name for p in self.persons}

    @property
    def external_system_names(self) -> set[str]:
        return {e.name for e in self.external_systems}

    @property
    def container_names(self) -> set[str]:
        return {c.name for c in self.containers}

    # ── Compilation ─────────────────────────────────────────

    def compile(self) -> GDSSpec:
        from gds_domains.software.c4.compile import compile_c4

        return compile_c4(self)

    def compile_system(self) -> SystemIR:
        from gds_domains.software.c4.compile import compile_c4_to_system

        return compile_c4_to_system(self)
```

# gds_domains.software.common

Shared types, errors, and compilation utilities.

## Diagram Kinds

Bases: `StrEnum`

The six software architecture diagram types.

Source code in `packages/gds-domains/gds_domains/software/common/types.py`

```
class DiagramKind(StrEnum):
    """The six software architecture diagram types."""

    DFD = "dfd"
    STATE_MACHINE = "state_machine"
    COMPONENT = "component"
    C4 = "c4"
    ERD = "erd"
    DEPENDENCY = "dependency"
```

## Errors

Bases: `GDSError`

Base exception for software architecture DSL errors.
Source code in `packages/gds-domains/gds_domains/software/common/errors.py`

```
class SWError(GDSError):
    """Base exception for software architecture DSL errors."""
```

Bases: `SWError`

Raised when a software architecture model fails structural validation.

Source code in `packages/gds-domains/gds_domains/software/common/errors.py`

```
class SWValidationError(SWError):
    """Raised when a software architecture model fails structural validation."""
```

Bases: `SWError`

Raised when compilation of a software architecture model fails.

Source code in `packages/gds-domains/gds_domains/software/common/errors.py`

```
class SWCompilationError(SWError):
    """Raised when compilation of a software architecture model fails."""
```

## Compilation Utilities

Compose a list of blocks in parallel.

Source code in `packages/gds-domains/gds_domains/software/common/compile_utils.py`

```
def parallel_tier(blocks: list[Block]) -> Block:
    """Compose a list of blocks in parallel."""
    tier: Block = blocks[0]
    for b in blocks[1:]:
        tier = tier | b
    return tier
```

Build explicit wirings between two tiers based on port token overlap.

For each output port in the first tier, find matching input ports in the second tier (by token intersection). This replaces auto-wiring so we can use explicit StackComposition and bypass the token overlap validator.

Source code in `packages/gds-domains/gds_domains/software/common/compile_utils.py`

```
def build_inter_tier_wirings(
    first_tier_blocks: list[Block],
    second_tier_blocks: list[Block],
) -> list[Wiring]:
    """Build explicit wirings between two tiers based on port token overlap.

    For each output port in the first tier, find matching input ports
    in the second tier (by token intersection). This replaces auto-wiring
    so we can use explicit StackComposition and bypass the token overlap
    validator.
    """
    wirings: list[Wiring] = []
    for first_block in first_tier_blocks:
        for out_port in first_block.interface.forward_out:
            for second_block in second_tier_blocks:
                for in_port in second_block.interface.forward_in:
                    if out_port.type_tokens & in_port.type_tokens:
                        wirings.append(
                            Wiring(
                                source_block=first_block.name,
                                source_port=out_port.name,
                                target_block=second_block.name,
                                target_port=in_port.name,
                            )
                        )
    return wirings
```

Compose two tiers sequentially with explicit wiring.

Uses StackComposition directly to bypass the auto-wire token overlap check. If no wirings found, falls back to auto-wiring via >>.

Source code in `packages/gds-domains/gds_domains/software/common/compile_utils.py`

```
def sequential_with_explicit_wiring(
    first: Block,
    second: Block,
    wiring: list[Wiring],
) -> Block:
    """Compose two tiers sequentially with explicit wiring.

    Uses StackComposition directly to bypass the auto-wire token overlap
    check. If no wirings found, falls back to auto-wiring via >>.
    """
    if wiring:
        return StackComposition(
            name=f"{first.name} >> {second.name}",
            first=first,
            second=second,
            wiring=wiring,
        )
    return first >> second
```

# gds_domains.software.component.checks

Component diagram verification checks (CP-001..CP-004).

CP-001: Every required interface is satisfied by a connector.
Source code in `packages/gds-domains/gds_domains/software/component/checks.py`

```
def check_cp001_interface_satisfaction(model: ComponentModel) -> list[Finding]:
    """CP-001: Every required interface is satisfied by a connector."""
    findings: list[Finding] = []

    # Collect all satisfied required interfaces
    satisfied: set[tuple[str, str]] = set()
    for conn in model.connectors:
        satisfied.add((conn.target, conn.target_interface))

    for comp in model.components:
        for req in comp.requires:
            is_satisfied = (comp.name, req) in satisfied
            findings.append(
                Finding(
                    check_id="CP-001",
                    severity=Severity.ERROR,
                    message=(
                        f"Component {comp.name!r} required interface {req!r} "
                        f"{'is' if is_satisfied else 'is NOT'} satisfied"
                    ),
                    source_elements=[comp.name, req],
                    passed=is_satisfied,
                )
            )
    return findings
```

CP-002: Connector source/target reference declared components and interfaces.

Source code in `packages/gds-domains/gds_domains/software/component/checks.py`

```
def check_cp002_connector_validity(model: ComponentModel) -> list[Finding]:
    """CP-002: Connector source/target reference declared components and interfaces."""
    findings: list[Finding] = []
    comp_map = {c.name: c for c in model.components}
    for conn in model.connectors:
        src_valid = (
            conn.source in comp_map
            and conn.source_interface in comp_map[conn.source].provides
        )
        findings.append(
            Finding(
                check_id="CP-002",
                severity=Severity.ERROR,
                message=(
                    f"Connector {conn.name!r} source {conn.source!r}."
                    f"{conn.source_interface!r} "
                    f"{'is' if src_valid else 'is NOT'} valid"
                ),
                source_elements=[conn.name, conn.source],
                passed=src_valid,
            )
        )
        tgt_valid = (
            conn.target in comp_map
            and conn.target_interface in comp_map[conn.target].requires
        )
        findings.append(
            Finding(
                check_id="CP-002",
                severity=Severity.ERROR,
                message=(
                    f"Connector {conn.name!r} target {conn.target!r}."
                    f"{conn.target_interface!r} "
                    f"{'is' if tgt_valid else 'is NOT'} valid"
                ),
                source_elements=[conn.name, conn.target],
                passed=tgt_valid,
            )
        )
    return findings
```

CP-003: Every provided interface is consumed by a connector or is external.

Source code in `packages/gds-domains/gds_domains/software/component/checks.py`

```
def check_cp003_dangling_interfaces(model: ComponentModel) -> list[Finding]:
    """CP-003: Every provided interface is consumed by a connector or is external."""
    findings: list[Finding] = []
    consumed: set[tuple[str, str]] = set()
    for conn in model.connectors:
        consumed.add((conn.source, conn.source_interface))

    for comp in model.components:
        for prov in comp.provides:
            is_consumed = (comp.name, prov) in consumed
            findings.append(
                Finding(
                    check_id="CP-003",
                    severity=Severity.WARNING,
                    message=(
                        f"Component {comp.name!r} provided interface {prov!r} "
                        f"{'is' if is_consumed else 'is NOT'} consumed by a connector"
                    ),
                    source_elements=[comp.name, prov],
                    passed=is_consumed,
                )
            )
    return findings
```

CP-004: No duplicate component names.

Source code in `packages/gds-domains/gds_domains/software/component/checks.py`

```
def check_cp004_component_naming(model: ComponentModel) -> list[Finding]:
    """CP-004: No duplicate component names."""
    findings: list[Finding] = []
    seen: set[str] = set()
    for comp in model.components:
        is_unique = comp.name not in seen
        findings.append(
            Finding(
                check_id="CP-004",
                severity=Severity.ERROR,
                message=(
                    f"Component name {comp.name!r} "
                    f"{'is' if is_unique else 'is NOT'} unique"
                ),
                source_elements=[comp.name],
                passed=is_unique,
            )
        )
        seen.add(comp.name)
    return findings
```

# gds_domains.software.component.compile

Compiler: ComponentModel -> GDSSpec / SystemIR.

## Public Functions

Compile a ComponentModel into a GDSSpec.
Source code in `packages/gds-domains/gds_domains/software/component/compile.py`

```
def compile_component(model: ComponentModel) -> GDSSpec:
    """Compile a ComponentModel into a GDSSpec."""
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(ComponentDataType, ComponentStateType)

    # 2. Register spaces
    spec.collect(ComponentDataSpace, ComponentStateSpace)

    # 3. Register entities for stateful components
    for comp in model.components:
        if comp.stateful:
            spec.register_entity(_build_component_entity(comp))

    # 4. Register blocks
    for comp in model.components:
        spec.register_block(_build_component_block(comp, model))

    # 5. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for conn in model.connectors:
        wires.append(
            Wire(
                source=conn.source,
                target=conn.target,
                space="CP DataSpace",
            )
        )
    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for component diagram {model.name!r}",
        )
    )

    # Component diagrams are atemporal — structural architecture
    spec.execution_contract = ExecutionContract(time_domain="atemporal")
    return spec
```

Compile a ComponentModel directly to SystemIR.

Source code in `packages/gds-domains/gds_domains/software/component/compile.py`

```
def compile_component_to_system(model: ComponentModel) -> SystemIR:
    """Compile a ComponentModel directly to SystemIR."""
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.component.elements

Component diagram element declarations -- frozen Pydantic models for user-facing declarations.

Bases: `BaseModel`

A software component with provided and required interfaces.

Maps to: GDS Policy (if stateless) or Mechanism (if stateful).
Source code in `packages/gds-domains/gds_domains/software/component/elements.py`

```
class Component(BaseModel, frozen=True):
    """A software component with provided and required interfaces.

    Maps to: GDS Policy (if stateless) or Mechanism (if stateful).
    """

    name: str
    provides: list[str] = Field(default_factory=list)
    requires: list[str] = Field(default_factory=list)
    stateful: bool = False
    description: str = ""
```

Bases: `BaseModel`

A named interface that a component provides or requires.

Interfaces define the contract between components.

Source code in `packages/gds-domains/gds_domains/software/component/elements.py`

```
class InterfaceDef(BaseModel, frozen=True):
    """A named interface that a component provides or requires.

    Interfaces define the contract between components.
    """

    name: str
    description: str = ""
```

Bases: `BaseModel`

A connector between component interfaces.

Maps to: GDS Wiring. Connects a provided interface to a required interface.

Source code in `packages/gds-domains/gds_domains/software/component/elements.py`

```
class Connector(BaseModel, frozen=True):
    """A connector between component interfaces.

    Maps to: GDS Wiring. Connects a provided interface to a required interface.
    """

    name: str
    source: str
    source_interface: str
    target: str
    target_interface: str
```

# gds_domains.software.component.model

ComponentModel -- declarative container for component diagrams.

Bases: `BaseModel`

A complete component diagram declaration.

Validates at construction:

1. At least one component
2. No duplicate component names
3. Connector source/target reference declared components
4. Connector interfaces reference declared interfaces on their components

Source code in `packages/gds-domains/gds_domains/software/component/model.py`

```
class ComponentModel(BaseModel):
    """A complete component diagram declaration.

    Validates at construction:
    1. At least one component
    2. No duplicate component names
    3. Connector source/target reference declared components
    4. Connector interfaces reference declared interfaces on their components
    """

    name: str
    components: list[Component]
    interfaces: list[InterfaceDef] = Field(default_factory=list)
    connectors: list[Connector] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one component
        if not self.components:
            errors.append("Component diagram must have at least one component")

        # 2. No duplicate names
        names: list[str] = [c.name for c in self.components]
        seen: set[str] = set()
        for n in names:
            if n in seen:
                errors.append(f"Duplicate component name: {n!r}")
            seen.add(n)

        comp_names = set(names)
        comp_map = {c.name: c for c in self.components}

        # 3 & 4. Connector validation
        for conn in self.connectors:
            if conn.source not in comp_names:
                errors.append(
                    f"Connector {conn.name!r} source {conn.source!r} "
                    f"is not a declared component"
                )
            elif conn.source_interface not in comp_map[conn.source].provides:
                errors.append(
                    f"Connector {conn.name!r} source interface "
                    f"{conn.source_interface!r} is not provided by {conn.source!r}"
                )
            if conn.target not in comp_names:
                errors.append(
                    f"Connector {conn.name!r} target {conn.target!r} "
                    f"is not a declared component"
                )
            elif conn.target_interface not in comp_map[conn.target].requires:
                errors.append(
                    f"Connector {conn.name!r} target interface "
                    f"{conn.target_interface!r} is not required by {conn.target!r}"
                )

        if errors:
            raise SWValidationError(
                f"ComponentModel {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────

    @property
    def component_names(self) -> set[str]:
        return {c.name for c in self.components}

    # ── Compilation ─────────────────────────────────────────

    def compile(self) -> GDSSpec:
        from gds_domains.software.component.compile import compile_component

        return compile_component(self)

    def compile_system(self) -> SystemIR:
        from gds_domains.software.component.compile import compile_component_to_system

        return compile_component_to_system(self)
```

# gds_domains.software.dependency.checks

Dependency graph verification checks (DG-001..DG-004).

DG-001: Dependency source/target are declared modules.

Source code in `packages/gds-domains/gds_domains/software/dependency/checks.py`

```
def check_dg001_dep_validity(model: DependencyModel) -> list[Finding]:
    """DG-001: Dependency source/target are declared modules."""
    findings: list[Finding] = []
    for dep in model.deps:
        src_valid = dep.source in model.module_names
        findings.append(
            Finding(
                check_id="DG-001",
                severity=Severity.ERROR,
                message=(
                    f"Dep source {dep.source!r} "
                    f"{'is' if src_valid else 'is NOT'} a declared module"
                ),
                source_elements=[dep.source],
                passed=src_valid,
            )
        )
        tgt_valid = dep.target in model.module_names
        findings.append(
            Finding(
                check_id="DG-001",
                severity=Severity.ERROR,
                message=(
                    f"Dep target {dep.target!r} "
                    f"{'is' if tgt_valid else 'is NOT'} a declared module"
                ),
                source_elements=[dep.target],
                passed=tgt_valid,
            )
        )
    return findings
```

DG-002: No cycles in the dependency graph.
Source code in `packages/gds-domains/gds_domains/software/dependency/checks.py`

```
def check_dg002_acyclicity(model: DependencyModel) -> list[Finding]:
    """DG-002: No cycles in the dependency graph."""
    adj: dict[str, list[str]] = {m.name: [] for m in model.modules}
    for dep in model.deps:
        if dep.source in adj:
            adj[dep.source].append(dep.target)

    WHITE, GRAY, BLACK = 0, 1, 2
    color: dict[str, int] = {name: WHITE for name in adj}
    cycle_members: list[str] = []

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for neighbor in adj[node]:
            if neighbor not in color:
                continue
            if color[neighbor] == GRAY:
                cycle_members.append(node)
                cycle_members.append(neighbor)
                return True
            if color[neighbor] == WHITE and dfs(neighbor):
                return True
        color[node] = BLACK
        return False

    has_cycle = any(dfs(name) for name in adj if color[name] == WHITE)
    if has_cycle:
        return [
            Finding(
                check_id="DG-002",
                severity=Severity.ERROR,
                message=f"Cycle detected in dependency graph: {cycle_members}",
                source_elements=list(set(cycle_members)),
                passed=False,
            )
        ]
    return [
        Finding(
            check_id="DG-002",
            severity=Severity.ERROR,
            message="Dependency graph is acyclic",
            source_elements=list(adj.keys()),
            passed=True,
        )
    ]
```

## `check_dg003_layer_ordering()`

DG-003: Dependencies only go from higher layers to lower layers.
Source code in `packages/gds-domains/gds_domains/software/dependency/checks.py`

```
def check_dg003_layer_ordering(model: DependencyModel) -> list[Finding]:
    """DG-003: Dependencies only go from higher layers to lower layers."""
    findings: list[Finding] = []
    module_layer = {m.name: m.layer for m in model.modules}
    for dep in model.deps:
        if dep.source not in module_layer or dep.target not in module_layer:
            continue
        src_layer = module_layer[dep.source]
        tgt_layer = module_layer[dep.target]
        valid = src_layer > tgt_layer or src_layer == tgt_layer
        findings.append(
            Finding(
                check_id="DG-003",
                severity=Severity.WARNING,
                message=(
                    f"Dep {dep.source!r} (layer {src_layer}) -> "
                    f"{dep.target!r} (layer {tgt_layer}): "
                    f"{'valid' if valid else 'upward dependency'}"
                ),
                source_elements=[dep.source, dep.target],
                passed=valid,
            )
        )
    if not findings:
        findings.append(
            Finding(
                check_id="DG-003",
                severity=Severity.WARNING,
                message="No dependencies to check layer ordering",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

## `check_dg004_module_connectivity()`

DG-004: Every module is connected (has at least one dependency).

Source code in `packages/gds-domains/gds_domains/software/dependency/checks.py`

```
def check_dg004_module_connectivity(model: DependencyModel) -> list[Finding]:
    """DG-004: Every module is connected (has at least one dependency)."""
    findings: list[Finding] = []
    connected: set[str] = set()
    for dep in model.deps:
        connected.add(dep.source)
        connected.add(dep.target)
    for mod in model.modules:
        is_connected = mod.name in connected
        findings.append(
            Finding(
                check_id="DG-004",
                severity=Severity.WARNING,
                message=(
                    f"Module {mod.name!r} "
                    f"{'is' if is_connected else 'is NOT'} connected"
                ),
                source_elements=[mod.name],
                passed=is_connected,
            )
        )
    return findings
```

# gds_domains.software.dependency.compile

Compiler: DependencyModel -> GDSSpec / SystemIR.

## Public Functions

### `compile_dep()`

Compile a DependencyModel into a GDSSpec.
Source code in `packages/gds-domains/gds_domains/software/dependency/compile.py`

```
def compile_dep(model: DependencyModel) -> GDSSpec:
    """Compile a DependencyModel into a GDSSpec."""
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(ModuleType)

    # 2. Register spaces
    spec.collect(ModuleSpace)

    # 3. Register blocks
    for mod in model.modules:
        spec.register_block(_build_module_block(mod, model))

    # 4. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for dep in model.deps:
        wires.append(Wire(source=dep.target, target=dep.source, space="DG ModuleSpace"))

    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for dependency graph {model.name!r}",
        )
    )

    # Dependency graphs are atemporal — structural relationships
    spec.execution_contract = ExecutionContract(time_domain="atemporal")
    return spec
```

### `compile_dep_to_system()`

Compile a DependencyModel directly to SystemIR.

Source code in `packages/gds-domains/gds_domains/software/dependency/compile.py`

```
def compile_dep_to_system(model: DependencyModel) -> SystemIR:
    """Compile a DependencyModel directly to SystemIR."""
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.dependency.elements

Dependency graph element declarations -- frozen Pydantic models for user-facing declarations.

## `Module`

Bases: `BaseModel`

A module or package in a dependency graph.

Maps to: GDS Policy.

Source code in `packages/gds-domains/gds_domains/software/dependency/elements.py`

```
class Module(BaseModel, frozen=True):
    """A module or package in a dependency graph.

    Maps to: GDS Policy.
    """

    name: str
    layer: int = 0
    description: str = ""
```

## `Dep`

Bases: `BaseModel`

A directed dependency between modules.

Maps to: GDS Wiring (forward dependency).
Source code in `packages/gds-domains/gds_domains/software/dependency/elements.py`

```
class Dep(BaseModel, frozen=True):
    """A directed dependency between modules.

    Maps to: GDS Wiring (forward dependency).
    """

    source: str
    target: str
    description: str = ""
```

## `Layer`

Bases: `BaseModel`

A named layer for organizing modules.

Layers define ordering constraints — a module at layer N should only depend on modules at layer < N.

Source code in `packages/gds-domains/gds_domains/software/dependency/elements.py`

```
class Layer(BaseModel, frozen=True):
    """A named layer for organizing modules.

    Layers define ordering constraints — a module at layer N
    should only depend on modules at layer < N.
    """

    name: str
    depth: int = Field(description="Layer depth (0 = foundation)")
    description: str = ""
```

# gds_domains.software.dependency.model

DependencyModel -- declarative container for dependency graphs.

## `DependencyModel`

Bases: `BaseModel`

A complete dependency graph declaration.

Validates at construction:

1. At least one module
2. No duplicate module names
3. Dep source/target reference declared modules

Source code in `packages/gds-domains/gds_domains/software/dependency/model.py`

```
class DependencyModel(BaseModel):
    """A complete dependency graph declaration.

    Validates at construction:
    1. At least one module
    2. No duplicate module names
    3. Dep source/target reference declared modules
    """

    name: str
    modules: list[Module]
    deps: list[Dep] = Field(default_factory=list)
    layers: list[Layer] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one module
        if not self.modules:
            errors.append("Dependency graph must have at least one module")

        # 2. No duplicate names
        names: list[str] = [m.name for m in self.modules]
        seen: set[str] = set()
        for n in names:
            if n in seen:
                errors.append(f"Duplicate module name: {n!r}")
            seen.add(n)
        module_names = set(names)

        # 3. Dep source/target reference declared modules
        for dep in self.deps:
            if dep.source not in module_names:
                errors.append(
                    f"Dependency source {dep.source!r} is not a declared module"
                )
            if dep.target not in module_names:
                errors.append(
                    f"Dependency target {dep.target!r} is not a declared module"
                )

        if errors:
            raise SWValidationError(
                f"DependencyModel {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────
    @property
    def module_names(self) -> set[str]:
        return {m.name for m in self.modules}

    # ── Compilation ─────────────────────────────────────────
    def compile(self) -> GDSSpec:
        from gds_domains.software.dependency.compile import compile_dep

        return compile_dep(self)

    def compile_system(self) -> SystemIR:
        from gds_domains.software.dependency.compile import compile_dep_to_system

        return compile_dep_to_system(self)
```

# gds_domains.software.dfd.checks

DFD verification checks (DFD-001..DFD-005).

## `check_dfd001_process_connectivity()`

DFD-001: Every process has at least one connected flow.

Source code in `packages/gds-domains/gds_domains/software/dfd/checks.py`

```
def check_dfd001_process_connectivity(model: DFDModel) -> list[Finding]:
    """DFD-001: Every process has at least one connected flow."""
    findings: list[Finding] = []
    for proc in model.processes:
        connected = any(
            f.source == proc.name or f.target == proc.name for f in model.data_flows
        )
        findings.append(
            Finding(
                check_id="DFD-001",
                severity=Severity.WARNING,
                message=(
                    f"Process {proc.name!r} has no connected flows"
                    if not connected
                    else f"Process {proc.name!r} has connected flows"
                ),
                source_elements=[proc.name],
                passed=connected,
            )
        )
    return findings
```

## `check_dfd002_flow_validity()`

DFD-002: Flow source/target are declared elements.
Source code in `packages/gds-domains/gds_domains/software/dfd/checks.py`

```
def check_dfd002_flow_validity(model: DFDModel) -> list[Finding]:
    """DFD-002: Flow source/target are declared elements."""
    findings: list[Finding] = []
    all_names = model.element_names
    for flow in model.data_flows:
        src_valid = flow.source in all_names
        findings.append(
            Finding(
                check_id="DFD-002",
                severity=Severity.ERROR,
                message=(
                    f"Flow {flow.name!r} source {flow.source!r} "
                    f"{'is' if src_valid else 'is NOT'} a declared element"
                ),
                source_elements=[flow.name, flow.source],
                passed=src_valid,
            )
        )
        tgt_valid = flow.target in all_names
        findings.append(
            Finding(
                check_id="DFD-002",
                severity=Severity.ERROR,
                message=(
                    f"Flow {flow.name!r} target {flow.target!r} "
                    f"{'is' if tgt_valid else 'is NOT'} a declared element"
                ),
                source_elements=[flow.name, flow.target],
                passed=tgt_valid,
            )
        )
    return findings
```

## `check_dfd003_no_ext_to_ext()`

DFD-003: No direct flows between two external entities.

Source code in `packages/gds-domains/gds_domains/software/dfd/checks.py`

```
def check_dfd003_no_ext_to_ext(model: DFDModel) -> list[Finding]:
    """DFD-003: No direct flows between two external entities."""
    findings: list[Finding] = []
    ext_names = model.external_names
    for flow in model.data_flows:
        is_ext_to_ext = flow.source in ext_names and flow.target in ext_names
        findings.append(
            Finding(
                check_id="DFD-003",
                severity=Severity.ERROR,
                message=(
                    f"Flow {flow.name!r} connects two external entities "
                    f"({flow.source!r} -> {flow.target!r})"
                    if is_ext_to_ext
                    else f"Flow {flow.name!r} does not connect two externals"
                ),
                source_elements=[flow.name],
                passed=not is_ext_to_ext,
            )
        )
    return findings
```

## `check_dfd004_store_connectivity()`

DFD-004: Every data store has at least one connected flow.
Source code in `packages/gds-domains/gds_domains/software/dfd/checks.py`

```
def check_dfd004_store_connectivity(model: DFDModel) -> list[Finding]:
    """DFD-004: Every data store has at least one connected flow."""
    findings: list[Finding] = []
    for store in model.data_stores:
        connected = any(
            f.source == store.name or f.target == store.name for f in model.data_flows
        )
        findings.append(
            Finding(
                check_id="DFD-004",
                severity=Severity.WARNING,
                message=(
                    f"DataStore {store.name!r} has no connected flows"
                    if not connected
                    else f"DataStore {store.name!r} has connected flows"
                ),
                source_elements=[store.name],
                passed=connected,
            )
        )
    return findings
```

## `check_dfd005_process_output()`

DFD-005: Every process has at least one outgoing flow.

Source code in `packages/gds-domains/gds_domains/software/dfd/checks.py`

```
def check_dfd005_process_output(model: DFDModel) -> list[Finding]:
    """DFD-005: Every process has at least one outgoing flow."""
    findings: list[Finding] = []
    for proc in model.processes:
        has_output = any(f.source == proc.name for f in model.data_flows)
        findings.append(
            Finding(
                check_id="DFD-005",
                severity=Severity.WARNING,
                message=(
                    f"Process {proc.name!r} has no outgoing flows"
                    if not has_output
                    else f"Process {proc.name!r} has outgoing flows"
                ),
                source_elements=[proc.name],
                passed=has_output,
            )
        )
    return findings
```

# gds_domains.software.dfd.compile

Compiler: DFDModel -> GDSSpec / SystemIR.

## Semantic Types

## Public Functions

### `compile_dfd()`

Compile a DFDModel into a GDSSpec.

Registers: types, spaces, entities, blocks, wirings.

Source code in `packages/gds-domains/gds_domains/software/dfd/compile.py`

```
def compile_dfd(model: DFDModel) -> GDSSpec:
    """Compile a DFDModel into a GDSSpec.

    Registers: types, spaces, entities, blocks, wirings.
    """
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(SignalType, DataType, ContentType)

    # 2. Register spaces
    spec.collect(SignalSpace, DataSpace, ContentSpace)

    # 3. Register entities (one per data store)
    for store in model.data_stores:
        spec.register_entity(_build_store_entity(store))

    # 4. Register blocks
    for ext in model.external_entities:
        spec.register_block(_build_external_block(ext))
    for proc in model.processes:
        spec.register_block(_build_process_block(proc, model))
    for store in model.data_stores:
        spec.register_block(_build_store_mechanism(store, model))

    # 5. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for flow in model.data_flows:
        source_block = flow.source
        target_block = flow.target
        # Determine the space based on source type
        if flow.source in model.external_names:
            space = "DFD SignalSpace"
        elif flow.source in model.store_names:
            space = "DFD ContentSpace"
            source_block = _store_block_name(flow.source)
        else:
            space = "DFD DataSpace"
        if flow.target in model.store_names:
            target_block = _store_block_name(flow.target)
        wires.append(Wire(source=source_block, target=target_block, space=space))

    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for DFD {model.name!r}",
        )
    )

    # DFDs are atemporal — structural data flow diagrams
    spec.execution_contract = ExecutionContract(time_domain="atemporal")
    return spec
```

### `compile_dfd_to_system()`

Compile a DFDModel directly to SystemIR.

Builds the composition tree and delegates to GDS compile_system().

Source code in `packages/gds-domains/gds_domains/software/dfd/compile.py`

```
def compile_dfd_to_system(model: DFDModel) -> SystemIR:
    """Compile a DFDModel directly to SystemIR.

    Builds the composition tree and delegates to GDS compile_system().
    """
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.dfd.elements

DFD element declarations -- frozen Pydantic models for user-facing declarations.

## `ExternalEntity`

Bases: `BaseModel`

An external actor that produces or consumes data.

Maps to: GDS BoundaryAction (exogenous input U). Emits a Signal port; has no internal inputs.

Source code in `packages/gds-domains/gds_domains/software/dfd/elements.py`

```
class ExternalEntity(BaseModel, frozen=True):
    """An external actor that produces or consumes data.

    Maps to: GDS BoundaryAction (exogenous input U).
    Emits a Signal port; has no internal inputs.
    """

    name: str
    description: str = ""
```

## `Process`

Bases: `BaseModel`

A data transformation or processing step.

Maps to: GDS Policy (decision logic g). Receives data flows as input, produces data flows as output.

Source code in `packages/gds-domains/gds_domains/software/dfd/elements.py`

```
class Process(BaseModel, frozen=True):
    """A data transformation or processing step.

    Maps to: GDS Policy (decision logic g).
    Receives data flows as input, produces data flows as output.
    """

    name: str
    description: str = ""
```

## `DataStore`

Bases: `BaseModel`

A data repository or database.

Maps to: GDS Mechanism (state update f) + Entity (state X). Receives write flows, emits content for read flows.

Source code in `packages/gds-domains/gds_domains/software/dfd/elements.py`

```
class DataStore(BaseModel, frozen=True):
    """A data repository or database.

    Maps to: GDS Mechanism (state update f) + Entity (state X).
    Receives write flows, emits content for read flows.
    """

    name: str
    description: str = ""
```

## `DataFlow`

Bases: `BaseModel`

A directed data flow between DFD elements.

Maps to: GDS Wiring (connects elements). The source and target reference element names.

Source code in `packages/gds-domains/gds_domains/software/dfd/elements.py`

```
class DataFlow(BaseModel, frozen=True):
    """A directed data flow between DFD elements.

    Maps to: GDS Wiring (connects elements).
    The source and target reference element names.
    """

    name: str
    source: str
    target: str
    data: str = Field(default="", description="Description of data carried")
```

# gds_domains.software.dfd.model

DFDModel -- declarative container for data flow diagrams.

## `DFDModel`

Bases: `BaseModel`

A complete data flow diagram declaration.

Validates at construction:

1. At least one process
2. No duplicate element names across all lists
3. Flow source/target reference declared element names
4. No direct external-to-external flows
5. Every process has at least one connected flow

Source code in `packages/gds-domains/gds_domains/software/dfd/model.py`

```
class DFDModel(BaseModel):
    """A complete data flow diagram declaration.

    Validates at construction:
    1. At least one process
    2. No duplicate element names across all lists
    3. Flow source/target reference declared element names
    4. No direct external-to-external flows
    5. Every process has at least one connected flow
    """

    name: str
    external_entities: list[ExternalEntity] = Field(default_factory=list)
    processes: list[Process] = Field(default_factory=list)
    data_stores: list[DataStore] = Field(default_factory=list)
    data_flows: list[DataFlow] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one process
        if not self.processes:
            errors.append("DFD must have at least one process")

        # 2. No duplicate names
        all_names: list[str] = []
        for e in self.external_entities:
            all_names.append(e.name)
        for p in self.processes:
            all_names.append(p.name)
        for d in self.data_stores:
            all_names.append(d.name)
        seen: set[str] = set()
        for n in all_names:
            if n in seen:
                errors.append(f"Duplicate element name: {n!r}")
            seen.add(n)
        all_element_names = set(all_names)

        # 3. Flow source/target reference declared elements
        for flow in self.data_flows:
            if flow.source not in all_element_names:
                errors.append(
                    f"Flow {flow.name!r} source {flow.source!r} "
                    f"is not a declared element"
                )
            if flow.target not in all_element_names:
                errors.append(
                    f"Flow {flow.name!r} target {flow.target!r} "
                    f"is not a declared element"
                )

        # 4. No direct external-to-external flows
        ext_names = {e.name for e in self.external_entities}
        for flow in self.data_flows:
            if flow.source in ext_names and flow.target in ext_names:
                errors.append(
                    f"Flow {flow.name!r} connects two external entities "
                    f"({flow.source!r} -> {flow.target!r})"
                )

        if errors:
            raise SWValidationError(
                f"DFDModel {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────
    @property
    def element_names(self) -> set[str]:
        """All element names in the model."""
        names: set[str] = set()
        for e in self.external_entities:
            names.add(e.name)
        for p in self.processes:
            names.add(p.name)
        for d in self.data_stores:
            names.add(d.name)
        return names

    @property
    def external_names(self) -> set[str]:
        return {e.name for e in self.external_entities}

    @property
    def process_names(self) -> set[str]:
        return {p.name for p in self.processes}

    @property
    def store_names(self) -> set[str]:
        return {d.name for d in self.data_stores}

    # ── Compilation ─────────────────────────────────────────
    def compile(self) -> GDSSpec:
        """Compile this model to a GDS specification."""
        from gds_domains.software.dfd.compile import compile_dfd

        return compile_dfd(self)

    def compile_system(self) -> SystemIR:
        """Compile this model to a flat SystemIR for verification + visualization."""
        from gds_domains.software.dfd.compile import compile_dfd_to_system

        return compile_dfd_to_system(self)
```

## `element_names`

All element names in the model.

## `compile()`

Compile this model to a GDS specification.

Source code in `packages/gds-domains/gds_domains/software/dfd/model.py`

```
def compile(self) -> GDSSpec:
    """Compile this model to a GDS specification."""
    from gds_domains.software.dfd.compile import compile_dfd

    return compile_dfd(self)
```

## `compile_system()`

Compile this model to a flat SystemIR for verification + visualization.
Source code in `packages/gds-domains/gds_domains/software/dfd/model.py`

```
def compile_system(self) -> SystemIR:
    """Compile this model to a flat SystemIR for verification + visualization."""
    from gds_domains.software.dfd.compile import compile_dfd_to_system

    return compile_dfd_to_system(self)
```

# gds_domains.software.erd.checks

ERD verification checks (ER-001..ER-004).

## `check_er001_relationship_validity()`

ER-001: Relationship source/target are declared entities.

Source code in `packages/gds-domains/gds_domains/software/erd/checks.py`

```
def check_er001_relationship_validity(model: ERDModel) -> list[Finding]:
    """ER-001: Relationship source/target are declared entities."""
    findings: list[Finding] = []
    for rel in model.relationships:
        src_valid = rel.source in model.entity_names
        findings.append(
            Finding(
                check_id="ER-001",
                severity=Severity.ERROR,
                message=(
                    f"Relationship {rel.name!r} source {rel.source!r} "
                    f"{'is' if src_valid else 'is NOT'} a declared entity"
                ),
                source_elements=[rel.name, rel.source],
                passed=src_valid,
            )
        )
        tgt_valid = rel.target in model.entity_names
        findings.append(
            Finding(
                check_id="ER-001",
                severity=Severity.ERROR,
                message=(
                    f"Relationship {rel.name!r} target {rel.target!r} "
                    f"{'is' if tgt_valid else 'is NOT'} a declared entity"
                ),
                source_elements=[rel.name, rel.target],
                passed=tgt_valid,
            )
        )
    return findings
```

## `check_er002_pk_existence()`

ER-002: Every entity has at least one primary key attribute.

Source code in `packages/gds-domains/gds_domains/software/erd/checks.py`

```
def check_er002_pk_existence(model: ERDModel) -> list[Finding]:
    """ER-002: Every entity has at least one primary key attribute."""
    findings: list[Finding] = []
    for entity in model.entities:
        has_pk = any(a.is_primary_key for a in entity.attributes)
        findings.append(
            Finding(
                check_id="ER-002",
                severity=Severity.WARNING,
                message=(
                    f"Entity {entity.name!r} "
                    f"{'has' if has_pk else 'has NO'} primary key"
                ),
                source_elements=[entity.name],
                passed=has_pk,
            )
        )
    return findings
```

## `check_er003_attribute_uniqueness()`

ER-003: No duplicate attribute names within an entity.
Source code in `packages/gds-domains/gds_domains/software/erd/checks.py`

```
def check_er003_attribute_uniqueness(model: ERDModel) -> list[Finding]:
    """ER-003: No duplicate attribute names within an entity."""
    findings: list[Finding] = []
    for entity in model.entities:
        seen: set[str] = set()
        for attr in entity.attributes:
            is_unique = attr.name not in seen
            if not is_unique:
                findings.append(
                    Finding(
                        check_id="ER-003",
                        severity=Severity.ERROR,
                        message=(
                            f"Entity {entity.name!r} has duplicate "
                            f"attribute {attr.name!r}"
                        ),
                        source_elements=[entity.name, attr.name],
                        passed=False,
                    )
                )
            seen.add(attr.name)
    if not findings:
        findings.append(
            Finding(
                check_id="ER-003",
                severity=Severity.ERROR,
                message="All entity attributes are unique",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

## `check_er004_relationship_naming()`

ER-004: No duplicate relationship names.

Source code in `packages/gds-domains/gds_domains/software/erd/checks.py`

```
def check_er004_relationship_naming(model: ERDModel) -> list[Finding]:
    """ER-004: No duplicate relationship names."""
    findings: list[Finding] = []
    seen: set[str] = set()
    for rel in model.relationships:
        is_unique = rel.name not in seen
        findings.append(
            Finding(
                check_id="ER-004",
                severity=Severity.ERROR,
                message=(
                    f"Relationship name {rel.name!r} "
                    f"{'is' if is_unique else 'is NOT'} unique"
                ),
                source_elements=[rel.name],
                passed=is_unique,
            )
        )
        seen.add(rel.name)
    return findings
```

# gds_domains.software.erd.compile

Compiler: ERDModel -> GDSSpec / SystemIR.

## Public Functions

### `compile_erd()`

Compile an ERDModel into a GDSSpec.

Source code in `packages/gds-domains/gds_domains/software/erd/compile.py`

```
def compile_erd(model: ERDModel) -> GDSSpec:
    """Compile an ERDModel into a GDSSpec."""
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(ERDAttributeType)

    # 2. Register spaces
    spec.collect(ERDAttributeSpace)

    # 3. Register entities
    for entity in model.entities:
        spec.register_entity(_build_erd_entity(entity))

    # 4. Register blocks
    for rel in model.relationships:
        spec.register_block(_build_relationship_mechanism(rel, model))

    # If no relationships, create identity mechanism
    if not model.relationships:
        entity = model.entities[0]
        spec.register_block(
            Mechanism(
                name=f"{entity.name} Identity",
                interface=Interface(
                    forward_out=(port(_entity_port_name(entity.name)),),
                ),
                updates=[(entity.name, "id")],
            )
        )

    # 5. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for rel in model.relationships:
        wires.append(
            Wire(
                source=_rel_block_name(rel.name),
                target=_rel_block_name(rel.name),
                space="ERD AttributeSpace",
            )
        )
    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for ERD {model.name!r}",
        )
    )

    # ERDs are atemporal — entity relationship structure
    spec.execution_contract = ExecutionContract(time_domain="atemporal")
    return spec
```

### `compile_erd_to_system()`

Compile an ERDModel directly to SystemIR.

Source code in `packages/gds-domains/gds_domains/software/erd/compile.py`

```
def compile_erd_to_system(model: ERDModel) -> SystemIR:
    """Compile an ERDModel directly to SystemIR."""
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.erd.elements

ERD element declarations -- frozen Pydantic models for user-facing declarations.

## `ERDEntity`

Bases: `BaseModel`

An entity in an ER diagram.

Maps to: GDS Entity with StateVariables for each attribute.

Source code in `packages/gds-domains/gds_domains/software/erd/elements.py`

```
class ERDEntity(BaseModel, frozen=True):
    """An entity in an ER diagram.

    Maps to: GDS Entity with StateVariables for each attribute.
    """

    name: str
    attributes: list[Attribute] = Field(default_factory=list)
    description: str = ""
```

## `Attribute`

Bases: `BaseModel`

An attribute of an ERD entity.

Maps to: GDS StateVariable within an Entity.
Source code in `packages/gds-domains/gds_domains/software/erd/elements.py`

```
class Attribute(BaseModel, frozen=True):
    """An attribute of an ERD entity.

    Maps to: GDS StateVariable within an Entity.
    """

    name: str
    type: str = "string"
    is_primary_key: bool = False
    is_nullable: bool = True
    description: str = ""
```

## `ERDRelationship`

Bases: `BaseModel`

A relationship between ERD entities.

Maps to: GDS Mechanism + SpecWiring (cross-entity updates).

Source code in `packages/gds-domains/gds_domains/software/erd/elements.py`

```
class ERDRelationship(BaseModel, frozen=True):
    """A relationship between ERD entities.

    Maps to: GDS Mechanism + SpecWiring (cross-entity updates).
    """

    name: str
    source: str
    target: str
    cardinality: Cardinality = Cardinality.ONE_TO_MANY
    description: str = ""
```

## `Cardinality`

Bases: `StrEnum`

Relationship cardinality.

Source code in `packages/gds-domains/gds_domains/software/erd/elements.py`

```
class Cardinality(StrEnum):
    """Relationship cardinality."""

    ONE_TO_ONE = "1:1"
    ONE_TO_MANY = "1:N"
    MANY_TO_ONE = "N:1"
    MANY_TO_MANY = "N:N"
```

# gds_domains.software.erd.model

ERDModel -- declarative container for entity-relationship diagrams.

## `ERDModel`

Bases: `BaseModel`

A complete entity-relationship diagram declaration.

Validates at construction:

1. At least one entity
2. No duplicate entity names
3. Relationship source/target reference declared entities
4. No duplicate attribute names within an entity

Source code in `packages/gds-domains/gds_domains/software/erd/model.py`

```
class ERDModel(BaseModel):
    """A complete entity-relationship diagram declaration.

    Validates at construction:
    1. At least one entity
    2. No duplicate entity names
    3. Relationship source/target reference declared entities
    4. No duplicate attribute names within an entity
    """

    name: str
    entities: list[ERDEntity]
    relationships: list[ERDRelationship] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one entity
        if not self.entities:
            errors.append("ERD must have at least one entity")

        # 2. No duplicate entity names
        names: list[str] = [e.name for e in self.entities]
        seen: set[str] = set()
        for n in names:
            if n in seen:
                errors.append(f"Duplicate entity name: {n!r}")
            seen.add(n)
        entity_names = set(names)

        # 3. Relationship source/target
        for rel in self.relationships:
            if rel.source not in entity_names:
                errors.append(
                    f"Relationship {rel.name!r} source {rel.source!r} "
                    f"is not a declared entity"
                )
            if rel.target not in entity_names:
                errors.append(
                    f"Relationship {rel.name!r} target {rel.target!r} "
                    f"is not a declared entity"
                )

        # 4. No duplicate attribute names within an entity
        for entity in self.entities:
            attr_seen: set[str] = set()
            for attr in entity.attributes:
                if attr.name in attr_seen:
                    errors.append(
                        f"Entity {entity.name!r} has duplicate attribute {attr.name!r}"
                    )
                attr_seen.add(attr.name)

        if errors:
            raise SWValidationError(
                f"ERDModel {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────
    @property
    def entity_names(self) -> set[str]:
        return {e.name for e in self.entities}

    # ── Compilation ─────────────────────────────────────────
    def compile(self) -> GDSSpec:
        from gds_domains.software.erd.compile import compile_erd

        return compile_erd(self)

    def compile_system(self) -> SystemIR:
        from gds_domains.software.erd.compile import compile_erd_to_system

        return compile_erd_to_system(self)
```

# gds_domains.software

Public API -- top-level exports.

Software architecture DSL over GDS semantics.

Declare software architecture diagrams — DFDs, state machines, component diagrams, C4 models, ERDs, and dependency graphs — as typed compositional specifications. The compiler maps them to GDS role blocks, entities, and composition trees. All downstream GDS tooling works immediately — canonical projection, semantic checks, SpecQuery, serialization, gds-viz.
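To make the declarative-validation pattern used throughout these models concrete, here is a minimal standalone sketch of the DFD structural rules (declared-name references and no external-to-external flows). It deliberately does not import the package; the `Flow` dataclass and `validate_dfd` function are illustrative stand-ins, not the `gds_domains` API, which uses frozen Pydantic models and raises `SWValidationError` instead of returning a list.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Flow:
    """Illustrative stand-in for DataFlow: a named, directed flow."""
    name: str
    source: str
    target: str


def validate_dfd(externals: set[str], processes: set[str],
                 stores: set[str], flows: list[Flow]) -> list[str]:
    """Mirror a subset of the DFDModel construction rules."""
    errors: list[str] = []
    # Rule 1: at least one process
    if not processes:
        errors.append("DFD must have at least one process")
    # Rule 3: flow endpoints must be declared elements
    declared = externals | processes | stores
    for f in flows:
        if f.source not in declared:
            errors.append(f"Flow {f.name!r} source {f.source!r} is not a declared element")
        if f.target not in declared:
            errors.append(f"Flow {f.name!r} target {f.target!r} is not a declared element")
        # Rule 4: no direct external-to-external flows
        if f.source in externals and f.target in externals:
            errors.append(f"Flow {f.name!r} connects two external entities")
    return errors


errors = validate_dfd(
    externals={"User"},
    processes={"Authenticate"},
    stores={"Sessions"},
    flows=[
        Flow("login", "User", "Authenticate"),
        Flow("persist", "Authenticate", "Sessions"),
        Flow("bad", "User", "Ghost"),  # 'Ghost' was never declared
    ],
)
print(errors)  # one finding: the undeclared 'Ghost' target
```

Collecting all errors before reporting, rather than failing on the first, is the same design choice the package's `_validate_structure` validators make: one construction attempt surfaces every structural problem at once.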
# gds_domains.software.statemachine.checks

State machine verification checks (SM-001..SM-006).

## `check_sm001_initial_state()`

SM-001: Exactly one initial state exists.

Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm001_initial_state(model: StateMachineModel) -> list[Finding]:
    """SM-001: Exactly one initial state exists."""
    initial = [s for s in model.states if s.is_initial]
    passed = len(initial) == 1
    return [
        Finding(
            check_id="SM-001",
            severity=Severity.ERROR,
            message=(
                f"State machine has {len(initial)} initial state(s), expected 1"
                if not passed
                else "State machine has exactly one initial state"
            ),
            source_elements=[s.name for s in initial] or [""],
            passed=passed,
        )
    ]
```

## `check_sm002_reachability()`

SM-002: All states are reachable from the initial state.

Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm002_reachability(model: StateMachineModel) -> list[Finding]:
    """SM-002: All states are reachable from the initial state."""
    if not any(s.is_initial for s in model.states):
        return []
    initial = model.initial_state

    # BFS from initial
    reachable: set[str] = set()
    queue = [initial.name]
    while queue:
        current = queue.pop(0)
        if current in reachable:
            continue
        reachable.add(current)
        for t in model.transitions:
            if t.source == current and t.target not in reachable:
                queue.append(t.target)

    findings: list[Finding] = []
    for state in model.states:
        is_reachable = state.name in reachable
        findings.append(
            Finding(
                check_id="SM-002",
                severity=Severity.WARNING,
                message=(
                    f"State {state.name!r} is not reachable from initial state"
                    if not is_reachable
                    else f"State {state.name!r} is reachable"
                ),
                source_elements=[state.name],
                passed=is_reachable,
            )
        )
    return findings
```

## `check_sm003_determinism()`

SM-003: No two transitions from the same state on the same event (without guards).
Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm003_determinism(model: StateMachineModel) -> list[Finding]:
    """SM-003: No two transitions from the same state on the same event (without guards)."""
    findings: list[Finding] = []

    # Group transitions by (source, event)
    groups: dict[tuple[str, str], list[str]] = {}
    for t in model.transitions:
        key = (t.source, t.event)
        groups.setdefault(key, []).append(t.name)

    for (source, event), trans_names in groups.items():
        # Check if any of these transitions lack guards
        transitions = [t for t in model.transitions if t.name in trans_names]
        unguarded = [t for t in transitions if t.guard is None]
        if len(unguarded) > 1:
            findings.append(
                Finding(
                    check_id="SM-003",
                    severity=Severity.ERROR,
                    message=(
                        f"Non-deterministic: {len(unguarded)} unguarded transitions "
                        f"from {source!r} on event {event!r}: "
                        f"{[t.name for t in unguarded]}"
                    ),
                    source_elements=[t.name for t in unguarded],
                    passed=False,
                )
            )

    if not findings:
        findings.append(
            Finding(
                check_id="SM-003",
                severity=Severity.ERROR,
                message="State machine is deterministic",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

SM-004: Transitions with guards from same state/event should be exhaustive.
Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm004_guard_completeness(model: StateMachineModel) -> list[Finding]:
    """SM-004: Transitions with guards from same state/event should be exhaustive."""
    findings: list[Finding] = []
    groups: dict[tuple[str, str], list[str]] = {}
    for t in model.transitions:
        key = (t.source, t.event)
        groups.setdefault(key, []).append(t.name)

    for (source, event), trans_names in groups.items():
        transitions = [t for t in model.transitions if t.name in trans_names]
        guarded = [t for t in transitions if t.guard is not None]
        unguarded = [t for t in transitions if t.guard is None]
        if guarded and not unguarded:
            # All guarded, no default — may not be exhaustive
            findings.append(
                Finding(
                    check_id="SM-004",
                    severity=Severity.WARNING,
                    message=(
                        f"All transitions from {source!r} on {event!r} are guarded "
                        f"with no default — may not be exhaustive"
                    ),
                    source_elements=[t.name for t in guarded],
                    passed=False,
                )
            )

    if not findings:
        findings.append(
            Finding(
                check_id="SM-004",
                severity=Severity.WARNING,
                message="Guard completeness is satisfied",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

SM-005: Region states must partition the state set (no overlaps, no gaps).
Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm005_region_partition(model: StateMachineModel) -> list[Finding]:
    """SM-005: Region states must partition the state set (no overlaps, no gaps)."""
    if not model.regions:
        return [
            Finding(
                check_id="SM-005",
                severity=Severity.WARNING,
                message="No regions defined — partition check skipped",
                source_elements=[],
                passed=True,
            )
        ]

    findings: list[Finding] = []
    all_region_states: list[str] = []
    for region in model.regions:
        all_region_states.extend(region.states)

    # Check for overlaps
    seen: set[str] = set()
    for s in all_region_states:
        if s in seen:
            findings.append(
                Finding(
                    check_id="SM-005",
                    severity=Severity.ERROR,
                    message=f"State {s!r} appears in multiple regions",
                    source_elements=[s],
                    passed=False,
                )
            )
        seen.add(s)

    # Check for gaps
    for state in model.states:
        if state.name not in seen:
            findings.append(
                Finding(
                    check_id="SM-005",
                    severity=Severity.WARNING,
                    message=f"State {state.name!r} is not assigned to any region",
                    source_elements=[state.name],
                    passed=False,
                )
            )

    if not findings:
        findings.append(
            Finding(
                check_id="SM-005",
                severity=Severity.WARNING,
                message="Region partition is valid",
                source_elements=[],
                passed=True,
            )
        )
    return findings
```

SM-006: Transition source/target are declared states, events are declared.
Source code in `packages/gds-domains/gds_domains/software/statemachine/checks.py`

```
def check_sm006_transition_validity(model: StateMachineModel) -> list[Finding]:
    """SM-006: Transition source/target are declared states, events are declared."""
    findings: list[Finding] = []
    for t in model.transitions:
        src_valid = t.source in model.state_names
        findings.append(
            Finding(
                check_id="SM-006",
                severity=Severity.ERROR,
                message=(
                    f"Transition {t.name!r} source {t.source!r} "
                    f"{'is' if src_valid else 'is NOT'} a declared state"
                ),
                source_elements=[t.name, t.source],
                passed=src_valid,
            )
        )
        tgt_valid = t.target in model.state_names
        findings.append(
            Finding(
                check_id="SM-006",
                severity=Severity.ERROR,
                message=(
                    f"Transition {t.name!r} target {t.target!r} "
                    f"{'is' if tgt_valid else 'is NOT'} a declared state"
                ),
                source_elements=[t.name, t.target],
                passed=tgt_valid,
            )
        )
        evt_valid = t.event in model.event_names
        findings.append(
            Finding(
                check_id="SM-006",
                severity=Severity.ERROR,
                message=(
                    f"Transition {t.name!r} event {t.event!r} "
                    f"{'is' if evt_valid else 'is NOT'} a declared event"
                ),
                source_elements=[t.name, t.event],
                passed=evt_valid,
            )
        )
    return findings
```

# gds_domains.software.statemachine.compile

Compiler: StateMachineModel -> GDSSpec / SystemIR.

## Semantic Types

## Public Functions

Compile a StateMachineModel into a GDSSpec.

Source code in `packages/gds-domains/gds_domains/software/statemachine/compile.py`

```
def compile_sm(model: StateMachineModel) -> GDSSpec:
    """Compile a StateMachineModel into a GDSSpec."""
    spec = GDSSpec(name=model.name, description=model.description)

    # 1. Register types
    spec.collect(EventType, StateType)

    # 2. Register spaces
    spec.collect(EventSpace, StateSpace)

    # 3. Register entities (one per state)
    for state in model.states:
        spec.register_entity(_build_state_entity(state))

    # 4. Register blocks
    for event in model.events:
        spec.register_block(_build_event_block(event))
    for t in model.transitions:
        spec.register_block(_build_transition_block(t, model))
    for state in model.states:
        spec.register_block(_build_state_mechanism(state, model))

    # 5. Register spec wirings
    all_block_names = [b.name for b in spec.blocks.values()]
    wires: list[Wire] = []
    for t in model.transitions:
        wires.append(
            Wire(
                source=t.event,
                target=_transition_block_name(t.name),
                space="SM EventSpace",
            )
        )
        wires.append(
            Wire(
                source=_transition_block_name(t.name),
                target=_state_mech_name(t.target),
                space="SM StateSpace",
            )
        )
    spec.register_wiring(
        SpecWiring(
            name=f"{model.name} Wiring",
            block_names=all_block_names,
            wires=wires,
            description=f"Auto-generated wiring for state machine {model.name!r}",
        )
    )

    # State machines are discrete/synchronous/Moore
    spec.execution_contract = ExecutionContract(time_domain="discrete")
    return spec
```

Compile a StateMachineModel directly to SystemIR.

Source code in `packages/gds-domains/gds_domains/software/statemachine/compile.py`

```
def compile_sm_to_system(model: StateMachineModel) -> SystemIR:
    """Compile a StateMachineModel directly to SystemIR."""
    root = _build_composition_tree(model)
    return compile_system(model.name, root)
```

# gds_domains.software.statemachine.elements

State machine element declarations -- frozen Pydantic models for user-facing declarations.

Bases: `BaseModel`

A state in a state machine. Maps to: GDS Entity (state X) + StateVariable.

Source code in `packages/gds-domains/gds_domains/software/statemachine/elements.py`

```
class State(BaseModel, frozen=True):
    """A state in a state machine.

    Maps to: GDS Entity (state X) + StateVariable.
    """

    name: str
    is_initial: bool = False
    is_final: bool = False
    description: str = ""
```

Bases: `BaseModel`

An external or internal event that triggers transitions. Maps to: GDS BoundaryAction (exogenous input U).
Source code in `packages/gds-domains/gds_domains/software/statemachine/elements.py`

```
class Event(BaseModel, frozen=True):
    """An external or internal event that triggers transitions.

    Maps to: GDS BoundaryAction (exogenous input U).
    """

    name: str
    description: str = ""
```

Bases: `BaseModel`

A directed transition between states. Maps to: GDS Policy (guard evaluation) + Mechanism (state update).

Source code in `packages/gds-domains/gds_domains/software/statemachine/elements.py`

```
class Transition(BaseModel, frozen=True):
    """A directed transition between states.

    Maps to: GDS Policy (guard evaluation) + Mechanism (state update).
    """

    name: str
    source: str
    target: str
    event: str
    guard: Guard | None = None
    action: str = ""
```

Bases: `BaseModel`

A boolean condition on a transition. Guards are evaluated at transition time — they restrict when a transition may fire.

Source code in `packages/gds-domains/gds_domains/software/statemachine/elements.py`

```
class Guard(BaseModel, frozen=True):
    """A boolean condition on a transition.

    Guards are evaluated at transition time — they restrict
    when a transition may fire.
    """

    condition: str
    description: str = ""
```

Bases: `BaseModel`

An orthogonal region in a hierarchical state machine. Maps to: GDS ParallelComposition — regions execute concurrently.

Source code in `packages/gds-domains/gds_domains/software/statemachine/elements.py`

```
class Region(BaseModel, frozen=True):
    """An orthogonal region in a hierarchical state machine.

    Maps to: GDS ParallelComposition — regions execute concurrently.
    """

    name: str
    states: list[str] = Field(default_factory=list)
    description: str = ""
```

# gds_domains.software.statemachine.model

StateMachineModel -- declarative container for state machine diagrams.

Bases: `BaseModel`

A complete state machine declaration. Validates at construction:

1. At least one state
2. Exactly one initial state
3. No duplicate state/event names
4. Transition source/target reference declared states
5. Transition events reference declared events
6. Region states reference declared states (if regions used)

Source code in `packages/gds-domains/gds_domains/software/statemachine/model.py`

```
class StateMachineModel(BaseModel):
    """A complete state machine declaration.

    Validates at construction:
    1. At least one state
    2. Exactly one initial state
    3. No duplicate state/event names
    4. Transition source/target reference declared states
    5. Transition events reference declared events
    6. Region states reference declared states (if regions used)
    """

    name: str
    states: list[State]
    events: list[Event] = Field(default_factory=list)
    transitions: list[Transition] = Field(default_factory=list)
    regions: list[Region] = Field(default_factory=list)
    description: str = ""

    @model_validator(mode="after")
    def _validate_structure(self) -> Self:
        errors: list[str] = []

        # 1. At least one state
        if not self.states:
            errors.append("State machine must have at least one state")

        # 2. Exactly one initial state
        initial_states = [s for s in self.states if s.is_initial]
        if len(initial_states) == 0:
            errors.append("State machine must have exactly one initial state")
        elif len(initial_states) > 1:
            names = [s.name for s in initial_states]
            errors.append(f"State machine has multiple initial states: {names}")

        # 3. No duplicate names
        state_names_list: list[str] = [s.name for s in self.states]
        seen: set[str] = set()
        for n in state_names_list:
            if n in seen:
                errors.append(f"Duplicate state name: {n!r}")
            seen.add(n)
        event_names_list: list[str] = [e.name for e in self.events]
        seen_events: set[str] = set()
        for n in event_names_list:
            if n in seen_events:
                errors.append(f"Duplicate event name: {n!r}")
            seen_events.add(n)
        state_names = set(state_names_list)
        event_names = set(event_names_list)

        # 4. Transition source/target reference declared states
        for t in self.transitions:
            if t.source not in state_names:
                errors.append(
                    f"Transition {t.name!r} source {t.source!r} is not a declared state"
                )
            if t.target not in state_names:
                errors.append(
                    f"Transition {t.name!r} target {t.target!r} is not a declared state"
                )

        # 5. Transition events reference declared events
        for t in self.transitions:
            if t.event not in event_names:
                errors.append(
                    f"Transition {t.name!r} event {t.event!r} is not a declared event"
                )

        # 6. Region states reference declared states
        for region in self.regions:
            for s in region.states:
                if s not in state_names:
                    errors.append(
                        f"Region {region.name!r} references undeclared state {s!r}"
                    )

        if errors:
            raise SWValidationError(
                f"StateMachineModel {self.name!r} validation failed:\n"
                + "\n".join(f"  - {e}" for e in errors)
            )
        return self

    # ── Convenience properties ──────────────────────────────
    @property
    def state_names(self) -> set[str]:
        return {s.name for s in self.states}

    @property
    def event_names(self) -> set[str]:
        return {e.name for e in self.events}

    @property
    def initial_state(self) -> State:
        return next(s for s in self.states if s.is_initial)

    # ── Compilation ─────────────────────────────────────────
    def compile(self) -> GDSSpec:
        """Compile this model to a GDS specification."""
        from gds_domains.software.statemachine.compile import compile_sm
        return compile_sm(self)

    def compile_system(self) -> SystemIR:
        """Compile this model to a flat SystemIR for verification + visualization."""
        from gds_domains.software.statemachine.compile import compile_sm_to_system
        return compile_sm_to_system(self)
```

## `compile()`

Compile this model to a GDS specification.
Source code in `packages/gds-domains/gds_domains/software/statemachine/model.py`

```
def compile(self) -> GDSSpec:
    """Compile this model to a GDS specification."""
    from gds_domains.software.statemachine.compile import compile_sm
    return compile_sm(self)
```

## `compile_system()`

Compile this model to a flat SystemIR for verification + visualization.

Source code in `packages/gds-domains/gds_domains/software/statemachine/model.py`

```
def compile_system(self) -> SystemIR:
    """Compile this model to a flat SystemIR for verification + visualization."""
    from gds_domains.software.statemachine.compile import compile_sm_to_system
    return compile_sm_to_system(self)
```

# gds_domains.software.verification

Verification engine -- union dispatch across all software diagram types.

Run verification checks on any software architecture model. Dispatches to the appropriate domain checks based on model type, then optionally compiles to SystemIR and runs GDS generic checks.

Source code in `packages/gds-domains/gds_domains/software/verification/engine.py`

```
def verify(
    model: Any,
    domain_checks: list[Callable[..., list[Finding]]] | None = None,
    include_gds_checks: bool = True,
) -> VerificationReport:
    """Run verification checks on any software architecture model.

    Dispatches to the appropriate domain checks based on model type,
    then optionally compiles to SystemIR and runs GDS generic checks.
    """
    from gds_domains.software.c4.checks import ALL_C4_CHECKS
    from gds_domains.software.c4.model import C4Model
    from gds_domains.software.component.checks import ALL_CP_CHECKS
    from gds_domains.software.component.model import ComponentModel
    from gds_domains.software.dependency.checks import ALL_DG_CHECKS
    from gds_domains.software.dependency.model import DependencyModel
    from gds_domains.software.dfd.checks import ALL_DFD_CHECKS
    from gds_domains.software.dfd.model import DFDModel
    from gds_domains.software.erd.checks import ALL_ER_CHECKS
    from gds_domains.software.erd.model import ERDModel
    from gds_domains.software.statemachine.checks import ALL_SM_CHECKS
    from gds_domains.software.statemachine.model import StateMachineModel

    # Dispatch to appropriate checks
    if domain_checks is not None:
        checks = domain_checks
    elif isinstance(model, DFDModel):
        checks = ALL_DFD_CHECKS
    elif isinstance(model, StateMachineModel):
        checks = ALL_SM_CHECKS
    elif isinstance(model, ComponentModel):
        checks = ALL_CP_CHECKS
    elif isinstance(model, C4Model):
        checks = ALL_C4_CHECKS
    elif isinstance(model, ERDModel):
        checks = ALL_ER_CHECKS
    elif isinstance(model, DependencyModel):
        checks = ALL_DG_CHECKS
    else:
        raise TypeError(f"Unknown model type: {type(model).__name__}")

    findings: list[Finding] = []

    # Phase 1: Domain checks on model
    for check_fn in checks:
        findings.extend(check_fn(model))

    # Phase 2: GDS generic checks on compiled SystemIR
    if include_gds_checks:
        from gds.verification.engine import ALL_CHECKS as GDS_ALL_CHECKS

        system_ir = model.compile_system()
        for gds_check in GDS_ALL_CHECKS:
            findings.extend(gds_check(system_ir))

    return VerificationReport(system_name=model.name, findings=findings)
```

# Examples

# GDS Framework Examples

**Seven example models** demonstrating every [gds-framework](https://blockscience.github.io/gds-framework) feature. The first six are GDS framework tutorials with inline theory commentary. The seventh uses the OGS game theory DSL with tournament simulation.
## Quick Start

```
# Run all example tests (168 tests)
uv run pytest examples/ -v

# Run a specific example
uv run pytest examples/sir_epidemic/ -v

# Generate all 6 views for one example
uv run python examples/sir_epidemic/generate_views.py --save
```

## Learning Path

| # | Example | New Concept | Composition |
| --- | --- | --- | --- |
| 1 | [SIR Epidemic](https://blockscience.github.io/gds-core/examples/examples/sir-epidemic/index.md) | Fundamentals — TypeDef, Entity, Space, blocks | `>>` `\|` |
| 2 | [Thermostat PID](https://blockscience.github.io/gds-core/examples/examples/thermostat/index.md) | `.feedback()`, CONTRAVARIANT backward flow | `>>` `.feedback()` |
| 3 | [Lotka-Volterra](https://blockscience.github.io/gds-core/examples/examples/lotka-volterra/index.md) | `.loop()`, COVARIANT temporal iteration | `>>` `\|` `.loop()` |
| 4 | [Prisoner's Dilemma](https://blockscience.github.io/gds-core/examples/examples/prisoners-dilemma/index.md) | Nested `\|`, multi-entity X, complex trees | `\|` `>>` `.loop()` |
| 5 | [Insurance Contract](https://blockscience.github.io/gds-core/examples/examples/insurance/index.md) | ControlAction role, complete 4-role taxonomy | `>>` |
| 6 | [Crosswalk Problem](https://blockscience.github.io/gds-core/examples/examples/crosswalk/index.md) | Mechanism design, discrete Markov transitions | `>>` |
| 7 | [Evolution of Trust](https://blockscience.github.io/gds-core/examples/examples/evolution-of-trust/index.md) | OGS iterated PD, strategies, tournament simulation | OGS feedback |

Start with SIR Epidemic and work down — each introduces one new concept. The Evolution of Trust example uses gds-games (OGS) rather than gds-framework directly.
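The Composition column above uses the framework's `>>` (sequential) and `|` (parallel) operators. As a rough illustration of how such an algebra can be carried by Python operator overloading (a toy sketch building a printable expression tree, not gds-framework's actual Block classes):

```python
# Toy composition algebra via operator overloading — illustrative only,
# not the real gds-framework Block/Composition types.
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    name: str

    def __rshift__(self, other: "Block") -> "Block":
        # Sequential composition: A >> B
        return Block(f"({self.name} >> {other.name})")

    def __or__(self, other: "Block") -> "Block":
        # Parallel composition: A | B
        return Block(f"({self.name} | {other.name})")

observe, decide, update = Block("observe"), Block("decide"), Block("update")
tree = (observe | decide) >> update
print(tree.name)  # prints ((observe | decide) >> update)
```

Overloading `>>` and `|` lets a model read as a one-line pipeline while the objects underneath record the full composition tree for the compiler.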
## File Structure

Each example follows the same layout:

```
examples/sir_epidemic/
├── __init__.py         # empty
├── model.py            # types, entities, spaces, blocks, build_spec(), build_system()
├── test_model.py       # comprehensive tests for every layer
├── generate_views.py   # generates all 6 visualization views with commentary
└── VIEWS.md            # generated output — 6 Mermaid diagrams with explanations
```

## Credits

**Author:** [Rohan Mehta](https://github.com/rororowyourboat) — [BlockScience](https://block.science/)

**Theoretical foundation:** [Dr. Michael Zargham](https://github.com/mzargham) and [Dr. Jamsheed Shorish](https://github.com/jshorish)

**Lineage:** Part of the [cadCAD](https://github.com/cadCAD-org/cadCAD) ecosystem for Complex Adaptive Dynamics.

# Learning Path

Start with SIR Epidemic and work down. Each example introduces one new concept while reinforcing the previous ones.

## Progression

### 1. [SIR Epidemic](https://blockscience.github.io/gds-core/examples/examples/sir-epidemic/index.md) — Fundamentals

The foundation. Learn TypeDef, Entity, Space, BoundaryAction, Policy, Mechanism, `>>` sequential composition, and `|` parallel composition.

**Roles:** BoundaryAction, Policy, Mechanism

### 2. [Thermostat PID](https://blockscience.github.io/gds-core/examples/examples/thermostat/index.md) — Feedback

Adds `.feedback()` for within-evaluation backward information flow. Introduces CONTRAVARIANT flow direction and the ControlAction role.

**New:** `.feedback()`, backward ports, ControlAction

### 3. [Lotka-Volterra](https://blockscience.github.io/gds-core/examples/examples/lotka-volterra/index.md) — Temporal Loops

Adds `.loop()` for cross-boundary recurrence. Introduces COVARIANT temporal wiring and Mechanism with `forward_out`.

**New:** `.loop()`, temporal wiring, exit conditions

### 4. [Prisoner's Dilemma](https://blockscience.github.io/gds-core/examples/examples/prisoners-dilemma/index.md) — Complex Composition

Most complex composition tree.
Nested parallel (`(A | B) | C`), multi-entity state space, and combining all operators except `.feedback()`.

**New:** nested parallel, multi-entity state, complex trees

### 5. [Insurance Contract](https://blockscience.github.io/gds-core/examples/examples/insurance/index.md) — Complete Taxonomy

Completes the 4-role taxonomy. The only example using all four roles: BoundaryAction, Policy, ControlAction, Mechanism.

**New:** complete role taxonomy, parameterized admissibility

### 6. [Crosswalk Problem](https://blockscience.github.io/gds-core/examples/examples/crosswalk/index.md) — Mechanism Design

The canonical GDS example from BlockScience. Demonstrates mechanism design with a governance parameter constraining agent behavior via discrete Markov transitions.

**New:** mechanism design, governance parameters, discrete state

### 7. [Evolution of Trust](https://blockscience.github.io/gds-core/examples/examples/evolution-of-trust/index.md) — OGS + Simulation

Bridges specification and computation. Uses the OGS game theory DSL to define an iterated Prisoner's Dilemma, then projects the specification into a tournament simulator with 8 strategies and evolutionary dynamics.

**New:** OGS DSL, `DecisionGame`/`CovariantFunction`, `Strategy` protocol, interoperability pattern

## Prerequisites

- Python 3.12+
- [gds-framework](https://pypi.org/project/gds-framework/) and [gds-viz](https://pypi.org/project/gds-viz/) (installed with gds-examples)
- Basic understanding of dynamical systems concepts

# Building New Models

A step-by-step guide to creating GDS models.

## File Structure

```
examples/
└── my_model/
    ├── __init__.py        # empty
    ├── model.py           # types, entities, spaces, blocks, build_spec(), build_system()
    ├── test_model.py      # tests for all layers
    └── generate_views.py  # visualization script
```

## Step-by-Step

### 1. Define Types (TypeDef)

Define value constraints before anything that references them.
```
from gds import typedef

Count = typedef("Count", int, constraint=lambda x: x >= 0, description="Non-negative count")
```

### 2. Define Entities (state space X)

What persists across temporal boundaries.

```
from gds import entity, state_var

agent = entity("Agent", wealth=state_var(Currency, symbol="W"))
```

### 3. Define Spaces (communication channels)

Transient signals within an evaluation -- NOT state.

```
from gds import space

signal = space("TransferSignal", amount=Currency, recipient=AgentID)
```

### 4. Define Blocks (with roles)

```
from gds import BoundaryAction, Policy, Mechanism, interface

sensor = BoundaryAction(
    name="Sensor",
    interface=interface(forward_out=["Signal"]),
)

controller = Policy(
    name="Controller",
    interface=interface(
        forward_in=["Signal"],
        forward_out=["Command"],
    ),
    params_used=["gain"],
)

update = Mechanism(
    name="Update State",
    interface=interface(forward_in=["Command"]),
    updates=[("Agent", "wealth")],
)
```

### 5. Register in GDSSpec

```
from gds import GDSSpec

def build_spec() -> GDSSpec:
    spec = GDSSpec(name="My Model", description="...")
    spec.collect(Currency, signal, agent, sensor, controller, update)
    spec.register_parameter("gain", GainType)
    return spec
```

### 6. Compose and Compile

```
from gds import compile_system

def build_system():
    pipeline = sensor >> controller >> update
    return compile_system("My Model", pipeline)
```

## Design Decisions

| Decision | Guidance |
| --- | --- |
| State vs Signal | State persists (Entity). Signals are transient (Space). |
| Parameter vs Input | Parameters fixed per run (Θ). Inputs vary per step (BoundaryAction). |
| Which operator | Linear → `>>`. Independent → `\|`. Backward → `.feedback()`. Iteration → `.loop()`. |
| ControlAction vs Policy | Policy = decision logic g(x, z) → d. ControlAction = output observable y = C(x, d). |

## Role Constraints

| Role | forward_in | forward_out | backward_in | backward_out |
| --- | --- | --- | --- | --- |
| BoundaryAction | MUST be `()` | any | any | any |
| Policy | any | any | any | any |
| ControlAction | any | any | any | any |
| Mechanism | any | any | MUST be `()` | MUST be `()` |

# Feature Coverage Matrix

| Feature | SIR | Thermostat | Lotka-V | Prisoner's D | Insurance | Crosswalk | Evol. of Trust |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BoundaryAction | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Policy | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| Mechanism | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| ControlAction | | ✓ | | | ✓ | ✓ | |
| `>>` (sequential) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| `\|` (parallel) | ✓ | | ✓ | ✓ | | | |
| `.feedback()` | | ✓ | | | | | |
| `.loop()` | | | ✓ | ✓ | | | |
| CONTRAVARIANT wiring | | ✓ | | | | | |
| Temporal wiring | | | ✓ | ✓ | | | |
| Multi-variable Entity | | ✓ | | ✓ | ✓ | | |
| Multiple entities | ✓ | | ✓ | ✓ | ✓ | | |
| Parameters (Θ) | ✓ | ✓ | ✓ | | ✓ | ✓ | |
| OGS DSL | | | | | | | ✓ |
| OGS Feedback | | | | | | | ✓ |
| Simulation layer | | | | | | | ✓ |

## Complexity Progression

| Example | Roles Used | Operators | Key Teaching Point |
| --- | --- | --- | --- |
| SIR Epidemic | BA, P, M | `>>`, `\|` | Fundamentals, 3-role pipeline |
| Insurance | BA, P, CA, M | `>>` | ControlAction, complete 4-role taxonomy |
| Thermostat | BA, P, CA, M | `>>`, `.feedback()` | CONTRAVARIANT backward flow |
| Lotka-Volterra | BA, P, M | `>>`, `\|`, `.loop()` | COVARIANT temporal loops |
| Prisoner's Dilemma | BA, P, M | `\|`, `>>`, `.loop()` | Nested parallel, multi-entity |
| Crosswalk | BA, P, CA, M | `>>` | Mechanism design, governance |
| Evolution of Trust | OGS games | OGS feedback | Iterated PD, strategies, simulation |

**Roles:** BA = BoundaryAction, P = Policy, CA = ControlAction, M = Mechanism

OGS examples: The Evolution of Trust uses the gds-games (OGS) DSL rather than gds-framework roles directly. OGS games (DecisionGame, CovariantFunction) compile to GDS blocks via the canonical bridge.

# Crosswalk Problem

**Mechanism design** — the canonical GDS example from BlockScience. A pedestrian decides whether to cross a one-way street while traffic evolves as a discrete Markov chain. A governance body chooses crosswalk placement to minimize accident probability.

## GDS Decomposition

```
X = traffic_state ∈ {-1, 0, +1}
U = (luck, crossing_position)
g = pedestrian_decision
d = safety_check
f = traffic_transition
Θ = {crosswalk_location}
```

## Composition

```
observe >> decide >> check >> transition
```

```
flowchart LR
    Observe([Observe Traffic]) --> Decide[Pedestrian Decision]
    Decide --> Check[Safety Check]
    Check --> Transition[[Traffic Transition]]
```

## What You'll Learn

- Discrete Markov state transitions as GDS
- **Mechanism design**: governance parameter (crosswalk location) constraining agent behavior
- ControlAction for admissibility enforcement (safety check)
- Complete 4-role taxonomy in a minimal model
- Design parameter Θ as a governance lever

## Files

- [model.py](https://github.com/BlockScience/gds-examples/blob/main/crosswalk/model.py)
- [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/crosswalk/test_model.py)
- [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/crosswalk/VIEWS.md)
- [README.md](https://github.com/BlockScience/gds-examples/blob/main/crosswalk/README.md)

# Evolution of Trust

**Iterated Prisoner's Dilemma** — OGS game structure with tournament simulation and evolutionary dynamics. Based on Nicky Case's [The Evolution of Trust](https://ncase.me/trust/).

Demonstrates how a single OGS specification can serve as the source of truth for both structural analysis and computational simulation.
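To give the simulation side a concrete flavor before the formal decomposition, here is a minimal standalone sketch of iterated play with the game's (R, T, S, P) = (2, 3, -1, 0) payoffs and a tit-for-tat strategy. The function names are illustrative, not the package's actual tournament module:

```python
# Minimal iterated Prisoner's Dilemma sketch (illustrative names, not the
# real gds-games tournament code). "C" = Cooperate, "D" = Defect.
PAYOFF = {  # (my_move, their_move) -> my payoff, using (R, T, S, P) = (2, 3, -1, 0)
    ("C", "C"): 2, ("C", "D"): -1,
    ("D", "C"): 3, ("D", "D"): 0,
}

def tit_for_tat(opponent_history: list[str]) -> str:
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history: list[str]) -> str:
    return "D"

def play_match(s1, s2, rounds: int = 5) -> tuple[int, int]:
    h1: list[str] = []  # moves made by player 1 so far
    h2: list[str] = []  # moves made by player 2 so far
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)  # each strategy observes the opponent's history
        score1 += PAYOFF[(m1, m2)]
        score2 += PAYOFF[(m2, m1)]
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play_match(tit_for_tat, always_defect))  # → (-1, 3): exploited once, then mutual defection
```

The key structural point mirrors the feedback composition below: each round's payoffs (via the recorded histories) feed back into the next round's decisions.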
## OGS Decomposition

```
Players: Alice, Bob
Actions: {Cooperate, Defect}
Payoff Matrix: (R, T, S, P) = (2, 3, -1, 0)
Composition: (alice | bob) >> payoff
             .feedback([payoff -> decisions])
```

## Composition

```
pipeline = (alice_decision | bob_decision) >> payoff_computation
system = pipeline.feedback([payoff -> decisions])
```

```
flowchart TD
    subgraph Decisions[Simultaneous Decisions]
        Alice[Alice Decision]
        Bob[Bob Decision]
    end
    Decisions --> Payoff[[Payoff Computation]]
    Payoff -. Alice Payoff .-> Alice
    Payoff -. Bob Payoff .-> Bob
```

## What You'll Learn

- Building a 2-player normal-form game from OGS primitives (`DecisionGame`, `CovariantFunction`)
- Feedback composition for iterated play (payoffs fed back to decision nodes)
- Non-zero-sum payoff matrices with negative values (Sucker payoff S = -1)
- **Interoperability pattern**: same OGS specification consumed by multiple tools (visualization, simulation, equilibrium analysis)
- Strategy protocol design for agent-based simulation on top of GDS specifications

## Key Concepts

### OGS Game Structure

| Block | OGS Type | Purpose |
| --- | --- | --- |
| Alice Decision | DecisionGame | Chooses Cooperate or Defect based on observation |
| Bob Decision | DecisionGame | Symmetric to Alice |
| Payoff Computation | CovariantFunction | Maps action pairs to payoffs via the matrix |

### Payoff Matrix

| | Bob: Cooperate | Bob: Defect |
| --- | --- | --- |
| **Alice: Cooperate** | (2, 2) | (-1, 3) |
| **Alice: Defect** | (3, -1) | (0, 0) |

T > R > P > S and 2R > T + S (satisfies the Prisoner's Dilemma conditions).

### Simulation Stack

The tournament code builds three layers on top of the OGS specification:

1. **Strategies** — 8 implementations (Tit for Tat, Grim Trigger, Detective, Pavlov, etc.) following a common `Strategy` protocol
2. **Tournament** — `play_match()` for iterated rounds, `play_round_robin()` for all-pairs competition
3. **Evolutionary dynamics** — `run_evolution()` for generational population selection

Each layer consumes only `get_payoff()` from the specification — no GDS internals needed.

### Terminal Conditions

| Outcome | Actions | Payoffs | Character |
| --- | --- | --- | --- |
| Mutual Cooperation | (C, C) | (2, 2) | Pareto optimal |
| Mutual Defection | (D, D) | (0, 0) | Nash equilibrium |
| Alice Exploits | (D, C) | (3, -1) | Temptation vs Sucker |
| Bob Exploits | (C, D) | (-1, 3) | Sucker vs Temptation |

## Files

- [model.py](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/games/evolution_of_trust/model.py)
- [strategies.py](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/games/evolution_of_trust/strategies.py)
- [tournament.py](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/games/evolution_of_trust/tournament.py)
- [test_model.py](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/games/evolution_of_trust/test_model.py)

## Related

- [Interoperability Guide](https://blockscience.github.io/gds-core/guides/interoperability/index.md) — detailed explanation of the specification-as-interoperability-layer pattern
- [Prisoner's Dilemma](https://blockscience.github.io/gds-core/examples/examples/prisoners-dilemma/index.md) — the base GDS framework version (without OGS or simulation)

# Insurance Contract

**Completes the role taxonomy** — the only example using all 4 block roles.
## GDS Decomposition

```
X = (R, P, C, H)
U = claim_event
g = risk_assessment
d = premium_calculation
f = (claim_payout, reserve_update)
Θ = {base_premium_rate, deductible, coverage_limit}
```

## Composition

```
claim >> risk >> premium >> payout >> reserve_update
```

```
flowchart LR
    Claim([Claim Event]) --> Risk[Risk Assessment]
    Risk --> Premium[Premium Calculation]
    Premium --> Payout[[Claim Payout]]
    Payout --> Reserve[[Reserve Update]]
```

## What You'll Learn

- **ControlAction** role — the 4th block role, for output observables
- Complete 4-role taxonomy: BoundaryAction → Policy → ControlAction → Mechanism
- ControlAction vs Policy: Policy is decision logic g(x, z) → d, ControlAction is the output observable y = C(x, d)
- `params_used` on ControlAction — parameterized output computation

Key distinction: Premium Calculation is a **ControlAction** because it computes an observable output signal — the premium rate that downstream mechanisms consume. It maps state and decisions to an output, rather than making the core risk assessment decision.

## Files

- [model.py](https://github.com/BlockScience/gds-examples/blob/main/insurance/model.py)
- [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/insurance/test_model.py)
- [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/insurance/VIEWS.md)

# Lotka-Volterra

**Adds temporal loops** -- structural recurrence across temporal boundaries.
## GDS Decomposition ``` X = (x, y) U = population_signal g = compute_rates f = (update_prey, update_predator) Θ = {prey_birth_rate, predation_rate, predator_death_rate, predator_efficiency} ``` ## Composition ``` (observe >> compute >> (update_prey | update_pred)).loop([ Population Signal -> Compute Rates COVARIANT ]) ``` ``` flowchart TD Observe([Observe Populations]) --> Compute[Compute Rates] Compute --> Update_Prey[[Update Prey]] Compute --> Update_Pred[[Update Predator]] Update_Prey -.Population Signal.-> Compute Update_Pred -.Population Signal.-> Compute ``` ## What You'll Learn - `.loop()` composition for cross-boundary temporal recurrence - **COVARIANT** flow direction — mandatory for `.loop()` (CONTRAVARIANT raises GDSTypeError) - Mechanism with `forward_out` — emitting signals after state update - `exit_condition` parameter for loop termination - Contrast with `.feedback()`: within-evaluation vs across-boundary **Key distinction:** Temporal wirings must be **COVARIANT** — `.loop()` enforces this at construction time. ## Files - [model.py](https://github.com/BlockScience/gds-examples/blob/main/lotka_volterra/model.py) - [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/lotka_volterra/test_model.py) - [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/lotka_volterra/VIEWS.md) # Prisoner's Dilemma **Most complex composition** — nested parallel + sequential + temporal loop. 
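The payoff structure is the classic matrix shown in the Evolution of Trust example's terminal-conditions table. A plain-Python sketch of the payoff realization step (the function name is hypothetical; in the GDS model the matrix arrives as exogenous input U, not as a parameter in Θ):

```python
# Payoff matrix from the terminal-conditions table:
# (alice_move, bob_move) -> (alice_payoff, bob_payoff)
PAYOFFS = {
    ("C", "C"): (2, 2),    # mutual cooperation — Pareto optimal
    ("D", "D"): (0, 0),    # mutual defection — Nash equilibrium
    ("D", "C"): (3, -1),   # Alice exploits — temptation vs sucker
    ("C", "D"): (-1, 3),   # Bob exploits — sucker vs temptation
}

def payoff_realization(alice_move, bob_move, payoffs=PAYOFFS):
    """Mechanism-like step: map the joint decision to realized payoffs."""
    return payoffs[(alice_move, bob_move)]
```

Passing `payoffs` in as an argument mirrors the design choice this example highlights: the payoff matrix is part of the exogenous game configuration, which is why Θ is empty.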
## GDS Decomposition ``` X = (s_A, U_A, s_B, U_B, t) U = game_config g = (alice, bob) f = (payoff, world_models) Θ = {} ``` ## Composition ``` pipeline = (payoff_setting | (alice | bob)) >> payoff_realization >> (alice_world | bob_world) system = pipeline.loop([world models -> decisions]) ``` ``` flowchart TD subgraph Parallel Payoff_Setting([Payoff Setting]) subgraph Agents Alice[Alice] Bob[Bob] end end Parallel --> Payoff[[Payoff Realization]] Payoff --> Alice_World[[Alice World Model]] Payoff --> Bob_World[[Bob World Model]] Alice_World -.-> Alice Bob_World -.-> Bob ``` ## What You'll Learn - Nested parallel composition: `(A | B) | C` for logical grouping - Multi-entity state space X with 3 entities (5 state variables total) - Mechanism with `forward_out` for temporal feedback - Complex composition tree combining all operators except `.feedback()` - Design choice: parameter vs exogenous input (payoff matrix is U, not Θ) ## Files - [model.py](https://github.com/BlockScience/gds-examples/blob/main/prisoners_dilemma/model.py) - [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/prisoners_dilemma/test_model.py) - [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/prisoners_dilemma/VIEWS.md) # SIR Epidemic **Start here.** 3 compartments (Susceptible, Infected, Recovered) with contact-driven infection dynamics. 
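Before diving into the GDS structure, here is the underlying compartment arithmetic as a plain-Python sketch — a discrete-time step using the same parameter names as the decomposition, but with illustrative rate expressions and values rather than the example's actual `model.py` code:

```python
# Illustrative discrete-time SIR step (assumed rate expressions).
theta = {"beta": 0.3, "gamma": 0.1, "contact_rate": 1.0}

def sir_step(S, I, R):
    """One step: infection_policy computes the deltas,
    the three update_* mechanisms apply them."""
    N = S + I + R
    new_infections = theta["beta"] * theta["contact_rate"] * S * I / N
    new_recoveries = theta["gamma"] * I
    return (S - new_infections,
            I + new_infections - new_recoveries,
            R + new_recoveries)

S, I, R = 990.0, 10.0, 0.0
for _ in range(50):
    S, I, R = sir_step(S, I, R)
```

The split inside `sir_step` mirrors the model's role taxonomy: computing `new_infections` and `new_recoveries` is the Policy's decision logic, while applying them to S, I, and R is what the three Mechanisms do.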
## GDS Decomposition ``` X = (S, I, R) U = contact_rate g = infection_policy f = (update_s, update_i, update_r) Θ = {beta, gamma, contact_rate} ``` ## Composition ``` contact >> infection_policy >> (update_s | update_i | update_r) ``` ``` flowchart TD Contact_Process([Contact Process]) --> Infection_Policy[Infection Policy] Infection_Policy --> Update_Susceptible[[Update Susceptible]] Infection_Policy --> Update_Infected[[Update Infected]] Infection_Policy --> Update_Recovered[[Update Recovered]] ``` ## What You'll Learn - **TypeDef** with runtime constraints (non-negative counts, positive rates) - **Entity** and **StateVariable** for defining state space X - **Space** for typed inter-block communication channels - **BoundaryAction** (exogenous input), **Policy** (decision logic), **Mechanism** (state update) - `>>` sequential composition with token-based auto-wiring - `|` parallel composition for independent mechanisms - **GDSSpec** registration and SpecWiring - `compile_system()` to produce SystemIR ## Key Concepts ### Three Block Roles | Block | Role | Purpose | | ------------------------------------- | -------------- | ----------------------------------------- | | Contact Process | BoundaryAction | Exogenous observation — no forward inputs | | Infection Policy | Policy | Decision logic — computes deltas | | Update Susceptible/Infected/Recovered | Mechanism | State update — writes to entities | ### State Space Three entities, each with a single count variable: - `Susceptible.count` (S) - `Infected.count` (I) - `Recovered.count` (R) ### Parameters - `beta` — infection rate - `gamma` — recovery rate - `contact_rate` — contact frequency ## Files - [model.py](https://github.com/BlockScience/gds-examples/blob/main/sir_epidemic/model.py) - [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/sir_epidemic/test_model.py) - [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/sir_epidemic/VIEWS.md) # Thermostat PID **Adds feedback** -- 
backward information flow within a single evaluation. ## GDS Decomposition ``` X = (T, E) U = measured_temp g = pid_controller f = update_room Θ = {setpoint, Kp, Ki, Kd} ``` ## Composition ``` (sensor >> controller >> plant >> update).feedback([ Energy Cost: plant -> controller CONTRAVARIANT ]) ``` ``` flowchart TD Temperature_Sensor([Temperature Sensor]) --> PID_Controller[PID Controller] PID_Controller --> Room_Plant[Room Plant] Room_Plant --> Update_Room[[Update Room]] Room_Plant ==Energy Cost==> PID_Controller ``` ## What You'll Learn - `.feedback()` composition for within-timestep backward flow - **CONTRAVARIANT** flow direction (backward_out → backward_in) - **ControlAction** role — reads state and emits control signals - `backward_in` / `backward_out` ports on block interfaces - Multi-variable Entity (Room has both temperature and energy_consumed) **Key distinction:** Room Plant is **ControlAction** (not Mechanism) because it has `backward_out`. Mechanisms cannot have backward ports. ## Files - [model.py](https://github.com/BlockScience/gds-examples/blob/main/thermostat/model.py) - [test_model.py](https://github.com/BlockScience/gds-examples/blob/main/thermostat/test_model.py) - [VIEWS.md](https://github.com/BlockScience/gds-examples/blob/main/thermostat/VIEWS.md) # Tutorials # Start Here New to GDS? Choose the path that matches your goal. ## Build Your First Model The **Getting Started Guide** walks you through building a thermostat model in 5 progressive stages, from raw blocks to DSL to verification. [Getting Started Guide](https://blockscience.github.io/gds-core/guides/getting-started/index.md) ## Learn by Example Work through the **example models** in order. Each introduces one new concept while reinforcing previous ones. 
| # | Example | What You'll Learn | | --- | ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | | 1 | [SIR Epidemic](https://blockscience.github.io/gds-core/examples/examples/sir-epidemic/index.md) | Fundamentals — types, entities, spaces, blocks, composition | | 2 | [Thermostat PID](https://blockscience.github.io/gds-core/examples/examples/thermostat/index.md) | Feedback and backward flow | | 3 | [Lotka-Volterra](https://blockscience.github.io/gds-core/examples/examples/lotka-volterra/index.md) | Temporal loops and structural recurrence | | 4 | [Prisoner's Dilemma](https://blockscience.github.io/gds-core/examples/examples/prisoners-dilemma/index.md) | Nested composition and multi-entity state | | 5 | [Insurance Contract](https://blockscience.github.io/gds-core/examples/examples/insurance/index.md) | Complete 4-role taxonomy | | 6 | [Crosswalk Problem](https://blockscience.github.io/gds-core/examples/examples/crosswalk/index.md) | Mechanism design and governance parameters | | 7 | [Evolution of Trust](https://blockscience.github.io/gds-core/examples/examples/evolution-of-trust/index.md) | OGS game theory DSL and simulation | [Full Learning Path](https://blockscience.github.io/gds-core/examples/learning-path/index.md) ## Compare DSLs The **Rosetta Stone** guide models the same problem (thermostat) using three different DSLs — stockflow, control, and game theory — to show how GDS unifies them. 
[Rosetta Stone](https://blockscience.github.io/gds-core/guides/rosetta-stone/index.md) ## Other Guides | Guide | Description | | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- | | [Real-World Patterns](https://blockscience.github.io/gds-core/guides/real-world-patterns/index.md) | Common modeling patterns and anti-patterns | | [Verification](https://blockscience.github.io/gds-core/guides/verification/index.md) | All 3 verification layers with deliberately broken models | | [Visualization](https://blockscience.github.io/gds-core/guides/visualization/index.md) | 6 view types, 5 themes, cross-DSL rendering | | [Interoperability](https://blockscience.github.io/gds-core/guides/interoperability/index.md) | Using GDS specs as input to external tools (Nash solvers, simulators) | # Guides # Getting Started: Build Your First Model A progressive 5-stage tutorial that teaches GDS fundamentals using a **thermostat control system** as the running example. Each stage builds on the previous one, introducing new concepts incrementally. ## Prerequisites - Python 3.12+ - Install the required packages: ``` pip install gds-framework gds-viz gds-control ``` Or, if working from the monorepo: ``` uv sync --all-packages ``` ## Learning Path | Stage | Concepts | | --------------------- | --------------------------------------------------------------------- | | 1. Minimal Model | Entity, BoundaryAction, Mechanism, `>>` composition, GDSSpec | | 2. Feedback | Policy, `.loop()` temporal composition, parameters | | 3. DSL Shortcut | gds-control DSL: ControlModel, compile_model, compile_to_system | | 4. Verification & Viz | Generic checks (G-001..G-006), semantic checks, Mermaid visualization | | 5. 
Query API | SpecQuery: parameter influence, entity updates, causal chains | ______________________________________________________________________ ## Stage 1 -- Minimal Model The simplest possible GDS model: a **heater** (BoundaryAction) warms a **room** (Entity with one state variable). Two blocks composed with `>>`. - **BoundaryAction**: exogenous input -- no `forward_in` ports - **Mechanism**: state update -- writes to entity variables, no backward ports - **`>>`**: sequential composition via token-matched port wiring ### Types and Entity ``` from gds.types.typedef import TypeDef from gds.state import Entity, StateVariable Temperature = TypeDef( name="Temperature", python_type=float, description="Temperature in degrees Celsius", ) HeatRate = TypeDef( name="HeatRate", python_type=float, constraint=lambda x: x >= 0, description="Heat input rate (watts)", ) room = Entity( name="Room", variables={ "temperature": StateVariable( name="temperature", typedef=Temperature, symbol="T", description="Current room temperature", ), }, description="The room being heated", ) ``` ### Blocks and Composition ``` from gds.blocks.roles import BoundaryAction, Mechanism from gds.types.interface import Interface, port # BoundaryAction: exogenous heat input heater = BoundaryAction( name="Heater", interface=Interface( forward_out=(port("Heat Signal"),), ), ) # Mechanism: state update update_temperature = Mechanism( name="Update Temperature", interface=Interface( forward_in=(port("Heat Signal"),), ), updates=[("Room", "temperature")], ) # Sequential composition -- tokens "heat" and "signal" overlap pipeline = heater >> update_temperature ``` ### Structural Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d Heater([Heater]):::boundary Update_Temperature[[Update Temperature]]:::mechanism Heater --Heat Signal--> Update_Temperature ``` 
______________________________________________________________________ ## Stage 2 -- Adding Feedback Extend the minimal model with **observation and control**: - A **Sensor** (Policy) reads the room temperature - A **Controller** (Policy) decides the heat command using a `setpoint` parameter - A **TemporalLoop** (`.loop()`) feeds updated temperature back to the sensor across temporal boundaries New operators: `|` (parallel composition) and `.loop()` (temporal feedback). ### Blocks ``` from gds.blocks.roles import BoundaryAction, Mechanism, Policy from gds.types.interface import Interface, port sensor = Policy( name="Sensor", interface=Interface( forward_in=(port("Temperature Reading"),), forward_out=(port("Temperature Observation"),), ), ) controller = Policy( name="Controller", interface=Interface( forward_in=( port("Temperature Observation"), port("Heat Signal"), ), forward_out=(port("Heat Command"),), ), params_used=["setpoint"], ) update_temperature = Mechanism( name="Update Temperature", interface=Interface( forward_in=(port("Heat Command"),), forward_out=(port("Temperature Reading"),), ), updates=[("Room", "temperature")], ) ``` ### Composition with Temporal Loop ``` from gds.blocks.composition import Wiring from gds.ir.models import FlowDirection input_tier = heater | sensor forward_pipeline = input_tier >> controller >> update_temperature system_with_loop = forward_pipeline.loop( [ Wiring( source_block="Update Temperature", source_port="Temperature Reading", target_block="Sensor", target_port="Temperature Reading", direction=FlowDirection.COVARIANT, ) ], ) ``` ### Structural Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b Heater([Heater]):::boundary Sensor[Sensor]:::generic Controller[Controller]:::generic 
Update_Temperature[Update Temperature]:::generic Heater --Heat Signal--> Controller Sensor --Temperature Observation--> Controller Controller --Heat Command--> Update_Temperature Update_Temperature -.Temperature Reading..-> Sensor ``` Note the dashed arrow from Update Temperature back to Sensor -- this is the temporal loop (`.loop()`), indicating structural recurrence across temporal boundaries. ______________________________________________________________________ ## Stage 3 -- DSL Shortcut Rebuild the same thermostat using the **gds-control** DSL. Declare states, inputs, sensors, and controllers -- the compiler generates all types, spaces, entities, blocks, wirings, and the temporal loop automatically. **~15 lines of DSL vs ~60 lines of manual GDS construction.** ### ControlModel Declaration ``` from gds_domains.control.dsl.compile import compile_model, compile_to_system from gds_domains.control.dsl.elements import Controller, Input, Sensor, State from gds_domains.control.dsl.model import ControlModel model = ControlModel( name="Thermostat DSL", states=[ State(name="temperature", initial=20.0), ], inputs=[ Input(name="heater"), ], sensors=[ Sensor(name="temp_sensor", observes=["temperature"]), ], controllers=[ Controller( name="thermo", reads=["temp_sensor", "heater"], drives=["temperature"], ), ], description="Thermostat built with the gds-control DSL", ) spec = compile_model(model) # -> GDSSpec system = compile_to_system(model) # -> SystemIR ``` ### DSL Element to GDS Role Mapping | DSL Element | GDS Role | | ----------------------- | ----------------------- | | `State("temperature")` | Mechanism + Entity | | `Input("heater")` | BoundaryAction | | `Sensor("temp_sensor")` | Policy (observer) | | `Controller("thermo")` | Policy (decision logic) | ### Canonical Decomposition The canonical projection separates the system into the formal `h = f . 
g` form: ``` %%{init:{"theme":"neutral"}}%% flowchart LR classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12 classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a X_t(["X_t
value"]):::state X_next(["X_{t+1}
value"]):::state Theta{{"Θ
heater"}}:::param subgraph U ["Boundary (U)"] heater[heater]:::boundary end subgraph g ["Policy (g)"] temp_sensor[temp_sensor]:::policy thermo[thermo]:::policy end subgraph f ["Mechanism (f)"] temperature_Dynamics[temperature Dynamics]:::mechanism end X_t --> U U --> g g --> f temperature_Dynamics -.-> |temperature.value| X_next Theta -.-> g Theta -.-> f style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534 ``` ______________________________________________________________________ ## Stage 4 -- Verification & Visualization GDS provides two layers of verification: 1. **Generic checks (G-001..G-006)** on `SystemIR` -- structural topology 1. **Semantic checks (SC-001..SC-007)** on `GDSSpec` -- domain properties Plus three Mermaid diagram views of the compiled system. ### Running Verification ``` from gds.verification.engine import verify from gds.verification.generic_checks import ( check_g001_domain_codomain_matching, check_g003_direction_consistency, check_g004_dangling_wirings, check_g005_sequential_type_compatibility, check_g006_covariant_acyclicity, ) report = verify(system, checks=[ check_g001_domain_codomain_matching, check_g003_direction_consistency, check_g004_dangling_wirings, check_g005_sequential_type_compatibility, check_g006_covariant_acyclicity, ]) for finding in report.findings: status = "PASS" if finding.passed else "FAIL" print(f"[{finding.check_id}] {status}: {finding.message}") ``` ### Three Visualization Views The compiled block graph showing blocks as nodes and wirings as arrows. 
``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b heater([heater]):::boundary temp_sensor[temp_sensor]:::generic thermo[thermo]:::generic temperature_Dynamics[temperature Dynamics]:::generic heater --heater Reference--> thermo temp_sensor --temp_sensor Measurement--> thermo thermo --thermo Control--> temperature_Dynamics temperature_Dynamics -.temperature State..-> temp_sensor ``` Blocks grouped by GDS role: Boundary (U), Policy (g), Mechanism (f). ``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d classDef entity fill:#e2e8f0,stroke:#475569,stroke-width:2px,color:#0f172a subgraph boundary ["Boundary (U)"] heater([heater]):::boundary end subgraph policy ["Policy (g)"] temp_sensor[temp_sensor]:::policy thermo[thermo]:::policy end subgraph mechanism ["Mechanism (f)"] temperature_Dynamics[[temperature Dynamics]]:::mechanism end entity_temperature[("temperature
value")]:::entity temperature_Dynamics -.-> entity_temperature thermo --ControlSpace--> temperature_Dynamics style boundary fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style policy fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e style mechanism fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534 ``` The formal `h = f . g` decomposition diagram (same as Stage 3 above). ______________________________________________________________________ ## Stage 5 -- Query API `SpecQuery` provides static analysis over a `GDSSpec` without running any simulation. It answers structural questions about information flow, parameter influence, and causal chains. ### Usage ``` from gds.query import SpecQuery query = SpecQuery(spec) # Which blocks does each parameter affect? query.param_to_blocks() # -> {'heater': ['heater']} # Which mechanisms update each entity variable? query.entity_update_map() # -> {'temperature': {'value': ['temperature Dynamics']}} # Group blocks by GDS role query.blocks_by_kind() # -> {'boundary': ['heater'], 'policy': ['temp_sensor', 'thermo'], # 'mechanism': ['temperature Dynamics'], ...} # Which blocks can transitively affect temperature.value? query.blocks_affecting("temperature", "value") # -> ['temperature Dynamics', 'thermo', 'temp_sensor', 'heater'] # Full block-to-block dependency DAG query.dependency_graph() ``` ______________________________________________________________________ ## Summary You have built a complete GDS specification for a thermostat system, progressing through five stages: 1. **Minimal model** -- types, entity, two blocks, sequential composition 1. **Feedback** -- policies, parameters, temporal loop 1. **DSL** -- same system in 15 lines with `gds-control` 1. **Verification** -- structural and semantic checks, three diagram views 1. 
**Query** -- static analysis of parameter influence and causal chains From here, explore the [example models](https://blockscience.github.io/gds-core/examples/index.md) or the [Rosetta Stone](https://blockscience.github.io/gds-core/guides/rosetta-stone/index.md) guide to see the same system through different DSL lenses. ## Interactive Notebook Source code for `packages/gds-examples/notebooks/getting_started.py` Tip: paste this code into an empty cell, and the marimo editor will create cells for you ``` """Interactive Getting Started guide for GDS — marimo notebook. A progressive 5-stage tutorial that teaches GDS fundamentals using a thermostat control system. Run with: marimo run notebooks/getting_started.py """ # /// script # requires-python = ">=3.12" # dependencies = [ # "gds-examples", # "marimo>=0.20.0", # ] # /// import marimo __generated_with = "0.13.0" app = marimo.App(width="medium", app_title="GDS Getting Started Guide") @app.cell def _(): import marimo as mo return (mo,) @app.cell def _(mo): mo.md( """ # Build Your First GDS Model This interactive notebook walks you through **five stages** of building a Generalized Dynamical Systems (GDS) specification for a thermostat control system. | Stage | What You Learn | |-------|---------------| | **1 -- Minimal Model** | Types, Entity, BoundaryAction, Mechanism, `>>` | | **2 -- Feedback** | Policy, parameters, temporal loop (`.loop()`) | | **3 -- DSL Shortcut** | `gds-control` ControlModel, canonical | | **4 -- Verification & Viz** | Generic/semantic checks, Mermaid diagrams | | **5 -- Query API** | `SpecQuery` for static analysis of your spec | Use the dropdown below to jump to a specific stage, or scroll through the notebook to follow the full progression. 
""" ) return @app.cell def _(mo): _stage_selector = mo.ui.dropdown( options={ "Stage 1 -- Minimal Model": "stage1", "Stage 2 -- Feedback": "stage2", "Stage 3 -- DSL Shortcut": "stage3", "Stage 4 -- Verification & Visualization": "stage4", "Stage 5 -- Query API": "stage5", }, value="Stage 1 -- Minimal Model", label="Jump to stage:", ) return (_stage_selector,) # ══════════════════════════════════════════════════════════════ # Stage 1: Minimal Model # ══════════════════════════════════════════════════════════════ @app.cell def _(mo): mo.md( """ --- ## Stage 1 -- Minimal Model The simplest possible GDS model: a **heater** (BoundaryAction) warms a **room** (Entity with one state variable). Two blocks composed with `>>`. - **BoundaryAction**: exogenous input -- no `forward_in` ports - **Mechanism**: state update -- writes to entity variables, no backward ports - **`>>`**: sequential composition via token-matched port wiring """ ) return @app.cell def _(mo): from gds_examples.getting_started.stage1_minimal import build_spec as _build_spec_s1 from gds_examples.getting_started.stage1_minimal import ( build_system as _build_system_s1, ) _spec_s1 = _build_spec_s1() _system_s1 = _build_system_s1() _summary_s1 = mo.md( f""" ### What Stage 1 Creates | Component | Count | Details | |-----------|------:|---------| | Entities | {len(_spec_s1.entities)} | {", ".join(_spec_s1.entities.keys())} | | Blocks | {len(_spec_s1.blocks)} | {", ".join(_spec_s1.blocks.keys())} | | Wirings | {len(_system_s1.wirings)} | auto-wired via token overlap | | Parameters | {len(_spec_s1.parameters)} | *(none yet -- added in stage 2)* | """ ) from gds_viz import system_to_mermaid as _s2m _mermaid_s1 = mo.mermaid(_s2m(_system_s1)) mo.vstack( [ _summary_s1, mo.md("### Structural Diagram"), _mermaid_s1, ] ) return # ══════════════════════════════════════════════════════════════ # Stage 2: Feedback # ══════════════════════════════════════════════════════════════ @app.cell def _(mo): mo.md( """ --- ## Stage 2 
-- Adding Feedback Extend the minimal model with **observation and control**: - A **Sensor** (Policy) reads the room temperature - A **Controller** (Policy) decides the heat command using a `setpoint` parameter - A **TemporalLoop** (`.loop()`) feeds updated temperature back to the sensor across timesteps New operators: `|` (parallel composition) and `.loop()` (temporal feedback). """ ) return @app.cell def _(mo): from gds_examples.getting_started.stage2_feedback import ( build_spec as _build_spec_s2, ) from gds_examples.getting_started.stage2_feedback import ( build_system as _build_system_s2, ) _spec_s2 = _build_spec_s2() _system_s2 = _build_system_s2() _temporal_s2 = [w for w in _system_s2.wirings if w.is_temporal] _comparison = mo.md( f""" ### Stage 1 vs Stage 2 | Property | Stage 1 | Stage 2 | Change | |----------|--------:|--------:|--------| | Blocks | 2 | {len(_system_s2.blocks)} | +Sensor, +Controller (Policy role) | | Wirings | 1 | {len(_system_s2.wirings)} | +inter-tier wiring, +temporal | | Temporal | 0 | {len(_temporal_s2)} | `.loop()` feeds state back to sensor | | Parameters | 0 | {len(_spec_s2.parameters)} | `setpoint` for controller | """ ) from gds_viz import system_to_mermaid as _s2m_2 _mermaid_s2 = mo.mermaid(_s2m_2(_system_s2)) mo.vstack( [ _comparison, mo.md("### Structural Diagram (with Temporal Loop)"), _mermaid_s2, ] ) return # ══════════════════════════════════════════════════════════════ # Stage 3: DSL Shortcut # ══════════════════════════════════════════════════════════════ @app.cell def _(mo): mo.md( """ --- ## Stage 3 -- DSL Shortcut Rebuild the same thermostat using the **gds-control** DSL. Declare states, inputs, sensors, and controllers -- the compiler generates all types, spaces, entities, blocks, wirings, and the temporal loop automatically. 
**~15 lines of DSL vs ~60 lines of manual GDS construction.** """ ) return @app.cell def _(mo): from gds_examples.getting_started.stage3_dsl import ( build_canonical as _build_canonical_s3, ) from gds_examples.getting_started.stage3_dsl import ( build_spec as _build_spec_s3, ) from gds_examples.getting_started.stage3_dsl import ( build_system as _build_system_s3, ) _spec_s3 = _build_spec_s3() _system_s3 = _build_system_s3() _canonical_s3 = _build_canonical_s3() _n_temporal_s3 = len([w for w in _system_s3.wirings if w.is_temporal]) _dsl_comparison = mo.md( f""" ### Manual (Stage 2) vs DSL (Stage 3) | Property | Manual | DSL | Notes | |----------|-------:|----:|-------| | Blocks | 4 | {len(_spec_s3.blocks)} | Same count | | Entities | 1 | {len(_spec_s3.entities)} | `Room` vs `temperature` | | Types | 3 | {len(_spec_s3.types)} | DSL generates type set | | Temporal | 1 | {_n_temporal_s3} | Same structure | ### Canonical Decomposition: h = f . g The canonical projection separates the system into: - **X** (state): {len(_canonical_s3.state_variables)} variable(s) - **U** (boundary): {len(_canonical_s3.boundary_blocks)} block(s) - **g** (policy): {len(_canonical_s3.policy_blocks)} block(s) - **f** (mechanism): {len(_canonical_s3.mechanism_blocks)} block(s) """ ) from gds_viz import canonical_to_mermaid as _c2m _canonical_mermaid = mo.mermaid(_c2m(_canonical_s3)) mo.vstack( [ _dsl_comparison, mo.md("### Canonical Diagram"), _canonical_mermaid, ] ) return # ══════════════════════════════════════════════════════════════ # Stage 4: Verification & Visualization # ══════════════════════════════════════════════════════════════ @app.cell def _(mo): mo.md( """ --- ## Stage 4 -- Verification & Visualization GDS provides two layers of verification: 1. **Generic checks (G-001..G-006)** on `SystemIR` -- structural topology 2. **Semantic checks (SC-001..SC-007)** on `GDSSpec` -- domain properties Plus three Mermaid diagram views of the compiled system. 
""" ) return @app.cell def _(mo): from gds_examples.getting_started.stage3_dsl import ( build_spec as _build_spec_s4, ) from gds_examples.getting_started.stage3_dsl import ( build_system as _build_system_s4, ) from gds_examples.getting_started.stage4_verify_viz import ( generate_architecture_view as _gen_arch, ) from gds_examples.getting_started.stage4_verify_viz import ( generate_canonical_view as _gen_canon, ) from gds_examples.getting_started.stage4_verify_viz import ( generate_structural_view as _gen_struct, ) from gds_examples.getting_started.stage4_verify_viz import ( run_generic_checks as _run_generic, ) from gds_examples.getting_started.stage4_verify_viz import ( run_semantic_checks as _run_semantic, ) _system_s4 = _build_system_s4() _spec_s4 = _build_spec_s4() # -- Generic checks -- _report = _run_generic(_system_s4) _generic_rows = "\n".join( f"| {f.check_id} | {'PASS' if f.passed else 'FAIL'} | {f.message} |" for f in _report.findings ) _generic_table = mo.md( "### Generic Checks (SystemIR)\n\n" "| Check | Result | Message |\n" "|-------|--------|---------|" + ("\n" + _generic_rows if _generic_rows else "") + f"\n\n**Summary**: {_report.checks_passed}/{_report.checks_total} passed," f" {_report.errors} errors" ) # -- Semantic checks -- _semantic_results = _run_semantic(_spec_s4) _semantic_rows = "\n".join(f"| {line} |" for line in _semantic_results) _semantic_table = mo.md( "### Semantic Checks (GDSSpec)\n\n" "| Result |\n" "|--------|" + ("\n" + _semantic_rows if _semantic_rows else "") ) # -- Mermaid views in tabs -- _structural_mermaid = mo.mermaid(_gen_struct(_system_s4)) _architecture_mermaid = mo.mermaid(_gen_arch(_spec_s4)) _canonical_mermaid_s4 = mo.mermaid(_gen_canon(_spec_s4)) _diagram_tabs = mo.ui.tabs( { "Structural": _structural_mermaid, "Architecture": _architecture_mermaid, "Canonical": _canonical_mermaid_s4, } ) mo.vstack( [ _generic_table, _semantic_table, mo.md("### Diagrams"), _diagram_tabs, ] ) return # 
══════════════════════════════════════════════════════════════ # Stage 5: Query API # ══════════════════════════════════════════════════════════════ @app.cell def _(mo): mo.md( """ --- ## Stage 5 -- Query API `SpecQuery` provides static analysis over a `GDSSpec` without running any simulation. It answers structural questions about information flow, parameter influence, and causal chains. """ ) return @app.cell def _(mo): from gds_examples.getting_started.stage5_query import ( build_query as _build_query, ) from gds_examples.getting_started.stage5_query import ( show_blocks_by_role as _show_blocks_by_role, ) from gds_examples.getting_started.stage5_query import ( show_causal_chain as _show_causal_chain, ) from gds_examples.getting_started.stage5_query import ( show_dependency_graph as _show_dep_graph, ) from gds_examples.getting_started.stage5_query import ( show_entity_updates as _show_entity_updates, ) from gds_examples.getting_started.stage5_query import ( show_param_influence as _show_param_influence, ) _query = _build_query() # -- Parameter influence -- _param_map = _show_param_influence(_query) _param_rows = "\n".join( f"| `{param}` | {', '.join(blocks)} |" for param, blocks in _param_map.items() ) _param_table = mo.md( "### Parameter Influence\n\n" "Which blocks does each parameter affect?\n\n" "| Parameter | Blocks |\n" "|-----------|--------|" + ("\n" + _param_rows if _param_rows else "") ) # -- Entity updates -- _entity_map = _show_entity_updates(_query) _entity_rows_list = [] for _ent, _vars in _entity_map.items(): for _var, _mechs in _vars.items(): _entity_rows_list.append(f"| `{_ent}` | `{_var}` | {', '.join(_mechs)} |") _entity_rows = "\n".join(_entity_rows_list) _entity_table = mo.md( "### Entity Update Map\n\n" "Which mechanisms update each entity variable?\n\n" "| Entity | Variable | Mechanisms |\n" "|--------|----------|------------|" + ("\n" + _entity_rows if _entity_rows else "") ) # -- Blocks by role -- _by_role = _show_blocks_by_role(_query) 
_role_rows = "\n".join( f"| **{role}** | {', '.join(blocks)} |" for role, blocks in _by_role.items() if blocks ) _role_table = mo.md( "### Blocks by Role\n\n" "| Role | Blocks |\n" "|------|--------|" + ("\n" + _role_rows if _role_rows else "") ) # -- Causal chain -- _affecting = _show_causal_chain(_query, "temperature", "value") _causal_table = mo.md( "### Causal Chain: temperature.value\n\n" "Blocks that can transitively affect `temperature.value`:\n\n" + ", ".join(f"`{b}`" for b in _affecting) ) # -- Dependency graph -- _dep_graph = _show_dep_graph(_query) _dep_rows = "\n".join( f"| `{src}` | {', '.join(f'`{t}`' for t in targets)} |" for src, targets in _dep_graph.items() ) _dep_table = mo.md( "### Dependency Graph\n\n" "Block-to-block information flow:\n\n" "| Source | Targets |\n" "|--------|---------|" + ("\n" + _dep_rows if _dep_rows else "") ) mo.vstack( [ _param_table, _entity_table, _role_table, _causal_table, _dep_table, ] ) return @app.cell def _(mo): mo.md( """ --- ## Summary You have built a complete GDS specification for a thermostat system, progressing through five stages: 1. **Minimal model** -- types, entity, two blocks, sequential composition 2. **Feedback** -- policies, parameters, temporal loop 3. **DSL** -- same system in 15 lines with `gds-control` 4. **Verification** -- structural and semantic checks, three diagram views 5. **Query** -- static analysis of parameter influence and causal chains From here, explore the other examples (`sir_epidemic`, `lotka_volterra`, `prisoners_dilemma`) or build your own domain model. 
""" ) return if __name__ == "__main__": app.run() ``` To run the notebook locally: ``` uv run marimo run packages/gds-examples/notebooks/getting_started.py ``` Run the test suite: ``` uv run --package gds-examples pytest packages/gds-examples/tests/test_getting_started.py -v ``` ## Source Files | File | Purpose | | ---------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------ | | [`stage1_minimal.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/getting_started/stage1_minimal.py) | Minimal heater model | | [`stage2_feedback.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/getting_started/stage2_feedback.py) | Feedback loop with policies | | [`stage3_dsl.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/getting_started/stage3_dsl.py) | gds-control DSL version | | [`stage4_verify_viz.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/getting_started/stage4_verify_viz.py) | Verification and visualization | | [`stage5_query.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/getting_started/stage5_query.py) | SpecQuery API | | [`getting_started.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/notebooks/getting_started.py) | Interactive marimo notebook | # Choosing a DSL Five domain-specific languages compile to the same GDS core. This guide helps you pick the right one for your problem -- or decide when to use the framework directly. ______________________________________________________________________ ## Starting from the Problem The Decision Matrix below is a technical reference — it assumes you already know your primitives. In practice, most modelers start earlier: with a domain question. 
The same system can often be modeled with more than one DSL. An epidemic could be stockflow (if you care about accumulation rates) or raw framework (if you just need a state transition). A supply chain could be stockflow (stocks and flows), CLD (causal influences), or SCN (inventory and topology). The DSL choice depends on **what you want to verify**, not just what domain you are in. The natural workflow is: **Problem → What do I want to check? → DSL**. Once you pick a DSL, roles and block structure follow more naturally because the DSL embeds domain conventions about what matters. ______________________________________________________________________ ## Decision Matrix | If your system has... | Use | Package | Why | | ------------------------------------------ | ----------------- | ----------------------- | ---------------------------------------------------------------- | | Stocks accumulating over time | **gds-domains** | `gds_domains.stockflow` | Native stock/flow/auxiliary semantics with accumulation dynamics | | State-space dynamics (A, B, C, D matrices) | **gds-domains** | `gds_domains.control` | Control theory mapping with sensors, controllers, plant states | | Strategic agents making decisions | **gds-domains** | `gds_domains.games` | Game-theoretic composition with utility/payoff channels | | Software architecture to formalize | **gds-domains** | `gds_domains.software` | Six diagram types: DFD, SM, Component, C4, ERD, Dependency | | Business processes or supply chains | **gds-domains** | `gds_domains.business` | Causal loop, supply chain network, value stream mapping | | None of the above | **gds-framework** | `gds` | Build your own vocabulary on the composition algebra | ______________________________________________________________________ ## DSL Profiles ### gds-domains (stockflow) **Domain:** System dynamics -- stocks, flows, auxiliaries, converters. 
**Best for:** Accumulation dynamics, resource pools, population models, anything modeled with stock-flow diagrams. **Example:** SIR epidemic model, Lotka-Volterra predator-prey, inventory management. ``` from gds_domains.stockflow.dsl.elements import Auxiliary, Converter, Flow, Stock from gds_domains.stockflow.dsl.model import StockFlowModel model = StockFlowModel( name="Population", stocks=[Stock(name="Population", initial=1000.0, non_negative=True)], flows=[ Flow(name="births", target="Population"), Flow(name="deaths", source="Population"), ], converters=[Converter(name="birth_rate"), Converter(name="death_rate")], auxiliaries=[ Auxiliary(name="net_growth", inputs=["birth_rate", "death_rate"]), ], ) ``` **Canonical form:** `h = f . g` with |X| = number of stocks, |f| = number of accumulation mechanisms. **Domain checks:** SF-001 (orphan stocks), SF-003 (auxiliary cycles), SF-004 (unused converters), plus 2 more. ______________________________________________________________________ ### gds-domains (control) **Domain:** Feedback control systems -- states, inputs, sensors, controllers. **Best for:** Thermostat-like control loops, PID controllers, any system with plant state, measurement, and actuation. **Example:** Temperature regulation, resource level tracking, robotic control. ``` from gds_domains.control.dsl.elements import Controller, Input, Sensor, State from gds_domains.control.dsl.model import ControlModel model = ControlModel( name="Thermostat", states=[State(name="temperature", initial=20.0)], inputs=[Input(name="heater")], sensors=[Sensor(name="temp_sensor", observes=["temperature"])], controllers=[ Controller(name="pid", reads=["temp_sensor", "heater"], drives=["temperature"]), ], ) ``` **Canonical form:** `h = f . g` with |X| = number of states, full dynamical character. **DSL element mapping:** State -> Mechanism + Entity, Input -> BoundaryAction, Sensor -> Policy, Controller -> Policy. 
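The element mapping above fixes the canonical counts: each state contributes one entity dimension and one mechanism, while sensors and controllers land in the policy tier. A minimal plain-Python sketch of that bookkeeping (the `ModelSummary` structure and `canonical_signature` helper are hypothetical illustrations, not the real DSL API):

```
from dataclasses import dataclass, field


@dataclass
class ModelSummary:
    """Hypothetical stand-in for a compiled control or stockflow model."""

    states: list            # plant states or stocks -> entries of X
    policies: list          # sensors/controllers or flows/auxiliaries -> g
    boundary: list = field(default_factory=list)  # inputs or converters -> U


def canonical_signature(m: ModelSummary) -> dict:
    # One mechanism per state variable, mirroring the
    # "State -> Mechanism + Entity" mapping described above.
    return {
        "|X|": len(m.states),
        "|U|": len(m.boundary),
        "|g|": len(m.policies),
        "|f|": len(m.states),
    }


thermostat = ModelSummary(
    states=["temperature"],
    policies=["temp_sensor", "pid"],
    boundary=["heater"],
)
print(canonical_signature(thermostat))  # {'|X|': 1, '|U|': 1, '|g|': 2, '|f|': 1}
```

With one state, one input, and two policy-tier elements, the sketch reproduces the |X|=1, |U|=1, |g|=2, |f|=1 signature of a thermostat-style control model.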
______________________________________________________________________

### gds-domains (games / OGS)

**Domain:** Game theory -- strategic interactions, decision games, payoff computation.

**Best for:** Multi-agent decision problems, mechanism design, auction theory, commons dilemmas.

**Example:** Prisoner's dilemma, resource extraction games, insurance contracts.

```
from gds_domains.games.dsl.games import DecisionGame
from gds_domains.games.dsl.types import Signature, port

agent = DecisionGame(
    name="Player",
    signature=Signature(
        x=(port("Resource Signal"),),
        y=(port("Extraction Decision"),),
        r=(port("Player Payoff"),),
    ),
    logic="Choose extraction amount",
)
```

**Canonical form:** `h = g` (stateless -- no mechanisms, no state). All game blocks map to Policy.

**Key difference:** OGS uses `OpenGame` (a subclass of `AtomicBlock`) with its own `PatternIR`, which projects back to `SystemIR` via `PatternIR.to_system_ir()`. It also has backward (contravariant) channels for utility signals.

______________________________________________________________________

### gds-domains (software)

**Domain:** Software architecture -- six diagram types for formalizing system structure.

**Best for:** Documenting and verifying software architectures, data flows, state machines, component interactions.
**Diagram types:** | Type | What it models | | ------------------------- | ----------------------------------------- | | DFD (Data Flow Diagram) | Processes, data stores, external entities | | SM (State Machine) | States and transitions | | Component | Provided/required interfaces | | C4 | Context, container, component views | | ERD (Entity-Relationship) | Data model relationships | | Dependency | Module dependencies | ``` from gds_domains.software.dsl.elements import DFDProcess, DFDDataStore, DFDExternalEntity from gds_domains.software.dsl.model import SoftwareModel model = SoftwareModel( name="Order System", diagram_type="dfd", processes=[DFDProcess(name="Process Order")], data_stores=[DFDDataStore(name="Order DB")], external_entities=[DFDExternalEntity(name="Customer")], ) ``` **Canonical form:** Varies by diagram type. DFD with data stores has state (|X| > 0). ERD and Dependency are typically stateless. **Domain checks:** 27 checks across all six diagram types. ______________________________________________________________________ ### gds-domains (business) **Domain:** Business dynamics -- causal loops, supply chains, value streams. **Best for:** Business process modeling, supply chain optimization, value stream analysis. 
**Diagram types:**

| Type | What it models |
| -------------------------- | ------------------------------------ |
| CLD (Causal Loop Diagram)  | Reinforcing/balancing feedback loops |
| SCN (Supply Chain Network) | Nodes, links, inventory accumulation |
| VSM (Value Stream Map)     | Process steps, buffers, cycle times  |

```
from gds_domains.business.cld.elements import Variable, CausalLink
from gds_domains.business.cld.model import CLDModel

model = CLDModel(
    name="Market Dynamics",
    variables=[Variable(name="Demand"), Variable(name="Price"), Variable(name="Supply")],
    links=[
        CausalLink(source="Demand", target="Price", polarity="+"),
        CausalLink(source="Price", target="Supply", polarity="+"),
        CausalLink(source="Supply", target="Demand", polarity="-"),
    ],
)
```

**Canonical form:** CLD is stateless (`h = g`). SCN has full dynamics (`h = f . g`). VSM is stateful only with buffers.

**Domain checks:** 11 checks across all three diagram types.

______________________________________________________________________

## Feature Comparison

### State and Canonical Form

| DSL | Has State? | Canonical Form | Character |
| -------------------------- | ------------------ | -------------- | ---------------------------------------- |
| gds-domains (stockflow) | Yes (stocks) | `h = f . g` | Dynamical -- state-dominant accumulation |
| gds-domains (control) | Yes (plant states) | `h = f . g` | Dynamical -- full feedback control |
| gds-domains (games) | No | `h = g` | Strategic -- pure policy computation |
| gds-domains (software) | Varies by diagram | Varies | Diagram-dependent |
| gds-domains (business CLD) | No | `h = g` | Stateless -- pure signal relay |
| gds-domains (business SCN) | Yes (inventory) | `h = f . g` | Dynamical -- inventory accumulation |
| gds-domains (business VSM) | Optional (buffers) | Varies | Stateful only with buffers |

The canonical spectrum

All DSLs compile to the same `h = f . g` decomposition with varying dimensionality of the state space X.
When |X| = 0, the system is stateless and `h = g`. When both |f| > 0 and |g| > 0, the system is fully dynamical. GDS is a **unified transition calculus** -- not just a dynamical systems framework. ### GDS Role Mapping Every DSL maps its elements to the same four GDS roles: | GDS Role | stockflow | control | games | software | business | | -------------- | --------------- | ------------------ | -------------- | ------------------ | ---------------- | | BoundaryAction | Converter | Input | PatternInput | External entity | External source | | Policy | Flow, Auxiliary | Sensor, Controller | All game types | Process, Transform | Variable, Link | | Mechanism | Accumulation | State Dynamics | (none) | Data store update | Inventory update | | ControlAction | (unused) | (unused) | (unused) | (unused) | (unused) | Warning `ControlAction` is unused across all five DSLs. Use `Policy` for all decision, observation, and control logic. ### Verification Depth | DSL | Domain Checks | Generic (G-series) | Semantic (SC-series) | | ----------------------- | -------------------- | ------------------ | -------------------- | | gds-domains (stockflow) | 5 (SF-001..SF-005) | 6 (via SystemIR) | 7 (via GDSSpec) | | gds-domains (control) | Domain validation | 6 (via SystemIR) | 7 (via GDSSpec) | | gds-domains (games) | OGS-specific | 6 (via SystemIR) | 7 (via GDSSpec) | | gds-domains (software) | 27 across 6 diagrams | 6 (via SystemIR) | 7 (via GDSSpec) | | gds-domains (business) | 11 across 3 diagrams | 6 (via SystemIR) | 7 (via GDSSpec) | ______________________________________________________________________ ## When to Use Raw gds-framework Use the framework directly when: 1. **No DSL vocabulary fits.** Your domain does not map cleanly to stocks/flows, control loops, games, software diagrams, or business processes. 1. **You need custom block roles.** The existing roles (BoundaryAction, Policy, Mechanism) work, but you want domain-specific naming or constraints. 1. 
**You are exploring a new domain.** Build a prototype with raw blocks and composition, then decide if a DSL would help.

```
from gds import (
    BoundaryAction,
    Mechanism,
    Policy,
    compile_system,
    interface,
    verify,
)

# Build directly with the composition algebra
sensor = BoundaryAction(name="Sensor", interface=interface(forward_out=["Reading"]))
logic = Policy(name="Logic", interface=interface(forward_in=["Reading"], forward_out=["Command"]))
actuator = Mechanism(
    name="Actuator",
    interface=interface(forward_in=["Command"]),
    updates=[("Plant", "value")],
)

system = sensor >> logic >> actuator
system_ir = compile_system("Custom System", root=system)
report = verify(system_ir)
```

Tip

If you find yourself repeating the same pattern across multiple raw-framework models, that is a signal to consider creating a DSL. All five existing DSLs started as repeated patterns that were factored into a compiler.

______________________________________________________________________

## Cross-DSL Interoperability

All DSLs compile to `GDSSpec`, which means you can:

1. **Compare models across DSLs** -- the canonical `h = f . g` decomposition works on any spec, regardless of which DSL produced it.
1. **Use the same verification pipeline** -- generic checks (G-001..G-006) and semantic checks (SC-001..SC-007) work on any compiled system or spec.
1. **Query any spec with SpecQuery** -- parameter influence, entity update maps, and dependency graphs work uniformly.

```
from gds import SpecQuery, project_canonical

# Same analysis works regardless of which DSL compiled the spec
canonical = project_canonical(spec)
query = SpecQuery(spec)

print(f"State dimension: {len(canonical.state_space)}")
print(f"Mechanism count: {len(canonical.mechanisms)}")
print(f"Policy count: {len(canonical.policies)}")
```

For a concrete example of the same problem modeled through three different DSL lenses, see the [Rosetta Stone](https://blockscience.github.io/gds-core/guides/rosetta-stone/index.md) guide.
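The cross-DSL comparison can be sketched as a tiny classifier over the projected canonical counts (illustrative only -- `project_canonical` returns richer objects than three integers, and the category labels here are informal):

```
def character(n_states: int, n_mechanisms: int, n_policies: int) -> str:
    """Place a compiled spec on the canonical spectrum, per the rules above."""
    if n_states == 0:
        return "Stateless/Strategic (h = g)"  # no state, pure policy computation
    if n_mechanisms > 0 and n_policies > 0:
        return "Dynamical (h = f . g)"        # full transition dynamics
    return "Stateful, degenerate"             # state without one of the tiers


# Counts in the spirit of the resource-pool views compared in the
# Rosetta Stone guide (|X|, |f|, |g|):
print(character(1, 1, 3))  # stock-flow  -> Dynamical (h = f . g)
print(character(1, 1, 2))  # control     -> Dynamical (h = f . g)
print(character(0, 0, 3))  # game theory -> Stateless/Strategic (h = g)
```

The point is that the classification depends only on the canonical counts, never on which DSL produced the spec.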
______________________________________________________________________ ## Summary | Question | Answer | | ---------------------------------- | ------------------------------------------------------------- | | "I have accumulating stocks" | Use **gds-domains** (stockflow) | | "I have feedback control loops" | Use **gds-domains** (control) | | "I have strategic agents" | Use **gds-domains** (games) | | "I have software to document" | Use **gds-domains** (software) | | "I have business processes" | Use **gds-domains** (business) | | "None of these fit" | Use **gds-framework** directly | | "I want to compare across domains" | All compile to **GDSSpec** -- use the canonical decomposition | # Best Practices: Composition Patterns & Anti-Patterns Practical guidance for building clean, verifiable GDS specifications. Covers naming, composition patterns, type system tips, verification workflow, and common mistakes to avoid. ______________________________________________________________________ ## Naming Conventions ### Port Names and Token-Based Auto-Wiring The `>>` operator auto-wires blocks by **token overlap**. Port names are tokenized by splitting on `+` (space-plus-space) and `,` (comma-space), then lowercasing each part. Plain spaces are **not** delimiters. 
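Those splitting rules can be sketched as a plain-Python function (a hypothetical reimplementation for illustration; the actual `gds.types.tokens.tokenize` may differ internally):

```
def tokenize(port_name: str) -> set:
    """Split a port name into lowercase tokens on ' + ' and ', '.

    Plain spaces are NOT delimiters, so multi-word names
    stay together as a single token.
    """
    # Normalize both compound delimiters to one separator, then split.
    parts = port_name.replace(" + ", "|").replace(", ", "|").split("|")
    return {p.strip().lower() for p in parts if p.strip()}


print(tokenize("Heater Command"))          # one token: {'heater command'}
print(tokenize("Temperature + Setpoint"))  # two tokens
print(tokenize("Agent 1, Agent 2"))        # two tokens
```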
``` from gds import interface # "Heater Command" is ONE token: "heater command" interface(forward_out=["Heater Command"]) # "Temperature + Setpoint" is TWO tokens: "temperature", "setpoint" interface(forward_out=["Temperature + Setpoint"]) # This auto-wires to "Temperature" because they share the "temperature" token interface(forward_in=["Temperature"]) ``` Naming rules for auto-wiring - Use **plain spaces** for multi-word names that should stay as one token: `"Heat Signal"`, `"Order Status"` - Use **`+`** to combine independent signals into a compound port: `"Temperature + Pressure"` - Use **`,`** as an alternative compound delimiter: `"Agent 1, Agent 2"` - Token matching is **case-insensitive**: `"Heat Signal"` matches `"heat signal"` ### Block Names Choose block names that read well in verification reports and diagrams: ``` # Good: descriptive, verb-noun for actions BoundaryAction(name="Data Ingest", ...) Policy(name="Validate Transform", ...) Mechanism(name="Update Temperature", ...) # Bad: generic, unclear role AtomicBlock(name="Block1", ...) Policy(name="Process", ...) ``` Note Block names appear in verification findings, Mermaid diagrams, and `SpecQuery` results. Clear names make debugging significantly easier. ______________________________________________________________________ ## Modeling Decisions Before writing any composition, three choices shape the entire specification: **Role assignment.** Which processes become BoundaryActions (exogenous inputs), Policies (decision/observation logic), or Mechanisms (state updates)? This determines the canonical decomposition `h = f . g`. A temperature sensor could be a BoundaryAction (external data arrives) or a Policy (compute reading from state) — the right answer depends on what you want to verify, not on the physics alone. **State identification.** Which quantities are state variables and which are derived? 
An SIR model with three state variables (S, I, R) produces a different canonical form than one that derives R = N - S - I and tracks only two. Finer state identification lets SC-001 catch orphan variables; coarser identification creates fewer obligations. **Block granularity.** One large block or several small ones? The algebra composes anything with compatible ports, but finer granularity makes the [hierarchy tree](https://blockscience.github.io/gds-core/framework/guide/composition/index.md) more informative and gives verification more to check. A single-block model passes all structural checks trivially. These are design choices, not discoveries. Different choices lead to different verifiable specifications — neither is "wrong." Start from the question you want to answer ("Does this system avoid write conflicts on state?") and design roles backward from there. ______________________________________________________________________ ## Composition Patterns ### The Three-Tier Pipeline The canonical GDS composition follows a tiered structure that maps directly to the `h = f . g` decomposition: ``` from gds import BoundaryAction, Mechanism, Policy, interface # Tier 1: Exogenous inputs (boundary) and observers ingest = BoundaryAction( name="Data Ingest", interface=interface(forward_out=["Raw Signal"]), ) sensor = Policy( name="Sensor", interface=interface( forward_in=["State Reading"], forward_out=["Observation"], ), ) # Tier 2: Decision logic (policies) controller = Policy( name="Controller", interface=interface( forward_in=["Raw Signal + Observation"], forward_out=["Command"], ), ) # Tier 3: State dynamics (mechanisms) update = Mechanism( name="Update State", interface=interface( forward_in=["Command"], forward_out=["State Reading"], ), updates=[("Plant", "value")], ) # Compose the tiers input_tier = ingest | sensor # parallel: independent inputs forward = input_tier >> controller >> update # sequential: data flows forward system = forward.loop(...) 
# temporal: state feeds back to observers ``` This pattern recurs across all five DSLs: ``` (exogenous inputs | observers) >> (decision logic) >> (state dynamics) .loop(state dynamics -> observers) ``` ### When to Use Auto-Wiring vs Explicit Wiring **Auto-wiring** (`>>`) works when output and input ports share tokens: ``` # Auto-wires because "Heat Signal" tokens overlap heater = BoundaryAction( name="Heater", interface=interface(forward_out=["Heat Signal"]), ) update = Mechanism( name="Update Temperature", interface=interface(forward_in=["Heat Signal"]), updates=[("Room", "temperature")], ) pipeline = heater >> update # auto-wired via token overlap ``` **Explicit wiring** is needed when port names do not share tokens, or when you need precise control: ``` from gds.blocks.composition import StackComposition, Wiring from gds.ir.models import FlowDirection # Ports don't share tokens -- explicit wiring required tier_transition = StackComposition( name="Cross-Tier", left=policy_tier, right=mechanism_tier, wiring=[ Wiring( source_block="Controller", source_port="Control Output", target_block="Plant Dynamics", target_port="Actuator Input", direction=FlowDirection.COVARIANT, ), ], ) ``` Tip Start with auto-wiring and only switch to explicit wiring when the compiler raises a token overlap error. This keeps compositions readable. 
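As a rough mental model, auto-wiring pairs each left-block output with every right-block input whose token set overlaps it. A self-contained sketch of that matching (illustrative only, not the actual compiler):

```
def tokenize(name: str) -> set:
    # Split on " + " and ", " only; plain spaces stay inside a token.
    parts = name.replace(" + ", "|").replace(", ", "|").split("|")
    return {p.strip().lower() for p in parts if p.strip()}


def auto_wire(outputs: list, inputs: list) -> list:
    """Pair each output port with every input port sharing at least one token."""
    return [(o, i) for o in outputs for i in inputs if tokenize(o) & tokenize(i)]


# Pairs "Heat Signal" -> "Heat Signal" and "Temperature + Setpoint" -> "Temperature";
# nothing else overlaps, so no other wirings are produced.
print(auto_wire(["Heat Signal", "Temperature + Setpoint"],
                ["Temperature", "Heat Signal"]))
```

When `auto_wire` returns an empty list for a pair of blocks, that is exactly the situation where explicit `Wiring` entries become necessary.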
### Feedback vs Temporal Loop Two loop operators serve different purposes: | Operator | Direction | Timing | Use Case | | ------------- | ------------- | -------------------------- | ------------------------------- | | `.feedback()` | CONTRAVARIANT | Within evaluation | Backward utility/reward signals | | `.loop()` | COVARIANT | Across temporal boundaries | State fed back to observers | ``` from gds.blocks.composition import Wiring from gds.ir.models import FlowDirection # Temporal loop: state from evaluation k feeds into observer at evaluation k+1 system_with_loop = forward_pipeline.loop( [ Wiring( source_block="Update State", source_port="State Reading", target_block="Sensor", target_port="State Reading", direction=FlowDirection.COVARIANT, ) ], ) # Feedback loop: backward signal within a single evaluation # Used in game theory for utility/payoff channels system_with_feedback = game_pipeline.feedback( [ Wiring( source_block="Payoff", source_port="Agent Utility", target_block="Decision", target_port="Agent Utility", direction=FlowDirection.CONTRAVARIANT, ) ], ) ``` Warning `.feedback()` is **contravariant** -- it flows backward. `.loop()` is **covariant** -- it flows forward across temporal boundaries. Mixing these up will cause G-003 direction consistency failures. ### Parallel Composition for Independent Subsystems Use `|` to compose blocks that operate independently at the same tier: ``` # Two boundary actions providing independent inputs heater_input = BoundaryAction( name="Heater", interface=interface(forward_out=["Heat Signal"]), ) setpoint_input = BoundaryAction( name="Setpoint", interface=interface(forward_out=["Target Temperature"]), ) # Parallel: no validation needed, ports are independent input_tier = heater_input | setpoint_input ``` Parallel composition does not validate any port relationships -- it simply places blocks side by side. The downstream `>>` composition handles the wiring. 
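The covariant/contravariant rule behind the G-003 warning above can be sketched as a small validator (a hypothetical helper in the spirit of the check, not the real verification engine):

```
from enum import Enum


class FlowDirection(Enum):
    COVARIANT = "covariant"          # forward flow, used by .loop()
    CONTRAVARIANT = "contravariant"  # backward flow, used by .feedback()


def check_loop_directions(operator: str, directions: list) -> list:
    """Flag wirings whose direction does not match the loop operator."""
    expected = (FlowDirection.COVARIANT if operator == "loop"
                else FlowDirection.CONTRAVARIANT)
    return [f"G-003: {operator}() wiring must be {expected.value}, got {d.value}"
            for d in directions if d is not expected]


# A .feedback() wiring mistakenly declared covariant is flagged:
print(check_loop_directions("feedback", [FlowDirection.COVARIANT]))
print(check_loop_directions("loop", [FlowDirection.COVARIANT]))  # ok -> []
```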
______________________________________________________________________ ## Anti-Patterns ### Don't Use ControlAction `ControlAction` exists in the type system but is **unused across all five DSLs**. Every DSL maps observation and decision logic to `Policy` instead. ``` # Bad: ControlAction is unused and will confuse readers from gds import ControlAction controller = ControlAction(name="Controller", ...) # Good: Use Policy for all decision/observation logic from gds import Policy controller = Policy(name="Controller", ...) ``` ### Don't Put State Updates in Policy Policy blocks compute decisions. Only Mechanism blocks write state. ``` # Bad: Policy should not claim to update state controller = Policy( name="Controller", interface=interface(forward_in=["Signal"], forward_out=["Command"]), # Don't try to work around this -- Mechanism is the only writer ) # Good: Separate decision from state mutation controller = Policy( name="Controller", interface=interface(forward_in=["Signal"], forward_out=["Command"]), ) update = Mechanism( name="Apply Command", interface=interface(forward_in=["Command"]), updates=[("Plant", "value")], # only Mechanism has updates ) ``` ### Don't Skip Verification Even models that compile successfully benefit from verification. The checks catch subtle structural issues that compilation alone does not. ``` from gds import compile_system, verify system_ir = compile_system("My Model", root=pipeline) # Always verify -- even for "simple" models report = verify(system_ir) for finding in report.findings: if not finding.passed: print(f"[{finding.check_id}] {finding.message}") ``` ### Don't Create Circular Sequential Composition The `>>` operator builds a DAG. 
Cycles in covariant flow are caught by G-006: ``` # Bad: creates a cycle in the covariant flow graph a >> b >> c >> a # G-006 will flag this # Good: use .loop() for cross-boundary recurrence forward = a >> b >> c system = forward.loop([...]) # temporal loop, not a cycle ``` ### Don't Mix Domain Concerns in a Single Block Each block should have a single responsibility aligned with its GDS role: ``` # Bad: one block doing both validation and state update mega_block = AtomicBlock( name="Do Everything", interface=interface( forward_in=["Raw Data"], forward_out=["Clean Data"], ), ) # Good: separate concerns by role validate = Policy( name="Validate Data", interface=interface(forward_in=["Raw Data"], forward_out=["Clean Data"]), ) persist = Mechanism( name="Persist Data", interface=interface(forward_in=["Clean Data"]), updates=[("Dataset", "count")], ) ``` ______________________________________________________________________ ## Type System Tips ### Token Overlap for Auto-Wiring Understanding token splitting is essential for `>>` composition: ``` from gds.types.tokens import tokenize # Plain spaces are NOT delimiters tokenize("Heater Command") # -> {"heater command"} # " + " splits into separate tokens tokenize("Temperature + Setpoint") # -> {"temperature", "setpoint"} # ", " also splits tokenize("Agent 1, Agent 2") # -> {"agent 1", "agent 2"} ``` Two ports auto-wire when their token sets **overlap** (share at least one token): ``` from gds.types.tokens import tokens_overlap # These overlap on "temperature" tokens_overlap("Temperature + Setpoint", "Temperature") # True # These do NOT overlap tokens_overlap("Heat Signal", "Temperature Reading") # False ``` ### TypeDef Constraints Are Runtime Only TypeDef constraints validate data values, not compilation structure. 
They are never called during `>>` composition or `compile_system()`: ``` from gds import typedef # The constraint is checked only when you call check_value() Temperature = typedef("Temperature", float, constraint=lambda x: -273.15 <= x <= 1000) Temperature.check_value(20.0) # True Temperature.check_value(-300.0) # False -- below absolute zero # This does NOT affect compilation or wiring ``` ### Use Spaces to Define Valid Domains Spaces define the shape of data flowing between blocks. Use them to document the semantic contract: ``` from gds import space, typedef Voltage = typedef("Voltage", float, units="V") Current = typedef("Current", float, units="A") # The space documents what flows through the wire electrical_signal = space("ElectricalSignal", voltage=Voltage, current=Current) ``` ______________________________________________________________________ ## Verification Workflow Run checks in order from fastest/cheapest to most comprehensive: ### Step 1: Domain Checks (DSL-Level) If using a DSL, run its domain-specific checks first. 
These are the fastest and catch DSL-level errors in domain-native terms:

```
from gds_domains.stockflow.verification.engine import verify as sf_verify

report = sf_verify(model)  # runs SF-001..SF-005
```

### Step 2: Generic Checks on SystemIR

After compilation, run the six structural topology checks:

```
from gds import compile_system, verify

system_ir = compile_system("My Model", root=pipeline)
report = verify(system_ir)  # runs G-001..G-006
```

### Step 3: Semantic Checks on GDSSpec

For full domain property validation:

```
from gds import (
    check_canonical_wellformedness,
    check_completeness,
    check_determinism,
    check_parameter_references,
    check_type_safety,
)

for check in [
    check_completeness,
    check_determinism,
    check_type_safety,
    check_parameter_references,
    check_canonical_wellformedness,
]:
    findings = check(spec)
    for f in findings:
        if not f.passed:
            print(f"[{f.check_id}] {f.message}")
```

G-002 and BoundaryAction

G-002 (signature completeness) requires every block to have both inputs and outputs. BoundaryAction blocks have no inputs by design -- they are exogenous. G-002 failures on BoundaryAction blocks are **expected and not a bug**. When running `include_gds_checks=True` in DSL verification, filter G-002 findings for BoundaryAction blocks.

______________________________________________________________________

## Parameters

Parameters (Theta) are **structural metadata**. GDS never assigns values or binds parameters to concrete data.
They document what is tunable: ``` from gds import GDSSpec, Policy, interface, typedef spec = GDSSpec(name="Thermostat") # Declare the parameter Setpoint = typedef("Setpoint", float, units="celsius") spec.register_parameter("setpoint", Setpoint) # Reference it from a block controller = Policy( name="Controller", interface=interface(forward_in=["Temperature"], forward_out=["Command"]), params_used=["setpoint"], # structural reference, not a binding ) spec.register_block(controller) ``` Use `SpecQuery.param_to_blocks()` to trace which blocks depend on which parameters: ``` from gds import SpecQuery query = SpecQuery(spec) query.param_to_blocks() # -> {"setpoint": ["Controller"]} ``` Tip Parameters are for documenting tunable constants (learning rate, setpoint, threshold). Don't use them for runtime configuration -- GDS has no execution engine. Parameters exist so that structural queries like "which blocks are affected by this parameter?" can be answered without simulation. ______________________________________________________________________ ## Summary | Do | Don't | | ----------------------------------------------------------- | ---------------------------------------------------------- | | Use the three-tier pattern: boundary >> policy >> mechanism | Create circular sequential compositions | | Name ports for clear token overlap | Use generic names like "Signal" everywhere | | Start with auto-wiring, fall back to explicit | Use explicit wiring when auto-wiring works | | Use `.loop()` for cross-boundary state recurrence | Use `.feedback()` for temporal state (it is contravariant) | | Use Policy for all decision/observation logic | Use ControlAction (unused across all DSLs) | | Run verification even on passing models | Skip verification -- subtle issues hide in structure | | Separate state mutation (Mechanism) from decisions (Policy) | Put state-updating logic in Policy blocks | | Use parameters for tunable constants | Use parameters for runtime configuration | # 
Rosetta Stone: Cross-Domain Comparison Three views of the same **resource pool** problem, each compiled to a GDS canonical form. This guide demonstrates the central insight of GDS: the same composition algebra underlies stock-flow dynamics, feedback control, and strategic game theory. ## The Resource Pool Scenario A shared resource pool (water reservoir, inventory, commons) that agents interact with through supply, consumption, or extraction. The same real-world system is modeled through three different DSL lenses. ## Canonical Spectrum All three views compile to `GDSSpec` and project to the canonical `h = f . g` decomposition: ``` View |X| |U| |g| |f| Form Character ---------------------------------------------------------------------- Stock-Flow 1 2 3 1 h_theta = f . g Dynamical Control 1 1 2 1 h_theta = f . g Dynamical Game Theory 0 1 3 0 h = g Strategic ``` Key insight: each DSL decomposes the problem differently: - **Stock-Flow**: State X is the resource level, updated by net flow rates. Two exogenous parameters (supply rate, consumption rate) drive the dynamics. - **Control**: State X is the resource level, regulated by a feedback controller that tracks an exogenous reference setpoint. - **Game Theory**: No state -- pure strategic interaction. Two agents simultaneously choose extraction amounts; a payoff function determines the outcome. ______________________________________________________________________ ## Stock-Flow View Models the resource pool as a stock that accumulates via a supply inflow and depletes via a consumption outflow. An auxiliary computes the net rate from supply and consumption parameters. 
**Canonical facts:** |X|=1, |U|=2, |g|=3, |f|=1, character = Dynamical ### StockFlowModel Declaration ``` from gds_domains.stockflow.dsl.compile import compile_model, compile_to_system from gds_domains.stockflow.dsl.elements import Auxiliary, Converter, Flow, Stock from gds_domains.stockflow.dsl.model import StockFlowModel model = StockFlowModel( name="Resource Pool (Stock-Flow)", stocks=[ Stock(name="ResourceLevel", initial=100.0, non_negative=True), ], flows=[ Flow(name="supply", target="ResourceLevel"), Flow(name="consumption", source="ResourceLevel"), ], auxiliaries=[ Auxiliary(name="net_rate", inputs=["supply_rate", "consumption_rate"]), ], converters=[ Converter(name="supply_rate"), Converter(name="consumption_rate"), ], ) ``` ### Structural Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b supply_rate([supply_rate]):::boundary consumption_rate([consumption_rate]):::boundary net_rate[net_rate]:::generic supply([supply]):::boundary consumption([consumption]):::boundary ResourceLevel_Accumulation[ResourceLevel Accumulation]:::generic supply_rate --supply_rate Signal--> net_rate consumption_rate --consumption_rate Signal--> net_rate supply --supply Rate--> ResourceLevel_Accumulation consumption --consumption Rate--> ResourceLevel_Accumulation ``` ### Canonical Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart LR classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12 classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a X_t(["X_t
level"]):::state X_next(["X_{t+1}
level"]):::state Theta{{"Θ
supply_rate, consumption_rate"}}:::param subgraph U ["Boundary (U)"] supply_rate[supply_rate]:::boundary consumption_rate[consumption_rate]:::boundary end subgraph g ["Policy (g)"] net_rate[net_rate]:::policy supply[supply]:::policy consumption[consumption]:::policy end subgraph f ["Mechanism (f)"] ResourceLevel_Accumulation[ResourceLevel Accumulation]:::mechanism end X_t --> U U --> g g --> f ResourceLevel_Accumulation -.-> |ResourceLevel.level| X_next Theta -.-> g Theta -.-> f style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534 ``` ______________________________________________________________________ ## Control View Models the same resource pool as a feedback control system. The resource level is a plant state, a target reference level is an exogenous input, and a controller adjusts the supply rate to track the target. **Canonical facts:** |X|=1, |U|=1, |g|=2, |f|=1, character = Dynamical ### ControlModel Declaration ``` from gds_domains.control.dsl.compile import compile_model, compile_to_system from gds_domains.control.dsl.elements import Controller, Input, Sensor, State from gds_domains.control.dsl.model import ControlModel model = ControlModel( name="Resource Pool (Control)", states=[ State(name="resource_level", initial=100.0), ], inputs=[ Input(name="target_level"), ], sensors=[ Sensor(name="level_sensor", observes=["resource_level"]), ], controllers=[ Controller( name="supply_controller", reads=["level_sensor", "target_level"], drives=["resource_level"], ), ], ) ``` ### Structural Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart TD classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef generic fill:#cbd5e1,stroke:#64748b,stroke-width:1px,color:#1e293b target_level([target_level]):::boundary level_sensor[level_sensor]:::generic supply_controller[supply_controller]:::generic 
resource_level_Dynamics[resource_level Dynamics]:::generic target_level --target_level Reference--> supply_controller level_sensor --level_sensor Measurement--> supply_controller supply_controller --supply_controller Control--> resource_level_Dynamics resource_level_Dynamics -.resource_level State..-> level_sensor ``` ### Canonical Diagram ``` %%{init:{"theme":"neutral"}}%% flowchart LR classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef mechanism fill:#86efac,stroke:#16a34a,stroke-width:2px,color:#14532d classDef param fill:#fdba74,stroke:#ea580c,stroke-width:2px,color:#7c2d12 classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a X_t(["X_t
value"]):::state X_next(["X_{t+1}
value"]):::state Theta{{"Θ
target_level"}}:::param subgraph U ["Boundary (U)"] target_level[target_level]:::boundary end subgraph g ["Policy (g)"] level_sensor[level_sensor]:::policy supply_controller[supply_controller]:::policy end subgraph f ["Mechanism (f)"] resource_level_Dynamics[resource_level Dynamics]:::mechanism end X_t --> U U --> g g --> f resource_level_Dynamics -.-> |resource_level.value| X_next Theta -.-> g Theta -.-> f style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e style f fill:#dcfce7,stroke:#4ade80,stroke-width:1px,color:#166534 ``` ______________________________________________________________________ ## Game Theory View Models the resource pool as a two-player extraction game using the OGS (Open Games) DSL. Two agents simultaneously decide how much to extract from a shared resource. Each agent's payoff depends on how much resource remains after both extractions -- a classic common-pool resource dilemma. This is a stateless strategic interaction: no persistent state updates, pure policy computation. 
**Canonical facts:** |X|=0, |U|=1, |g|=3, |f|=0, character = Strategic ### OGS Pattern Declaration ``` from gds_domains.games.dsl.games import CovariantFunction, DecisionGame from gds_domains.games.dsl.pattern import Pattern, PatternInput from gds_domains.games.dsl.types import InputType, Signature, port resource_input = PatternInput( name="Resource Availability", input_type=InputType.RESOURCE, schema_hint="float >= 0", target_game="Agent 1 Extraction", flow_label="Resource Signal", ) agent1 = DecisionGame( name="Agent 1 Extraction", signature=Signature( x=(port("Resource Signal"),), y=(port("Agent 1 Decision"),), r=(port("Agent 1 Payoff"),), ), logic="Choose extraction amount based on resource availability", ) agent2 = DecisionGame( name="Agent 2 Extraction", signature=Signature( x=(port("Resource Signal"),), y=(port("Agent 2 Decision"),), r=(port("Agent 2 Payoff"),), ), logic="Choose extraction amount based on resource availability", ) payoff = CovariantFunction( name="Payoff Computation", signature=Signature( x=(port("Agent 1 Decision"), port("Agent 2 Decision")), y=(port("Allocation Result"),), ), logic="Compute payoffs based on total extraction vs available resource", ) # Compose: agents decide in parallel, then payoff computation game_tree = (agent1 | agent2) >> payoff pattern = Pattern( name="Resource Pool (Game)", game=game_tree, inputs=[resource_input], ) ``` ### Canonical Diagram Since there are no mechanisms (|f|=0), the canonical form reduces to **h = g** -- pure policy. 
``` %%{init:{"theme":"neutral"}}%% flowchart LR classDef boundary fill:#93c5fd,stroke:#2563eb,stroke-width:2px,color:#1e3a5f classDef policy fill:#fcd34d,stroke:#d97706,stroke-width:2px,color:#78350f classDef state fill:#5eead4,stroke:#0d9488,stroke-width:2px,color:#134e4a X_t(["X_t"]):::state X_next(["X_{t+1}"]):::state subgraph U ["Boundary (U)"] Resource_Availability[Resource Availability]:::boundary end subgraph g ["Policy (g)"] Agent_1_Extraction[Agent 1 Extraction]:::policy Agent_2_Extraction[Agent 2 Extraction]:::policy Payoff_Computation[Payoff Computation]:::policy end X_t --> U U --> g g --> X_next style U fill:#dbeafe,stroke:#60a5fa,stroke-width:1px,color:#1e40af style g fill:#fef3c7,stroke:#fbbf24,stroke-width:1px,color:#92400e ``` ______________________________________________________________________ ## Cross-Domain Comparison The comparison table built programmatically by `comparison.py` reveals the unified transition calculus: ``` h_theta : X -> X where h = f . g ``` - When |f| > 0 and |g| > 0: **Dynamical** system (stock-flow, control) - When |f| = 0 and |g| > 0: **Strategic** system (game theory) - When |g| = 0 and |f| > 0: **Autonomous** system (no policy) This is the "Rosetta Stone" -- the same mathematical structure expressed in different domain languages, all grounded in GDS theory. ``` from gds_examples.rosetta.comparison import canonical_spectrum_table print(canonical_spectrum_table()) ``` ## Interactive Notebook Source code for `packages/gds-examples/notebooks/rosetta.py` Tip: paste this code into an empty cell, and the marimo editor will create cells for you ``` """Cross-Domain Rosetta Stone — interactive marimo notebook. Compares the same resource-pool scenario across three DSL views (Stock-Flow, Control, Game Theory), showing how they all map to the GDS canonical form. 
Run with:
    marimo run notebooks/rosetta.py
"""

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "gds-examples",
#     "marimo>=0.20.0",
# ]
# ///

import marimo

__generated_with = "0.10.0"
app = marimo.App(width="medium", app_title="GDS Rosetta Stone Guide")


@app.cell
def _(mo):
    mo.md(
        r"""
        # Cross-Domain Rosetta Stone

        The same **resource pool** scenario modeled in three domain-specific
        languages, all mapping to the GDS canonical form $h = f \circ g$.

        | DSL | Perspective | Key Idea |
        |-----|------------|----------|
        | **Stock-Flow** | Accumulation | Rates change resource level |
        | **Control** | Regulation | Controller tracks reference |
        | **Game Theory** | Strategic | Agents extract from pool |

        Each DSL compiles to a `GDSSpec`, from which `project_canonical()`
        extracts the formal decomposition. The table below summarises the
        **canonical spectrum** across all three views.
        """
    )
    return


@app.cell
def _(mo, comparison):
    _table = comparison.canonical_spectrum_table()
    mo.md(
        f"""
        ## Canonical Spectrum

        ```
        {_table}
        ```

        The spectrum reveals how the same real-world concept decomposes differently:

        - **Stock-Flow** and **Control** are *dynamical* ($f \\neq \\emptyset$) — state
          evolves over time.
        - **Game Theory** is *strategic* ($f = \\emptyset$) — pure policy, no
          persistent state.
        """
    )
    return


@app.cell
def _(mo):
    mo.md(
        r"""
        ---

        ## Stock-Flow View

        Models the resource pool as a stock that accumulates via supply inflow
        and depletes via consumption outflow. An auxiliary computes the net rate.
**Key facts:** $|X|=1$, $|U|=2$, $|g|=3$, $|f|=1$, character = *Dynamical* """ ) return @app.cell def _(mo, gds_viz, sf_view): _model = sf_view.build_model() _spec = sf_view.build_spec() _system = sf_view.build_system() _canonical = sf_view.build_canonical() _structural_mermaid = gds_viz.system_to_mermaid(_system) _canonical_mermaid = gds_viz.canonical_to_mermaid(_canonical) mo.vstack( [ mo.md(f"**Model:** {_model.name}"), mo.ui.tabs( { "Structural Diagram": mo.mermaid(_structural_mermaid), "Canonical Diagram": mo.mermaid(_canonical_mermaid), "Formula": mo.md( f"$$\n{_canonical.formula()}\n$$\n\n" f"- State X: `{list(_canonical.state_variables)}`\n" f"- Inputs U: `{list(_canonical.input_ports)}`\n" f"- Policy g: `{list(_canonical.policy_blocks)}`\n" f"- Mechanism f: `{list(_canonical.mechanism_blocks)}`" ), } ), ] ) return @app.cell def _(mo): mo.md( r""" --- ## Control View Models the same resource pool as a feedback control system. A controller adjusts supply to track an exogenous target reference level. **Key facts:** $|X|=1$, $|U|=1$, $|g|=2$, $|f|=1$, character = *Dynamical* """ ) return @app.cell def _(mo, gds_viz, ctrl_view): _model = ctrl_view.build_model() _spec = ctrl_view.build_spec() _system = ctrl_view.build_system() _canonical = ctrl_view.build_canonical() _structural_mermaid = gds_viz.system_to_mermaid(_system) _canonical_mermaid = gds_viz.canonical_to_mermaid(_canonical) mo.vstack( [ mo.md(f"**Model:** {_model.name}"), mo.ui.tabs( { "Structural Diagram": mo.mermaid(_structural_mermaid), "Canonical Diagram": mo.mermaid(_canonical_mermaid), "Formula": mo.md( f"$$\n{_canonical.formula()}\n$$\n\n" f"- State X: `{list(_canonical.state_variables)}`\n" f"- Inputs U: `{list(_canonical.input_ports)}`\n" f"- Policy g: `{list(_canonical.policy_blocks)}`\n" f"- Mechanism f: `{list(_canonical.mechanism_blocks)}`" ), } ), ] ) return @app.cell def _(mo): mo.md( r""" --- ## Game Theory View Models the resource pool as a two-player extraction game. 
Two agents simultaneously choose extraction amounts; a payoff function computes allocations. No persistent state — pure strategic interaction. **Key facts:** $|X|=0$, $|U|=1$, $|g|=3$, $|f|=0$, character = *Strategic* """ ) return @app.cell def _(mo, gds_viz, game_view): _pattern = game_view.build_pattern() _canonical = game_view.build_canonical() _canonical_mermaid = gds_viz.canonical_to_mermaid(_canonical) mo.vstack( [ mo.md(f"**Pattern:** {_pattern.name}"), mo.ui.tabs( { "Canonical Diagram": mo.mermaid(_canonical_mermaid), "Formula": mo.md( "Since there are no mechanisms ($|f|=0$), the canonical " "form reduces to $h = g$ — pure policy.\n\n" f"- State X: `{list(_canonical.state_variables)}`\n" f"- Inputs U: `{list(_canonical.input_ports)}`\n" f"- Policy g: `{list(_canonical.policy_blocks)}`\n" f"- Mechanism f: `{list(_canonical.mechanism_blocks)}`" ), } ), ] ) return @app.cell def _(mo): mo.md( r""" --- ## Cross-Domain Comparison Select a view below to explore in detail, or browse the tabs for a side-by-side comparison of all three canonical diagrams. 
""" ) return @app.cell def _(mo, gds_viz, sf_view, ctrl_view, game_view, comparison): _canonicals = comparison.build_all_canonicals() _sf_system = sf_view.build_system() _ctrl_system = ctrl_view.build_system() _sf_canonical_mermaid = gds_viz.canonical_to_mermaid(_canonicals["Stock-Flow"]) _ctrl_canonical_mermaid = gds_viz.canonical_to_mermaid(_canonicals["Control"]) _game_canonical_mermaid = gds_viz.canonical_to_mermaid(_canonicals["Game Theory"]) view_dropdown = mo.ui.dropdown( options=["Stock-Flow", "Control", "Game Theory"], value="Stock-Flow", label="Select view:", ) _comparison_tabs = mo.ui.tabs( { "Stock-Flow Canonical": mo.mermaid(_sf_canonical_mermaid), "Control Canonical": mo.mermaid(_ctrl_canonical_mermaid), "Game Theory Canonical": mo.mermaid(_game_canonical_mermaid), } ) # Build comparison markdown from canonical data _rows = [] for _name, _c in _canonicals.items(): _x = len(_c.state_variables) _u = len(_c.input_ports) _g = len(_c.policy_blocks) _f = len(_c.mechanism_blocks) _char = "Dynamical" if _f > 0 else "Strategic" _form = _c.formula() _rows.append(f"| {_name} | {_x} | {_u} | {_g} | {_f} | {_form} | {_char} |") _comparison_md = mo.md( "### Comparison Data\n\n" "| View | |X| | |U| | |g| | |f| | Formula | Character |\n" "|------|-----|-----|-----|-----|---------|----------|\n" + "\n".join(_rows) ) mo.vstack( [ view_dropdown, _comparison_tabs, _comparison_md, ] ) return (view_dropdown,) @app.cell def _(mo, gds_viz, view_dropdown, sf_view, ctrl_view, game_view): _selected = view_dropdown.value if _selected == "Stock-Flow": _canonical = sf_view.build_canonical() _system = sf_view.build_system() _detail_structural = mo.mermaid(gds_viz.system_to_mermaid(_system)) _detail_canonical = mo.mermaid(gds_viz.canonical_to_mermaid(_canonical)) _detail_content = mo.vstack( [ mo.md(f"### {_selected} — Detail View"), mo.ui.tabs( { "Structural": _detail_structural, "Canonical": _detail_canonical, } ), ] ) elif _selected == "Control": _canonical = 
ctrl_view.build_canonical() _system = ctrl_view.build_system() _detail_structural = mo.mermaid(gds_viz.system_to_mermaid(_system)) _detail_canonical = mo.mermaid(gds_viz.canonical_to_mermaid(_canonical)) _detail_content = mo.vstack( [ mo.md(f"### {_selected} — Detail View"), mo.ui.tabs( { "Structural": _detail_structural, "Canonical": _detail_canonical, } ), ] ) else: _canonical = game_view.build_canonical() _detail_canonical = mo.mermaid(gds_viz.canonical_to_mermaid(_canonical)) _detail_content = mo.vstack( [ mo.md(f"### {_selected} — Detail View"), mo.md("*(Game theory has no SystemIR structural diagram.)*"), _detail_canonical, ] ) return (_detail_content,) @app.cell def _(): import marimo as mo return (mo,) @app.cell def _(): from gds_examples.rosetta import comparison, game_view from gds_examples.rosetta import control_view as ctrl_view from gds_examples.rosetta import stockflow_view as sf_view import gds_viz return comparison, ctrl_view, game_view, gds_viz, sf_view if __name__ == "__main__": app.run() ``` To run the notebook locally: ``` uv run marimo run packages/gds-examples/notebooks/rosetta.py ``` Run the test suite: ``` uv run --package gds-examples pytest packages/gds-examples/tests/test_rosetta.py -v ``` ## Source Files | File | Purpose | | -------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | | [`stockflow_view.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/rosetta/stockflow_view.py) | Stock-flow DSL model | | [`control_view.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/rosetta/control_view.py) | Control DSL model | | [`game_view.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/rosetta/game_view.py) | Game theory DSL model | | 
[`comparison.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/rosetta/comparison.py) | Cross-domain canonical comparison | | [`rosetta.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/notebooks/rosetta.py) | Interactive marimo notebook | # Verification Guide A hands-on walkthrough of the three verification layers in GDS, using deliberately broken models to demonstrate what each check catches and how to fix it. ## Three Verification Layers | Layer | Checks | Operates on | Catches | | ------------ | -------------- | ----------- | -------------------------- | | **Generic** | G-001..G-006 | `SystemIR` | Structural topology errors | | **Semantic** | SC-001..SC-009 | `GDSSpec` | Domain property violations | | **Domain** | SF-001..SF-005 | DSL model | DSL-specific errors | Each layer operates on a different representation, and the layers are complementary: a model can pass all generic checks but fail semantic checks (and vice versa). ### What Verification Does Not Cover All three layers check **structural consistency** — does the model obey the rules of its own declared categories? They do not check whether those categories were chosen well. A stock-flow model where "customer satisfaction" is declared as a Stock will pass every check — whether satisfaction actually accumulates like a stock is a judgment call that no formal check can answer. This is the boundary between **verification** (automated, structural) and **validation** (human, domain-specific). Verification asks: "Given the roles and state variables you declared, is the model internally consistent?" Validation asks: "Did you declare the right roles and state variables for this problem?" GDS owns the first question. The modeler owns the second. ______________________________________________________________________ ## Layer 1: Generic Checks (G-series) Generic checks operate on the compiled `SystemIR` -- the flat block graph with typed wirings. 
They verify **structural topology** independent of any domain semantics. ### G-004: Dangling Wirings A wiring references a block that does not exist in the system. ``` from gds.ir.models import BlockIR, FlowDirection, SystemIR, WiringIR system = SystemIR( name="Dangling Wiring Demo", blocks=[ BlockIR(name="A", signature=("", "Signal", "", "")), BlockIR(name="B", signature=("Signal", "", "", "")), ], wirings=[ WiringIR( source="Ghost", # does not exist! target="B", label="signal", direction=FlowDirection.COVARIANT, ), ], ) ``` ``` from gds.verification.engine import verify from gds.verification.generic_checks import check_g004_dangling_wirings report = verify(system, checks=[check_g004_dangling_wirings]) # -> G-004 FAIL: source 'Ghost' unknown ``` ### G-001/G-005: Type Mismatches Block A outputs `Temperature` but Block B expects `Pressure`. The wiring label does not match either side. ``` system = SystemIR( name="Type Mismatch Demo", blocks=[ BlockIR(name="A", signature=("", "Temperature", "", "")), BlockIR(name="B", signature=("Pressure", "", "", "")), ], wirings=[ WiringIR( source="A", target="B", label="humidity", # matches neither side direction=FlowDirection.COVARIANT, ), ], ) ``` - **G-001** flags: wiring label does not match source out or target in - **G-005** flags: type mismatch in sequential composition ### G-006: Covariant Cycles Three blocks form a cycle via non-temporal covariant wirings -- an algebraic loop that cannot be resolved within a single evaluation. 
``` system = SystemIR( name="Covariant Cycle Demo", blocks=[ BlockIR(name="A", signature=("Signal", "Signal", "", "")), BlockIR(name="B", signature=("Signal", "Signal", "", "")), BlockIR(name="C", signature=("Signal", "Signal", "", "")), ], wirings=[ WiringIR(source="A", target="B", label="signal", direction=FlowDirection.COVARIANT), WiringIR(source="B", target="C", label="signal", direction=FlowDirection.COVARIANT), WiringIR(source="C", target="A", label="signal", direction=FlowDirection.COVARIANT), ], ) # -> G-006 FAIL: covariant flow graph contains a cycle ``` ### G-003: Direction Contradictions A wiring marked COVARIANT but also `is_feedback=True` is a contradiction: feedback implies contravariant flow. ``` WiringIR( source="A", target="B", label="command", direction=FlowDirection.COVARIANT, is_feedback=True, # contradiction! ) # -> G-003 FAIL: COVARIANT + is_feedback contradiction ``` ### Fix and Re-verify The core workflow: build a broken model, run checks, inspect findings, fix errors, re-verify. ``` from gds.verification.engine import verify from gds_examples.verification.broken_models import ( dangling_wiring_system, fixed_pipeline_system, ) # Step 1: Broken model broken_report = verify(dangling_wiring_system()) # -> errors >= 1 # Step 2: Fixed model fixed_report = verify(fixed_pipeline_system()) # -> all checks pass, 0 errors ``` ______________________________________________________________________ ## Layer 2: Semantic Checks (SC-series) Semantic checks operate on `GDSSpec` -- the specification registry with types, entities, blocks, and parameters. They verify **domain properties** like completeness, determinism, and canonical well-formedness. ### SC-001: Orphan State Variables Entity `Reservoir` has variable `level` but no mechanism updates it. 
```
from gds.blocks.roles import Policy
from gds.spec import GDSSpec
from gds.state import Entity, StateVariable
from gds.types.interface import Interface, port
from gds.types.typedef import TypeDef

Count = TypeDef(name="Count", python_type=int, constraint=lambda x: x >= 0)

spec = GDSSpec(name="Orphan State Demo")
spec.register_type(Count)

reservoir = Entity(
    name="Reservoir",
    variables={
        "level": StateVariable(name="level", typedef=Count, symbol="L"),
    },
)
spec.register_entity(reservoir)

# A policy observes but no mechanism updates the reservoir
observe = Policy(
    name="Observe Level",
    interface=Interface(forward_out=(port("Level Signal"),)),
)
spec.register_block(observe)
```

```
from gds.verification.spec_checks import check_completeness

findings = check_completeness(spec)
# -> SC-001 WARNING: orphan state variable Reservoir.level
```

### SC-002: Write Conflicts

Two mechanisms registered in the same spec both update `Counter.value` -- a non-deterministic state transition.

```
from gds.blocks.roles import Mechanism

spec = GDSSpec(name="Write Conflict Demo")

inc = Mechanism(
    name="Increment Counter",
    interface=Interface(forward_in=(port("Delta Signal"),)),
    updates=[("Counter", "value")],
)
dec = Mechanism(
    name="Decrement Counter",
    interface=Interface(forward_in=(port("Delta Signal"),)),
    updates=[("Counter", "value")],  # same variable!
)
spec.register_block(inc)
spec.register_block(dec)
```

```
from gds.verification.spec_checks import check_determinism

findings = check_determinism(spec)
# -> SC-002 ERROR: write conflict -- Counter.value updated by two mechanisms
```

### SC-006/SC-007: Empty Canonical Form

A spec with no mechanisms and no entities -- empty state transition and empty state space.
``` spec = GDSSpec(name="Empty Canonical Demo") spec.register_block(Policy( name="Observer", interface=Interface( forward_in=(port("Input Signal"),), forward_out=(port("Output Signal"),), ), )) ``` ``` from gds.verification.spec_checks import check_canonical_wellformedness findings = check_canonical_wellformedness(spec) # -> SC-006 FAIL: no mechanisms found -- state transition f is empty # -> SC-007 FAIL: state space X is empty ``` ______________________________________________________________________ ## Layer 3: Domain Checks (SF-series) Domain checks operate on the **DSL model** before compilation. They catch errors that only make sense in the domain semantics -- for example, "orphan stock" is meaningless outside stock-flow. The StockFlow DSL provides SF-001..SF-005, running before GDS compilation for early feedback in domain-native terms. ### SF-001: Orphan Stocks A stock has no connected flows -- nothing fills or drains it. ``` from gds_domains.stockflow.dsl.elements import Flow, Stock from gds_domains.stockflow.dsl.model import StockFlowModel model = StockFlowModel( name="Orphan Stock Demo", stocks=[ Stock(name="Active"), Stock(name="Inventory"), # no flows! ], flows=[ Flow(name="Production", target="Active"), Flow(name="Consumption", source="Active"), ], ) ``` ``` from gds_domains.stockflow.verification.checks import check_sf001_orphan_stocks findings = check_sf001_orphan_stocks(model) # -> SF-001 WARNING: Stock 'Inventory' has no connected flows ``` ### SF-003: Auxiliary Cycles Circular dependency between auxiliaries -- Price depends on Demand, which depends on Price. 
``` from gds_domains.stockflow.dsl.elements import Auxiliary, Stock from gds_domains.stockflow.dsl.model import StockFlowModel model = StockFlowModel( name="Cyclic Auxiliary Demo", stocks=[Stock(name="Supply")], auxiliaries=[ Auxiliary(name="Price", inputs=["Demand"]), Auxiliary(name="Demand", inputs=["Price"]), ], ) ``` ``` from gds_domains.stockflow.verification.checks import check_sf003_auxiliary_acyclicity findings = check_sf003_auxiliary_acyclicity(model) # -> SF-003 ERROR: cycle detected in auxiliary dependency graph ``` ### SF-004: Unused Converters A converter is declared but no auxiliary reads from it. ``` from gds_domains.stockflow.dsl.elements import Auxiliary, Converter, Flow, Stock from gds_domains.stockflow.dsl.model import StockFlowModel model = StockFlowModel( name="Unused Converter Demo", stocks=[Stock(name="Revenue")], flows=[Flow(name="Income", target="Revenue")], auxiliaries=[Auxiliary(name="Growth", inputs=["Revenue"])], converters=[Converter(name="Tax Rate")], # unused! 
) ``` ``` from gds_domains.stockflow.verification.checks import check_sf004_converter_connectivity findings = check_sf004_converter_connectivity(model) # -> SF-004 WARNING: Converter 'Tax Rate' is NOT referenced by any auxiliary ``` ### Combined Domain + GDS Checks The StockFlow verification engine can run domain checks (SF) **and** generic GDS checks (G) together: ``` from gds_domains.stockflow.verification.engine import verify report = verify(model, include_gds_checks=True) sf_findings = [f for f in report.findings if f.check_id.startswith("SF-")] gds_findings = [f for f in report.findings if f.check_id.startswith("G-")] ``` ______________________________________________________________________ ## Check Reference ### Generic Checks (SystemIR) | ID | Name | Catches | | ----- | ------------------------ | ----------------------------- | | G-001 | Domain/codomain matching | Wiring label vs port mismatch | | G-002 | Signature completeness | Blocks with no ports | | G-003 | Direction consistency | COVARIANT + feedback flag | | G-004 | Dangling wirings | References to missing blocks | | G-005 | Sequential type compat | Mismatched `>>` port types | | G-006 | Covariant acyclicity | Algebraic loops | ### Semantic Checks (GDSSpec) | ID | Name | Catches | | ------ | ------------------ | ------------------------------------- | | SC-001 | Completeness | Orphan state variables | | SC-002 | Determinism | Write conflicts | | SC-003 | Reachability | Unreachable blocks | | SC-004 | Type safety | TypeDef violations | | SC-005 | Parameter refs | Unregistered params | | SC-006 | Canonical f | No mechanisms | | SC-007 | Canonical X | No state space | | SC-008 | Admissibility refs | Invalid boundary block or state deps | | SC-009 | Transition reads | Invalid mechanism reads or block deps | ### Domain Checks (StockFlowModel) | ID | Name | Catches | | ------ | ---------------------- | -------------------- | | SF-001 | Orphan stocks | Stocks with no flows | | SF-003 | Auxiliary cycles | 
Circular aux deps |
| SF-004 | Converter connectivity | Unused converters |

### API Pattern

```
from gds import compile_system
from gds.verification.engine import verify

system = compile_system(name="My Model", root=pipeline)
report = verify(system)

for finding in report.findings:
    print(f"{finding.check_id}: {finding.message}")
```

## Interactive Notebook

Source code for `packages/gds-examples/notebooks/verification.py`

Tip: paste this code into an empty cell, and the marimo editor will create cells for you

````
"""GDS Verification Guide — Interactive Marimo Notebook.

Explore the three layers of GDS verification by running checks on
deliberately broken models, inspecting findings, and watching the
fix-and-reverify workflow in action.

Run interactively:
    uv run marimo edit notebooks/verification.py

Run as read-only app:
    uv run marimo run notebooks/verification.py
"""

# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "gds-examples",
#     "marimo>=0.20.0",
# ]
# ///

import marimo

__generated_with = "0.20.2"
app = marimo.App(width="medium", app_title="GDS Verification Guide")

# ── Setup ────────────────────────────────────────────────────


@app.cell
def imports():
    import marimo as mo

    return (mo,)


@app.cell
def header(mo):
    mo.md(
        """
        # GDS Verification Guide

        GDS has **three layers** of verification checks, each operating on a
        different representation:

        | Layer | Checks | Operates on | Catches |
        |-------|--------|-------------|---------|
        | Generic | G-001..G-006 | `SystemIR` | Structural topology errors |
        | Semantic | SC-001..SC-009 | `GDSSpec` | Domain property violations |
        | Domain | SF-001..SF-005 | DSL model | DSL-specific errors |

        This notebook walks through each layer with deliberately broken
        models, showing what each check detects and how to fix it.
""" ) return () # ── Section 1: Generic Checks ─────────────────────────────── @app.cell def section_generic_header(mo): mo.md( """ --- ## Section 1: Generic Checks (G-series) Generic checks operate on the compiled `SystemIR` — the flat block graph with typed wirings. They verify **structural topology** independent of any domain semantics. Select a broken model below to see which check catches the error. """ ) return () @app.cell def generic_selector(mo): generic_dropdown = mo.ui.dropdown( options={ "G-004: Dangling Wiring": "dangling", "G-001/G-005: Type Mismatch": "type_mismatch", "G-006: Covariant Cycle": "cycle", "G-003: Direction Contradiction": "direction", "G-002: Incomplete Signature": "signature", }, value="G-004: Dangling Wiring", label="Broken model", ) return (generic_dropdown,) @app.cell def run_generic_check(mo, generic_dropdown): from gds_examples.verification.broken_models import ( covariant_cycle_system, dangling_wiring_system, direction_contradiction_system, incomplete_signature_system, type_mismatch_system, ) from gds.verification.engine import verify from gds.verification.generic_checks import ( check_g001_domain_codomain_matching, check_g002_signature_completeness, check_g003_direction_consistency, check_g004_dangling_wirings, check_g005_sequential_type_compatibility, check_g006_covariant_acyclicity, ) _models = { "dangling": ( dangling_wiring_system, [check_g004_dangling_wirings], "A wiring references block **'Ghost'** which doesn't exist. " "G-004 checks that every wiring source and target names a " "real block in the system.", ), "type_mismatch": ( type_mismatch_system, [ check_g001_domain_codomain_matching, check_g005_sequential_type_compatibility, ], "Block A outputs **'Temperature'** but Block B expects " "**'Pressure'**. The wiring label 'humidity' matches " "neither. 
G-001 checks label/port matching; G-005 checks " "sequential type compatibility.", ), "cycle": ( covariant_cycle_system, [check_g006_covariant_acyclicity], "Blocks A → B → C → A form a **covariant " "cycle** — an algebraic loop within a single timestep. " "G-006 checks that the covariant flow graph is acyclic.", ), "direction": ( direction_contradiction_system, [check_g003_direction_consistency], "A wiring is marked **COVARIANT** but also " "**is_feedback=True** — a contradiction, since feedback " "implies contravariant flow. G-003 catches this.", ), "signature": ( incomplete_signature_system, [check_g002_signature_completeness], "Block **'Orphan'** has a completely empty signature — " "no forward or backward ports at all. G-002 flags blocks " "with no inputs and no outputs.", ), } _model_fn, _checks, _description = _models[generic_dropdown.value] _system = _model_fn() _report = verify(_system, checks=_checks) _failures = [f for f in _report.findings if not f.passed] _passes = [f for f in _report.findings if f.passed] _findings_rows = [] for _f in _report.findings: _icon = "PASS" if _f.passed else "FAIL" _findings_rows.append( f"| {_icon} | {_f.check_id} | {_f.severity.name} | {_f.message} |" ) _findings_table = ( "| Status | Check | Severity | Message |\n" "|--------|-------|----------|---------|\n" + "\n".join(_findings_rows) ) mo.vstack( [ mo.md(f"### {_system.name}\n\n{_description}"), mo.md(f"**Results:** {len(_passes)} passed, {len(_failures)} failed"), mo.md(_findings_table), ] ) return () # ── Fix and Re-verify ──────────────────────────────────────── @app.cell def fix_reverify_header(mo): mo.md( """ --- ### Fix and Re-verify The core workflow: build a broken model, run checks, inspect findings, fix the errors, re-verify. Below we compare the dangling-wiring system against its fixed counterpart. 
""" ) return () @app.cell def fix_reverify_demo(mo): from gds_examples.verification.broken_models import ( dangling_wiring_system as _dangling_wiring_system, ) from gds_examples.verification.broken_models import ( fixed_pipeline_system, ) from gds.verification.engine import verify as _verify _broken_report = _verify(_dangling_wiring_system()) _fixed_report = _verify(fixed_pipeline_system()) _broken_failures = [f for f in _broken_report.findings if not f.passed] _fixed_failures = [f for f in _fixed_report.findings if not f.passed] mo.hstack( [ mo.vstack( [ mo.md( "**Broken** (dangling wiring)\n\n" f"- Checks: {_broken_report.checks_total}\n" f"- Errors: {_broken_report.errors}\n" f"- Passed: {_broken_report.checks_passed}" ), mo.md( "\n".join( f"- **{f.check_id}**: {f.message}" for f in _broken_failures ) or "*(no failures)*" ), ] ), mo.vstack( [ mo.md( "**Fixed** (clean pipeline)\n\n" f"- Checks: {_fixed_report.checks_total}\n" f"- Errors: {_fixed_report.errors}\n" f"- Passed: {_fixed_report.checks_passed}" ), mo.md( "\n".join( f"- **{f.check_id}**: {f.message}" for f in _fixed_failures ) or "All checks passed." ), ] ), ], widths="equal", ) return () # ── Section 2: Semantic Checks ────────────────────────────── @app.cell def section_semantic_header(mo): mo.md( """ --- ## Section 2: Semantic Checks (SC-series) Semantic checks operate on `GDSSpec` — the specification registry with types, entities, blocks, and parameters. They verify **domain properties** like completeness, determinism, and canonical well-formedness. These are complementary to generic checks: a model can pass all G-checks but fail SC-checks (and vice versa). 
""" ) return () @app.cell def semantic_selector(mo): semantic_dropdown = mo.ui.dropdown( options={ "SC-001: Orphan State Variable": "orphan", "SC-002: Write Conflict": "conflict", "SC-006/007: Empty Canonical": "empty", }, value="SC-001: Orphan State Variable", label="Broken spec", ) return (semantic_dropdown,) @app.cell def run_semantic_check(mo, semantic_dropdown): from gds_examples.verification.broken_models import ( empty_canonical_spec, orphan_state_spec, write_conflict_spec, ) from gds.verification.spec_checks import ( check_canonical_wellformedness, check_completeness, check_determinism, ) _specs = { "orphan": ( orphan_state_spec, check_completeness, "Entity **Reservoir** has variable `level` but no " "mechanism updates it. SC-001 flags this as a **WARNING** " "— the state variable is structurally orphaned.", ), "conflict": ( write_conflict_spec, check_determinism, "Two mechanisms both update `Counter.value` within the " "same wiring. SC-002 flags this as an **ERROR** — the " "state transition is non-deterministic.", ), "empty": ( empty_canonical_spec, check_canonical_wellformedness, "A spec with no mechanisms and no entities. 
SC-006 flags " "the empty state transition (f is empty); SC-007 flags " "the empty state space (X is empty).", ), } _spec_fn, _check_fn, _description = _specs[semantic_dropdown.value] _spec = _spec_fn() _findings = _check_fn(_spec) _failures = [f for f in _findings if not f.passed] _passes = [f for f in _findings if f.passed] _rows = [] for _f in _findings: _icon = "PASS" if _f.passed else "FAIL" _rows.append(f"| {_icon} | {_f.check_id} | {_f.severity.name} | {_f.message} |") _table = ( "| Status | Check | Severity | Message |\n" "|--------|-------|----------|---------|\n" + "\n".join(_rows) ) mo.vstack( [ mo.md(f"### {_spec.name}\n\n{_description}"), mo.md(f"**Results:** {len(_passes)} passed, {len(_failures)} failed"), mo.md(_table), ] ) return () # ── Generic vs Semantic Comparison ─────────────────────────── @app.cell def comparison_header(mo): mo.md( """ --- ### Generic vs Semantic: Complementary Layers A well-formed model must pass **both** layers. Below we run both on the fixed model to confirm they are complementary. """ ) return () @app.cell def comparison_demo(mo): from gds_examples.verification.verification_demo import ( demo_generic_vs_semantic, ) _results = demo_generic_vs_semantic() _generic = _results["generic"] _semantic = _results["semantic"] mo.hstack( [ mo.vstack( [ mo.md( "**Generic (G-series)**\n\n" f"- Total: {_generic.checks_total}\n" f"- Passed: {_generic.checks_passed}\n" f"- Errors: {_generic.errors}" ), ] ), mo.vstack( [ mo.md( "**Semantic (SC-series)**\n\n" f"- Total: {_semantic.checks_total}\n" f"- Passed: {_semantic.checks_passed}\n" f"- Errors: {_semantic.errors}" ), ] ), ], widths="equal", ) return () # ── Section 3: Domain Checks ──────────────────────────────── @app.cell def section_domain_header(mo): mo.md( """ --- ## Section 3: Domain Checks (SF-series) Domain checks operate on the **DSL model** before compilation. 
They catch errors that only make sense in the domain semantics — e.g., "orphan stock" is meaningless outside stock-flow. The StockFlow DSL provides SF-001..SF-005. These run before GDS compilation, giving early feedback in domain-native terms. """ ) return () @app.cell def domain_selector(mo): domain_dropdown = mo.ui.dropdown( options={ "SF-001: Orphan Stock": "orphan", "SF-003: Auxiliary Cycle": "cycle", "SF-004: Unused Converter": "converter", }, value="SF-001: Orphan Stock", label="Broken model", ) return (domain_dropdown,) @app.cell def run_domain_check(mo, domain_dropdown): from gds_examples.verification.domain_checks_demo import ( cyclic_auxiliary_model, orphan_stock_model, unused_converter_model, ) from gds_domains.stockflow.verification.checks import ( check_sf001_orphan_stocks, check_sf003_auxiliary_acyclicity, check_sf004_converter_connectivity, ) _models = { "orphan": ( orphan_stock_model, check_sf001_orphan_stocks, "Stock **'Inventory'** has no connected flows — nothing " "fills or drains it. SF-001 flags this as a WARNING.", ), "cycle": ( cyclic_auxiliary_model, check_sf003_auxiliary_acyclicity, "Auxiliary **'Price'** depends on 'Demand', which depends " "on 'Price' — a circular dependency. SF-003 flags this " "as an ERROR.", ), "converter": ( unused_converter_model, check_sf004_converter_connectivity, "Converter **'Tax Rate'** is declared but no auxiliary " "reads from it. 
SF-004 flags this as a WARNING.", ), } _model_fn, _check_fn, _description = _models[domain_dropdown.value] _model = _model_fn() _findings = _check_fn(_model) _failures = [f for f in _findings if not f.passed] _passes = [f for f in _findings if f.passed] _rows = [] for _f in _findings: _icon = "PASS" if _f.passed else "FAIL" _rows.append(f"| {_icon} | {_f.check_id} | {_f.severity.name} | {_f.message} |") _table = ( "| Status | Check | Severity | Message |\n" "|--------|-------|----------|---------|\n" + "\n".join(_rows) ) mo.vstack( [ mo.md(f"### {_model.name}\n\n{_description}"), mo.md(f"**Results:** {len(_passes)} passed, {len(_failures)} failed"), mo.md(_table), ] ) return () # ── Domain + GDS Combined ─────────────────────────────────── @app.cell def combined_header(mo): mo.md( """ --- ### Domain + GDS: Full Verification Stack The StockFlow verification engine can run domain checks (SF) **and** generic GDS checks (G) together. Toggle below to see how a good model passes both, while a broken model shows failures at the domain layer alongside GDS results. 
""" ) return () @app.cell def combined_selector(mo): combined_dropdown = mo.ui.dropdown( options={ "Good Model (Population)": "good", "Broken Model (Orphan Stock)": "broken", }, value="Good Model (Population)", label="Model", ) return (combined_dropdown,) @app.cell def run_combined(mo, combined_dropdown): from gds_examples.verification.domain_checks_demo import ( good_stockflow_model, ) from gds_examples.verification.domain_checks_demo import ( orphan_stock_model as _orphan_stock_model, ) from gds_domains.stockflow.verification.engine import verify as sf_verify if combined_dropdown.value == "good": _model = good_stockflow_model() else: _model = _orphan_stock_model() _report = sf_verify(_model, include_gds_checks=True) _sf = [f for f in _report.findings if f.check_id.startswith("SF-")] _gds = [f for f in _report.findings if f.check_id.startswith("G-")] _sf_fail = [f for f in _sf if not f.passed] _gds_fail = [f for f in _gds if not f.passed] mo.hstack( [ mo.vstack( [ mo.md( f"**Domain (SF)** — {len(_sf)} checks, {len(_sf_fail)} failures" ), mo.md( "\n".join( f"- {'FAIL' if not f.passed else 'PASS'} " f"**{f.check_id}**: {f.message}" for f in _sf ) or "*(no SF checks)*" ), ] ), mo.vstack( [ mo.md( f"**Generic (G)** — {len(_gds)} checks, " f"{len(_gds_fail)} failures" ), mo.md( "\n".join( f"- {'FAIL' if not f.passed else 'PASS'} " f"**{f.check_id}**: {f.message}" for f in _gds ) or "*(no G checks)*" ), ] ), ], widths="equal", ) return () # ── Reference ──────────────────────────────────────────────── @app.cell def reference(mo): mo.md( """ --- ## Check Reference ### Generic Checks (SystemIR) | ID | Name | Catches | |----|------|---------| | G-001 | Domain/codomain matching | Wiring label vs port mismatch | | G-002 | Signature completeness | Blocks with no ports | | G-003 | Direction consistency | COVARIANT + feedback flag | | G-004 | Dangling wirings | References to missing blocks | | G-005 | Sequential type compat | Mismatched >> port types | | G-006 | Covariant 
acyclicity | Algebraic loops | ### Semantic Checks (GDSSpec) | ID | Name | Catches | |----|------|---------| | SC-001 | Completeness | Orphan state variables | | SC-002 | Determinism | Write conflicts | | SC-003 | Reachability | Unreachable blocks | | SC-004 | Type safety | TypeDef violations | | SC-005 | Parameter refs | Unregistered params | | SC-006 | Canonical f | No mechanisms | | SC-007 | Canonical X | No state space | ### Domain Checks (StockFlowModel) | ID | Name | Catches | |----|------|---------| | SF-001 | Orphan stocks | Stocks with no flows | | SF-003 | Auxiliary cycles | Circular aux deps | | SF-004 | Converter connectivity | Unused converters | ### API Pattern ```python from gds.verification.engine import verify system = compile_system(name="My Model", root=pipeline) report = verify(system) for finding in report.findings: print(f"{finding.check_id}: {finding.message}") ``` """ ) return () if __name__ == "__main__": app.run() ````

To run the notebook locally:

```
uv run marimo run packages/gds-examples/notebooks/verification.py
```

Run the test suite:

```
uv run --package gds-examples pytest packages/gds-examples/tests/test_verification_guide.py -v
```

## Source Files

| File | Purpose |
| ---- | ------- |
| [`broken_models.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/verification/broken_models.py) | Deliberately broken models for each check |
| [`verification_demo.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/verification/verification_demo.py) | Generic and semantic check demos |
| [`domain_checks_demo.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/verification/domain_checks_demo.py) | StockFlow domain check demos |
| [`verification.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/notebooks/verification.py) | Interactive marimo notebook |

# Visualization Guide

A feature showcase for `gds-viz`, demonstrating all 6 view types, 5 built-in Mermaid themes, and cross-DSL visualization. Every diagram renders in GitHub, GitLab, VS Code, Obsidian, and mermaid.live.

## The 6 GDS Views

Every GDS model can be visualized from 6 complementary perspectives. Each view answers a different question about the system's structure.

| # | View | Input | Question Answered |
| --- | ---------------------- | -------------- | ----------------------------------------------- |
| 1 | Structural | `SystemIR` | What is the compiled block topology? |
| 2 | Canonical | `CanonicalGDS` | What is the formal h = f . g decomposition? |
| 3 | Architecture by Role | `GDSSpec` | How are blocks organized by GDS role? |
| 4 | Architecture by Domain | `GDSSpec` | How are blocks organized by domain ownership? |
| 5 | Parameter Influence | `GDSSpec` | If I change a parameter, what is affected? |
| 6 | Traceability | `GDSSpec` | What could cause this state variable to change? |

### View 1: Structural

The compiled block graph from `SystemIR`. Shows composition topology with role-based shapes and wiring types.

**Shape conventions:**

- Stadium `([...])` = BoundaryAction (exogenous input, no forward_in)
- Double-bracket `[[...]]` = terminal Mechanism (state sink, no forward_out)
- Rectangle `[...]` = Policy or other block with both inputs and outputs

**Arrow conventions:**

- Solid arrow `-->` = covariant forward flow
- Dashed arrow `-.->` = temporal loop (cross-boundary)
- Thick arrow `==>` = feedback (within-evaluation, contravariant)

**API:** `system_to_mermaid(system)`

### View 2: Canonical GDS

The mathematical decomposition: X_t --> U --> g --> f --> X_{t+1}. Shows the abstract dynamical system with state (X), input (U), policy (g), mechanism (f), and parameter space (Theta).
**API:** `canonical_to_mermaid(canonical)`

### View 3: Architecture by Role

Blocks grouped by GDS role: Boundary (U), Policy (g), Mechanism (f). Entity cylinders show which state variables each mechanism writes.

**API:** `spec_to_mermaid(spec)`

### View 4: Architecture by Domain

Blocks grouped by domain tag. Shows organizational ownership of blocks. Blocks without the tag go into "Ungrouped".

**API:** `spec_to_mermaid(spec, group_by="domain")`

### View 5: Parameter Influence

Theta --> blocks --> entities causal map. Answers: "if I change parameter X, which state variables are affected?" Shows parameter hexagons, the blocks they feed, and the entities those blocks transitively update.

**API:** `params_to_mermaid(spec)`

### View 6: Traceability

Backwards trace from one state variable. Answers: "what blocks and parameters could cause this variable to change?" Direct mechanisms get thick arrows, transitive dependencies get normal arrows, and parameter connections get dashed arrows.

**API:** `trace_to_mermaid(spec, entity, variable)`

### Generating All Views

```python
from gds.canonical import project_canonical
from gds_viz import (
    canonical_to_mermaid,
    params_to_mermaid,
    spec_to_mermaid,
    system_to_mermaid,
    trace_to_mermaid,
)

# From any model's build functions:
spec = build_spec()
system = build_system()
canonical = project_canonical(spec)

views = {
    "structural": system_to_mermaid(system),
    "canonical": canonical_to_mermaid(canonical),
    "architecture_by_role": spec_to_mermaid(spec),
    "architecture_by_domain": spec_to_mermaid(spec, group_by="domain"),
    "parameter_influence": params_to_mermaid(spec),
    "traceability": trace_to_mermaid(spec, "Susceptible", "count"),
}
```

______________________________________________________________________

## Theme Customization

Every `gds-viz` view function accepts a `theme=` parameter. There are **5 built-in Mermaid themes** that adjust node fills, strokes, text colors, and subgraph backgrounds.
| Theme | Best for |
| --------- | ----------------------------------------------- |
| `neutral` | Light backgrounds (GitHub, docs) -- **default** |
| `default` | Mermaid's blue-toned Material style |
| `dark` | Dark-mode renderers |
| `forest` | Green-tinted, earthy |
| `base` | Minimal chrome, very light |

### Usage

```python
from gds_viz import system_to_mermaid

# Apply any theme to any view
mermaid_str = system_to_mermaid(system, theme="dark")
```

### All Views Support Themes

Themes work with every view function:

```python
from gds_viz import (
    system_to_mermaid,
    canonical_to_mermaid,
    spec_to_mermaid,
    params_to_mermaid,
    trace_to_mermaid,
)

system_to_mermaid(system, theme="forest")
canonical_to_mermaid(canonical, theme="dark")
spec_to_mermaid(spec, theme="base")
params_to_mermaid(spec, theme="default")
trace_to_mermaid(spec, "Entity", "variable", theme="neutral")
```

### Neutral vs Dark Comparison

The two most common choices:

- **Neutral** (default): muted gray canvas with saturated node fills. Best for light-background rendering (GitHub, VS Code light mode, documentation sites).
- **Dark**: dark canvas with saturated fills and light text. Optimized for dark-mode renderers.

______________________________________________________________________

## Cross-DSL Views

The `gds-viz` API is **DSL-neutral** -- it operates on `GDSSpec` and `SystemIR`, which every compilation path produces. Regardless of how a model is built (raw GDS blocks, stockflow DSL, control DSL, or games DSL), the same view functions work unchanged.
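Because every view function returns a plain Mermaid string, getting a rendered view into a markdown document is the same one-step wrapping regardless of which DSL produced the model. A minimal sketch (the `mermaid_block` helper is illustrative, not part of `gds-viz`):

```python
def mermaid_block(diagram: str) -> str:
    """Wrap a raw Mermaid string in a fenced code block so that
    GitHub/GitLab markdown renders it as a diagram."""
    fence = "`" * 3 + "mermaid"
    return f"{fence}\n{diagram.strip()}\n" + "`" * 3

# A hand-written Mermaid string stands in here for the output of any
# view function, e.g. system_to_mermaid(system, theme="dark").
print(mermaid_block("graph TD\n  A --> B"))
```

Paste the result into any markdown file; in marimo, pass the raw string to `mo.mermaid()` instead, since it expects unfenced Mermaid text.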
### Example: Hand-Built vs DSL-Compiled

```python
from gds_viz import system_to_mermaid

# Hand-built model (SIR Epidemic)
from sir_epidemic.model import build_spec as sir_build_spec
from sir_epidemic.model import build_system as sir_build_system

sir_spec = sir_build_spec()
sir_system = sir_build_system()
sir_structural = system_to_mermaid(sir_system)

# DSL-compiled model (Double Integrator via gds-control)
from double_integrator.model import build_spec as di_build_spec
from double_integrator.model import build_system as di_build_system

di_spec = di_build_spec()
di_system = di_build_system()
di_structural = system_to_mermaid(di_system)

# Same API, same function, different models -- works identically
```

Both models decompose into the same `h = f . g` structure, but with different dimensionalities. The SIR model has parameters (Theta); the double integrator does not. The visualization layer does not care about the construction path -- it only sees the compiled IR.

### Supported DSL Sources

| Source | Path to GDSSpec | Path to SystemIR |
| ----------------------- | ------------------------------------------------------------- | ------------------------------------------------------- |
| Raw GDS | Manual `build_spec()` | `compile_system(name, root)` |
| gds-domains (stockflow) | `gds_domains.stockflow.dsl.compile.compile_model()` | `gds_domains.stockflow.dsl.compile.compile_to_system()` |
| gds-domains (control) | `gds_domains.control.dsl.compile.compile_model()` | `gds_domains.control.dsl.compile.compile_to_system()` |
| gds-domains (games) | `gds_domains.games.dsl.spec_bridge.compile_pattern_to_spec()` | via `PatternIR.to_system_ir()` |

______________________________________________________________________

## API Quick Reference

All functions live in `gds_viz` and return Mermaid strings.
| Function | Input | View |
| ------------------------------------- | -------------- | ------------ |
| `system_to_mermaid(system)` | `SystemIR` | Structural |
| `canonical_to_mermaid(canonical)` | `CanonicalGDS` | Canonical |
| `spec_to_mermaid(spec)` | `GDSSpec` | By role |
| `spec_to_mermaid(spec, group_by=...)` | `GDSSpec` | By domain |
| `params_to_mermaid(spec)` | `GDSSpec` | Parameters |
| `trace_to_mermaid(spec, ent, var)` | `GDSSpec` | Traceability |

All accept an optional `theme=` parameter: `"neutral"`, `"default"`, `"dark"`, `"forest"`, `"base"`.

### Usage Pattern

```python
from gds_viz import system_to_mermaid
from my_model import build_system

system = build_system()
mermaid_str = system_to_mermaid(system, theme="dark")
# Paste into GitHub markdown, mermaid.live, or mo.mermaid()
```

## Interactive Notebook

Source code for `packages/gds-examples/notebooks/visualization.py`

Tip: paste this code into an empty cell, and the marimo editor will create cells for you

```` """GDS Visualization Guide — Interactive Marimo Notebook. Explore all 6 gds-viz view types, 5 Mermaid themes, and cross-DSL visualization using interactive controls. Every diagram renders live as you change selections. Run interactively: uv run marimo edit notebooks/visualization.py Run as read-only app: uv run marimo run notebooks/visualization.py """ # /// script # requires-python = ">=3.12" # dependencies = [ # "gds-examples", # "marimo>=0.20.0", # ] # /// import marimo __generated_with = "0.20.2" app = marimo.App(width="medium", app_title="GDS Visualization Guide") # ── Setup ──────────────────────────────────────────────────── @app.cell def imports(): import marimo as mo return (mo,) @app.cell def header(mo): mo.md( """ # GDS Visualization Guide The `gds-viz` package provides **6 complementary views** of any GDS model. Each view answers a different question about the system's structure, from compiled topology to parameter traceability.
All views produce **Mermaid** diagrams that render in GitHub, GitLab, VS Code, Obsidian, and here in marimo via `mo.mermaid()`. This notebook is organized into three sections: 1. **All 6 Views** — explore every view type on the SIR Epidemic model 2. **Theme Customization** — see how the 5 built-in themes change the palette 3. **Cross-DSL Views** — same API works on hand-built and DSL-compiled models """ ) return () # ── Section 1: Build the SIR model ────────────────────────── @app.cell def build_sir(): import sys from pathlib import Path # Add stockflow/ and control/ to path for model imports _examples_root = Path(__file__).resolve().parent.parent for _subdir in ("stockflow", "control"): _path = str(_examples_root / _subdir) if _path not in sys.path: sys.path.insert(0, _path) from sir_epidemic.model import build_spec as _sir_build_spec from sir_epidemic.model import build_system as _sir_build_system from gds.canonical import project_canonical as _sir_project_canonical sir_spec = _sir_build_spec() sir_system = _sir_build_system() sir_canonical = _sir_project_canonical(sir_spec) return sir_canonical, sir_spec, sir_system # ── Section 2: All 6 Views ────────────────────────────────── @app.cell def section_all_views_header(mo): mo.md( """ --- ## Section 1: The 6 GDS Views Select a view from the dropdown to see it rendered live. Each view uses a different `gds-viz` function and shows a different perspective on the same SIR Epidemic model. """ ) return () @app.cell def view_selector(mo): view_dropdown = mo.ui.dropdown( options={ "Structural (SystemIR)": "structural", "Canonical GDS (h = f . 
g)": "canonical", "Architecture by Role": "role", "Architecture by Domain": "domain", "Parameter Influence": "params", "Traceability": "trace", }, value="Structural (SystemIR)", label="Select view", ) return (view_dropdown,) @app.cell def render_selected_view(mo, view_dropdown, sir_spec, sir_system, sir_canonical): from gds_viz import ( canonical_to_mermaid, params_to_mermaid, spec_to_mermaid, system_to_mermaid, trace_to_mermaid, ) _view_id = view_dropdown.value _descriptions = { "structural": ( "### View 1: Structural\n\n" "Compiled block graph from `SystemIR`. Shows composition " "topology with role-based shapes and wiring types.\n\n" "- **Stadium** `([...])` = BoundaryAction (exogenous input)\n" "- **Double-bracket** `[[...]]` = terminal Mechanism (state sink)\n" "- **Solid arrow** = forward covariant flow\n\n" "**API:** `system_to_mermaid(system)`" ), "canonical": ( "### View 2: Canonical GDS\n\n" "Mathematical decomposition: " "X_t → U → g → f → X_{t+1}.\n\n" "Shows the abstract dynamical system with state (X), " "input (U), policy (g), mechanism (f), and parameter " "space (Θ).\n\n" "**API:** `canonical_to_mermaid(canonical)`" ), "role": ( "### View 3: Architecture by Role\n\n" "Blocks grouped by GDS role: Boundary (U), Policy (g), " "Mechanism (f). Entity cylinders show which state variables " "each mechanism writes.\n\n" "**API:** `spec_to_mermaid(spec)`" ), "domain": ( "### View 4: Architecture by Domain\n\n" "Blocks grouped by domain tag (Observation, Decision, " "State Update). Shows organizational ownership.\n\n" "**API:** `spec_to_mermaid(spec, group_by='domain')`" ), "params": ( "### View 5: Parameter Influence\n\n" "Θ → blocks → entities causal map. " "Answers: *if I change parameter X, which state variables " "are affected?*\n\n" "**API:** `params_to_mermaid(spec)`" ), "trace": ( "### View 6: Traceability\n\n" "Traces `Susceptible.count` (S) backwards through the block " "graph. 
Answers: *what blocks and parameters could cause " "this variable to change?*\n\n" "**API:** `trace_to_mermaid(spec, entity, variable)`" ), } _mermaid_generators = { "structural": lambda: system_to_mermaid(sir_system), "canonical": lambda: canonical_to_mermaid(sir_canonical), "role": lambda: spec_to_mermaid(sir_spec), "domain": lambda: spec_to_mermaid(sir_spec, group_by="domain"), "params": lambda: params_to_mermaid(sir_spec), "trace": lambda: trace_to_mermaid(sir_spec, "Susceptible", "count"), } _mermaid_str = _mermaid_generators[_view_id]() mo.vstack( [ mo.md(_descriptions[_view_id]), mo.mermaid(_mermaid_str), ] ) return () # ── All 6 views at once (tabs) ────────────────────────────── @app.cell def all_views_tabs_header(mo): mo.md( """ --- ### All 6 Views at a Glance Use the tabs below to quickly compare all views side-by-side. """ ) return () @app.cell def all_views_tabs(mo, sir_spec, sir_system, sir_canonical): from gds_viz import ( canonical_to_mermaid as _canonical_to_mermaid, ) from gds_viz import ( params_to_mermaid as _params_to_mermaid, ) from gds_viz import ( spec_to_mermaid as _spec_to_mermaid, ) from gds_viz import ( system_to_mermaid as _system_to_mermaid, ) from gds_viz import ( trace_to_mermaid as _trace_to_mermaid, ) _tabs = mo.ui.tabs( { "1. Structural": mo.mermaid(_system_to_mermaid(sir_system)), "2. Canonical": mo.mermaid(_canonical_to_mermaid(sir_canonical)), "3. By Role": mo.mermaid(_spec_to_mermaid(sir_spec)), "4. By Domain": mo.mermaid(_spec_to_mermaid(sir_spec, group_by="domain")), "5. Parameters": mo.mermaid(_params_to_mermaid(sir_spec)), "6. Traceability": mo.mermaid( _trace_to_mermaid(sir_spec, "Susceptible", "count") ), } ) return (_tabs,) # ── Section 3: Theme Customization ────────────────────────── @app.cell def section_themes_header(mo): mo.md( """ --- ## Section 2: Theme Customization Every `gds-viz` view function accepts a `theme=` parameter. 
There are **5 built-in Mermaid themes** — select one below to see how it changes the palette. Themes affect node fills, strokes, text colors, and subgraph backgrounds. Choose based on your rendering context: | Theme | Best for | |-------|----------| | `neutral` | Light backgrounds (GitHub, docs) | | `default` | Mermaid's blue-toned Material style | | `dark` | Dark-mode renderers | | `forest` | Green-tinted, earthy | | `base` | Minimal chrome, very light | """ ) return () @app.cell def theme_controls(mo): theme_dropdown = mo.ui.dropdown( options=["neutral", "default", "dark", "forest", "base"], value="neutral", label="Theme", ) theme_view_dropdown = mo.ui.dropdown( options={ "Structural": "structural", "Architecture by Role": "role", }, value="Structural", label="View", ) mo.hstack([theme_dropdown, theme_view_dropdown], justify="start", gap=1) return theme_dropdown, theme_view_dropdown @app.cell def render_themed_view(mo, theme_dropdown, theme_view_dropdown, sir_spec, sir_system): from gds_viz import spec_to_mermaid as _spec_to_mermaid from gds_viz import system_to_mermaid as _system_to_mermaid _theme = theme_dropdown.value _view = theme_view_dropdown.value if _view == "structural": _mermaid = _system_to_mermaid(sir_system, theme=_theme) else: _mermaid = _spec_to_mermaid(sir_spec, theme=_theme) mo.vstack( [ mo.md(f"**Theme: `{_theme}`** | **View: {_view}**"), mo.mermaid(_mermaid), ] ) return () @app.cell def theme_comparison_header(mo): mo.md( """ ### Side-by-Side: Neutral vs Dark The two most common choices compared on the structural view. 
""" ) return () @app.cell def theme_side_by_side(mo, sir_system): from gds_viz import system_to_mermaid as _system_to_mermaid _neutral = _system_to_mermaid(sir_system, theme="neutral") _dark = _system_to_mermaid(sir_system, theme="dark") mo.hstack( [ mo.vstack([mo.md("**Neutral**"), mo.mermaid(_neutral)]), mo.vstack([mo.md("**Dark**"), mo.mermaid(_dark)]), ], widths="equal", ) return () # ── Section 4: Cross-DSL Views ────────────────────────────── @app.cell def section_cross_dsl_header(mo): mo.md( """ --- ## Section 3: Cross-DSL Views The `gds-viz` API is **DSL-neutral** — it operates on `GDSSpec` and `SystemIR`, which every compilation path produces. Compare the **SIR Epidemic** (hand-built with GDS primitives) against the **Double Integrator** (built via the `gds-control` DSL). The same view functions work unchanged on both. """ ) return () @app.cell def build_double_integrator(): from double_integrator.model import build_spec as _di_build_spec from double_integrator.model import build_system as _di_build_system from gds.canonical import project_canonical as _di_project_canonical di_spec = _di_build_spec() di_system = _di_build_system() di_canonical = _di_project_canonical(di_spec) return di_canonical, di_spec, di_system @app.cell def cross_dsl_controls(mo): model_dropdown = mo.ui.dropdown( options={ "SIR Epidemic (hand-built)": "sir", "Double Integrator (control DSL)": "di", }, value="SIR Epidemic (hand-built)", label="Model", ) cross_view_dropdown = mo.ui.dropdown( options={ "Structural": "structural", "Canonical": "canonical", "Architecture by Role": "role", "Parameter Influence": "params", "Traceability": "trace", }, value="Structural", label="View", ) mo.hstack([model_dropdown, cross_view_dropdown], justify="start", gap=1) return cross_view_dropdown, model_dropdown @app.cell def render_cross_dsl_view( mo, model_dropdown, cross_view_dropdown, sir_spec, sir_system, sir_canonical, di_spec, di_system, di_canonical, ): from gds_viz import ( canonical_to_mermaid 
as _canonical_to_mermaid, ) from gds_viz import ( params_to_mermaid as _params_to_mermaid, ) from gds_viz import ( spec_to_mermaid as _spec_to_mermaid, ) from gds_viz import ( system_to_mermaid as _system_to_mermaid, ) from gds_viz import ( trace_to_mermaid as _trace_to_mermaid, ) _model = model_dropdown.value _view = cross_view_dropdown.value if _model == "sir": _spec, _system, _canonical = sir_spec, sir_system, sir_canonical _trace_entity, _trace_var = "Susceptible", "count" _label = "SIR Epidemic" else: _spec, _system, _canonical = di_spec, di_system, di_canonical _trace_entity, _trace_var = "position", "value" _label = "Double Integrator" _generators = { "structural": lambda: _system_to_mermaid(_system), "canonical": lambda: _canonical_to_mermaid(_canonical), "role": lambda: _spec_to_mermaid(_spec), "params": lambda: _params_to_mermaid(_spec), "trace": lambda: _trace_to_mermaid(_spec, _trace_entity, _trace_var), } mo.vstack( [ mo.md(f"**{_label}** | **{_view}**"), mo.mermaid(_generators[_view]()), ] ) return () # ── Canonical Comparison ───────────────────────────────────── @app.cell def canonical_comparison_header(mo): mo.md( """ ### Canonical Comparison Both models decompose into the same `h = f . g` structure, but with different dimensionalities. The SIR model has parameters (Θ); the double integrator does not. """ ) return () @app.cell def canonical_comparison(mo, sir_canonical, di_canonical): from gds_viz import canonical_to_mermaid as _canonical_to_mermaid mo.hstack( [ mo.vstack( [ mo.md(f"**SIR Epidemic** — `{sir_canonical.formula()}`"), mo.mermaid(_canonical_to_mermaid(sir_canonical)), ] ), mo.vstack( [ mo.md(f"**Double Integrator** — `{di_canonical.formula()}`"), mo.mermaid(_canonical_to_mermaid(di_canonical)), ] ), ], widths="equal", ) return () # ── API Reference ──────────────────────────────────────────── @app.cell def api_reference(mo): mo.md( """ --- ## API Quick Reference All functions live in `gds_viz` and return Mermaid strings. 
| Function | Input | View | |----------|-------|------| | `system_to_mermaid(system)` | `SystemIR` | Structural | | `canonical_to_mermaid(canonical)` | `CanonicalGDS` | Canonical | | `spec_to_mermaid(spec)` | `GDSSpec` | By role | | `spec_to_mermaid(spec, group_by=...)` | `GDSSpec` | By domain | | `params_to_mermaid(spec)` | `GDSSpec` | Parameters | | `trace_to_mermaid(spec, ent, var)` | `GDSSpec` | Traceability | All accept an optional `theme=` parameter: `"neutral"`, `"default"`, `"dark"`, `"forest"`, `"base"`. ### Usage Pattern ```python from gds_viz import system_to_mermaid from my_model import build_system system = build_system() mermaid_str = system_to_mermaid(system, theme="dark") # Paste into GitHub markdown, mermaid.live, or mo.mermaid() ``` """ ) return () if __name__ == "__main__": app.run() ```` To run the notebook locally: ``` uv run marimo run packages/gds-examples/notebooks/visualization.py ``` Run the test suite: ``` uv run --package gds-examples pytest packages/gds-examples/tests/test_visualization_guide.py -v ``` ## Source Files | File | Purpose | | ------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------- | | [`all_views_demo.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/visualization/all_views_demo.py) | All 6 view types on the SIR model | | [`theme_customization.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/visualization/theme_customization.py) | 5 built-in theme demos | | [`cross_dsl_views.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/gds_examples/visualization/cross_dsl_views.py) | Cross-DSL visualization comparison | | [`visualization.py`](https://github.com/BlockScience/gds-core/blob/main/packages/gds-examples/notebooks/visualization.py) | Interactive marimo notebook | # Real-World Patterns 
GDS examples like SIR epidemic and thermostat demonstrate the theory well, but production systems need different patterns. This guide shows how to model common software engineering concerns — data pipelines, state machines, human-in-the-loop workflows, and large type systems — using the GDS composition algebra. Every example below is complete and runnable. Import, build, verify. ______________________________________________________________________ ## Pattern 1: Data Pipeline (ETL) **Use case:** An ingestion service reads raw records from an external source, validates and transforms them, then writes clean data to a store. This is the bread and butter of backend systems — ETL, event processing, message consumers. **GDS mapping:** - `BoundaryAction` — the data arrives from outside the system (an API, a queue, a file) - `Policy` — validation and transformation logic (pure decision: accept, reject, reshape) - `Mechanism` — the only thing that writes state (the persisted dataset) ``` from gds import ( BoundaryAction, GDSSpec, Mechanism, Policy, SpecWiring, Wire, compile_system, entity, interface, space, state_var, typedef, verify, ) # ── Types ────────────────────────────────────────────────── # TypeDefs with constraints enforce data quality at the type level. # These aren't just labels — check_value() validates real data. RawPayload = typedef("RawPayload", str, description="Unvalidated JSON string") CleanRecord = typedef( "CleanRecord", dict, constraint=lambda x: "id" in x and "amount" in x, description="Validated record with required fields", ) RecordCount = typedef( "RecordCount", int, constraint=lambda x: x >= 0, description="Running count of ingested records", ) # ── Entity ───────────────────────────────────────────────── # State that persists across pipeline runs. The dataset is the # accumulator — every successful record increments the count. 
dataset = entity( "Dataset", record_count=state_var(RecordCount, symbol="N"), ) # ── Spaces ───────────────────────────────────────────────── # Transient signals flowing through the pipeline within one run. raw_space = space("RawIngestion", payload=RawPayload) write_space = space("WriteCommand", record=CleanRecord) # ── Blocks ───────────────────────────────────────────────── ingest = BoundaryAction( name="Data Ingest", interface=interface(forward_out=["Raw Ingestion"]), ) validate_transform = Policy( name="Validate Transform", interface=interface( forward_in=["Raw Ingestion"], forward_out=["Write Command"], ), params_used=["schema_version"], ) persist_record = Mechanism( name="Persist Record", interface=interface(forward_in=["Write Command"]), updates=[("Dataset", "record_count")], ) # ── Spec ─────────────────────────────────────────────────── def build_etl_spec() -> GDSSpec: spec = GDSSpec(name="ETL Pipeline", description="Ingest → validate → persist") spec.collect( RawPayload, CleanRecord, RecordCount, raw_space, write_space, dataset, ingest, validate_transform, persist_record, ) spec.register_parameter("schema_version", typedef("SchemaVersion", int)) spec.register_wiring(SpecWiring( name="Ingestion Flow", block_names=["Data Ingest", "Validate Transform", "Persist Record"], wires=[ Wire(source="Data Ingest", target="Validate Transform", space="RawIngestion"), Wire(source="Validate Transform", target="Persist Record", space="WriteCommand"), ], )) errors = spec.validate_spec() assert errors == [], f"Spec validation failed: {errors}" return spec # ── System ───────────────────────────────────────────────── def build_etl_system(): pipeline = ingest >> validate_transform >> persist_record return compile_system(name="ETL Pipeline", root=pipeline) ``` **Why this decomposition matters:** The Policy block contains all the validation logic but has no access to state. It cannot write to the dataset — only the Mechanism can. 
This separation is exactly what you want in a data pipeline: the transform step is pure, testable, and replaceable. If your validation rules change, only the Policy changes. The ingestion source (BoundaryAction) and the persistence logic (Mechanism) are stable. ______________________________________________________________________ ## Pattern 2: State Machine **Use case:** An order progresses through a lifecycle: `PENDING` → `APPROVED` or `REJECTED`. Only valid transitions are allowed. This pattern appears in every workflow engine, approval system, and document lifecycle. **GDS mapping:** - `TypeDef` with a constraint — the status enum is a constrained string type - `Policy` — encodes the transition table (which transitions are valid) - `Mechanism` — applies the validated transition to state ``` from gds import ( BoundaryAction, GDSSpec, Mechanism, Policy, SpecWiring, Wire, compile_system, entity, interface, space, state_var, typedef, verify, ) # ── Types ────────────────────────────────────────────────── # The status type constrains values to a known set. The # constraint lambda replaces a traditional enum check. VALID_STATUSES = {"PENDING", "APPROVED", "REJECTED", "CANCELLED"} OrderStatus = typedef( "OrderStatus", str, constraint=lambda x: x in VALID_STATUSES, description="Order lifecycle status", ) OrderID = typedef("OrderID", str, description="Unique order identifier") TransitionRequest = typedef( "TransitionRequest", str, constraint=lambda x: x in VALID_STATUSES, description="Requested target status", ) TransitionValid = typedef( "TransitionValid", bool, description="Whether the requested transition is allowed", ) # ── Transition table ─────────────────────────────────────── # This is the business logic — which status transitions are # allowed. Defined as data, referenced by the Policy block # via params_used. 
ALLOWED_TRANSITIONS: dict[str, set[str]] = { "PENDING": {"APPROVED", "REJECTED", "CANCELLED"}, "APPROVED": {"CANCELLED"}, "REJECTED": set(), "CANCELLED": set(), } # ── Entity ───────────────────────────────────────────────── order = entity( "Order", status=state_var(OrderStatus, symbol="S"), ) # ── Spaces ───────────────────────────────────────────────── request_space = space("TransitionRequest", order_id=OrderID, target_status=TransitionRequest) decision_space = space("TransitionDecision", order_id=OrderID, new_status=OrderStatus, valid=TransitionValid) # ── Blocks ───────────────────────────────────────────────── receive_request = BoundaryAction( name="Receive Transition Request", interface=interface(forward_out=["Transition Request"]), ) # The Policy block is where the transition table lives. It # reads the requested transition and the current status, # then decides whether to allow it. No state mutation here. validate_transition = Policy( name="Validate Transition", interface=interface( forward_in=["Transition Request"], forward_out=["Transition Decision"], ), params_used=["transition_rules"], ) apply_transition = Mechanism( name="Apply Transition", interface=interface(forward_in=["Transition Decision"]), updates=[("Order", "status")], ) # ── Spec ─────────────────────────────────────────────────── def build_state_machine_spec() -> GDSSpec: spec = GDSSpec( name="Order State Machine", description="Status lifecycle with validated transitions", ) spec.collect( OrderStatus, OrderID, TransitionRequest, TransitionValid, request_space, decision_space, order, receive_request, validate_transition, apply_transition, ) # The transition rules are a parameter — they configure # the Policy but don't change the structural decomposition. 
TransitionRules = typedef("TransitionRules", dict, description="Allowed transition map") spec.register_parameter("transition_rules", TransitionRules) spec.register_wiring(SpecWiring( name="Order Lifecycle", block_names=[ "Receive Transition Request", "Validate Transition", "Apply Transition", ], wires=[ Wire(source="Receive Transition Request", target="Validate Transition", space="TransitionRequest"), Wire(source="Validate Transition", target="Apply Transition", space="TransitionDecision"), ], )) errors = spec.validate_spec() assert errors == [], f"Spec validation failed: {errors}" return spec # ── System ───────────────────────────────────────────────── def build_state_machine_system(): pipeline = receive_request >> validate_transition >> apply_transition return compile_system(name="Order State Machine", root=pipeline) ``` **Design insight:** The transition table is a parameter, not hardcoded into the Policy. This means you can analyze the spec structurally — "which blocks depend on transition_rules?" — without executing the validation logic. The Mechanism only applies transitions that the Policy has already validated. State mutation is always guarded by a decision layer. ______________________________________________________________________ ## Pattern 3: Human-in-the-Loop **Use case:** A document requires human approval before it can be published. A reviewer reads the document, decides to approve or reject, and the system applies the decision. This pattern covers approval workflows, content moderation, manual QA gates. **GDS mapping:** - `BoundaryAction` — the human decision enters from outside the system boundary - `Policy` — business rules that validate the approval (e.g., reviewer has authority) - `Mechanism` — applies the approved decision to state The key insight: the human reviewer is **outside the system boundary**. GDS models this naturally with `BoundaryAction` — the system does not control or predict what the human decides. 
``` from gds import ( BoundaryAction, GDSSpec, Mechanism, Policy, SpecWiring, Wire, compile_system, entity, interface, space, state_var, typedef, verify, ) # ── Types ────────────────────────────────────────────────── REVIEW_DECISIONS = {"APPROVE", "REJECT", "REQUEST_CHANGES"} ReviewDecision = typedef( "ReviewDecision", str, constraint=lambda x: x in REVIEW_DECISIONS, description="Human reviewer's decision", ) DOC_STATUSES = {"DRAFT", "IN_REVIEW", "APPROVED", "REJECTED", "CHANGES_REQUESTED"} DocumentStatus = typedef( "DocumentStatus", str, constraint=lambda x: x in DOC_STATUSES, description="Document lifecycle status", ) ReviewerID = typedef("ReviewerID", str, description="Reviewer identifier") DocumentID = typedef("DocumentID", str, description="Document identifier") ReviewNotes = typedef("ReviewNotes", str, description="Free-text reviewer comments") # ── Entity ───────────────────────────────────────────────── document = entity( "Document", status=state_var(DocumentStatus, symbol="S"), ) # ── Spaces ───────────────────────────────────────────────── # The human input space — what crosses the system boundary. review_input_space = space( "ReviewInput", reviewer=ReviewerID, document=DocumentID, decision=ReviewDecision, notes=ReviewNotes, ) # Internal decision space — after business rule validation. validated_decision_space = space( "ValidatedDecision", document=DocumentID, new_status=DocumentStatus, ) # ── Blocks ───────────────────────────────────────────────── # BoundaryAction: the human reviewer. GDS treats this as an # exogenous input — the system cannot predict or control # what the reviewer decides. This is the right abstraction: # the boundary separates "things we model" from "things we # accept as given." human_review = BoundaryAction( name="Human Review", interface=interface(forward_out=["Review Input"]), options=["APPROVE", "REJECT", "REQUEST_CHANGES"], ) # Policy: validates the review against business rules. # Does the reviewer have authority? 
Is the document in a # reviewable state? The Policy answers these questions # without touching state. validate_review = Policy( name="Validate Review", interface=interface( forward_in=["Review Input"], forward_out=["Validated Decision"], ), params_used=["required_reviewer_role", "min_review_time"], ) # Mechanism: the only block that writes state. Applies the # validated decision to the document's status. apply_review = Mechanism( name="Apply Review Decision", interface=interface(forward_in=["Validated Decision"]), updates=[("Document", "status")], ) # ── Spec ─────────────────────────────────────────────────── def build_review_spec() -> GDSSpec: spec = GDSSpec( name="Document Review", description="Human-in-the-loop approval workflow", ) spec.collect( ReviewDecision, DocumentStatus, ReviewerID, DocumentID, ReviewNotes, review_input_space, validated_decision_space, document, human_review, validate_review, apply_review, ) RoleType = typedef("RoleType", str, description="Required reviewer role") DurationType = typedef("DurationType", int, constraint=lambda x: x >= 0, description="Minimum review time in seconds") spec.register_parameter("required_reviewer_role", RoleType) spec.register_parameter("min_review_time", DurationType) spec.register_wiring(SpecWiring( name="Review Pipeline", block_names=["Human Review", "Validate Review", "Apply Review Decision"], wires=[ Wire(source="Human Review", target="Validate Review", space="ReviewInput"), Wire(source="Validate Review", target="Apply Review Decision", space="ValidatedDecision"), ], )) errors = spec.validate_spec() assert errors == [], f"Spec validation failed: {errors}" return spec # ── System ───────────────────────────────────────────────── def build_review_system(): pipeline = human_review >> validate_review >> apply_review return compile_system(name="Document Review", root=pipeline) ``` **Why BoundaryAction for humans:** The GDS boundary is not just a modeling convenience — it is a formal claim about what the system 
controls. By placing the human reviewer outside the boundary, you state that the system does not model human cognition, bias, or decision-making. It only models what happens once a decision arrives. This is the correct abstraction for any system that interacts with humans, external APIs, or third-party services. ______________________________________________________________________ ## Pattern 4: Large Enum Types **Use case:** A system classifies items into one of many categories — product types, geographic regions, compliance codes. With dozens or hundreds of valid values, you need a pattern that keeps the type system manageable. **GDS mapping:** - `TypeDef` with `constraint=lambda x: x in VALID_SET` — validates membership in a set - Organize large enum sets as module-level constants grouped by domain - Reference the same TypeDef across multiple spaces and entities ``` from gds import ( BoundaryAction, GDSSpec, Mechanism, Policy, SpecWiring, Wire, compile_system, entity, interface, space, state_var, typedef, verify, ) # ── Organizing large value sets ──────────────────────────── # Group valid values by domain. These are plain Python sets — # GDS doesn't prescribe how you organize them, but convention # matters when you have 50+ categories. # Product categories — hierarchical naming keeps things navigable ELECTRONICS = {"LAPTOP", "PHONE", "TABLET", "MONITOR", "KEYBOARD", "MOUSE"} CLOTHING = {"SHIRT", "PANTS", "JACKET", "SHOES", "HAT", "SCARF"} FURNITURE = {"DESK", "CHAIR", "BOOKSHELF", "TABLE", "CABINET"} FOOD = {"PRODUCE", "DAIRY", "MEAT", "BAKERY", "FROZEN", "CANNED"} ALL_CATEGORIES = ELECTRONICS | CLOTHING | FURNITURE | FOOD # Warehouse zones ZONES = {"ZONE_A", "ZONE_B", "ZONE_C", "ZONE_D", "ZONE_E", "COLD_STORAGE", "OVERSIZED"} # Priority levels PRIORITIES = {"CRITICAL", "HIGH", "MEDIUM", "LOW", "DEFERRED"} # ── Types ────────────────────────────────────────────────── # Each typedef validates against its set. 
The constraint is a # membership check — O(1) for sets. ProductCategory = typedef( "ProductCategory", str, constraint=lambda x: x in ALL_CATEGORIES, description=f"Product category ({len(ALL_CATEGORIES)} valid values)", ) WarehouseZone = typedef( "WarehouseZone", str, constraint=lambda x: x in ZONES, description="Physical warehouse zone", ) Priority = typedef( "Priority", str, constraint=lambda x: x in PRIORITIES, description="Processing priority level", ) ItemID = typedef("ItemID", str, description="SKU or item identifier") Quantity = typedef( "Quantity", int, constraint=lambda x: x > 0, description="Positive item quantity", ) # ── Derived type: zone-category mapping ──────────────────── # Sometimes you need a compound constraint — "this category # belongs in this zone." Define the mapping as data and # reference it from a space or a Policy parameter. ZONE_ASSIGNMENT: dict[str, str] = { **{cat: "ZONE_A" for cat in ELECTRONICS}, **{cat: "ZONE_B" for cat in CLOTHING}, **{cat: "ZONE_C" for cat in FURNITURE}, **{cat: "COLD_STORAGE" for cat in {"PRODUCE", "DAIRY", "MEAT", "FROZEN"}}, **{cat: "ZONE_D" for cat in {"BAKERY", "CANNED"}}, } # ── Entity ───────────────────────────────────────────────── inventory_item = entity( "InventoryItem", category=state_var(ProductCategory, symbol="C"), zone=state_var(WarehouseZone, symbol="Z"), ) # ── Spaces ───────────────────────────────────────────────── intake_space = space( "ItemIntake", item_id=ItemID, category=ProductCategory, quantity=Quantity, ) classification_space = space( "ClassificationResult", item_id=ItemID, category=ProductCategory, zone=WarehouseZone, priority=Priority, ) # ── Blocks ───────────────────────────────────────────────── receive_item = BoundaryAction( name="Receive Item", interface=interface(forward_out=["Item Intake"]), ) classify_and_route = Policy( name="Classify And Route", interface=interface( forward_in=["Item Intake"], forward_out=["Classification Result"], ), params_used=["zone_assignment_rules", 
"priority_rules"], ) update_inventory = Mechanism( name="Update Inventory", interface=interface(forward_in=["Classification Result"]), updates=[("InventoryItem", "category"), ("InventoryItem", "zone")], ) # ── Spec ─────────────────────────────────────────────────── def build_inventory_spec() -> GDSSpec: spec = GDSSpec( name="Inventory Classification", description="Categorize and route items to warehouse zones", ) spec.collect( ProductCategory, WarehouseZone, Priority, ItemID, Quantity, intake_space, classification_space, inventory_item, receive_item, classify_and_route, update_inventory, ) ZoneRules = typedef("ZoneRules", dict, description="Category-to-zone mapping") PriorityRules = typedef("PriorityRules", dict, description="Priority assignment rules") spec.register_parameter("zone_assignment_rules", ZoneRules) spec.register_parameter("priority_rules", PriorityRules) spec.register_wiring(SpecWiring( name="Intake Pipeline", block_names=["Receive Item", "Classify And Route", "Update Inventory"], wires=[ Wire(source="Receive Item", target="Classify And Route", space="ItemIntake"), Wire(source="Classify And Route", target="Update Inventory", space="ClassificationResult"), ], )) errors = spec.validate_spec() assert errors == [], f"Spec validation failed: {errors}" return spec # ── System ───────────────────────────────────────────────── def build_inventory_system(): pipeline = receive_item >> classify_and_route >> update_inventory return compile_system(name="Inventory Classification", root=pipeline) ``` **Patterns for scaling type systems:** 1. **Group constants by domain** — `ELECTRONICS`, `CLOTHING`, etc. Union them into `ALL_CATEGORIES` for the typedef constraint. 1. **Keep constraints as set membership** — `lambda x: x in VALID_SET` is O(1) and readable. 1. **Separate mapping data from types** — `ZONE_ASSIGNMENT` maps categories to zones but is not itself a TypeDef. It is a parameter value that the Policy block references. 1. 
**One TypeDef per semantic role** — even if `ProductCategory` and `WarehouseZone` are both constrained strings, they are different types because they mean different things. This lets GDS type-check spaces: a space field typed as `WarehouseZone` will not accept a raw `ProductCategory` value. ______________________________________________________________________ ## Running verification Every pattern above produces a valid spec and a compilable system. You can verify both layers: ``` # Spec-level validation (structural) spec = build_etl_spec() errors = spec.validate_spec() assert errors == [] # System-level verification (generic checks on compiled IR) system = build_etl_system() report = verify(system) for finding in report.findings: print(f"[{finding.severity}] {finding.check_id}: {finding.message}") ``` The spec-level `validate_spec()` catches registration errors: unregistered types in spaces, missing blocks in wirings, mechanisms updating nonexistent entities, and unregistered parameter references. The system-level `verify()` runs the six generic checks (G-001 through G-006) on the compiled IR, catching structural issues like domain/codomain mismatches, direction inconsistencies, dangling wirings, and cycles. 
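The constraint lambdas that back these TypeDefs are ordinary Python predicates, so the data-quality rules can be sanity-checked in isolation before they are wired into a spec. A minimal plain-Python sketch (no GDS imports; `check` is an illustrative helper, not a framework API):

```
# Illustrative stand-ins for the constraint lambdas defined in the patterns above.
VALID_STATUSES = {"PENDING", "APPROVED", "REJECTED", "CANCELLED"}

constraints = {
    "CleanRecord": lambda x: "id" in x and "amount" in x,
    "RecordCount": lambda x: x >= 0,
    "OrderStatus": lambda x: x in VALID_STATUSES,
}

def check(type_name: str, value) -> bool:
    """Evaluate a constraint predicate against a candidate value."""
    return constraints[type_name](value)

assert check("CleanRecord", {"id": "a1", "amount": 10})
assert not check("CleanRecord", {"id": "a1"})       # missing "amount"
assert not check("RecordCount", -1)                 # negative count rejected
assert check("OrderStatus", "PENDING")
assert not check("OrderStatus", "SHIPPED")          # not in the allowed set
```

Testing the predicates this way keeps the Policy logic honest: the same checks that gate data in the pipeline can be exercised directly in a unit test.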
______________________________________________________________________ ## Summary | Pattern | BoundaryAction | Policy | Mechanism | Key Insight | | ----------------- | ------------------ | -------------------- | ------------------- | ----------------------------------------------- | | ETL Pipeline | Data ingestion | Validate + transform | Write to store | Policy is pure; Mechanism is the only writer | | State Machine | Transition request | Validate transitions | Apply status change | Transition table is a parameter, not hardcoded | | Human-in-the-Loop | Human decision | Business rule check | Apply decision | Humans are outside the system boundary | | Large Enum Types | Item intake | Classify + route | Update inventory | Group constants by domain; one TypeDef per role | The recurring structure across all four patterns is the same three-tier pipeline: ``` BoundaryAction >> Policy >> Mechanism ``` This is not a coincidence. GDS decomposes every transition function as `h = f . g` — exogenous input enters at the boundary, decision logic lives in `g` (Policy), and state updates live in `f` (Mechanism). The patterns above show that this decomposition applies as naturally to order workflows and data pipelines as it does to epidemic models and thermostats. # Troubleshooting Common errors, verification failures, and debugging strategies. Organized by where you encounter the problem: compilation, verification, or runtime. ______________________________________________________________________ ## Compilation Errors ### Token Overlap Required **Error:** Sequential composition `>>` fails because output and input ports share no tokens. **Cause:** The `>>` operator auto-wires by token overlap. Port names are tokenized by splitting on `+` and `,`, then lowercasing. If no tokens overlap between the left block's `forward_out` and the right block's `forward_in`, composition fails. 
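The token rule can be sketched in plain Python. This is an illustration of the behavior described above, not the framework's actual implementation -- `tokens` and `can_auto_wire` are hypothetical helpers:

```
def tokens(port_name: str) -> set[str]:
    # Split only on " + " and ", ", then lowercase each piece.
    parts = [port_name]
    for sep in (" + ", ", "):
        parts = [piece for part in parts for piece in part.split(sep)]
    return {p.lower() for p in parts if p}

def can_auto_wire(forward_out: list[str], forward_in: list[str]) -> bool:
    # Sequential composition needs at least one shared token.
    out_tokens = {t for port in forward_out for t in tokens(port)}
    in_tokens = {t for port in forward_in for t in tokens(port)}
    return bool(out_tokens & in_tokens)

assert not can_auto_wire(["Temperature"], ["Pressure"])           # no overlap: fails
assert can_auto_wire(["Pressure Reading"], ["Pressure Reading"])  # one shared token
assert can_auto_wire(["Temperature + Humidity"], ["Humidity"])    # overlap on "humidity"
```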
``` # This FAILS: "Temperature" and "Pressure" share no tokens sensor = BoundaryAction( name="Sensor", interface=interface(forward_out=["Temperature"]), ) actuator = Mechanism( name="Actuator", interface=interface(forward_in=["Pressure"]), updates=[("Plant", "value")], ) pipeline = sensor >> actuator # ERROR: no token overlap ``` **Fix options:** 1. **Rename ports** so they share at least one token: ``` sensor = BoundaryAction( name="Sensor", interface=interface(forward_out=["Pressure Reading"]), ) actuator = Mechanism( name="Actuator", interface=interface(forward_in=["Pressure Reading"]), updates=[("Plant", "value")], ) pipeline = sensor >> actuator # OK: tokens overlap on "pressure reading" ``` 1. **Use explicit wiring** when renaming is not appropriate: ``` from gds.blocks.composition import StackComposition, Wiring from gds.ir.models import FlowDirection pipeline = StackComposition( name="Sensor to Actuator", left=sensor, right=actuator, wiring=[ Wiring( source_block="Sensor", source_port="Temperature", target_block="Actuator", target_port="Pressure", direction=FlowDirection.COVARIANT, ), ], ) ``` ### Port Not Found **Error:** A wiring references a port name that does not exist on the specified block. **Cause:** Typo in the port name, or the port was defined on a different block than expected. **Fix:** Check the exact port names on both blocks. Port names in `Wiring` must match the strings used in `interface()`: ``` # Check what ports a block actually has print(sensor.interface.forward_out) # inspect the actual port names ``` ### Duplicate Block Name **Error:** Two blocks in the same composition tree have the same name. **Cause:** Block names must be unique within a composition. The compiler flattens the tree and uses names as identifiers. **Fix:** Give each block a unique, descriptive name: ``` # Bad: duplicate names sensor_a = Policy(name="Sensor", ...) sensor_b = Policy(name="Sensor", ...) 
# name collision

# Good: unique names
sensor_a = Policy(name="Temperature Sensor", ...)
sensor_b = Policy(name="Pressure Sensor", ...)
```

______________________________________________________________________

## Generic Check Failures (G-Series)

Generic checks operate on the compiled `SystemIR` and verify structural topology.

### G-001: Domain/Codomain Matching

**What it checks:** For every covariant wiring, the wiring label must be consistent with the source block's `forward_out` or the target block's `forward_in` (token subset).

**When it fails:** A wiring label does not match either the source output ports or the target input ports.

**Fix:** Ensure the wiring label shares tokens with the connected ports. If using auto-wiring, this is handled automatically. If using explicit wiring, check that your `Wiring.label` (or the port names it derives from) matches the block interfaces.

### G-002: Signature Completeness

**What it checks:** Every block must have at least one non-empty input slot and at least one non-empty output slot.

**When it fails:** A block has no inputs, no outputs, or both.

**Note: BoundaryAction blocks will always fail G-002.** BoundaryAction has no `forward_in` ports by design -- it represents exogenous input. This is **expected behavior**, not a bug. When running verification with `include_gds_checks=True` in DSL engines, filter G-002 findings for BoundaryAction blocks:

```
# Filter out expected G-002 failures on BoundaryAction blocks
real_failures = [
    f
    for f in report.findings
    if not f.passed
    and not (
        f.check_id == "G-002"
        and (
            "no inputs" in f.message
            or any("BoundaryAction" in elem for elem in f.source_elements)
        )
    )
]
```

### G-003: Direction Consistency

**What it checks:** Two validations:

- Flag consistency: COVARIANT + `is_feedback` is a contradiction (feedback implies contravariant). CONTRAVARIANT + `is_temporal` is also a contradiction (temporal implies covariant).
- Contravariant port-slot matching: for contravariant wirings, the label must match backward ports. **When it fails:** Direction flags contradict each other, or contravariant wiring labels do not match backward ports. **Fix:** Ensure `.feedback()` wirings use `FlowDirection.CONTRAVARIANT` and `.loop()` wirings use `FlowDirection.COVARIANT`. ### G-004: Dangling Wirings **What it checks:** Every wiring's source and target must reference a block that exists in the system. **When it fails:** A wiring references a block name that is not in the compiled system -- typically a typo or a block that was removed from the composition. ``` # This will fail G-004: "Ghost" does not exist WiringIR(source="Ghost", target="B", label="signal", ...) # -> G-004 FAIL: source 'Ghost' unknown ``` **Fix:** Check that all block names in wirings match actual block names in the composition tree. ### G-005: Sequential Type Compatibility **What it checks:** In stack composition (non-temporal, covariant wirings), the wiring label must be a token subset of **both** the source's `forward_out` and the target's `forward_in`. **When it fails:** A wiring connects blocks with incompatible port types in sequential composition. **Fix:** Rename ports so they share tokens, or use explicit wiring with correct labels. ### G-006: Covariant Acyclicity **What it checks:** The covariant (non-temporal, non-contravariant) flow graph must be a DAG -- no cycles within a single evaluation. **When it fails:** Three or more blocks form a cycle via covariant wirings, creating an algebraic loop that cannot be resolved within one evaluation. ``` A -> B -> C -> A # cycle detected! ``` **Fix:** Break the cycle by using `.loop()` (temporal, across temporal boundaries) for one of the edges instead of `>>` (sequential, within evaluation). ______________________________________________________________________ ## Semantic Check Failures (SC-Series) Semantic checks operate on `GDSSpec` and verify domain properties. 
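To give a flavor of what these checks compute, here is a plain-Python sketch of an orphan-variable scan in the spirit of SC-001, over a hypothetical minimal representation (not the framework's internal data model):

```
# Hypothetical minimal registries: entity variables and mechanism update lists.
entity_vars = {("Order", "status"), ("Dataset", "record_count")}

mechanism_updates = {
    "Apply Transition": [("Order", "status")],
    # No mechanism writes Dataset.record_count -- it is an orphan.
}

def orphan_variables(entity_vars, mechanism_updates):
    """Return entity variables that no mechanism ever updates."""
    updated = {var for updates in mechanism_updates.values() for var in updates}
    return entity_vars - updated

assert orphan_variables(entity_vars, mechanism_updates) == {("Dataset", "record_count")}
```

The real checks work against the spec's registries rather than plain dicts, but the underlying reasoning is the same kind of set comparison.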
### SC-001: Completeness (Orphan State Variables)

**What it checks:** Every entity variable is updated by at least one mechanism.

**When it fails:** An entity has a state variable but no mechanism's `updates` list references it. The variable can never change -- likely a specification error.

**Fix:** Add a Mechanism that updates the orphan variable, or remove the variable if it is not needed.

### SC-002: Determinism (Write Conflicts)

**What it checks:** Within each wiring, no two mechanisms update the same entity variable.

**When it fails:** Two mechanisms both claim to update `Counter.value` -- non-deterministic state transition.

**Fix:** Consolidate the updates into a single mechanism, or separate them into different wirings that execute at different times.

### SC-003: Reachability

**What it checks:** Can signals reach from one block to another through wiring connections?

**When it fails:** A block is isolated -- no path connects it to the rest of the system.

**Fix:** Add wiring connections or check that the block is included in the correct composition.

### SC-004: Type Safety

**What it checks:** Wire spaces match source and target block expectations. Space references on wires correspond to registered spaces.

**When it fails:** A wire references a space that is not registered, or source/target blocks are connected to incompatible spaces.

**Fix:** Register all spaces with `spec.register_space()` or `spec.collect()` before referencing them in wirings.

### SC-005: Parameter References

**What it checks:** Every `params_used` entry on blocks corresponds to a registered parameter in the spec's `parameter_schema`.

**When it fails:** A block claims to use parameter `"learning_rate"` but no such parameter is registered.
**Fix:** Register the parameter:

```
LearningRate = typedef("LearningRate", float, constraint=lambda x: 0 < x < 1)
spec.register_parameter("learning_rate", LearningRate)
```

### SC-006: Canonical f (No Mechanisms)

**What it checks:** At least one mechanism exists in the spec, so the state transition function f is non-empty.

**When it fails:** The spec has no Mechanism blocks. The canonical `h = f ∘ g` degenerates to `h = g`.

**Note:** This is a **warning**, not necessarily an error. Game-theoretic models (OGS) are intentionally stateless -- `h = g` is their correct canonical form. If you expect state dynamics, add Mechanism blocks.

### SC-007: Canonical X (No State Space)

**What it checks:** The state space X is non-empty -- at least one entity with variables exists.

**When it fails:** No entities are registered, so there is no state to transition.

**Fix:** Register entities with state variables if your model has state. If the model is intentionally stateless, this warning can be ignored.

______________________________________________________________________

## Common Gotchas

### Token Matching Rules

Token splitting only happens on `+` (space-plus-space) and `,` (comma-space).
Plain spaces within a name are **not** delimiters:

```
from gds.types.tokens import tokenize

tokenize("Heater Command")          # {"heater command"} -- ONE token
tokenize("Heater + Command")        # {"heater", "command"} -- TWO tokens
tokenize("Temperature + Setpoint")  # {"temperature", "setpoint"}
tokenize("Agent 1, Agent 2")        # {"agent 1", "agent 2"}
```

### Feedback is Contravariant, Loop is Covariant

These are not interchangeable:

| Operator      | Direction     | Timing                     | Purpose                         |
| ------------- | ------------- | -------------------------- | ------------------------------- |
| `.feedback()` | CONTRAVARIANT | Within evaluation          | Backward utility/reward signals |
| `.loop()`     | COVARIANT     | Across temporal boundaries | State feedback to observers     |

Using `.feedback()` for temporal state feedback will cause G-003 failures.

### Frozen Pydantic Models

DSL elements (Stock, Flow, Sensor, etc.) and GDS value objects (TypeDef, Space, Entity, StateVariable) are **frozen** Pydantic models. You cannot mutate them after creation:

```
from gds import typedef

t = typedef("Temperature", float)
t.name = "Pressure"  # ERROR: frozen model, cannot assign

# Instead, create a new instance
p = typedef("Pressure", float)
```

### collect() vs register_wiring()

`GDSSpec.collect()` type-dispatches objects by their Python type: TypeDef, Space, Entity, Block, ParameterDef. It does **not** handle SpecWiring:

```
spec = GDSSpec(name="My Spec")

# These go through collect()
spec.collect(Temperature, HeaterCommand, room, sensor, controller)

# SpecWiring must use register_wiring() explicitly
spec.register_wiring(SpecWiring(
    name="Main Pipeline",
    block_names=["Sensor", "Controller"],
    wires=[Wire(source="Sensor", target="Controller", space="SignalSpace")],
))
```

### Spec Validation vs System Verification

These are different operations on different objects:

```
# Spec validation: checks registration consistency (missing types, blocks, etc.)
errors = spec.validate_spec()  # returns list[str]

# System verification: runs G-001..G-006 on compiled IR
report = verify(system_ir)  # returns VerificationReport

# Semantic checks: runs SC-001..SC-007 on the spec
findings = check_completeness(spec)  # returns list[Finding]
```

______________________________________________________________________

## Debug Workflow

### Step 1: Inspect the Compiled IR

After compilation, print the blocks and wirings to see what the compiler produced:

```
from gds import compile_system

system_ir = compile_system("Debug Model", root=pipeline)

print("=== Blocks ===")
for block in system_ir.blocks:
    print(f"  {block.name}: {block.signature}")

print("\n=== Wirings ===")
for wiring in system_ir.wirings:
    print(f"  {wiring.source} --{wiring.label}--> {wiring.target} ({wiring.direction})")
```

### Step 2: Visualize with gds-viz

Generate Mermaid diagrams for visual inspection:

```
from gds_viz.mermaid import system_to_mermaid

mermaid_str = system_to_mermaid(system_ir)
print(mermaid_str)
# Paste into any Mermaid renderer (mkdocs, GitHub, mermaid.live)
```

### Step 3: Run Verification with Individual Checks

Run checks one at a time to isolate the issue:

```
from gds.verification.engine import verify
from gds.verification.generic_checks import (
    check_g001_domain_codomain_matching,
    check_g004_dangling_wirings,
    check_g006_covariant_acyclicity,
)

# Run one check at a time
for check in [
    check_g001_domain_codomain_matching,
    check_g004_dangling_wirings,
    check_g006_covariant_acyclicity,
]:
    report = verify(system_ir, checks=[check])
    failures = [f for f in report.findings if not f.passed]
    if failures:
        print(f"\n{check.__name__}:")
        for f in failures:
            print(f"  [{f.check_id}] {f.message}")
```

### Step 4: Use SpecQuery for Structural Analysis

`SpecQuery` answers questions about information flow without running the model:

```
from gds import SpecQuery

query = SpecQuery(spec)

# What blocks affect a specific entity variable?
query.blocks_affecting("Room", "temperature")
# -> ['Update Temperature', 'Controller', 'Sensor']

# What parameters influence which blocks?
query.param_to_blocks()
# -> {'setpoint': ['Controller']}

# Full block dependency graph
query.dependency_graph()
```

______________________________________________________________________

## Quick Reference: Error to Fix

| Error                        | Likely Cause                        | Fix                                                              |
| ---------------------------- | ----------------------------------- | ---------------------------------------------------------------- |
| Token overlap required       | `>>` ports share no tokens          | Rename ports or use explicit wiring                              |
| Port not found               | Typo in wiring port name            | Check `block.interface` for exact names                          |
| Duplicate block name         | Two blocks with same name           | Use unique descriptive names                                     |
| G-001 FAIL                   | Wiring label mismatches ports       | Align wiring labels with port tokens                             |
| G-002 FAIL on BoundaryAction | Expected -- no inputs by design     | Filter or ignore for boundary blocks                             |
| G-003 FAIL                   | Direction flag contradiction        | Match `.feedback()` with CONTRAVARIANT, `.loop()` with COVARIANT |
| G-004 FAIL                   | Wiring references missing block     | Fix block name typo                                              |
| G-005 FAIL                   | Sequential port type mismatch       | Ensure `>>` ports share tokens on both sides                     |
| G-006 FAIL                   | Cycle in covariant flow             | Break cycle with `.loop()` for temporal edge                     |
| SC-001 WARNING               | Orphan state variable               | Add a Mechanism that updates it                                  |
| SC-002 ERROR                 | Two mechanisms update same variable | Consolidate into one Mechanism                                   |
| SC-005 FAIL                  | Unregistered parameter              | Call `spec.register_parameter()`                                 |
| SC-006/007 WARNING           | No mechanisms or entities           | Add state if expected, or ignore for stateless models            |
| Cannot mutate frozen model   | Pydantic frozen=True                | Create a new instance instead                                    |

# Interoperability: From Specification to Computation

> GDS specifications are not just documentation — they are structured representations that project cleanly onto domain-specific computation.
This guide demonstrates two concrete projections: **Nash equilibrium computation** (game theory) and **iterated tournament simulation** (evolutionary dynamics), both built on the same OGS game structure without modifying the framework.

______________________________________________________________________

## The Thesis

A GDS specification captures the **structural skeleton** of a system: blocks, roles, interfaces, composition, and wiring. By design, the framework provides no execution semantics — blocks describe *what* a system is, not *how* it runs.

This is a feature, not a limitation. It means the same specification can be consumed by multiple independent tools:

```
                              ┌─ Nash equilibrium solver (Nashpy)
OGS Pattern ─┬─ PatternIR ────┤
             │                └─ Mermaid visualization (ogs.viz)
             │
             └─ get_payoff() ─┬─ Iterated match simulator
                              ├─ Round-robin tournament
                              └─ Evolutionary dynamics engine
```

The specification is the **single source of truth**. The computations are **thin projections** that extract what they need and add domain-specific logic on top.
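To make "thin projection" concrete, here is a minimal sketch in plain Python. The `spec` dict and its field names are illustrative stand-ins for PatternIR metadata, not the real schema; the two functions mirror the roles of `build_payoff_matrices` and `get_payoff` discussed later in this guide:

```python
# Toy "specification": a dict standing in for PatternIR metadata (illustrative only).
spec = {
    "action_spaces": {"Alice": ["Cooperate", "Defect"], "Bob": ["Cooperate", "Defect"]},
    "terminal_conditions": [
        {"actions": ("Cooperate", "Cooperate"), "payoffs": (3, 3)},
        {"actions": ("Cooperate", "Defect"), "payoffs": (0, 5)},
        {"actions": ("Defect", "Cooperate"), "payoffs": (5, 0)},
        {"actions": ("Defect", "Defect"), "payoffs": (1, 1)},
    ],
}

# Projection A: payoff matrices, the shape an equilibrium solver wants.
def payoff_matrices(spec):
    acts = spec["action_spaces"]["Alice"]
    idx = {a: i for i, a in enumerate(acts)}
    alice = [[0] * len(acts) for _ in acts]
    bob = [[0] * len(acts) for _ in acts]
    for tc in spec["terminal_conditions"]:
        i, j = idx[tc["actions"][0]], idx[tc["actions"][1]]
        alice[i][j], bob[i][j] = tc["payoffs"]
    return alice, bob

# Projection B: a payoff lookup, the shape an iterated-play simulator wants.
def make_get_payoff(spec):
    table = {tc["actions"]: tc["payoffs"] for tc in spec["terminal_conditions"]}
    return lambda a, b: table[(a, b)]

alice, bob = payoff_matrices(spec)
print(alice)  # [[3, 0], [5, 1]]
get_payoff = make_get_payoff(spec)
print(get_payoff("Defect", "Cooperate"))  # (5, 0)
```

Neither projection mutates `spec`, and each consumes only the fields it needs — the same division of labor the real Nash solver and tournament simulator follow below.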
______________________________________________________________________

## Case Study: Prisoner's Dilemma

The Prisoner's Dilemma is formalized as an OGS composition:

```
(Alice Decision | Bob Decision) >> Payoff Computation
    .feedback([payoff -> decisions])
```

This encodes:

- **Two players** as `DecisionGame` blocks with (X, Y, R, S) port signatures
- **Payoff computation** as a `CovariantFunction` that maps action pairs to payoffs
- **Feedback** carrying payoffs back to decision nodes for iterated play
- **Terminal conditions** declaring all four action profiles and their payoffs
- **Action spaces** enumerating each player's available moves

The specification exists in two concrete instantiations:

| Variant                  | Payoffs (R, T, S, P) | Purpose                                  |
| ------------------------ | -------------------- | ---------------------------------------- |
| `prisoners_dilemma_nash` | (3, 5, 0, 1)         | Standard PD — Nash equilibrium analysis  |
| `evolution_of_trust`     | (2, 3, -1, 0)        | Nicky Case variant — iterated simulation |

Both share the identical OGS composition tree. Only the payoff parameters differ.

______________________________________________________________________

## Projection 1: Nash Equilibrium Computation

**Source:** `packages/gds-examples/games/prisoners_dilemma_nash/model.py`

The Nash equilibrium solver extracts payoff matrices from `PatternIR` metadata and delegates to [Nashpy](https://nashpy.readthedocs.io/) for equilibrium computation.

### What the specification provides

The OGS Pattern declares `action_spaces` and `terminal_conditions` as structured metadata:

```
action_spaces=[
    ActionSpace(game="Alice Decision", actions=["Cooperate", "Defect"]),
    ActionSpace(game="Bob Decision", actions=["Cooperate", "Defect"]),
]

terminal_conditions=[
    TerminalCondition(
        name="Mutual Cooperation",
        actions={"Alice Decision": "Cooperate", "Bob Decision": "Cooperate"},
        payoff_description="R=3 each",
    ),
    # ... 3 more conditions
]
```

### What the projection adds

A thin extraction layer (`build_payoff_matrices`) parses terminal conditions into numpy arrays, then Nashpy computes equilibria via support enumeration:

```
def build_payoff_matrices(ir: PatternIR):
    # Extract from PatternIR metadata → numpy arrays
    ...

def compute_nash_equilibria(ir: PatternIR):
    alice_payoffs, bob_payoffs = build_payoff_matrices(ir)
    game = nashpy.Game(alice_payoffs, bob_payoffs)
    return list(game.support_enumeration())
```

### What the projection verifies

Cross-references computed equilibria against hand-annotated terminal conditions — the specification declares which outcomes are Nash equilibria, and the solver confirms or refutes them:

```
verification = verify_terminal_conditions(ir, equilibria)
# → matches: [Mutual Defection (confirmed)]
# → mismatches: [] (none — declared NE matches computed NE)
```

Additional analyses extracted from the same specification:

- **Dominant strategies** — Defect strictly dominates for both players
- **Pareto optimality** — Mutual Cooperation is Pareto optimal but not a NE
- **The dilemma** — the unique NE is not Pareto optimal

**No framework changes required.** The Nash solver is 100 lines of pure Python + Nashpy. It reads from `PatternIR` metadata using the existing API. No modifications to gds-framework, gds-games, or any IR layer were needed.

______________________________________________________________________

## Projection 2: Iterated Tournament Simulation

**Source:** `packages/gds-examples/games/evolution_of_trust/`

The tournament simulator uses the same game structure but projects it differently — instead of computing equilibria, it *plays the game* repeatedly with concrete strategies.

### What the specification provides

The payoff matrix parameters (R=2, T=3, S=-1, P=0) and the game structure define the rules.
A direct lookup function is derived from the specification:

```
def get_payoff(action_a: str, action_b: str) -> tuple[int, int]:
    """Direct payoff lookup from action pair."""
    return {
        ("Cooperate", "Cooperate"): (R, R),
        ("Cooperate", "Defect"): (S, T),
        ("Defect", "Cooperate"): (T, S),
        ("Defect", "Defect"): (P, P),
    }[(action_a, action_b)]
```

### What the projection adds

Three layers of simulation logic, each consuming only `get_payoff()`:

**Layer 1 — Strategies.** Eight strategy implementations as a `Strategy` protocol: `choose(history, round_num) → action`. These are pure Python — no GDS dependency:

| Strategy         | Logic                                | Character                 |
| ---------------- | ------------------------------------ | ------------------------- |
| Always Cooperate | Always C                             | Naive cooperator          |
| Always Defect    | Always D                             | Pure exploiter            |
| Tit for Tat      | Copy opponent's last move            | Retaliatory but forgiving |
| Grim Trigger     | C until betrayed, then D forever     | Unforgiving               |
| Detective        | Probe C,D,C,C then exploit or TfT    | Strategic prober          |
| Tit for Two Tats | C unless opponent D'd twice in a row | Extra forgiving           |
| Pavlov           | Win-stay, lose-shift                 | Adaptive                  |
| Random           | 50/50 coin flip                      | Baseline noise            |

**Layer 2 — Tournament.** `play_match()` runs iterated rounds between two strategies. `play_round_robin()` runs all-pairs competition. Both use `get_payoff()` as the sole interface to the game specification.

**Layer 3 — Evolutionary dynamics.** `run_evolution()` runs generational selection: each generation plays a tournament, the worst performer loses a member, the best gains one. Population dynamics emerge from repeated tournament play.

### The simulation stack

```
┌──────────────────────────────────────────────┐
│ Evolutionary dynamics                        │
│ run_evolution(populations, generations, ...) │
├──────────────────────────────────────────────┤
│ Round-robin tournament                       │
│ play_round_robin(strategies, rounds, noise)  │
├──────────────────────────────────────────────┤
│ Iterated match                               │
│ play_match(strategy_a, strategy_b, rounds)   │
├──────────────────────────────────────────────┤
│ Payoff lookup                                │
│ get_payoff(action_a, action_b) → (int, int)  │
├──────────────────────────────────────────────┤
│ OGS specification (R=2, T=3, S=-1, P=0)      │
│ build_game() → build_pattern() → build_ir()  │
└──────────────────────────────────────────────┘
```

Each layer is independently testable. The simulation code knows nothing about OGS composition trees, PatternIR, or GDS blocks — it only knows payoff values.

**Thin runner, not a general simulator.** This is **not** a GDS execution engine. It is a domain-specific simulation that uses the GDS specification as its source of truth for game rules. The strategies, match logic, and evolutionary dynamics are all hand-written Python specific to the iterated PD. A general `gds-sim` would require solving the much harder problem of executing arbitrary GDS specifications — see [Research Boundaries](https://blockscience.github.io/gds-core/research/research-boundaries/#research-question-2-what-does-a-timestep-mean-across-dsls).

______________________________________________________________________

## The Pattern: Specification as Interoperability Layer

Both projections follow the same architectural pattern:

```
1. Build OGS specification      →  Pattern + PatternIR
2. Extract domain-relevant      →  Payoff matrices (Nash)
   data from the specification     or payoff lookup (simulation)
3. Add domain-specific logic    →  Nashpy solver / Strategy protocol
4. Produce domain-specific      →  Equilibria / Tournament results /
   results                         Evolutionary trajectories
```

The specification serves as the **interoperability contract** between different analytical tools.
Each tool consumes the subset it needs:

| Consumer            | What it reads from the specification                    | What it adds                                               |
| ------------------- | ------------------------------------------------------- | ---------------------------------------------------------- |
| Nash solver         | Action spaces, terminal conditions, payoff descriptions | Support enumeration, dominance analysis, Pareto optimality |
| Tournament          | Payoff parameters (R, T, S, P)                          | Strategy implementations, match replay, noise model        |
| Evolutionary engine | Payoff parameters                                       | Population dynamics, generational selection                |
| Mermaid visualizer  | Game tree structure, flows, feedback                    | Diagram rendering                                          |
| OGS reports         | Full PatternIR (games, flows, metadata)                 | Jinja2 text reports                                        |

No consumer modifies the specification. No consumer needs to understand the full OGS type system. Each extracts a projection and operates in its own domain vocabulary.

______________________________________________________________________

## Why This Matters

### For game theorists

GDS provides a **compositional specification language** for games that separates structure from analysis. The same game structure supports both closed-form equilibrium computation and agent-based simulation without duplication. New analytical tools (e.g., correlated equilibria, mechanism design verifiers) can be added as additional projections without modifying the game definition.

### For simulation engineers

GDS specifications serve as **machine-readable game rules** that simulation engines can consume. The specification defines the action spaces, payoff structure, and composition topology. The simulator provides strategies, scheduling, and dynamics. The boundary is clean: the specification says *what the game is*, the simulator says *how it plays out*.

### For software teams

The OGS composition tree is a **formal architecture diagram** that happens to be executable by analytical tools.
The same `(Alice | Bob) >> Payoff .feedback(...)` description generates Mermaid diagrams for documentation, payoff matrices for game theorists, and payoff lookup functions for simulators. One source, multiple views.

### For the GDS ecosystem

This validates GDS as an **interoperability substrate**, not just a modeling framework. The canonical form `h = f ∘ g` with varying dimensionality of X absorbs game-theoretic (stateless), control-theoretic (stateful), and stock-flow (state-dominant) formalisms. Each domain projects what it needs from the shared representation without architectural changes.

______________________________________________________________________

## Running the Examples

### Nash Equilibrium Analysis

Source code for `packages/gds-examples/notebooks/nash_equilibrium.py`

Tip: paste this code into an empty cell, and the marimo editor will create cells for you

```
"""Nash Equilibrium in the Prisoner's Dilemma — Interactive Marimo Notebook.

Demonstrates the full pipeline: OGS game structure -> payoff matrices ->
Nash equilibrium computation -> dominance and Pareto analysis.
Run interactively: uv run marimo edit notebooks/nash_equilibrium.py Run as read-only app: uv run marimo run notebooks/nash_equilibrium.py """ # /// script # requires-python = ">=3.12" # dependencies = [ # "gds-examples", # "nashpy>=0.0.41", # "marimo>=0.20.0", # ] # /// import marimo __generated_with = "0.20.2" app = marimo.App(width="medium", app_title="Nash Equilibrium: Prisoner's Dilemma") # ── Imports ────────────────────────────────────────────────── @app.cell def imports(): import marimo as mo return (mo,) # ── Model Imports & Path Setup ─────────────────────────────── @app.cell def model_setup(): import sys from pathlib import Path _examples_root = Path(__file__).resolve().parent.parent _games_path = str(_examples_root / "games") if _games_path not in sys.path: sys.path.insert(0, _games_path) from prisoners_dilemma_nash.model import ( P, R, S, T, analyze_game, build_ir, build_payoff_matrices, compute_nash_equilibria, verify_terminal_conditions, ) from gds_domains.games.viz import ( architecture_by_domain_to_mermaid, structural_to_mermaid, terminal_conditions_to_mermaid, ) ir = build_ir() return ( R, S, T, P, analyze_game, architecture_by_domain_to_mermaid, build_payoff_matrices, compute_nash_equilibria, ir, structural_to_mermaid, terminal_conditions_to_mermaid, verify_terminal_conditions, ) # ── Header ─────────────────────────────────────────────────── @app.cell def header(mo): mo.md( """ # Nash Equilibrium: Prisoner's Dilemma The **Prisoner's Dilemma** is the canonical example of a game where individually rational decisions lead to a collectively suboptimal outcome. Two players simultaneously choose to **Cooperate** or **Defect**, and the payoff structure creates a tension between self-interest and mutual benefit. This notebook walks through the full analysis pipeline: 1. **Game Structure** — the OGS composition tree and metadata 2. **Payoff Matrices** — extracted from PatternIR terminal conditions 3. 
**Nash Equilibria** — computed via Nashpy support enumeration 4. **Game Analysis** — dominance, Pareto optimality, and the dilemma itself """ ) return () # ── Game Structure ─────────────────────────────────────────── @app.cell def game_structure( mo, ir, structural_to_mermaid, terminal_conditions_to_mermaid, architecture_by_domain_to_mermaid, ): _tabs = mo.ui.tabs( { "Structural": mo.vstack( [ mo.md( "Full game topology: Alice and Bob make simultaneous " "decisions, feeding into a payoff computation with " "feedback loops carrying payoffs back to each player." ), mo.mermaid(structural_to_mermaid(ir)), ] ), "Terminal Conditions": mo.vstack( [ mo.md( "State diagram of all possible outcomes. Each terminal " "state is an action profile with associated payoffs." ), mo.mermaid(terminal_conditions_to_mermaid(ir)), ] ), "By Domain": mo.vstack( [ mo.md( "Architecture grouped by domain: **Alice**, **Bob**, and " "**Environment**. Shows the symmetric structure of the game." ), mo.mermaid(architecture_by_domain_to_mermaid(ir)), ] ), } ) mo.vstack( [ mo.md( """\ --- ## Game Structure The Prisoner's Dilemma is built from OGS primitives: two `DecisionGame` blocks (Alice, Bob) composed in parallel, sequenced into a `CovariantFunction` (payoff computation), with feedback loops carrying payoffs back to the decision nodes. ``` (Alice | Bob) >> Payoff .feedback([payoff -> decisions]) ``` """ ), _tabs, ] ) return () # ── Payoff Matrices ────────────────────────────────────────── @app.cell def payoff_matrices(mo, ir, R, T, S, P, build_payoff_matrices): _alice_payoffs, _bob_payoffs = build_payoff_matrices(ir) mo.vstack( [ mo.md( f"""\ --- ## Payoff Matrices Extracted from PatternIR terminal conditions. 
The standard PD parameters satisfy **T > R > P > S** and **2R > T + S**: | Parameter | Value | Meaning | |-----------|-------|---------| | R (Reward) | {R} | Mutual cooperation | | T (Temptation) | {T} | Defect while other cooperates | | S (Sucker) | {S} | Cooperate while other defects | | P (Punishment) | {P} | Mutual defection | """ ), mo.md( "**Alice's Payoffs:**\n\n" "| | Bob: Coop | Bob: Defect |\n" "|---|---|---|\n" f"| **Cooperate** | {_alice_payoffs[0, 0]:.0f} (R) " f"| {_alice_payoffs[0, 1]:.0f} (S) |\n" f"| **Defect** | {_alice_payoffs[1, 0]:.0f} (T) " f"| {_alice_payoffs[1, 1]:.0f} (P) |\n\n" "**Bob's Payoffs:**\n\n" "| | Bob: Coop | Bob: Defect |\n" "|---|---|---|\n" f"| **Cooperate** | {_bob_payoffs[0, 0]:.0f} (R) " f"| {_bob_payoffs[0, 1]:.0f} (T) |\n" f"| **Defect** | {_bob_payoffs[1, 0]:.0f} (S) " f"| {_bob_payoffs[1, 1]:.0f} (P) |" ), ] ) return () # ── Nash Equilibria ────────────────────────────────────────── @app.cell def nash_equilibria(mo, ir, compute_nash_equilibria, verify_terminal_conditions): import numpy as _np equilibria = compute_nash_equilibria(ir) verification = verify_terminal_conditions(ir, equilibria) _actions = ["Cooperate", "Defect"] _eq_lines = [] for _i, (_alice_strat, _bob_strat) in enumerate(equilibria): _alice_action = _actions[int(_np.argmax(_alice_strat))] _bob_action = _actions[int(_np.argmax(_bob_strat))] _eq_lines.append( f"- **NE {_i + 1}:** Alice = {_alice_action}, Bob = {_bob_action}" ) _match_lines = [] for _m in verification["matches"]: _match_lines.append(f"- **{_m.name}**: {_m.outcome}") _mismatch_lines = [] for _mm in verification["mismatches"]: _mismatch_lines.append(f"- **{_mm.name}**: {_mm.outcome}") _match_text = "\n".join(_match_lines) if _match_lines else "- None" _mismatch_text = "\n".join(_mismatch_lines) if _mismatch_lines else "- None" mo.vstack( [ mo.md( f"""\ --- ## Nash Equilibria Computed via **Nashpy** support enumeration on the extracted payoff matrices. 
### Computed Equilibria ({len(equilibria)} found) {"\\n".join(_eq_lines)} ### Verification Against Declared Terminal Conditions Cross-referencing computed equilibria against the hand-annotated terminal conditions in the OGS Pattern: **Matches** (declared NE confirmed by computation): {_match_text} **Mismatches** (declared NE not confirmed): {_mismatch_text} """ ), ] ) return (equilibria,) # ── Game Analysis ──────────────────────────────────────────── @app.cell def game_analysis(mo, ir, analyze_game): analysis = analyze_game(ir) _alice_dom = analysis["alice_dominant_strategy"] _bob_dom = analysis["bob_dominant_strategy"] _pareto = analysis["pareto_optimal"] _pareto_rows = [] for _o in _pareto: _pareto_rows.append( f"| {_o['alice_action']} | {_o['bob_action']} | " f"{_o['alice_payoff']:.0f} | {_o['bob_payoff']:.0f} |" ) _pareto_table = "\n".join(_pareto_rows) mo.vstack( [ mo.md( f"""\ --- ## Game Analysis ### Dominant Strategies A strategy is **strictly dominant** if it yields a higher payoff regardless of the opponent's choice. | Player | Dominant Strategy | |--------|-------------------| | Alice | **{_alice_dom or "None"}** | | Bob | **{_bob_dom or "None"}** | **Defect** strictly dominates for both players: no matter what the opponent does, defecting always yields a higher payoff (T > R and P > S). ### Pareto Optimal Outcomes ({len(_pareto)} of 4) An outcome is **Pareto optimal** if no other outcome makes one player better off without making the other worse off. | Alice | Bob | Alice Payoff | Bob Payoff | |-------|-----|-------------|------------| {_pareto_table} The Nash equilibrium (Defect, Defect) with payoffs (P, P) = (1, 1) is **not** Pareto optimal — both players could do better with (Cooperate, Cooperate) yielding (R, R) = (3, 3). 
""" ), ] ) return () # ── The Dilemma ────────────────────────────────────────────── @app.cell def the_dilemma(mo): mo.md( """ --- ## The Dilemma The Prisoner's Dilemma is defined by this tension: > **The unique Nash equilibrium is not Pareto optimal.** Each player's dominant strategy (Defect) leads to a collectively worse outcome than mutual cooperation. This is the fundamental problem of non-cooperative game theory: **individual rationality does not imply collective rationality.** | Property | Outcome | |----------|---------| | Nash Equilibrium | (Defect, Defect) — payoff (1, 1) | | Pareto Optimum | (Cooperate, Cooperate) — payoff (3, 3) | | Dominant Strategy | Defect (for both players) | The OGS formalization makes this structure explicit: the game is **stateless** (h = g, no mechanism layer), all computation lives in the policy layer, and the feedback loop carries payoff information — not state updates. """ ) return () if __name__ == "__main__": app.run() ``` To run locally: ``` uv sync --all-packages --extra nash uv run marimo run packages/gds-examples/notebooks/nash_equilibrium.py ``` ``` # Run tests (22 tests) uv run --package gds-examples pytest \ packages/gds-examples/games/prisoners_dilemma_nash/ -v ``` ### Evolution of Trust Simulation Source code for `packages/gds-examples/notebooks/evolution_of_trust.py` Tip: paste this code into an empty cell, and the marimo editor will create cells for you ``` """The Evolution of Trust — Iterated Prisoner's Dilemma Interactive Notebook. Inspired by Nicky Case's "The Evolution of Trust" (https://ncase.me/trust/). Demonstrates 8 strategies, round-robin tournaments, and evolutionary dynamics built on an OGS game structure. 
Run interactively: uv run marimo edit notebooks/evolution_of_trust.py Run as read-only app: uv run marimo run notebooks/evolution_of_trust.py """ # /// script # requires-python = ">=3.12" # dependencies = [ # "gds-examples", # "plotly>=5.0", # "marimo>=0.20.0", # ] # /// import marimo __generated_with = "0.20.2" app = marimo.App(width="medium", app_title="The Evolution of Trust") # ── Imports ────────────────────────────────────────────────── @app.cell def imports(): import marimo as mo return (mo,) # ── Model Imports & Path Setup ─────────────────────────────── @app.cell def model_setup(): import sys from pathlib import Path _examples_root = Path(__file__).resolve().parent.parent _games_path = str(_examples_root / "games") if _games_path not in sys.path: sys.path.insert(0, _games_path) from evolution_of_trust.model import ( P, R, S, T, build_ir, get_payoff, ) from evolution_of_trust.strategies import ( ALL_STRATEGIES, AlwaysCooperate, AlwaysDefect, Detective, GrimTrigger, Pavlov, RandomStrategy, TitForTat, TitForTwoTats, ) from evolution_of_trust.tournament import ( head_to_head, play_round_robin, run_evolution, ) from gds_domains.games.viz import ( architecture_by_domain_to_mermaid, structural_to_mermaid, terminal_conditions_to_mermaid, ) ir = build_ir() strategy_map = { "Always Cooperate": AlwaysCooperate, "Always Defect": AlwaysDefect, "Tit for Tat": TitForTat, "Grim Trigger": GrimTrigger, "Detective": Detective, "Tit for Two Tats": TitForTwoTats, "Pavlov": Pavlov, "Random": RandomStrategy, } # Nicky Case color palette COLORS = { "Always Cooperate": "#FF75FF", "Always Defect": "#52537F", "Tit for Tat": "#4089DD", "Grim Trigger": "#EFC701", "Detective": "#F6B24C", "Tit for Two Tats": "#88A8CE", "Pavlov": "#86C448", "Random": "#FF5E5E", } return ( R, S, T, P, ir, get_payoff, ALL_STRATEGIES, AlwaysCooperate, AlwaysDefect, Detective, GrimTrigger, Pavlov, RandomStrategy, TitForTat, TitForTwoTats, head_to_head, play_round_robin, run_evolution, 
architecture_by_domain_to_mermaid, structural_to_mermaid, terminal_conditions_to_mermaid, strategy_map, COLORS, ) # ── Header ─────────────────────────────────────────────────── @app.cell def header(mo): mo.md( """ # The Evolution of Trust *Inspired by [Nicky Case's interactive guide]( https://ncase.me/trust/) to game theory and the evolution of cooperation.* This notebook explores the **iterated Prisoner's Dilemma**: 1. **Head-to-Head** — watch two strategies face off round by round 2. **Tournament** — round-robin competition among all 8 strategies 3. **Evolution** — populations compete and evolve over generations """ ) return () # ── Game Structure (accordion) ─────────────────────────────── @app.cell def game_structure( mo, ir, R, T, S, P, structural_to_mermaid, terminal_conditions_to_mermaid, architecture_by_domain_to_mermaid, ): _struct_tab = mo.ui.tabs( { "Structural": mo.vstack( [ mo.md( "Full game topology: simultaneous " "decisions feeding into payoff " "computation with feedback." ), mo.mermaid(structural_to_mermaid(ir)), ] ), "Terminal Conditions": mo.vstack( [ mo.md("All four action profiles with their payoffs."), mo.mermaid(terminal_conditions_to_mermaid(ir)), ] ), "By Domain": mo.vstack( [ mo.md( "Architecture grouped by domain: " "**Alice**, **Bob**, " "and **Environment**." ), mo.mermaid(architecture_by_domain_to_mermaid(ir)), ] ), } ) _payoff_detail = mo.md( f""" **Payoff Matrix** — Nicky Case's non-zero-sum variant where mutual defection yields zero: | | Cooperate | Defect | |---|---|---| | **Cooperate** | ({R}, {R}) | ({S}, {T}) | | **Defect** | ({T}, {S}) | ({P}, {P}) | | Parameter | Value | Meaning | |---|---|---| | R (Reward) | {R} | Mutual cooperation | | T (Temptation) | {T} | Defect while other cooperates | | S (Sucker) | {S} | Cooperate while other defects | | P (Punishment) | {P} | Mutual defection | S = {S} (negative!) means being exploited actually *costs* you, making the stakes higher than standard PD. 
""" ) mo.accordion( { "Under the Hood: OGS Game Structure": _struct_tab, "Under the Hood: Payoff Matrix": _payoff_detail, } ) return () # ── Strategy Catalog (accordion) ───────────────────────────── @app.cell def strategy_catalog(mo, COLORS): def _badge(name, ncase_name, logic): _c = COLORS[name] return mo.md( f'' f"" f"**{name}** ({ncase_name}) — {logic}" ) _cards = mo.vstack( [ _badge( "Always Cooperate", "Always Cooperate", "Always C", ), _badge( "Always Defect", "Always Cheat", "Always D", ), _badge( "Tit for Tat", "Copycat", "C first, then copy opponent's last", ), _badge( "Grim Trigger", "Grudger", "C until opponent D's, then D forever", ), _badge( "Detective", "Detective", "Probe C,D,C,C; exploit or TfT", ), _badge( "Tit for Two Tats", "Copykitten", "C unless opponent D'd last 2 rounds", ), _badge( "Pavlov", "Simpleton", "Win-stay, lose-shift", ), _badge( "Random", "Random", "50/50 coin flip", ), ], gap=0.25, ) mo.accordion({"The 8 Strategies": _cards}) return () # ── Head-to-Head Controls ──────────────────────────────────── @app.cell def head_to_head_controls(mo, strategy_map): _names = list(strategy_map.keys()) dropdown_a = mo.ui.dropdown( options=_names, value="Tit for Tat", label="Strategy A", ) dropdown_b = mo.ui.dropdown( options=_names, value="Always Defect", label="Strategy B", ) slider_rounds = mo.ui.slider(start=5, stop=50, step=5, value=10, label="Rounds") mo.vstack( [ mo.md("---\n\n## Head-to-Head Match"), mo.hstack( [dropdown_a, dropdown_b, slider_rounds], gap=1, ), ] ) return (dropdown_a, dropdown_b, slider_rounds) # ── Head-to-Head Result ────────────────────────────────────── @app.cell def head_to_head_result( mo, dropdown_a, dropdown_b, slider_rounds, strategy_map, head_to_head, COLORS, ): import plotly.graph_objects as _go _cls_a = strategy_map[dropdown_a.value] _cls_b = strategy_map[dropdown_b.value] _h2h = head_to_head(_cls_a(), _cls_b(), rounds=slider_rounds.value) _details = _h2h["round_details"] _result = _h2h["result"] _name_a = 
_result.strategy_a _name_b = _result.strategy_b _color_a = COLORS.get(_name_a, "#888") _color_b = COLORS.get(_name_b, "#888") # Scoreboard stats _winner = ( _name_a if _result.score_a > _result.score_b else (_name_b if _result.score_b > _result.score_a else "Tie") ) _stats = mo.hstack( [ mo.stat( value=str(_result.score_a), label=_name_a, bordered=True, ), mo.stat( value=_winner, label="Winner", bordered=True, ), mo.stat( value=str(_result.score_b), label=_name_b, bordered=True, ), ], justify="center", gap=1, ) # Cumulative score chart _rounds = [d["round"] for d in _details] _fig = _go.Figure() _fig.add_trace( _go.Scatter( x=_rounds, y=_h2h["cumulative_a"], mode="lines+markers", name=_name_a, line=dict(color=_color_a, width=3), marker=dict(size=8, color=_color_a), fill="tozeroy", fillcolor=( f"rgba({int(_color_a[1:3], 16)}, {int(_color_a[3:5], 16)}, " f"{int(_color_a[5:7], 16)}, 0.13)" ), ) ) _fig.add_trace( _go.Scatter( x=_rounds, y=_h2h["cumulative_b"], mode="lines+markers", name=_name_b, line=dict(color=_color_b, width=3), marker=dict(size=8, color=_color_b), fill="tozeroy", fillcolor=( f"rgba({int(_color_b[1:3], 16)}, {int(_color_b[3:5], 16)}, " f"{int(_color_b[5:7], 16)}, 0.13)" ), ) ) _fig.update_layout( title=dict( text=f"{_name_a} vs {_name_b}", font=dict(size=18), ), xaxis=dict( title="Round", dtick=1, gridcolor="#eee", ), yaxis=dict( title="Cumulative Score", gridcolor="#eee", zeroline=True, zerolinecolor="#ccc", ), plot_bgcolor="white", paper_bgcolor="white", legend=dict( orientation="h", yanchor="bottom", y=1.02, xanchor="center", x=0.5, ), margin=dict(t=60, b=40, l=50, r=20), height=350, ) # Action grid — colored C/D per round _action_rows = [] for _d in _details: _ca = "**C**" if _d["action_a"] == "Cooperate" else "D" _cb = "**C**" if _d["action_b"] == "Cooperate" else "D" _action_rows.append( f"| {_d['round']} | {_ca} | {_cb} " f"| {_d['payoff_a']:+d} | {_d['payoff_b']:+d} |" ) _action_table = mo.md( "| Round | A | B | A pts | B pts |\n" 
"|:---:|:---:|:---:|:---:|:---:|\n" + "\n".join(_action_rows) ) mo.vstack( [ _stats, mo.ui.plotly(_fig), mo.accordion({"Round-by-Round Details": _action_table}), ], gap=0.5, ) return () # ── Tournament ─────────────────────────────────────────────── @app.cell def tournament_result(mo, strategy_map, play_round_robin, COLORS): import plotly.graph_objects as _go _instances = [cls() for cls in strategy_map.values()] _tournament = play_round_robin(_instances, rounds_per_match=10) _avg = _tournament.avg_scores # Sort by avg score _sorted = sorted(_avg.items(), key=lambda x: x[1]) _names = [s[0] for s in _sorted] _scores = [s[1] for s in _sorted] _bar_colors = [COLORS.get(n, "#888") for n in _names] _fig = _go.Figure() _fig.add_trace( _go.Bar( x=_scores, y=_names, orientation="h", marker=dict( color=_bar_colors, line=dict(color="#333", width=1), ), text=[f"{s:.1f}" for s in _scores], textposition="outside", textfont=dict(size=12), ) ) _fig.update_layout( title=dict( text="Round-Robin Tournament — Average Score per Match", font=dict(size=18), ), xaxis=dict( title="Average Score", gridcolor="#eee", zeroline=True, zerolinecolor="#999", zerolinewidth=2, ), yaxis=dict( title="", tickfont=dict(size=13), ), plot_bgcolor="white", paper_bgcolor="white", margin=dict(t=50, b=40, l=130, r=60), height=380, showlegend=False, ) # Winner callout _best = _sorted[-1] _worst = _sorted[0] mo.vstack( [ mo.md( "---\n\n## Round-Robin Tournament\n\n" "Every strategy plays every other " "(plus itself) for 10 rounds." 
), mo.ui.plotly(_fig), mo.hstack( [ mo.callout( mo.md(f"**{_best[0]}** leads with {_best[1]:.1f} avg points"), kind="success", ), mo.callout( mo.md( f"**{_worst[0]}** trails with {_worst[1]:.1f} avg points" ), kind="warn", ), ], gap=1, ), ], gap=0.5, ) return () # ── Evolution Controls ─────────────────────────────────────── @app.cell def evolution_controls(mo): slider_gens = mo.ui.slider( start=10, stop=100, step=10, value=30, label="Generations", ) slider_noise = mo.ui.slider( start=0.0, stop=0.2, step=0.01, value=0.05, label="Noise", ) slider_match_rounds = mo.ui.slider( start=5, stop=30, step=5, value=10, label="Rounds per Match", ) mo.vstack( [ mo.md( "---\n\n## Evolutionary Dynamics\n\n" "Strategies compete over generations. " "Each generation, the worst performer " "loses a member and the best gains one." ), mo.hstack( [ slider_gens, slider_noise, slider_match_rounds, ], gap=1, ), ] ) return (slider_gens, slider_noise, slider_match_rounds) # ── Evolution Result ───────────────────────────────────────── @app.cell def evolution_result( mo, slider_gens, slider_noise, slider_match_rounds, strategy_map, run_evolution, COLORS, ): import plotly.graph_objects as _go _initial_pops = {name: 5 for name in strategy_map} _snapshots = run_evolution( initial_populations=_initial_pops, strategy_factories=strategy_map, generations=slider_gens.value, rounds_per_match=slider_match_rounds.value, noise=slider_noise.value, seed=42, ) _gens = [s.generation for s in _snapshots] _strat_names = list(strategy_map.keys()) _fig = _go.Figure() # Stacked area — add in reverse so first strategy # is on top visually for _name in reversed(_strat_names): _pops = [s.populations.get(_name, 0) for s in _snapshots] _c = COLORS.get(_name, "#888") _fig.add_trace( _go.Scatter( x=_gens, y=_pops, mode="lines", name=_name, line=dict(width=0.5, color=_c), stackgroup="one", fillcolor=( f"rgba({int(_c[1:3], 16)}, {int(_c[3:5], 16)}, " f"{int(_c[5:7], 16)}, 0.8)" ), hovertemplate=( f"{_name}
<br>Gen %{{x}}: %{{y}} members" ), ) ) _fig.update_layout( title=dict( text="Population Over Generations", font=dict(size=18), ), xaxis=dict( title="Generation", gridcolor="#eee", ), yaxis=dict( title="Population", gridcolor="#eee", ), plot_bgcolor="white", paper_bgcolor="white", legend=dict( orientation="h", yanchor="bottom", y=1.02, xanchor="center", x=0.5, font=dict(size=11), ), margin=dict(t=80, b=40, l=50, r=20), height=500, hovermode="x unified", ) # Find survivors at final generation _final = _snapshots[-1].populations _survivors = {k: v for k, v in _final.items() if v > 0} _survivor_text = ", ".join( f"**{k}** ({v})" for k, v in sorted( _survivors.items(), key=lambda x: x[1], reverse=True, ) ) mo.vstack( [ mo.ui.plotly(_fig), mo.callout( mo.md(f"After {slider_gens.value} generations: {_survivor_text}"), kind="info", ), ], gap=0.5, ) return () # ── The Lesson ─────────────────────────────────────────────── @app.cell def the_lesson(mo): mo.vstack( [ mo.md("---\n\n## The Lesson"), mo.callout( mo.md( '*"What the game of trust teaches us ' "is that the success of trust depends " "not just on individual character, but " "on the environment — how many " "interactions there are, whether there " "are mistakes, and what the payoff " 'structure looks like."*' ), kind="neutral", ), mo.md( """ - **Tit-for-Tat** succeeds not by winning individual matches, but by building cooperation and retaliating against exploitation - **Always Defect** thrives in short games but loses when cooperation has time to establish - **Noise** (mistakes) favors forgiving strategies like **Tit-for-Two-Tats** and **Pavlov** - The structure of the game matters as much as the strategies --- *Built with [OGS](https://github.com/BlockScience/gds-core) and inspired by [The Evolution of Trust](https://ncase.me/trust/) by Nicky Case.* """ ), ] ) return () if __name__ == "__main__": app.run() ``` To run locally: ``` uv run marimo run packages/gds-examples/notebooks/evolution_of_trust.py ``` ``` # Run tests 
(71 tests) uv run --package gds-examples pytest \ packages/gds-examples/games/evolution_of_trust/ -v ``` ### Source files | File | Purpose | | ---------------------------------------- | ------------------------------------------ | | `games/prisoners_dilemma_nash/model.py` | OGS structure + Nash solver + verification | | `games/evolution_of_trust/model.py` | OGS structure with Nicky Case payoffs | | `games/evolution_of_trust/strategies.py` | 8 strategy implementations | | `games/evolution_of_trust/tournament.py` | Match, tournament, evolutionary dynamics | | `notebooks/nash_equilibrium.py` | Interactive Nash analysis notebook | | `notebooks/evolution_of_trust.py` | Interactive simulation notebook | All paths relative to `packages/gds-examples/`. ______________________________________________________________________ ## Connection to Research Boundaries This work provides concrete evidence for three open questions in [Research Boundaries](https://blockscience.github.io/gds-core/research/research-boundaries/index.md): **RQ2 (Timestep semantics):** The tournament simulator implements a specific execution model — synchronous iterated play with optional noise — on top of a structural specification that encodes no execution semantics. This is exactly the pattern anticipated in RQ2: "Each DSL defines its own execution contract if/when it adds simulation." **RQ3 (OGS as degenerate dynamical system):** Both projections confirm that OGS games are pure policy (`h = g`, `f = ∅`). The Nash solver computes equilibria over the policy layer. The simulator plays strategies through the policy layer. Neither requires a state transition mechanism. The "iterated" aspect of the tournament is handled entirely by the simulation harness, not by GDS temporal loops. **RQ4 (Cross-lens analysis):** The two projections operate on different analytical lenses — equilibrium (static, game-theoretic) vs. tournament dynamics (iterated, evolutionary). The specification supports both simultaneously. 
Whether the Nash equilibrium (Defect, Defect) is also an evolutionarily stable strategy is answerable by running both tools on the same specification — a concrete instance of the cross-lens analysis envisioned in RQ4. # Milestone: Layer 0 Stabilization ## What Changed Three structural gaps between the architecture document and the codebase have been closed, bringing Layer 0 (Composition Core) into formal alignment with the documented contract. ### 1. Typed `InputIR` `SystemIR.inputs` changed from `list[dict[str, Any]]` (untyped) to `list[InputIR]` (typed Pydantic model). ``` class InputIR(BaseModel, frozen=True): name: str metadata: dict[str, Any] = Field(default_factory=dict) ``` - Layer 0 defines only `name` + a generic `metadata` bag - Domain packages store their richer fields in `metadata` when projecting to SystemIR - OGS's `to_system_ir()` now produces `InputIR` objects (previously built raw dicts, silently dropping the `shape` field) - `compile_system()` accepts an optional `inputs` parameter; Layer 0 never infers inputs - G-004 (dangling wirings) now uses typed attribute access (`inp.name`) instead of defensive `isinstance(inp, dict)` guards ### 2. Real G-003 Direction Consistency The G-003 check was previously a stub that always passed with INFO severity. It now validates: **A) Flag consistency** (ERROR severity): - COVARIANT + `is_feedback=True` is a contradiction (feedback implies contravariant) - CONTRAVARIANT + `is_temporal=True` is a contradiction (temporal implies covariant) **B) Contravariant port-slot matching** (ERROR severity): - For CONTRAVARIANT wirings, the label must be a token-subset of the source's `backward_out` or the target's `backward_in` - This complements G-001, which only covers covariant wirings ### 3. 
Unified `sanitize_id` Five duplicated copies of `_sanitize_id` across two packages (with one variant using a subtly different regex) have been replaced by a single canonical `sanitize_id()` in `gds.ir.models`: ``` def sanitize_id(name: str) -> str: sanitized = re.sub(r"[^A-Za-z0-9_]", "_", name) if sanitized and sanitized[0].isdigit(): sanitized = "_" + sanitized return sanitized ``` Exported from `gds.__init__` and used by both gds-framework and gds-games. ## Why It Matters - **Layer 0 is now formally consistent with the architecture document.** Every IR model is typed, every generic check validates real invariants, and shared utilities are consolidated. - **New DSL development can proceed with confidence.** Domain packages (e.g., `gds-stockflow`, `gds-control`) can depend on a stable, well-typed Layer 0 contract. - **The OGS interop bridge is cleaner.** `PatternIR.to_system_ir()` produces fully typed IR with no data loss. ## Acceptance Criteria - [x] `SystemIR.inputs` is `list[InputIR]` (not `list[dict]`) - [x] G-003 detects flag contradictions and contravariant port-slot mismatches - [x] G-004 recognizes `InputIR` names as valid wiring endpoints - [x] `sanitize_id` has a single definition used by all packages - [x] All tests pass: 373 (gds-framework) + 162 (gds-games) + 57 (gds-viz) + 168 (gds-examples) - [x] Lint clean across all packages ## Downstream Impact **gds-games:** OGS `to_system_ir()` now produces `InputIR` objects and includes the previously-dropped `shape` field in metadata. OGS visualization and verification code uses the canonical `sanitize_id`. No behavioral changes to OGS tests. **Future DSLs:** Any new domain package can import `InputIR` and `sanitize_id` from `gds` and depend on a complete, typed Layer 0 IR. # DSL Roadmap ## Current Status Layer 0 (Composition Core) is stable and sealed. 
Three domain DSLs are implemented and validated: | DSL | Domain | Package | Tests | Canonical | | ------------- | ------------------------- | --------------- | ------ | ------------------------------------------------------------------- | | **OGS** | Compositional game theory | `gds-games` | Mature | Projected via `to_system_ir()` | | **StockFlow** | System dynamics | `gds-stockflow` | 215 | Clean — verified by cross-built equivalence | | **Control** | State-space control | `gds-control` | 117 | Clean — verified by cross-built equivalence + parametric invariants | All three compile to `GDSSpec` / `SystemIR` and produce clean canonical decompositions without modification to the core. ### Validated Architectural Claims - **Canonical `h = f ∘ g` is correctly minimal.** Three independent domains have not required extensions. - **No DSL compiler generates `ControlAction` blocks.** All non-state-updating blocks across all DSLs map to `Policy`. The `ControlAction` role serves as the output map `y = C(x, d)` for explicit inter-system composition at `>>` boundaries. See [Controller-Plant Duality](https://blockscience.github.io/gds-core/framework/design/controller-plant-duality/index.md). - **A convergent composition pattern has emerged:** ``` (peripheral observers | exogenous inputs) >> (decision logic) >> (state dynamics) .loop(state → observers) ``` - **GDS IR is the IR.** No domain has needed a parallel IR stack. Domain-specific IR (OGS `PatternIR`) projects to `SystemIR` for cross-cutting tools. ## DSL Contract Checklist Every new DSL must answer the **Semantic Layer Contract** — eight questions that fully specify how a domain maps onto the [composition algebra](https://blockscience.github.io/gds-core/framework/guide/architecture/index.md): 1. **Block semantics** — what does a block represent? 1. **Sequential (`>>`)** — what operation is induced? 1. **Parallel (`|`)** — what is the product structure? 1. **Feedback (`.feedback()`)** — what fixed-point or closure? 1. 
**Temporal (`.loop()`)** — what cross-step semantics? 1. **Execution model** — static? discrete-time? continuous? equilibrium? 1. **Validation layer** — what domain invariants? What checks? 1. **Canonical projection** (recommended) — what is the domain normal form? ### Implementation Checklist For each new DSL package: - [ ] Subclass `AtomicBlock` with domain-specific block types (or reuse GDS roles) - [ ] Define domain IR models (extending or wrapping `BlockIR`, `WiringIR`) — or compile directly to `GDSSpec` - [ ] Implement compilation using `compile_model()` → `GDSSpec` and `compile_to_system()` → `SystemIR` - [ ] Implement domain verification checks (pluggable `Callable[[DomainModel], list[Finding]]`) - [ ] Optional: delegate to G-001..G-006 via `compile_to_system()` + `verify()` - [ ] Validate canonical projection (parametric invariant tests recommended) - [ ] Add cross-built equivalence tests (DSL-compiled vs hand-built GDSSpec) - [ ] Add per-package `CLAUDE.md` documenting architecture - [ ] Add tests covering compilation, verification, and projection ### Release Gates Before the first external release of a new DSL: - [ ] All tests pass (domain + GDS generic checks via projection) - [ ] Lint clean (`ruff check`) - [ ] Architecture document updated with the filled-out semantic layer contract - [ ] At least one worked example model in `gds-examples` - [ ] API docs generated via mkdocstrings - [ ] Independent PyPI publishing via tag-based workflow ## Potential Future DSLs These formalisms are in-scope per the [architecture](https://blockscience.github.io/gds-core/framework/guide/architecture/index.md) scope boundary and should compile to the existing substrate without architectural changes: | Formalism | Expected Mapping | Complexity | | ---------------------- | ------------------------------------------------------------------ | ---------- | | Signal processing | Blocks = filters/transforms, `>>` = pipeline, `\|` = multichannel | Low | | Compartmental models | 
Blocks = compartments/transfers, similar to stockflow | Low | | Queueing networks | Blocks = queues/servers, `>>` = routing, `\|` = parallel servers | Medium | | Bayesian networks | Blocks = conditional distributions, `>>` = dependency chain | Medium | | Reinforcement learning | Blocks = agent/environment, follows convergent composition pattern | Medium | If a candidate DSL cannot compile to `GDSSpec` without modifying canonical or the role system, that is a signal that the scope boundary has been reached — not that the architecture needs extension. ## Layer Interaction Rules From the architecture document — these are non-negotiable: - Layer 0 must not import Layer 1 or Layer 2 - Layer 1 may depend on Layer 0 - Layer 2 may depend on Layer 1 - Layer 3 (models) depends on Layer 2 - Layer 4 (views) may depend on any lower layer but must not modify them - Architectural acyclicity is required ## Open Research Directions See [Research Boundaries and Open Questions](https://blockscience.github.io/gds-core/research/research-boundaries/index.md) for detailed analysis of: 1. **MIMO semantics** — scalar ports vs vector-valued spaces 1. **Timestep semantics** — what `.loop()` means across DSLs when execution is introduced # Research Boundaries and Open Questions > Design note documenting the architectural boundary between structural compositional modeling (validated) and dynamical execution/analysis (next frontier). Written after the third independent DSL (gds-control) compiled cleanly to GDSSpec with no canonical modifications. 
______________________________________________________________________ ## Status: What Has Been Validated Six independent DSLs now compile to the same algebraic core: | DSL | Domain | Decision layer (g) | Update layer (f) | Canonical | | --------------- | --------------- | ----------------------------------------------- | ----------------------- | --------------------------------------- | | gds-stockflow | System dynamics | Auxiliaries + Flows | Accumulation mechanisms | Clean | | gds-control | Control theory | Sensors + Controllers | Plant dynamics | Clean | | gds-games (OGS) | Game theory | All games (observation → decision → evaluation) | ∅ (no state update) | Clean — via `compile_pattern_to_spec()` | All three reduce to the same canonical form without modification: ``` d = g(x, z) x' = f(x, d) ``` Key structural facts: - Canonical `h = f ∘ g` has survived three domains with no extensions required. - No DSL compiler emits `ControlAction` blocks -- all non-state-updating blocks map to `Policy`. The `ControlAction` role serves as the output map `y = C(x, d)` for explicit inter-system composition at `>>` boundaries. - Role partition (boundary, policy, mechanism) is complete and disjoint in every case. - Cross-built equivalence (DSL-compiled vs hand-built) has been verified at Spec, Canonical, and SystemIR levels for all validated DSLs. - OGS canonical validation confirms `f = ∅`, `X = ∅` — compositional game theory is a **degenerate dynamical system** where `h = g` (pure policy, no state transition). See [RQ3](#research-question-3-ogs-as-degenerate-dynamical-system) below. A canonical composition pattern has emerged across DSLs: ``` (peripheral observers | exogenous inputs) >> (decision logic) >> (state dynamics) .loop(state → observers) ``` This motif appears in system dynamics, state-space control, and (structurally) reinforcement learning. It is not prescribed by the algebra — it is a convergent pattern discovered through independent DSL development. 
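The canonical recurrence above can be sketched in a few lines of plain Python (an illustrative toy, not GDS framework code; all names here are hypothetical): `g` is the decision layer, `f` the update layer, and one synchronous step computes `x' = f(x, g(x, z))`.

```python
# Toy instance of the canonical form: d = g(x, z); x' = f(x, d).
# A thermostat-like system: g decides whether to heat, f updates state.

def g(x: float, z: float) -> float:
    """Decision layer: emit a heating signal if temperature x is below setpoint z."""
    return 1.0 if x < z else 0.0

def f(x: float, d: float) -> float:
    """Update layer: temperature decays toward 0; heating adds 2 degrees."""
    return 0.9 * x + 2.0 * d

def h(x: float, z: float) -> float:
    """One synchronous timestep: x' = f(x, g(x, z))."""
    return f(x, g(x, z))

x = 15.0
for _ in range(50):
    x = h(x, z=20.0)
# x approaches the closed-loop fixed point at the setpoint.
```

Each validated DSL instantiates this same skeleton with domain-specific `g` and `f`.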
______________________________________________________________________ ## Research Question 1: MIMO Semantics in a Compositional Dynamical Substrate ### Background The current architecture represents multi-input multi-output (MIMO) systems structurally as collections of scalar ports. Cross-coupling is encoded inside block-local semantics (e.g., update functions), not in the wiring topology. For example in gds-control: - Each state variable is its own Entity. - Each controller output is a separate port. - Each dynamics mechanism reads multiple scalar control ports. - Coupling (e.g., matrix A or B terms) is embedded inside `f`. This is sufficient for structural modeling and canonical decomposition. However, classical control theory treats `x ∈ R^n`, `u ∈ R^m`, `y ∈ R^p` as vector spaces with explicit matrix semantics. This raises a fundamental architectural question. ### The Question **Should MIMO structure remain decomposed into scalar channels (structural MIMO), or should vector-valued spaces become first-class citizens in the type system (algebraic MIMO)?** ### Option A — Structural MIMO (Current Design) Each dimension is modeled as an independent scalar port. Vector structure emerges from parallel composition. **Properties:** - Canonical remains dimension-agnostic - TypeDef and Space remain scalar - Coupling lives in block-local semantics - Dimensionality is implicit (count of states, inputs, etc.) **Advantages:** - Minimal extension to Layer 0 - Canonical remains purely structural - DSLs remain lightweight - Works across stockflow, games, and control without special treatment **Limitations:** - No static dimension checking - Cannot extract A, B, C, D matrices directly from structure - No structural controllability/observability analysis - No rank-based reasoning - Numerical coupling invisible at IR level **Interpretation:** This treats GDS as a structural substrate, not a linear algebra system. 
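Option A can be illustrated with a self-contained toy (no GDS API; everything below is hypothetical): a two-state linear system `x' = A x` decomposed into scalar channels, with the off-diagonal coupling hidden inside each block-local update function.

```python
# Structural MIMO: each state is its own scalar "port"; the matrix A is
# visible only inside the block-local update functions, not in the wiring.
A = [[0.9, 0.1],
     [0.2, 0.8]]

def f1(x1: float, x2: float) -> float:
    # Mechanism for state 1; the A[0][1] coupling term is block-local.
    return A[0][0] * x1 + A[0][1] * x2

def f2(x1: float, x2: float) -> float:
    # Mechanism for state 2; the A[1][0] coupling term is block-local.
    return A[1][0] * x1 + A[1][1] * x2

# One synchronous step from x = (1, 0).
x1, x2 = f1(1.0, 0.0), f2(1.0, 0.0)
```

At the IR level only the two scalar ports are visible — which is exactly why matrix extraction and rank-based reasoning are unavailable under this option.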
### Option B — First-Class Vector Spaces Introduce vector-valued spaces with explicit dimensionality: ``` StateSpace(n) InputSpace(m) OutputSpace(p) ``` Ports carry structured types, not scalars. **Properties:** - Dimensionality becomes explicit - Wiring validates dimension compatibility - Canonical operates over vector-valued X and U - Matrix structure potentially extractable **Advantages:** - Enables structural controllability tests - Enables matrix extraction - Enables symbolic linearization - Closer alignment with classical control theory **Costs:** - Type system complexity increases - Cross-DSL consistency must be preserved - Potential leakage of numerical semantics into structural core - Requires careful integration with canonical ### Deeper Structural Question Is GDS intended to be: 1. A compositional topology algebra (structure only), or 1. A compositional linear-algebra-aware modeling language? If (1), structural MIMO is sufficient. If (2), vector semantics become necessary. ### Possible Hybrid Approach - Keep scalar structural core (Layer 0 unchanged) - Add optional dimensional metadata to spaces - Build matrix extraction as a canonical post-processing tool (Layer 4) This preserves architectural purity while enabling analysis. The metadata would be inert — stripped at compile time like tags — but available to projection tools that know how to interpret it. ### Current Recommendation Stay with structural MIMO. The scalar decomposition is correct for the current purpose (structural modeling and canonical decomposition). Vector semantics should be explored only when a concrete analysis tool (e.g., structural controllability) demonstrates that scalar decomposition is genuinely insufficient, not merely inconvenient. ______________________________________________________________________ ## Research Question 2: What Does a Timestep Mean Across DSLs? ### Background Temporal recurrence is represented structurally via `.loop()` and temporal wirings. 
This operator is used in multiple DSLs with different semantic intentions: | DSL | Temporal Meaning | What `.loop()` Represents | | --------- | ----------------- | --------------------------------------------------- | | StockFlow | State persistence | Stock level at t feeds auxiliaries at t+1 | | Control | State observation | Plant state at t feeds sensors at t+1 | | OGS | Round iteration | Decisions at round t feed observations at round t+1 | At the IR level, all temporal wirings are identical: ``` source → target (temporal, covariant) ``` Canonical treats recurrence purely algebraically — `x' = f(x, g(x, z))` — without encoding evaluation scheduling, delay, or sampling semantics. This is correct structurally. But it is incomplete for execution. ### The Question **What is the formal meaning of a timestep in GDS, and should execution semantics be standardized across DSLs?** ### Current Implicit Assumption All current DSLs assume synchronous discrete-time execution (Moore-style): 1. Compute `d = g(x[t], z[t])` 1. Compute `x[t+1] = f(x[t], d)` 1. All observation and control occur within one step ### Where This Breaks Down Different domains could legitimately interpret `.loop()` differently: | Domain | Temporal Interpretation | | --------------- | ------------------------------------- | | StockFlow | Accumulation (state += flow * dt) | | Control | Sampling (sensor reads current state) | | Delayed control | `x[t-1]` feeds controller, not `x[t]` | | Hybrid systems | Mode-dependent recurrence | | Continuous-time | Integration over dt | The algebra does not distinguish these. The structural fact "information flows from state output to sensor input across timesteps" is the same in all cases. The semantic question "is this observation delayed?" is invisible at the IR level. ### The Core Tension `.loop()` encodes **structural recurrence** but not **scheduling semantics**. 
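A toy sketch makes the tension concrete (illustrative only; none of this is framework code): the same structural recurrence — state feeding the decision layer across timesteps — yields different trajectories under zero-delay and one-step-delay observation contracts.

```python
# Two scheduling interpretations of one structural loop.

def g(x_obs: float) -> float:
    return -0.5 * x_obs              # proportional decision rule

def f(x: float, d: float) -> float:
    return x + d                     # additive state update

def run(delayed_observation: bool, steps: int = 3) -> list[float]:
    x, x_prev = 1.0, 1.0
    traj = []
    for _ in range(steps):
        # The only difference: which state snapshot g observes.
        d = g(x_prev if delayed_observation else x)
        x_prev, x = x, f(x, d)
        traj.append(x)
    return traj

zero_delay = run(delayed_observation=False)
one_step_delay = run(delayed_observation=True)
# Identical wiring at the IR level; the trajectories diverge at runtime.
```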
If simulation is introduced, the following questions must be answered: - Is temporal wiring zero-delay or one-step delay? - Are updates synchronous or staged? - Does observation occur before or after state update? - Is the timestep uniform across all temporal wirings? Without explicit execution semantics, different DSLs may assume incompatible timestep meanings while sharing the same IR. ### Option A — Canonical Execution Model Define execution directly from canonical: ``` d = g(x, z) # observation + decision x_next = f(x, d) # state update ``` All DSLs must conform to this synchronous discrete-time semantics. A timestep is always: observe, decide, update. No delays. No staging. **Advantages:** - Minimal - Uniform - Canonical becomes directly executable **Limitations:** - Cannot express delayed observation without additional state variables - Continuous-time requires external discretization - Hybrid timing needs extensions beyond `.loop()` ### Option B — Execution Semantics Layer Introduce an explicit execution contract as metadata, not as part of the IR: ``` @dataclass(frozen=True) class ExecutionSemantics: synchronous: bool = True observation_delay: int = 0 integration_scheme: str = "explicit" ``` Keep IR structural. Attach semantics externally. Each DSL declares its own execution contract. A simulation harness reads the contract and dispatches accordingly. **Advantages:** - Clean separation of structure and dynamics - Supports multiple scheduling regimes - Preserves canonical purity - DSLs remain composable at the structural level even with different execution semantics **Costs:** - Additional abstraction layer - Increased conceptual surface area - Cross-DSL simulation becomes a compatibility question rather than a guarantee ### Deeper Question Is GDS: 1. A structural modeling algebra only? 1. Or a full dynamical execution framework? 
If (1), temporal semantics remain external and domain-local (consistent with the current architecture principle: "Simulation is domain-local"). If (2), a principled shared timestep model must be defined. ### Current Recommendation Temporal semantics should remain external. The architecture document already states: "The protocol provides no execution semantics." This is correct. The right approach is: 1. Keep `.loop()` as purely structural recurrence (no scheduling meaning at Layer 0). 1. Each DSL defines its own execution contract if/when it adds simulation. 1. A shared discrete-time runner (if built) operates on canonical form and assumes synchronous Moore semantics as the default. 1. DSLs that need different timing (delays, continuous, hybrid) declare it explicitly and are not required to be cross-simulatable. ______________________________________________________________________ ## Research Question 3: OGS as Degenerate Dynamical System ### Finding Canonical projection of OGS patterns produces: ``` X = ∅ (no state variables — games have no persistent entities) U = inputs (PatternInput → BoundaryAction) D = all game forward_out ports g = all games (observation → decision → evaluation) f = ∅ (no mechanisms — games don't update state) ``` The canonical decomposition reduces to `h = g`. There is no state transition. The system is **pure policy**. This is not a failure of the projection — it is the correct structural characterization of compositional game theory within GDS. ### Why X = ∅ Is Expected Games compute equilibria. They do not write to persistent state variables. The game-theoretic objects (strategies, utilities, coutilities) flow through the composition as signals, not as state updates. Even corecursive loops (repeated games) carry information forward as observations, not as entity mutations. In category-theoretic terms: open games are morphisms in a symmetric monoidal category with feedback. They are maps, not state machines. 
The "state" of a repeated game is the sequence of past plays — which in OGS is modeled as observations flowing through the composition (the History game), not as Entity variables.

### Why f = ∅ Is Semantically Correct

No OGS game type performs a state update:

| Game Type | Port Structure | Role |
| --------------------- | ------------------------------- | ------------------------------ |
| DecisionGame | (X,Y,R,S) → full 4-port | Policy — strategic choice |
| CovariantFunction | (X,Y) → forward only | Policy — observation transform |
| ContravariantFunction | (R,S) → backward only | Policy — utility transform |
| DeletionGame | (X,∅) → discard | Policy — information loss |
| DuplicationGame | (X, X×X) → broadcast | Policy — information copy |
| CounitGame | (X,∅,∅,X) → future conditioning | Policy — temporal reference |

All six map to `Policy`. None updates an Entity. Therefore `f` is empty and the mechanism layer is vacuous.

### The Spectrum of Canonical Dimensionality

Three domains now provide three distinct points on the canonical spectrum:

| Domain | \|X\| | \|f\| | \|g\| | Canonical Form | Interpretation |
| --- | --- | --- | --- | --- | --- |
| OGS (games) | 0 | 0 | all | `h = g` | Stateless — pure equilibrium computation |
| Control | n | n | sensors + controllers | `h = f ∘ g` | Full — observation, decision, state update |
| StockFlow | n | n | auxiliaries + flows | `h = f ∘ g` | State-dominant — accumulation dynamics |

This reveals that `h = f ∘ g` is not merely "a decomposition of dynamical systems." It is a **transition calculus** that gracefully degenerates:

- When `f = ∅`: the system is pure policy (games, decision logic, signal processing)
- When `g` is thin: the system is state-dominant (accumulation, diffusion)
- When both are substantial: the system is a full feedback dynamical system

The unifying abstraction is `(x, u) ↦ x'` with varying dimensionality of X. All three domains are specializations of this map.
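The graceful degeneration can be sketched in a few lines. Everything here is illustrative — `step`, the lambdas, and the numeric gain are hypothetical stand-ins, not framework API:

```python
def step(x, u, f, g):
    """One canonical step, h = f ∘ g: x' = f(x, g(x, u))."""
    d = g(x, u)     # observe + decide (policy layer)
    return f(x, d)  # update state (mechanism layer)

# OGS profile: X = ∅ — f is vacuous, so the step reduces to pure policy (h = g).
game = lambda x, u: u * 2       # g carries all the computation
no_update = lambda x, d: x      # f has nothing to write
assert step((), 3, no_update, game) == ()

# Control profile: both f and g substantial (h = f ∘ g, full).
sense = lambda x, u: u - x            # g: error signal
integrate = lambda x, d: x + 0.5 * d  # f: state update with gain 0.5
assert step(0.0, 10.0, integrate, sense) == 5.0
```

The same `step` serves both profiles; only the dimensionality of `x` and the content of `f` and `g` change, which is the sense in which the three domains are specializations of one map.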
### Structural Gap That Was Bridged OGS originally had no path to canonical: 1. OGS blocks subclassed `OpenGame(Block)`, not GDS roles (`Policy`/`Mechanism`/`BoundaryAction`) 1. OGS produced `PatternIR → SystemIR`, never `GDSSpec` 1. `project_canonical()` classifies blocks via `isinstance` against role classes The bridge (`compile_pattern_to_spec()`) resolves this by: - Mapping all atomic games to `Policy` blocks (preserving their GDS Interface) - Mapping `PatternInput` to `BoundaryAction` - Resolving flows via the existing compiler, then registering as `SpecWiring` This is a parallel path — `PatternIR` remains for OGS-specific tooling (reports, visualization, game-theoretic vocabulary). The bridge enables canonical projection without replacing the existing pipeline. ### Implication for PatternIR `PatternIR` is no longer required for semantic correctness. Its remaining justifications: 1. **Report generation** — Jinja2 templates reference `OpenGameIR` fields (game_type, signature as X/Y/R/S) 1. **Game-theoretic vocabulary** — `FlowType.OBSERVATION` vs `FlowType.UTILITY_COUTILITY` carries domain meaning 1. **Visualization** — Mermaid generators use game-specific metadata These are view-layer concerns (Layer 4). Whether to consolidate `PatternIR` into `GDSSpec` + metadata is a refactoring question, not a correctness question. The bridge proves they produce equivalent canonical results. ______________________________________________________________________ ## Research Question 4: Cross-Lens Analysis — When Equilibrium and Reachability Disagree ### Background With six DSLs compiling to GDSSpec, the framework now supports two independent analytical lenses on the same system: 1. **Game-theoretic lens** (via PatternIR) — equilibria, incentive compatibility, strategic structure, utility propagation 1. **Dynamical lens** (via GDSSpec/CanonicalGDS) — reachability, controllability, stability, state-space structure These lenses are orthogonal. 
Neither subsumes the other: - Game equilibrium does not imply dynamical stability (a Nash equilibrium can be an unstable fixed point) - Dynamical stability does not imply strategic optimality (a stable attractor can be Pareto-dominated) - Reachability does not imply incentive compatibility (a reachable state may require irrational agent behavior) ### The Question **When the two lenses disagree for a concrete system, what does that disagreement mean — and which lens, if either, should be treated as normative?** ### Why Neither Lens Can Be Normative If the game-theoretic lens is normative ("redesign dynamics to enforce equilibrium"), you assume the equilibrium concept is correct for the domain. But Nash equilibria can be dynamically unstable, Pareto-dominated, or unreachable from feasible initial conditions. If the dynamical lens is normative ("redesign incentives to force stability"), you assume the target attractor is desirable. But stable attractors can be socially inefficient or represent lock-in traps. ### The Architectural Answer GDS is a **diagnostic instrument**, not a normative engine. The framework's value is in surfacing the disagreement. When equilibrium and reachability conflict, that conflict is information: - "Your incentive design has unintended dynamical consequences" (equilibrium exists but is unreachable) - "Your dynamics have unintended strategic consequences" (stable point exists but is not an equilibrium) The modeler resolves the tension using domain knowledge. The framework provides the structured vocabulary to state the problem precisely. ### Implications for Architecture This means the two-lens architecture must remain genuinely parallel: ``` Pattern ├─ PatternIR → game-theoretic analysis (equilibria, incentives) └─ GDSSpec → dynamical analysis (reachability, stability) ``` Neither representation should absorb the other. 
If canonical were extended to encode equilibrium concepts, or if PatternIR were extended to encode reachability, the lenses would collapse and the diagnostic power would be lost.

The correct architectural move is to build **cross-lens queries** — analyses that take both representations as input and report on their (dis)agreement:

- "Is this Nash equilibrium a stable fixed point of the state dynamics?"
- "Is this stable attractor consistent with individual rationality?"
- "Does this reachable state satisfy incentive compatibility?"

These are research-level questions that require both lenses simultaneously.

### Connection to Timestep Semantics (RQ2)

Cross-lens disagreement can also arise from implicit timestep incompatibility. If the game-theoretic lens assumes simultaneous play but the dynamical lens assumes sequential evaluation, "equilibrium" and "stability" may refer to different execution models operating on the same structural specification.

This reinforces the RQ2 recommendation: temporal semantics must remain explicit and domain-local. Cross-lens analysis must verify that both lenses assume compatible execution semantics before comparing their conclusions.

### Trigger

This question becomes concrete when:

| Trigger | What It Reveals |
| -------------------------------------------------------- | ---------------------------------------------------------------------- |
| Building a game-theoretic + dynamical co-analysis tool | Whether the two lenses can be queried simultaneously |
| A concrete system where equilibrium ≠ stable fixed point | Whether the framework can express the disagreement |
| Mechanism design applications | Whether the framework supports prescriptive (not just descriptive) use |
| Lean/formal verification exports | Whether canonical's analytical lossiness causes proof gaps |

### Current Recommendation

Do not attempt to resolve the tension architecturally. Keep the lenses parallel.
Build cross-lens analysis as a separate concern that consumes both representations. The framework's role is to make the question askable, not to answer it. ______________________________________________________________________ ## Strategic Assessment These questions mark the boundary between: - **Structural compositional modeling** — validated by six DSLs, canonical proven stable - **Dynamical execution and control-theoretic analysis** — the next frontier They are the first genuine architectural fork points after validating canonical centrality. ### What This Means for Development Priority Neither question requires immediate resolution. Both are triggered by concrete future work: | Trigger | Research Question Activated | | ------------------------------------------------------------- | --------------------------- | | Building a structural controllability analyzer | RQ1 (MIMO semantics) | | Building a shared simulation harness | RQ2 (timestep semantics) | | Adding a continuous-time DSL | RQ1 + RQ2 | | Adding a hybrid systems DSL | RQ1 + RQ2 | | Extracting state-space matrices (A, B, C, D) | RQ1 | | Consolidating OGS PatternIR into GDSSpec | RQ3 (refactoring decision) | | Adding a stateless DSL (signal processing, Bayesian networks) | RQ3 (validates X=∅ pattern) | Until one of these triggers occurs, the current architecture is complete and correct for its stated purpose: structural compositional modeling with formal verification and canonical decomposition. ### The Stability Claim After three independent domains with three distinct canonical profiles (`h = g`, `h = f ∘ g` full, `h = f ∘ g` state-dominant): - The composition algebra (Layer 0) is validated and should not change. - The canonical projection (`h = f ∘ g`) is correctly minimal — and gracefully degenerates when `f = ∅`. - The role system (Boundary, Policy, Mechanism) covers all three domains without `ControlAction`. - The type/space system handles semantic separation across all three domains. 
- The temporal loop pattern is structurally uniform and semantically adequate for structural modeling. - Cross-built equivalence holds at Spec, Canonical, and SystemIR levels for all validated DSLs. The canonical form `(x, u) ↦ x'` with varying dimensionality of X now functions as a **unified transition calculus** — not merely a decomposition of dynamical systems, but a typed algebra of transition structure that absorbs stateless (games), stateful (control), and state-dominant (stockflow) formalisms under one composition substrate. Further DSLs (signal processing, compartmental models, queueing networks) should compile to this same substrate without architectural changes. If they don't, that is a signal that the boundary has been reached — not that the architecture needs extension. # View Stratification After Canonical Integration ## Context With `compile_pattern_to_spec()` proven across three DSLs, we now have three distinct representations of a composed system. Each carries information at a different abstraction level. Views (reports, visualizations, dashboards) should consume from the representation that is authoritative for their concern. ## The Three Representations ``` Domain Model (Pattern, StockFlowModel, ControlModel) ├─ compile_to_ir() → Domain IR → domain vocabulary ├─ compile_*_to_spec() → GDSSpec → semantic classification │ └─ project_canonical() → CanonicalGDS └─ compile_to_system() → SystemIR → structural topology ``` | Representation | Abstraction Level | Authoritative For | | ------------------------------- | --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | | **Domain IR** (PatternIR, etc.) 
| Domain-specific | Domain vocabulary: game types, signatures, flow types, stock/flow semantics, control matrices, action spaces, terminal conditions, domain tags | | **GDSSpec / CanonicalGDS** | Semantic (GDS theory) | Role classification (Policy/Mechanism/BoundaryAction), state variables, canonical decomposition h = f ∘ g, update map, decision/input ports | | **SystemIR** | Structural (topology) | Block graph, wiring connections, hierarchy tree, composition operators | ### Why Three Representations? None of these representations is the system itself — each is a map that hides something by design. Domain IR hides composition structure. GDSSpec hides domain vocabulary. SystemIR hides roles and state. Keeping them separate is not duplication. Each answers a different class of question, and a view that tries to answer one layer's question using another layer's data is working from the wrong map. ## Authority Rules ### CanonicalGDS is the semantic authority `project_canonical()` is a derivation, not a projection. When it classifies a block as Policy, that is a structural consequence of the block's interface and role type. Domain-level classifications (game_type, flow_type, stock vs auxiliary) are refinements within canonical categories, not alternatives to them. The relationship is **refinement**: ``` CanonicalGDS: "This is a Policy block" ← universal (GDS layer) Domain IR: "This is a DecisionGame(X,Y,R,S)" ← domain-specific (OGS layer) "This is an Auxiliary computing net flow" ← domain-specific (StockFlow layer) "This is a Sensor reading plant state" ← domain-specific (Control layer) ``` Every DecisionGame is a Policy. Every Auxiliary is a Policy. Every Sensor is a Policy. Not every Policy is a DecisionGame. Domain IRs refine within canonical categories. 
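The refinement relationship amounts to a pair of lookups, one per authority. The dataclasses below are minimal hypothetical stand-ins for `CanonicalGDS` and a domain IR — not the real schemas — sketching how a view asks each layer only its own question:

```python
from dataclasses import dataclass, field

@dataclass
class CanonicalStub:                  # GDS layer: universal role classification
    roles: dict = field(default_factory=dict)

@dataclass
class PatternIRStub:                  # OGS layer: refinement within a role
    game_types: dict = field(default_factory=dict)

canonical = CanonicalStub(roles={"Negotiator": "Policy", "Ledger": "Mechanism"})
domain = PatternIRStub(game_types={"Negotiator": "DecisionGame"})

def describe(block: str) -> str:
    """Universal role from canonical, refined by domain vocabulary if present."""
    role = canonical.roles[block]          # never inferred from game_types
    refinement = domain.game_types.get(block)
    return f"{role} ({refinement})" if refinement else role

assert describe("Negotiator") == "Policy (DecisionGame)"
assert describe("Ledger") == "Mechanism"
```

Note the direction of authority: the role comes only from the canonical side, and the domain IR can refine but never override it.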
### What must never be inferred from domain IRs - MSML role classification (Policy / Mechanism / BoundaryAction / ControlAction) - State variable identification - The canonical decomposition (which blocks are in f vs g) - The update map (which mechanisms update which state variables) These are GDS-layer semantics. Deriving them from domain-specific type enums (game_type, element kind, etc.) is a layer violation: it reimplements `project_canonical()` ad-hoc and can drift from the authoritative source. ### What must always come from domain IRs Each DSL's IR carries vocabulary that has no GDS counterpart: - **OGS**: `game_type`, `signature (X,Y,R,S)`, `flow_type`, `terminal_conditions`, `action_spaces`, `initialization`, `is_corecursive`, domain `tags` - **StockFlow**: stock/flow/auxiliary distinctions, accumulation semantics, flow equations - **Control**: plant/sensor/controller structure, state-space matrices (A,B,C,D when present) These fields are the domain's analytical vocabulary. They exist only in domain IRs and are invisible to GDSSpec and SystemIR. ### What should come from SystemIR - Block-to-block wiring graph - Hierarchy tree (composition nesting) - Composition type at each hierarchy node - Feedback and temporal loop detection SystemIR is the structural truth. Domain IRs may carry a projection (e.g., `PatternIR.to_system_ir()`), but views that only need topology should consume SystemIR directly. 
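A view that needs only topology can consume the wiring graph alone. The sketch below treats wirings as plain `(source, destination)` pairs — a hypothetical simplification of SystemIR's wiring records — and detects feedback loops with a depth-first search:

```python
def find_feedback(wirings):
    """Return True if the directed wiring graph contains a cycle (DFS)."""
    graph = {}
    for src, dst in wirings:
        graph.setdefault(src, []).append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on stack / done
    color = {}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY:                    # back edge -> feedback loop
                return True
            if c == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in graph if color.get(n, WHITE) == WHITE)

# A closed control loop has feedback; a pure pipeline does not.
assert find_feedback([("Plant", "Sensor"), ("Sensor", "Controller"),
                      ("Controller", "Plant")])
assert not find_feedback([("Source", "Policy"), ("Policy", "Sink")])
```

Nothing in this view touches roles, state variables, or domain vocabulary — which is exactly why it should read SystemIR rather than a domain IR.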
## View Classification by Source Views built on domain models naturally fall into categories by data source: | View Concern | Authoritative Source | Examples | | -------------------------- | -------------------- | ------------------------------------------------------------------------------------ | | Domain semantics | Domain IR | Game signatures, flow types, stock/flow diagrams, action spaces, terminal conditions | | Role classification (MSML) | CanonicalGDS | Policy/Mechanism/BoundaryAction partitioning, state variables, update map | | Structural topology | SystemIR | Hierarchy tree, wiring graph, composition operators, feedback detection | | Formal verification | SystemIR + Domain IR | Generic checks (G-001..G-006) on SystemIR; domain checks on domain IR | | Cross-domain analysis | Domain IR (tags) | Domain coupling matrices, cross-domain flow detection | The key migration target: any view that currently infers MSML roles from domain-specific type enums should be refactored to read from `CanonicalGDS`. ## Invariant No view should ever re-derive what `project_canonical()` computes. If a view needs to know whether a block is a Policy or a Mechanism, it asks `CanonicalGDS`. If it needs to know whether a Policy is specifically a DecisionGame, it asks the domain IR. If it needs to know what that block connects to, it asks SystemIR. Three sources. Three concerns. No overlap in authority. ## Architectural Guard Rails ### Canonical is analytical, not executable `CanonicalGDS` is a structural projection. It tells you the decomposition `h = f ∘ g` — which blocks observe/decide (g) and which update state (f). 
It does not encode: - Temporal ordering (which block evaluates first) - Scheduling semantics (synchronous vs staged) - Constraint satisfaction (feasibility of transitions) - Composition topology (which specific wires connect blocks) If downstream tools (simulation, formal verification exports) depend on canonical, they must augment it with execution semantics from the domain layer. Canonical alone is insufficient for execution. This boundary must remain explicit to avoid the illusion that `h = f ∘ g` is a runnable program. See [RQ2 (temporal boundary semantics)](https://blockscience.github.io/gds-core/research/research-boundaries/#research-question-2-what-does-a-timestep-mean-across-dsls) and [RQ4 (cross-lens analysis)](https://blockscience.github.io/gds-core/research/research-boundaries/#research-question-4-cross-lens-analysis-when-equilibrium-and-reachability-disagree). ### Semantic enrichment must remain opt-in The three-source architecture creates pressure to enrich representations: add fields to GDSSpec for game vocabulary, add fields to domain IRs for canonical results, add fields to SystemIR for domain metadata. Resist this. Each representation should carry exactly what it is authoritative for and no more. The principle: > Structural composition is mandatory. Semantic enrichment is opt-in. If a view needs data from two sources, it should receive two arguments — not a merged super-representation that conflates concerns. ### Domain IRs are not being demoted This stratification does not reduce the role of domain IRs. It clarifies them. Domain IRs are the authoritative source for domain-specific vocabulary — the only place where game-theoretic signatures, stock-flow accumulation semantics, or control-theoretic state-space structure exist. Removing or collapsing them would destroy domain semantics that no other representation carries. 
The change is: stop asking domain IRs questions they shouldn't answer (role classification), and start asking them questions only they can answer (domain semantics). # OWL (gds-interchange) # gds-owl **OWL/Turtle, SHACL, and SPARQL for GDS specifications** — semantic web interoperability for compositional systems. ## What is this? `gds-owl` exports GDS specifications to RDF/OWL and imports them back, enabling interoperability with semantic web tooling. It provides: - **OWL ontology** — class hierarchy mirroring GDS types (blocks, roles, entities, spaces, parameters) - **RDF export/import** — lossless round-trip for structural fields (Pydantic → Turtle → Pydantic) - **SHACL shapes** — constraint validation on exported RDF graphs (structural + semantic) - **SPARQL queries** — pre-built query templates for common GDS analysis patterns - **Formal representability analysis** — documented classification of what survives the OWL boundary ## Architecture ``` gds-framework (pip install gds-framework) | | Domain-neutral composition algebra, typed spaces, | state model, verification engine, flat IR compiler. | +-- gds-owl (pip install gds-interchange) | | OWL ontology (TBox), RDF export/import (ABox), | SHACL validation, SPARQL query templates. | +-- Your application | | Ontology browsers, SPARQL endpoints, | cross-tool interoperability. ``` ## Key Concepts ### Representability Tiers Not everything in a GDS specification can be represented in OWL: | Tier | What | Formalism | Example | | ------ | -------------------------- | ----------- | -------------------------------------------------------- | | **R1** | Fully representable | OWL + SHACL | Block interfaces, role partition, wiring topology | | **R2** | Structurally representable | SPARQL | Cycle detection, completeness, determinism | | **R3** | Not representable | Python only | Transition functions, constraint predicates, auto-wiring | The canonical decomposition `h = f . 
g` is the boundary: `g` (policy mapping) is entirely R1, `f` splits into structural (R1) and behavioral (R3). ### Round-Trip Guarantees The export/import cycle preserves all structural fields. Known lossy fields: - `TypeDef.constraint` — arbitrary `Callable`, imported as `None` - `TypeDef.python_type` — falls back to `str` for unmapped types - `AdmissibleInputConstraint.constraint` — same as TypeDef.constraint ### Four Export Targets | Function | Input | Output | | ---------------------- | -------------------- | ---------------- | | `spec_to_graph()` | `GDSSpec` | RDF graph (ABox) | | `system_ir_to_graph()` | `SystemIR` | RDF graph (ABox) | | `canonical_to_graph()` | `CanonicalGDS` | RDF graph (ABox) | | `report_to_graph()` | `VerificationReport` | RDF graph (ABox) | ## Installation ``` pip install gds-interchange # With SHACL validation support: pip install gds-interchange[shacl] ``` ## Quick Example ``` from gds import GDSSpec from gds_interchange.owl import spec_to_graph, to_turtle, graph_to_spec # Export a spec to Turtle spec = GDSSpec(name="My System") graph = spec_to_graph(spec) print(to_turtle(graph)) # Import back spec2 = graph_to_spec(graph) assert spec2.name == spec.name ``` # Getting Started ## Installation ``` pip install gds-interchange ``` For SHACL validation: ``` pip install gds-interchange[shacl] ``` ## Build an Ontology The core ontology defines OWL classes and properties for all GDS concepts: ``` from gds_interchange.owl import build_core_ontology, to_turtle ontology = build_core_ontology() print(to_turtle(ontology)) ``` This produces a Turtle document with classes like `gds-core:GDSSpec`, `gds-core:Mechanism`, `gds-core:Policy`, etc. 
## Export a Spec to RDF ``` from gds import GDSSpec, typedef, entity, state_var from gds.blocks.roles import Mechanism from gds.types.interface import Interface, port from gds_interchange.owl import spec_to_graph, to_turtle # Build a minimal spec Float = typedef("Float", float) spec = GDSSpec(name="Example") spec.collect( Float, entity("Tank", level=state_var(Float)), Mechanism( name="Fill", interface=Interface(forward_in=(port("Flow Rate"),)), updates=[("Tank", "level")], ), ) # Export to RDF graph = spec_to_graph(spec) print(to_turtle(graph)) ``` ## Import Back from RDF ``` from rdflib import Graph from gds_interchange.owl import graph_to_spec, to_turtle, spec_to_graph # Round-trip: Pydantic -> Turtle -> Pydantic graph = spec_to_graph(spec) turtle_str = to_turtle(graph) g2 = Graph() g2.parse(data=turtle_str, format="turtle") spec2 = graph_to_spec(g2) assert spec2.name == "Example" assert "Tank" in spec2.entities ``` ## Validate with SHACL ``` from gds_interchange.owl import build_all_shapes, validate_graph, spec_to_graph graph = spec_to_graph(spec) shapes = build_all_shapes() conforms, results_graph, results_text = validate_graph(graph, shapes) print(f"Conforms: {conforms}") if not conforms: print(results_text) ``` ## Query with SPARQL ``` from gds_interchange.owl import TEMPLATES, spec_to_graph graph = spec_to_graph(spec) # List all blocks for template in TEMPLATES: if template.name == "list_blocks": results = graph.query(template.query) for row in results: print(row) ``` # Representability The formal representability analysis classifies which GDS concepts can and cannot be represented in OWL/SHACL/SPARQL. See the full analysis: [formal-representability.md](https://blockscience.github.io/gds-core/research/formal-representability/index.md) ## Key Results The canonical decomposition `h = f . 
g` is the representation boundary: - **g** (policy mapping) is entirely R1 — fully representable in OWL - **f_struct** (update map: "who updates what") is R1 - **f_behav** (transition function: "how values change") is R3 — not representable ## Verification Check Classification | Tier | Checks | Count | | ---------------- | ---------------------------- | ----- | | R1 (SHACL-core) | G-002, SC-005..SC-009 | 6 | | R2 (SPARQL) | G-004, G-006, SC-001..SC-004 | 6 | | R3 (Python-only) | G-001, G-005 | 2 | | Mixed | G-003 (flags R1, ports R3) | 1 | # gds_interchange.owl.export Pydantic to RDF graph export functions. Export GDS Pydantic models to RDF graphs (ABox instance data). Mirrors the pattern in `gds.serialize.spec_to_dict()` but targets `rdflib.Graph` instead of plain dicts. ## `spec_to_graph(spec, *, base_uri=DEFAULT_BASE_URI)` Export a GDSSpec to an RDF graph (ABox instance data). Source code in `packages/gds-interchange/gds_interchange/owl/export.py` ``` def spec_to_graph( spec: GDSSpec, *, base_uri: str = DEFAULT_BASE_URI, ) -> Graph: """Export a GDSSpec to an RDF graph (ABox instance data).""" g = Graph() _bind(g) ns = _ns(base_uri, spec.name) g.bind("inst", ns) spec_uri = ns["spec"] g.add((spec_uri, RDF.type, GDS_CORE["GDSSpec"])) g.add((spec_uri, GDS_CORE["name"], Literal(spec.name))) g.add((spec_uri, GDS_CORE["description"], Literal(spec.description))) # Types type_uris: dict[str, URIRef] = {} for name, t in spec.types.items(): type_uris[name] = _typedef_to_rdf(g, ns, t) g.add((spec_uri, GDS_CORE["hasType"], type_uris[name])) # Also export parameter typedefs that may not be in spec.types for p in spec.parameter_schema.parameters.values(): if p.typedef.name not in type_uris: type_uris[p.typedef.name] = _typedef_to_rdf(g, ns, p.typedef) # Spaces space_uris: dict[str, URIRef] = {} for name, s in spec.spaces.items(): space_uris[name] = _space_to_rdf(g, ns, s, type_uris) g.add((spec_uri, GDS_CORE["hasSpace"], space_uris[name])) # Entities entity_uris: dict[str, 
URIRef] = {} for name, e in spec.entities.items(): entity_uris[name] = _entity_to_rdf(g, ns, e, type_uris) g.add((spec_uri, GDS_CORE["hasEntity"], entity_uris[name])) # Parameters param_uris: dict[str, URIRef] = {} for name, p in spec.parameter_schema.parameters.items(): param_uris[name] = _parameter_to_rdf(g, ns, p, type_uris) g.add((spec_uri, GDS_CORE["hasParameter"], param_uris[name])) # Blocks block_uris: dict[str, URIRef] = {} for name, b in spec.blocks.items(): block_uris[name] = _block_to_rdf(g, ns, b, param_uris, entity_uris) g.add((spec_uri, GDS_CORE["hasBlock"], block_uris[name])) # Wirings for _name, w in spec.wirings.items(): w_uri = _wiring_to_rdf(g, ns, w, block_uris, space_uris) g.add((spec_uri, GDS_CORE["hasWiring"], w_uri)) # Admissibility constraints for ac_name, ac in spec.admissibility_constraints.items(): ac_uri = _uri(ns, "admissibility", ac_name) g.add((ac_uri, RDF.type, GDS_CORE["AdmissibleInputConstraint"])) g.add((ac_uri, GDS_CORE["name"], Literal(ac_name))) g.add( ( ac_uri, GDS_CORE["constraintBoundaryBlock"], Literal(ac.boundary_block), ) ) if ac.boundary_block in block_uris: g.add( ( ac_uri, GDS_CORE["constrainsBoundary"], block_uris[ac.boundary_block], ) ) g.add( ( ac_uri, GDS_CORE["admissibilityHasConstraint"], Literal(ac.constraint is not None, datatype=XSD.boolean), ) ) g.add((ac_uri, GDS_CORE["description"], Literal(ac.description))) for entity_name, var_name in ac.depends_on: dep = BNode() g.add((dep, RDF.type, GDS_CORE["AdmissibilityDep"])) g.add((dep, GDS_CORE["depEntity"], Literal(entity_name))) g.add((dep, GDS_CORE["depVariable"], Literal(var_name))) g.add((ac_uri, GDS_CORE["hasDependency"], dep)) g.add((spec_uri, GDS_CORE["hasAdmissibilityConstraint"], ac_uri)) # Transition signatures for mname, ts in spec.transition_signatures.items(): ts_uri = _uri(ns, "transition_sig", mname) g.add((ts_uri, RDF.type, GDS_CORE["TransitionSignature"])) g.add((ts_uri, GDS_CORE["name"], Literal(mname))) g.add((ts_uri, 
GDS_CORE["signatureMechanism"], Literal(ts.mechanism))) if ts.mechanism in block_uris: g.add( ( ts_uri, GDS_CORE["signatureForMechanism"], block_uris[ts.mechanism], ) ) for bname in ts.depends_on_blocks: g.add((ts_uri, GDS_CORE["dependsOnBlock"], Literal(bname))) if ts.preserves_invariant: g.add( ( ts_uri, GDS_CORE["preservesInvariant"], Literal(ts.preserves_invariant), ) ) for entity_name, var_name in ts.reads: entry = BNode() g.add((entry, RDF.type, GDS_CORE["TransitionReadEntry"])) g.add((entry, GDS_CORE["readEntity"], Literal(entity_name))) g.add((entry, GDS_CORE["readVariable"], Literal(var_name))) g.add((ts_uri, GDS_CORE["hasReadEntry"], entry)) g.add((spec_uri, GDS_CORE["hasTransitionSignature"], ts_uri)) # State metrics for sm_name, sm in spec.state_metrics.items(): sm_uri = _uri(ns, "state_metric", sm_name) g.add((sm_uri, RDF.type, GDS_CORE["StateMetric"])) g.add((sm_uri, GDS_CORE["name"], Literal(sm_name))) if sm.metric_type: g.add((sm_uri, GDS_CORE["metricType"], Literal(sm.metric_type))) g.add( ( sm_uri, GDS_CORE["metricHasDistance"], Literal(sm.distance is not None, datatype=XSD.boolean), ) ) g.add((sm_uri, GDS_CORE["description"], Literal(sm.description))) for entity_name, var_name in sm.variables: entry = BNode() g.add((entry, RDF.type, GDS_CORE["MetricVariableEntry"])) g.add((entry, GDS_CORE["metricEntity"], Literal(entity_name))) g.add((entry, GDS_CORE["metricVariable"], Literal(var_name))) g.add((sm_uri, GDS_CORE["hasMetricVariable"], entry)) g.add((spec_uri, GDS_CORE["hasStateMetric"], sm_uri)) return g ``` ## `system_ir_to_graph(system, *, base_uri=DEFAULT_BASE_URI)` Export a SystemIR to an RDF graph. 
Source code in `packages/gds-interchange/gds_interchange/owl/export.py` ``` def system_ir_to_graph( system: SystemIR, *, base_uri: str = DEFAULT_BASE_URI, ) -> Graph: """Export a SystemIR to an RDF graph.""" g = Graph() _bind(g) ns = _ns(base_uri, system.name) g.bind("inst", ns) sys_uri = ns["system"] g.add((sys_uri, RDF.type, GDS_IR["SystemIR"])) g.add((sys_uri, GDS_CORE["name"], Literal(system.name))) g.add( ( sys_uri, GDS_IR["compositionTypeSystem"], Literal(system.composition_type.value), ) ) if system.source: g.add((sys_uri, GDS_IR["sourceLabel"], Literal(system.source))) for b in system.blocks: b_uri = _block_ir_to_rdf(g, ns, b) g.add((sys_uri, GDS_IR["hasBlockIR"], b_uri)) for idx, w in enumerate(system.wirings): w_uri = _wiring_ir_to_rdf(g, ns, w, idx) g.add((sys_uri, GDS_IR["hasWiringIR"], w_uri)) for inp in system.inputs: inp_uri = _uri(ns, "input", inp.name) g.add((inp_uri, RDF.type, GDS_IR["InputIR"])) g.add((inp_uri, GDS_CORE["name"], Literal(inp.name))) g.add((sys_uri, GDS_IR["hasInputIR"], inp_uri)) if system.hierarchy: h_uri = _hierarchy_to_rdf(g, ns, system.hierarchy) g.add((sys_uri, GDS_IR["hasHierarchy"], h_uri)) return g ``` ## `canonical_to_graph(canonical, *, base_uri=DEFAULT_BASE_URI, name='canonical')` Export a CanonicalGDS to an RDF graph. 
Source code in `packages/gds-interchange/gds_interchange/owl/export.py` ``` def canonical_to_graph( canonical: CanonicalGDS, *, base_uri: str = DEFAULT_BASE_URI, name: str = "canonical", ) -> Graph: """Export a CanonicalGDS to an RDF graph.""" g = Graph() _bind(g) ns = _ns(base_uri, name) g.bind("inst", ns) can_uri = ns["canonical"] g.add((can_uri, RDF.type, GDS_CORE["CanonicalGDS"])) g.add((can_uri, GDS_CORE["formula"], Literal(canonical.formula()))) # State variables for entity_name, var_name in canonical.state_variables: sv_uri = _uri(ns, "state_var", f"{entity_name}.{var_name}") g.add((sv_uri, RDF.type, GDS_CORE["StateVariable"])) g.add((sv_uri, GDS_CORE["name"], Literal(var_name))) g.add((sv_uri, GDS_CORE["description"], Literal(f"{entity_name}.{var_name}"))) g.add((can_uri, GDS_CORE["hasVariable"], sv_uri)) # Block role partitions for bname in canonical.boundary_blocks: g.add((can_uri, GDS_CORE["boundaryBlock"], Literal(bname))) for bname in canonical.control_blocks: g.add((can_uri, GDS_CORE["controlBlock"], Literal(bname))) for bname in canonical.policy_blocks: g.add((can_uri, GDS_CORE["policyBlock"], Literal(bname))) for bname in canonical.mechanism_blocks: g.add((can_uri, GDS_CORE["mechanismBlock"], Literal(bname))) # Update map for mech_name, updates in canonical.update_map: for entity_name, var_name in updates: entry = BNode() g.add((entry, RDF.type, GDS_CORE["UpdateMapEntry"])) g.add((entry, GDS_CORE["name"], Literal(mech_name))) g.add((entry, GDS_CORE["updatesEntity"], Literal(entity_name))) g.add((entry, GDS_CORE["updatesVariable"], Literal(var_name))) g.add((can_uri, GDS_CORE["updatesEntry"], entry)) return g ``` ## `report_to_graph(report, *, base_uri=DEFAULT_BASE_URI)` Export a VerificationReport to an RDF graph. 
Source code in `packages/gds-interchange/gds_interchange/owl/export.py`

```
def report_to_graph(
    report: VerificationReport,
    *,
    base_uri: str = DEFAULT_BASE_URI,
) -> Graph:
    """Export a VerificationReport to an RDF graph."""
    g = Graph()
    _bind(g)
    ns = _ns(base_uri, report.system_name)
    g.bind("inst", ns)

    report_uri = ns["report"]
    g.add((report_uri, RDF.type, GDS_VERIF["VerificationReport"]))
    g.add((report_uri, GDS_VERIF["systemName"], Literal(report.system_name)))

    for idx, f in enumerate(report.findings):
        f_uri = _uri(ns, "finding", f"{f.check_id}-{idx}")
        g.add((f_uri, RDF.type, GDS_VERIF["Finding"]))
        g.add((f_uri, GDS_VERIF["checkId"], Literal(f.check_id)))
        g.add((f_uri, GDS_VERIF["severity"], Literal(f.severity.value)))
        g.add((f_uri, GDS_VERIF["message"], Literal(f.message)))
        g.add((f_uri, GDS_VERIF["passed"], Literal(f.passed, datatype=XSD.boolean)))
        for elem in f.source_elements:
            g.add((f_uri, GDS_VERIF["sourceElement"], Literal(elem)))
        if f.exportable_predicate:
            g.add(
                (
                    f_uri,
                    GDS_VERIF["exportablePredicate"],
                    Literal(f.exportable_predicate),
                )
            )
        g.add((report_uri, GDS_VERIF["hasFinding"], f_uri))

    return g
```

# gds_interchange.owl.import\_

RDF graph to Pydantic import functions.

Import RDF graphs back into GDS Pydantic models (round-trip support). Reconstructs GDSSpec, SystemIR, CanonicalGDS, and VerificationReport from RDF graphs produced by the export functions.

Known lossy fields:

- TypeDef.constraint: Python callable, not serializable *unless* `constraint_kind` is set — recognised patterns are reconstructed.
- TypeDef.python_type: Mapped from string via \_PYTHON_TYPE_MAP for builtins.

## `graph_to_spec(g, *, spec_uri=None)`

Reconstruct a GDSSpec from an RDF graph.

If spec_uri is None, finds the first GDSSpec individual in the graph.

Source code in `packages/gds-interchange/gds_interchange/owl/import_.py`

```
def graph_to_spec(
    g: Graph,
    *,
    spec_uri: URIRef | None = None,
) -> GDSSpec:
    """Reconstruct a GDSSpec from an RDF graph.

    If spec_uri is None, finds the first GDSSpec individual in the graph.
    """
    from gds import (
        GDSSpec,
        ParameterDef,
        SpecWiring,
        Wire,
    )
    from gds.blocks.roles import BoundaryAction, Mechanism, Policy
    from gds.constraints import (
        AdmissibleInputConstraint,
        StateMetric,
        TransitionSignature,
    )
    from gds.spaces import Space
    from gds.state import Entity, StateVariable
    from gds.types.interface import Interface, port
    from gds.types.typedef import TypeDef

    if spec_uri is None:
        specs = _subjects_of_type(g, GDS_CORE["GDSSpec"])
        if not specs:
            raise ValueError("No GDSSpec found in graph")
        spec_uri = specs[0]

    spec_name = _str(g, spec_uri, GDS_CORE["name"])
    spec_desc = _str(g, spec_uri, GDS_CORE["description"])
    spec = GDSSpec(name=spec_name, description=spec_desc)

    # Import types
    typedef_map: dict[str, TypeDef] = {}
    type_uris = list(g.objects(spec_uri, GDS_CORE["hasType"]))
    for t_uri in type_uris:
        if not isinstance(t_uri, URIRef):
            continue
        td_fields = _import_typedef(g, t_uri)
        td = TypeDef(**td_fields)
        typedef_map[td.name] = td
        spec.register_type(td)

    # Also collect all TypeDef URIs for parameter types
    all_typedef_uris = _subjects_of_type(g, GDS_CORE["TypeDef"])
    for t_uri in all_typedef_uris:
        td_fields = _import_typedef(g, t_uri)
        if td_fields["name"] not in typedef_map:
            td = TypeDef(**td_fields)
            typedef_map[td.name] = td

    # Import spaces
    space_uris = list(g.objects(spec_uri, GDS_CORE["hasSpace"]))
    for s_uri in space_uris:
        if not isinstance(s_uri, URIRef):
            continue
        s_name = _str(g, s_uri, GDS_CORE["name"])
        s_desc = _str(g, s_uri, GDS_CORE["description"])
        fields: dict[str, TypeDef] = {}
        for field_node in g.objects(s_uri, GDS_CORE["hasField"]):
            field_name = _str(g, field_node, GDS_CORE["fieldName"])
            field_type_uris = list(g.objects(field_node, GDS_CORE["fieldType"]))
            if field_type_uris:
                ft_name = _str(g, field_type_uris[0], GDS_CORE["name"])
                if ft_name in typedef_map:
                    fields[field_name] = typedef_map[ft_name]
        spec.register_space(Space(name=s_name, fields=fields, description=s_desc))

    # Import entities
    entity_uris = list(g.objects(spec_uri, GDS_CORE["hasEntity"]))
    for e_uri in entity_uris:
        if not isinstance(e_uri, URIRef):
            continue
        e_name = _str(g, e_uri, GDS_CORE["name"])
        e_desc = _str(g, e_uri, GDS_CORE["description"])
        variables: dict[str, StateVariable] = {}
        for sv_uri in g.objects(e_uri, GDS_CORE["hasVariable"]):
            if not isinstance(sv_uri, URIRef):
                continue
            sv_name = _str(g, sv_uri, GDS_CORE["name"])
            sv_desc = _str(g, sv_uri, GDS_CORE["description"])
            sv_symbol = _str(g, sv_uri, GDS_CORE["symbol"])
            # Resolve typedef
            sv_type_uris = list(g.objects(sv_uri, GDS_CORE["usesType"]))
            if sv_type_uris:
                sv_type_name = _str(g, sv_type_uris[0], GDS_CORE["name"])
                sv_typedef = typedef_map.get(
                    sv_type_name,
                    TypeDef(name=sv_type_name, python_type=str),
                )
            else:
                sv_typedef = TypeDef(name="unknown", python_type=str)
            variables[sv_name] = StateVariable(
                name=sv_name,
                typedef=sv_typedef,
                description=sv_desc,
                symbol=sv_symbol,
            )
        spec.register_entity(
            Entity(name=e_name, variables=variables, description=e_desc)
        )

    # Import parameters
    param_uris = list(g.objects(spec_uri, GDS_CORE["hasParameter"]))
    param_uri_map: dict[str, URIRef] = {}
    for p_uri in param_uris:
        if not isinstance(p_uri, URIRef):
            continue
        p_name = _str(g, p_uri, GDS_CORE["name"])
        p_desc = _str(g, p_uri, GDS_CORE["description"])
        param_uri_map[p_name] = p_uri
        # Resolve typedef
        pt_uris = list(g.objects(p_uri, GDS_CORE["paramType"]))
        if pt_uris:
            pt_name = _str(g, pt_uris[0], GDS_CORE["name"])
            p_typedef = typedef_map.get(pt_name, TypeDef(name=pt_name, python_type=str))
        else:
            p_typedef = TypeDef(name="unknown", python_type=str)
        spec.register_parameter(
            ParameterDef(name=p_name, typedef=p_typedef, description=p_desc)
        )

    # Import blocks
    block_uris = list(g.objects(spec_uri, GDS_CORE["hasBlock"]))
    # Build reverse lookup: param URI -> param name
    param_name_by_uri: dict[URIRef, str] = {}
    for pname, puri in param_uri_map.items():
        param_name_by_uri[puri] = pname
    for b_uri in block_uris:
        if not isinstance(b_uri, URIRef):
            continue
        b_name = _str(g, b_uri, GDS_CORE["name"])
        b_kind = _str(g, b_uri, GDS_CORE["kind"])
        # Reconstruct interface
        iface_uris = list(g.objects(b_uri, GDS_CORE["hasInterface"]))
        fwd_in_ports: list[str] = []
        fwd_out_ports: list[str] = []
        bwd_in_ports: list[str] = []
        bwd_out_ports: list[str] = []
        if iface_uris:
            iface_uri = iface_uris[0]
            for p in g.objects(iface_uri, GDS_CORE["hasForwardIn"]):
                fwd_in_ports.append(_str(g, p, GDS_CORE["portName"]))
            for p in g.objects(iface_uri, GDS_CORE["hasForwardOut"]):
                fwd_out_ports.append(_str(g, p, GDS_CORE["portName"]))
            for p in g.objects(iface_uri, GDS_CORE["hasBackwardIn"]):
                bwd_in_ports.append(_str(g, p, GDS_CORE["portName"]))
            for p in g.objects(iface_uri, GDS_CORE["hasBackwardOut"]):
                bwd_out_ports.append(_str(g, p, GDS_CORE["portName"]))
        iface = Interface(
            forward_in=tuple(port(n) for n in sorted(fwd_in_ports)),
            forward_out=tuple(port(n) for n in sorted(fwd_out_ports)),
            backward_in=tuple(port(n) for n in sorted(bwd_in_ports)),
            backward_out=tuple(port(n) for n in sorted(bwd_out_ports)),
        )
        # Params used
        params_used = []
        for pu in g.objects(b_uri, GDS_CORE["usesParameter"]):
            if isinstance(pu, URIRef) and pu in param_name_by_uri:
                params_used.append(param_name_by_uri[pu])
        constraints = _strs(g, b_uri, GDS_CORE["constraint"])
        options = _strs(g, b_uri, GDS_CORE["option"])
        # Build block by kind
        if b_kind == "boundary":
            block = BoundaryAction(
                name=b_name,
                interface=iface,
                params_used=params_used,
                constraints=constraints,
                options=options,
            )
        elif b_kind == "mechanism":
            updates: list[tuple[str, str]] = []
            for entry in g.objects(b_uri, GDS_CORE["updatesEntry"]):
                ent = _str(g, entry, GDS_CORE["updatesEntity"])
                var = _str(g, entry, GDS_CORE["updatesVariable"])
                updates.append((ent, var))
            block = Mechanism(
                name=b_name,
                interface=iface,
                updates=updates,
                params_used=params_used,
                constraints=constraints,
            )
        elif b_kind == "policy":
            block = Policy(
                name=b_name,
                interface=iface,
                params_used=params_used,
                constraints=constraints,
                options=options,
            )
        else:
            from gds.blocks.base import AtomicBlock

            block = AtomicBlock(name=b_name, interface=iface)
        spec.register_block(block)

    # Import wirings
    wiring_uris = list(g.objects(spec_uri, GDS_CORE["hasWiring"]))
    for w_uri in wiring_uris:
        if not isinstance(w_uri, URIRef):
            continue
        w_name = _str(g, w_uri, GDS_CORE["name"])
        w_desc = _str(g, w_uri, GDS_CORE["description"])
        block_names = []
        for wb in g.objects(w_uri, GDS_CORE["wiringBlock"]):
            if isinstance(wb, URIRef):
                bn = _str(g, wb, GDS_CORE["name"])
                if bn:
                    block_names.append(bn)
        wires = []
        for wire_node in g.objects(w_uri, GDS_CORE["hasWire"]):
            ws = _str(g, wire_node, GDS_CORE["wireSource"])
            wt = _str(g, wire_node, GDS_CORE["wireTarget"])
            wsp = _str(g, wire_node, GDS_CORE["wireSpace"])
            wo = _bool(g, wire_node, GDS_CORE["wireOptional"])
            wires.append(Wire(source=ws, target=wt, space=wsp, optional=wo))
        spec.register_wiring(
            SpecWiring(
                name=w_name,
                block_names=block_names,
                wires=wires,
                description=w_desc,
            )
        )

    # Import admissibility constraints
    ac_uris = list(g.objects(spec_uri, GDS_CORE["hasAdmissibilityConstraint"]))
    for ac_uri in ac_uris:
        if not isinstance(ac_uri, URIRef):
            continue
        ac_name = _str(g, ac_uri, GDS_CORE["name"])
        ac_boundary = _str(g, ac_uri, GDS_CORE["constraintBoundaryBlock"])
        ac_desc = _str(g, ac_uri, GDS_CORE["description"])
        depends_on: list[tuple[str, str]] = []
        for dep in g.objects(ac_uri, GDS_CORE["hasDependency"]):
            ent = _str(g, dep, GDS_CORE["depEntity"])
            var = _str(g, dep, GDS_CORE["depVariable"])
            depends_on.append((ent, var))
        spec.register_admissibility(
            AdmissibleInputConstraint(
                name=ac_name,
                boundary_block=ac_boundary,
                depends_on=depends_on,
                constraint=None,
                description=ac_desc,
            )
        )

    # Import transition signatures
    ts_uris = list(g.objects(spec_uri, GDS_CORE["hasTransitionSignature"]))
    for ts_uri in ts_uris:
        if not isinstance(ts_uri, URIRef):
            continue
        ts_mech = _str(g, ts_uri, GDS_CORE["signatureMechanism"])
        reads: list[tuple[str, str]] = []
        for entry in g.objects(ts_uri, GDS_CORE["hasReadEntry"]):
            ent = _str(g, entry, GDS_CORE["readEntity"])
            var = _str(g, entry, GDS_CORE["readVariable"])
            reads.append((ent, var))
        depends_on_blocks = _strs(g, ts_uri, GDS_CORE["dependsOnBlock"])
        invariant = _str(g, ts_uri, GDS_CORE["preservesInvariant"])
        spec.register_transition_signature(
            TransitionSignature(
                mechanism=ts_mech,
                reads=reads,
                depends_on_blocks=depends_on_blocks,
                preserves_invariant=invariant,
            )
        )

    # Import state metrics
    sm_uris = list(g.objects(spec_uri, GDS_CORE["hasStateMetric"]))
    for sm_uri in sm_uris:
        if not isinstance(sm_uri, URIRef):
            continue
        sm_name = _str(g, sm_uri, GDS_CORE["name"])
        sm_type = _str(g, sm_uri, GDS_CORE["metricType"])
        sm_desc = _str(g, sm_uri, GDS_CORE["description"])
        variables: list[tuple[str, str]] = []
        for entry in g.objects(sm_uri, GDS_CORE["hasMetricVariable"]):
            ent = _str(g, entry, GDS_CORE["metricEntity"])
            var = _str(g, entry, GDS_CORE["metricVariable"])
            variables.append((ent, var))
        spec.register_state_metric(
            StateMetric(
                name=sm_name,
                variables=variables,
                metric_type=sm_type,
                distance=None,  # R3 lossy
                description=sm_desc,
            )
        )

    return spec
```

## `graph_to_system_ir(g, *, system_uri=None)`

Reconstruct a SystemIR from an RDF graph.
Source code in `packages/gds-interchange/gds_interchange/owl/import_.py`

```
def graph_to_system_ir(
    g: Graph,
    *,
    system_uri: URIRef | None = None,
) -> SystemIR:
    """Reconstruct a SystemIR from an RDF graph."""
    from gds.ir.models import (
        BlockIR,
        CompositionType,
        FlowDirection,
        InputIR,
        SystemIR,
        WiringIR,
    )

    if system_uri is None:
        systems = _subjects_of_type(g, GDS_IR["SystemIR"])
        if not systems:
            raise ValueError("No SystemIR found in graph")
        system_uri = systems[0]

    name = _str(g, system_uri, GDS_CORE["name"])
    comp_type_str = _str(g, system_uri, GDS_IR["compositionTypeSystem"])
    comp_type = (
        CompositionType(comp_type_str) if comp_type_str else CompositionType.SEQUENTIAL
    )
    source = _str(g, system_uri, GDS_IR["sourceLabel"])

    # Blocks
    blocks = []
    for b_uri in g.objects(system_uri, GDS_IR["hasBlockIR"]):
        if not isinstance(b_uri, URIRef):
            continue
        b_name = _str(g, b_uri, GDS_CORE["name"])
        block_type = _str(g, b_uri, GDS_IR["blockType"])
        fwd_in = _str(g, b_uri, GDS_IR["signatureForwardIn"])
        fwd_out = _str(g, b_uri, GDS_IR["signatureForwardOut"])
        bwd_in = _str(g, b_uri, GDS_IR["signatureBackwardIn"])
        bwd_out = _str(g, b_uri, GDS_IR["signatureBackwardOut"])
        logic = _str(g, b_uri, GDS_IR["logic"])
        color_code_vals = list(g.objects(b_uri, GDS_IR["colorCode"]))
        color_code = int(color_code_vals[0].toPython()) if color_code_vals else 1
        blocks.append(
            BlockIR(
                name=b_name,
                block_type=block_type,
                signature=(fwd_in, fwd_out, bwd_in, bwd_out),
                logic=logic,
                color_code=color_code,
            )
        )

    # Wirings
    wirings = []
    for w_uri in g.objects(system_uri, GDS_IR["hasWiringIR"]):
        if not isinstance(w_uri, URIRef):
            continue
        w_source = _str(g, w_uri, GDS_IR["source"])
        w_target = _str(g, w_uri, GDS_IR["target"])
        w_label = _str(g, w_uri, GDS_IR["label"])
        w_type = _str(g, w_uri, GDS_IR["wiringType"])
        w_dir_str = _str(g, w_uri, GDS_IR["direction"])
        w_dir = FlowDirection(w_dir_str) if w_dir_str else FlowDirection.COVARIANT
        w_fb = _bool(g, w_uri, GDS_IR["isFeedback"])
        w_temp = _bool(g, w_uri, GDS_IR["isTemporal"])
        w_cat = _str(g, w_uri, GDS_IR["category"])
        wirings.append(
            WiringIR(
                source=w_source,
                target=w_target,
                label=w_label,
                wiring_type=w_type,
                direction=w_dir,
                is_feedback=w_fb,
                is_temporal=w_temp,
                category=w_cat or "dataflow",
            )
        )

    # Inputs
    inputs = []
    for inp_uri in g.objects(system_uri, GDS_IR["hasInputIR"]):
        if not isinstance(inp_uri, URIRef):
            continue
        inp_name = _str(g, inp_uri, GDS_CORE["name"])
        inputs.append(InputIR(name=inp_name))

    # Hierarchy
    hierarchy = None
    h_uris = list(g.objects(system_uri, GDS_IR["hasHierarchy"]))
    if h_uris and isinstance(h_uris[0], URIRef):
        hierarchy = _import_hierarchy(g, h_uris[0])

    return SystemIR(
        name=name,
        blocks=blocks,
        wirings=wirings,
        inputs=inputs,
        composition_type=comp_type,
        hierarchy=hierarchy,
        source=source,
    )
```

## `graph_to_canonical(g, *, canonical_uri=None)`

Reconstruct a CanonicalGDS from an RDF graph.

Source code in `packages/gds-interchange/gds_interchange/owl/import_.py`

```
def graph_to_canonical(
    g: Graph,
    *,
    canonical_uri: URIRef | None = None,
) -> CanonicalGDS:
    """Reconstruct a CanonicalGDS from an RDF graph."""
    from gds.canonical import CanonicalGDS

    if canonical_uri is None:
        canons = _subjects_of_type(g, GDS_CORE["CanonicalGDS"])
        if not canons:
            raise ValueError("No CanonicalGDS found in graph")
        canonical_uri = canons[0]

    # State variables
    state_variables = []
    for sv_uri in g.objects(canonical_uri, GDS_CORE["hasVariable"]):
        desc = _str(g, sv_uri, GDS_CORE["description"])
        if "." in desc:
            entity_name, var_name = desc.split(".", 1)
            state_variables.append((entity_name, var_name))

    # Role blocks
    boundary_blocks = tuple(_strs(g, canonical_uri, GDS_CORE["boundaryBlock"]))
    control_blocks = tuple(_strs(g, canonical_uri, GDS_CORE["controlBlock"]))
    policy_blocks = tuple(_strs(g, canonical_uri, GDS_CORE["policyBlock"]))
    mechanism_blocks = tuple(_strs(g, canonical_uri, GDS_CORE["mechanismBlock"]))

    # Update map
    update_entries: dict[str, list[tuple[str, str]]] = {}
    for entry in g.objects(canonical_uri, GDS_CORE["updatesEntry"]):
        mech_name = _str(g, entry, GDS_CORE["name"])
        entity_name = _str(g, entry, GDS_CORE["updatesEntity"])
        var_name = _str(g, entry, GDS_CORE["updatesVariable"])
        update_entries.setdefault(mech_name, []).append((entity_name, var_name))
    update_map = tuple(
        (mech, tuple(updates)) for mech, updates in update_entries.items()
    )

    return CanonicalGDS(
        state_variables=tuple(state_variables),
        boundary_blocks=boundary_blocks,
        control_blocks=control_blocks,
        policy_blocks=policy_blocks,
        mechanism_blocks=mechanism_blocks,
        update_map=update_map,
    )
```

## `graph_to_report(g, *, report_uri=None)`

Reconstruct a VerificationReport from an RDF graph.
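One detail worth noting is the severity fallback: a missing or empty severity literal defaults to INFO rather than failing. A stdlib sketch of that logic — the `Severity` values here are hypothetical stand-ins for `gds.verification.findings.Severity`:

```python
from enum import Enum


class Severity(Enum):
    # Hypothetical mirror of gds.verification.findings.Severity,
    # for illustration only.
    INFO = "info"
    WARNING = "warning"
    ERROR = "error"


def parse_severity(severity_str: str) -> Severity:
    # Empty/missing literals fall back to INFO, matching the
    # import function's behaviour.
    return Severity(severity_str) if severity_str else Severity.INFO


print(parse_severity("error"))
print(parse_severity(""))
```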
Source code in `packages/gds-interchange/gds_interchange/owl/import_.py`

```
def graph_to_report(
    g: Graph,
    *,
    report_uri: URIRef | None = None,
) -> VerificationReport:
    """Reconstruct a VerificationReport from an RDF graph."""
    from gds.verification.findings import Finding, Severity, VerificationReport

    if report_uri is None:
        reports = _subjects_of_type(g, GDS_VERIF["VerificationReport"])
        if not reports:
            raise ValueError("No VerificationReport found in graph")
        report_uri = reports[0]

    system_name = _str(g, report_uri, GDS_VERIF["systemName"])
    findings = []
    for f_uri in g.objects(report_uri, GDS_VERIF["hasFinding"]):
        check_id = _str(g, f_uri, GDS_VERIF["checkId"])
        severity_str = _str(g, f_uri, GDS_VERIF["severity"])
        severity = Severity(severity_str) if severity_str else Severity.INFO
        message = _str(g, f_uri, GDS_VERIF["message"])
        passed = _bool(g, f_uri, GDS_VERIF["passed"])
        source_elements = _strs(g, f_uri, GDS_VERIF["sourceElement"])
        exportable = _str(g, f_uri, GDS_VERIF["exportablePredicate"])
        findings.append(
            Finding(
                check_id=check_id,
                severity=severity,
                message=message,
                passed=passed,
                source_elements=source_elements,
                exportable_predicate=exportable,
            )
        )

    return VerificationReport(system_name=system_name, findings=findings)
```

# gds_interchange.owl

Public API -- top-level exports.

OWL/Turtle, SHACL, and SPARQL for gds-framework specifications.

# gds_interchange.owl.ontology

OWL class hierarchy (TBox) -- core schema definitions.

GDS core ontology — OWL class hierarchy and property definitions (TBox). Builds the GDS ontology programmatically as an rdflib Graph. This defines the *schema* (classes, properties, domain/range) — not instance data.

## `build_core_ontology()`

Build the complete GDS core ontology as an OWL graph (TBox).

Returns an rdflib Graph containing all OWL class declarations, object properties, and datatype properties for the GDS ecosystem. This is the schema — use the export functions to produce instance data (ABox).
Source code in `packages/gds-interchange/gds_interchange/owl/ontology.py`

```
def build_core_ontology() -> Graph:
    """Build the complete GDS core ontology as an OWL graph (TBox).

    Returns an rdflib Graph containing all OWL class declarations,
    object properties, and datatype properties for the GDS ecosystem.
    This is the schema — use the export functions to produce
    instance data (ABox).
    """
    g = Graph()
    _bind_prefixes(g)

    # Ontology metadata
    g.add((GDS["ontology"], RDF.type, OWL.Ontology))
    g.add(
        (
            GDS["ontology"],
            RDFS.label,
            Literal("Generalized Dynamical Systems Ontology"),
        )
    )
    g.add(
        (
            GDS["ontology"],
            RDFS.comment,
            Literal(
                "OWL ontology for typed compositional specifications "
                "of complex systems, grounded in GDS theory."
            ),
        )
    )

    _build_composition_algebra(g)
    _build_spec_framework(g)
    _build_ir_classes(g)
    _build_verification_classes(g)

    return g
```

# gds_interchange.owl.serialize

RDF serialization utilities (Turtle format).

Serialization convenience functions — Graph to Turtle/JSON-LD/N-Triples. Also provides high-level shortcuts that combine export + serialization.

## `to_turtle(graph)`

Serialize an RDF graph to Turtle format.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def to_turtle(graph: Graph) -> str:
    """Serialize an RDF graph to Turtle format."""
    return graph.serialize(format="turtle")
```

## `to_jsonld(graph)`

Serialize an RDF graph to JSON-LD format.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def to_jsonld(graph: Graph) -> str:
    """Serialize an RDF graph to JSON-LD format."""
    return graph.serialize(format="json-ld")
```

## `to_ntriples(graph)`

Serialize an RDF graph to N-Triples format.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def to_ntriples(graph: Graph) -> str:
    """Serialize an RDF graph to N-Triples format."""
    return graph.serialize(format="nt")
```

## `spec_to_turtle(spec, **kwargs)`

Export a GDSSpec directly to Turtle string.
Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def spec_to_turtle(spec: GDSSpec, **kwargs: Any) -> str:
    """Export a GDSSpec directly to Turtle string."""
    return to_turtle(spec_to_graph(spec, **kwargs))
```

## `system_ir_to_turtle(system, **kwargs)`

Export a SystemIR directly to Turtle string.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def system_ir_to_turtle(system: SystemIR, **kwargs: Any) -> str:
    """Export a SystemIR directly to Turtle string."""
    return to_turtle(system_ir_to_graph(system, **kwargs))
```

## `canonical_to_turtle(canonical, **kwargs)`

Export a CanonicalGDS directly to Turtle string.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def canonical_to_turtle(canonical: CanonicalGDS, **kwargs: Any) -> str:
    """Export a CanonicalGDS directly to Turtle string."""
    return to_turtle(canonical_to_graph(canonical, **kwargs))
```

## `report_to_turtle(report, **kwargs)`

Export a VerificationReport directly to Turtle string.

Source code in `packages/gds-interchange/gds_interchange/owl/serialize.py`

```
def report_to_turtle(report: VerificationReport, **kwargs: Any) -> str:
    """Export a VerificationReport directly to Turtle string."""
    return to_turtle(report_to_graph(report, **kwargs))
```

# gds_interchange.owl.shacl

SHACL shape library for validating GDS RDF graphs.

Three shape sets:

- Structural: Pydantic model constraints (cardinality, required fields)
- Generic: G-001..G-006 verification checks on SystemIR
- Semantic: SC-001..SC-007 verification checks on GDSSpec

Requires pyshacl (optional dependency: `pip install gds-owl[shacl]`).

## `build_structural_shapes()`

Build SHACL shapes for GDS structural constraints.

These mirror the Pydantic model validators: required fields, cardinality, and role-specific invariants.
Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def build_structural_shapes() -> Graph:
    """Build SHACL shapes for GDS structural constraints.

    These mirror the Pydantic model validators: required fields,
    cardinality, and role-specific invariants.
    """
    g = Graph()
    _bind(g)

    # GDSSpec: must have exactly 1 name
    spec_shape = GDS_SHAPE["GDSSpecShape"]
    g.add((spec_shape, RDF.type, SH.NodeShape))
    g.add((spec_shape, SH.targetClass, GDS_CORE["GDSSpec"]))
    _add_property_shape(
        g,
        spec_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="GDSSpec must have exactly one name",
    )

    # BoundaryAction: must have 0 hasForwardIn ports
    ba_shape = GDS_SHAPE["BoundaryActionShape"]
    g.add((ba_shape, RDF.type, SH.NodeShape))
    g.add((ba_shape, SH.targetClass, GDS_CORE["BoundaryAction"]))
    _add_property_shape(
        g,
        ba_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="BoundaryAction must have a name",
    )
    # BoundaryAction interface must have no forward_in (checked via interface)
    _add_property_shape(
        g,
        ba_shape,
        GDS_CORE["hasInterface"],
        min_count=1,
        max_count=1,
        message="BoundaryAction must have exactly one interface",
    )

    # Mechanism: must have 0 backward ports, >= 1 updatesEntry
    mech_shape = GDS_SHAPE["MechanismShape"]
    g.add((mech_shape, RDF.type, SH.NodeShape))
    g.add((mech_shape, SH.targetClass, GDS_CORE["Mechanism"]))
    _add_property_shape(
        g,
        mech_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="Mechanism must have a name",
    )
    _add_property_shape(
        g,
        mech_shape,
        GDS_CORE["updatesEntry"],
        min_count=1,
        message="Mechanism must update at least one state variable",
    )

    # Policy: must have name and interface
    pol_shape = GDS_SHAPE["PolicyShape"]
    g.add((pol_shape, RDF.type, SH.NodeShape))
    g.add((pol_shape, SH.targetClass, GDS_CORE["Policy"]))
    _add_property_shape(
        g,
        pol_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="Policy must have a name",
    )

    # Entity: must have name, >= 0 variables
    ent_shape = GDS_SHAPE["EntityShape"]
    g.add((ent_shape, RDF.type, SH.NodeShape))
    g.add((ent_shape, SH.targetClass, GDS_CORE["Entity"]))
    _add_property_shape(
        g,
        ent_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="Entity must have a name",
    )

    # TypeDef: must have name and pythonType
    td_shape = GDS_SHAPE["TypeDefShape"]
    g.add((td_shape, RDF.type, SH.NodeShape))
    g.add((td_shape, SH.targetClass, GDS_CORE["TypeDef"]))
    _add_property_shape(
        g,
        td_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="TypeDef must have a name",
    )
    _add_property_shape(
        g,
        td_shape,
        GDS_CORE["pythonType"],
        min_count=1,
        max_count=1,
        message="TypeDef must have a pythonType",
    )

    # Space: must have name
    space_shape = GDS_SHAPE["SpaceShape"]
    g.add((space_shape, RDF.type, SH.NodeShape))
    g.add((space_shape, SH.targetClass, GDS_CORE["Space"]))
    _add_property_shape(
        g,
        space_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="Space must have a name",
    )

    # AdmissibleInputConstraint: must have name and boundaryBlock
    aic_shape = GDS_SHAPE["AdmissibleInputConstraintShape"]
    g.add((aic_shape, RDF.type, SH.NodeShape))
    g.add((aic_shape, SH.targetClass, GDS_CORE["AdmissibleInputConstraint"]))
    _add_property_shape(
        g,
        aic_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="AdmissibleInputConstraint must have a name",
    )
    _add_property_shape(
        g,
        aic_shape,
        GDS_CORE["constraintBoundaryBlock"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="AdmissibleInputConstraint must have a boundaryBlock",
    )

    # TransitionSignature: must have name and mechanismName
    ts_shape = GDS_SHAPE["TransitionSignatureShape"]
    g.add((ts_shape, RDF.type, SH.NodeShape))
    g.add((ts_shape, SH.targetClass, GDS_CORE["TransitionSignature"]))
    _add_property_shape(
        g,
        ts_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="TransitionSignature must have a name",
    )
    _add_property_shape(
        g,
        ts_shape,
        GDS_CORE["signatureMechanism"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="TransitionSignature must have a mechanismName",
    )

    # StateMetric: must have name
    sm_shape = GDS_SHAPE["StateMetricShape"]
    g.add((sm_shape, RDF.type, SH.NodeShape))
    g.add((sm_shape, SH.targetClass, GDS_CORE["StateMetric"]))
    _add_property_shape(
        g,
        sm_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        datatype=XSD.string,
        message="StateMetric must have a name",
    )

    # BlockIR: must have name
    bir_shape = GDS_SHAPE["BlockIRShape"]
    g.add((bir_shape, RDF.type, SH.NodeShape))
    g.add((bir_shape, SH.targetClass, GDS_IR["BlockIR"]))
    _add_property_shape(
        g,
        bir_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="BlockIR must have a name",
    )

    # SystemIR: must have name
    sir_shape = GDS_SHAPE["SystemIRShape"]
    g.add((sir_shape, RDF.type, SH.NodeShape))
    g.add((sir_shape, SH.targetClass, GDS_IR["SystemIR"]))
    _add_property_shape(
        g,
        sir_shape,
        GDS_CORE["name"],
        min_count=1,
        max_count=1,
        message="SystemIR must have a name",
    )

    # WiringIR: must have source and target
    wir_shape = GDS_SHAPE["WiringIRShape"]
    g.add((wir_shape, RDF.type, SH.NodeShape))
    g.add((wir_shape, SH.targetClass, GDS_IR["WiringIR"]))
    _add_property_shape(
        g,
        wir_shape,
        GDS_IR["source"],
        min_count=1,
        max_count=1,
        message="WiringIR must have a source",
    )
    _add_property_shape(
        g,
        wir_shape,
        GDS_IR["target"],
        min_count=1,
        max_count=1,
        message="WiringIR must have a target",
    )

    # Finding: must have checkId, severity, passed
    finding_shape = GDS_SHAPE["FindingShape"]
    g.add((finding_shape, RDF.type, SH.NodeShape))
    g.add((finding_shape, SH.targetClass, GDS_VERIF["Finding"]))
    _add_property_shape(
        g,
        finding_shape,
        GDS_VERIF["checkId"],
        min_count=1,
        max_count=1,
        message="Finding must have a checkId",
    )
    _add_property_shape(
        g,
        finding_shape,
        GDS_VERIF["severity"],
        min_count=1,
        max_count=1,
        message="Finding must have a severity",
    )
    _add_property_shape(
        g,
        finding_shape,
        GDS_VERIF["passed"],
        min_count=1,
        max_count=1,
        message="Finding must have a passed status",
    )

    return g
```

## `build_generic_shapes()`

Build SHACL shapes mirroring G-001..G-006 generic checks.

These operate on SystemIR RDF graphs. G-006 (covariant acyclicity) is not expressible in SHACL and is documented as a SPARQL query instead.

Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def build_generic_shapes() -> Graph:
    """Build SHACL shapes mirroring G-001..G-006 generic checks.

    These operate on SystemIR RDF graphs. G-006 (covariant acyclicity)
    is not expressible in SHACL and is documented as a SPARQL query instead.
    """
    g = Graph()
    _bind(g)

    # G-004: Dangling wirings — every WiringIR source/target must reference
    # a BlockIR name that exists in the same SystemIR.
    # This is expressed as a SPARQL-based constraint.
    g004_shape = GDS_SHAPE["G004DanglingWiringShape"]
    g.add((g004_shape, RDF.type, SH.NodeShape))
    g.add((g004_shape, SH.targetClass, GDS_IR["WiringIR"]))
    g.add(
        (
            g004_shape,
            SH.message,
            Literal("G-004: Wiring references a block not in the system"),
        )
    )

    return g
```

## `build_semantic_shapes()`

Build SHACL shapes mirroring SC-001..SC-007 semantic checks.

These operate on GDSSpec RDF graphs.

Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def build_semantic_shapes() -> Graph:
    """Build SHACL shapes mirroring SC-001..SC-007 semantic checks.

    These operate on GDSSpec RDF graphs.
    """
    g = Graph()
    _bind(g)

    # SC-001: Completeness — every Entity StateVariable should have
    # at least one Mechanism that updatesEntry referencing it.
    # This is advisory (not all specs require full coverage).

    # SC-005: Parameter references — blocks using parameters must
    # reference registered ParameterDef instances.
    # Expressed as: every usesParameter target must be of type ParameterDef.
    sc005_shape = GDS_SHAPE["SC005ParamRefShape"]
    g.add((sc005_shape, RDF.type, SH.NodeShape))
    g.add((sc005_shape, SH.targetClass, GDS_CORE["AtomicBlock"]))
    _add_property_shape(
        g,
        sc005_shape,
        GDS_CORE["usesParameter"],
        class_=GDS_CORE["ParameterDef"],
        message=(
            "SC-005: Block references a parameter that is not a registered ParameterDef"
        ),
    )

    # SC-008: Admissibility constraint must reference a BoundaryAction
    sc008_shape = GDS_SHAPE["SC008AdmissibilityShape"]
    g.add((sc008_shape, RDF.type, SH.NodeShape))
    g.add((sc008_shape, SH.targetClass, GDS_CORE["AdmissibleInputConstraint"]))
    _add_property_shape(
        g,
        sc008_shape,
        GDS_CORE["constrainsBoundary"],
        class_=GDS_CORE["BoundaryAction"],
        message=("SC-008: Admissibility constraint must reference a BoundaryAction"),
    )

    # SC-009: Transition signature must reference a Mechanism
    sc009_shape = GDS_SHAPE["SC009TransitionSigShape"]
    g.add((sc009_shape, RDF.type, SH.NodeShape))
    g.add((sc009_shape, SH.targetClass, GDS_CORE["TransitionSignature"]))
    _add_property_shape(
        g,
        sc009_shape,
        GDS_CORE["signatureForMechanism"],
        class_=GDS_CORE["Mechanism"],
        message="SC-009: Transition signature must reference a Mechanism",
    )

    return g
```

## `build_constraint_shapes(data_graph)`

Build SHACL shapes for TypeDef constraint_kind metadata.

Reads TypeDef individuals from *data_graph* and generates SHACL NodeShapes with numeric/enum restrictions for each TypeDef that carries a `constraintKind` literal.

Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def build_constraint_shapes(data_graph: Graph) -> Graph:
    """Build SHACL shapes for TypeDef constraint_kind metadata.

    Reads TypeDef individuals from *data_graph* and generates SHACL
    NodeShapes with numeric/enum restrictions for each TypeDef that
    carries a ``constraintKind`` literal.
    """
    g = Graph()
    _bind(g)
    _add_constraint_shapes(g, data_graph)
    return g
```

## `build_all_shapes()`

Build all SHACL shapes (structural + generic + semantic).
Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def build_all_shapes() -> Graph:
    """Build all SHACL shapes (structural + generic + semantic)."""
    g = build_structural_shapes()
    g += build_generic_shapes()
    g += build_semantic_shapes()
    return g
```

## `validate_graph(data_graph, shapes_graph=None)`

Validate an RDF graph against SHACL shapes.

Requires pyshacl (optional dependency). Returns (conforms, results_graph, results_text).

Source code in `packages/gds-interchange/gds_interchange/owl/shacl.py`

```
def validate_graph(
    data_graph: Graph,
    shapes_graph: Graph | None = None,
) -> tuple[bool, Graph, str]:
    """Validate an RDF graph against SHACL shapes.

    Requires pyshacl (optional dependency).
    Returns (conforms, results_graph, results_text).
    """
    try:
        from pyshacl import validate
    except ImportError as e:
        raise ImportError(
            "pyshacl is required for SHACL validation. "
            "Install with: pip install gds-owl[shacl]"
        ) from e

    if shapes_graph is None:
        shapes_graph = build_all_shapes()

    conforms, results_graph, results_text = validate(
        data_graph, shacl_graph=shapes_graph
    )
    return conforms, results_graph, results_text
```

# gds_interchange.owl.sparql

SPARQL query templates for GDS analysis.

SPARQL query templates for GDS RDF graphs. Pre-built queries for common analyses: dependency paths, reachability, loop detection, parameter impact, block grouping, and entity update maps.

## `SPARQLTemplate`

A named, parameterized SPARQL query template.

Source code in `packages/gds-interchange/gds_interchange/owl/sparql.py`

```
@dataclass(frozen=True)
class SPARQLTemplate:
    """A named, parameterized SPARQL query template."""

    name: str
    description: str
    query: str
```

## `run_query(graph, template_name, **params)`

Run a registered SPARQL template against a graph.

Parameters can be substituted into the query using Python string formatting ({param_name} placeholders).

Returns a list of dicts, one per result row, with variable names as keys.
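The template-plus-`str.format` mechanics can be sketched with the stdlib alone. The registry entry and query body below are hypothetical, for illustration only — the package ships its own `TEMPLATES` dict with real query bodies:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SPARQLTemplate:
    """Mirror of the documented template dataclass."""
    name: str
    description: str
    query: str


# Hypothetical registry entry, for illustration only.
TEMPLATES = {
    "blocks_of_type": SPARQLTemplate(
        name="blocks_of_type",
        description="List all subjects of a given class.",
        # Literal SPARQL braces are doubled so str.format leaves them intact
        query="SELECT ?b WHERE {{ ?b a <{block_class}> }}",
    ),
}

params = {"block_class": "https://example.org/gds/ir#BlockIR"}
query = TEMPLATES["blocks_of_type"].query.format(**params)
print(query)
```

One consequence of `str.format` substitution is that any literal `{`/`}` in a template's SPARQL body must be escaped by doubling, as above.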
Source code in `packages/gds-interchange/gds_interchange/owl/sparql.py` ``` def run_query( graph: Graph, template_name: str, **params: str, ) -> list[dict[str, Any]]: """Run a registered SPARQL template against a graph. Parameters can be substituted into the query using Python string formatting ({param_name} placeholders). Returns a list of dicts, one per result row, with variable names as keys. """ if template_name not in TEMPLATES: raise KeyError( f"Unknown template '{template_name}'. Available: {sorted(TEMPLATES.keys())}" ) template = TEMPLATES[template_name] query = template.query.format(**params) if params else template.query results = graph.query(query) return [{str(var): row[i] for i, var in enumerate(results.vars)} for row in results] ``` # PSUU (gds-analysis) # gds-psuu **Parameter space search under uncertainty** -- explore, evaluate, and optimize simulation parameters with Monte Carlo awareness. ## What is this? `gds-psuu` bridges `gds-sim` simulations with systematic parameter exploration. It provides: - **Parameter spaces** -- `Continuous`, `Integer`, and `Discrete` dimensions with validation - **Composable KPIs** -- `Metric` (per-run scalar) + `Aggregation` (cross-run reducer) = `KPI` - **3 search strategies** -- Grid, Random, and Bayesian (optuna) optimizers - **Monte Carlo awareness** -- per-run distributions tracked alongside aggregated scores - **Zero mandatory dependencies** beyond `gds-sim` and `pydantic` ## Architecture ``` gds-sim (pip install gds-sim) | | Simulation engine: Model, StateUpdateBlock, | Simulation, Results (columnar storage). | +-- gds-psuu (pip install gds-analysis[psuu]) | | Parameter search: ParameterSpace, Metric, Aggregation, | KPI, Evaluator, Sweep, Optimizer. | +-- Your application | | Concrete models, parameter studies, | sensitivity analysis, optimization. 
``` ## Conceptual Hierarchy The package follows a clear hierarchy from parameters to optimization: ``` Parameter Point {"growth_rate": 0.05} | v Simulation Model + timesteps + N runs | v Results Columnar data (timestep, substep, run, state vars) | v Metric (per-run) final_value("pop") -> scalar per run | v Aggregation (cross-run) mean_agg, std_agg, probability_above(...) | v KPI (composed) KPI(metric=..., aggregation=...) -> single score | v Sweep Optimizer drives suggest/evaluate/observe loop | v SweepResults All evaluations + best() selection ``` ## How the Sweep Loop Works ``` Optimizer.suggest() --> Evaluator.evaluate(params) --> Optimizer.observe(scores) ^ | | | gds-sim Simulation | +------------------------ repeat --------------------------+ ``` 1. The **Optimizer** suggests a parameter point 1. The **Evaluator** injects params into a `gds-sim` Model, runs N Monte Carlo simulations 1. Each **KPI** extracts a per-run **Metric**, then **Aggregates** across runs into a single score 1. The **Optimizer** observes the scores and decides what to try next ## Quick Start ``` uv add gds-psuu # or: pip install gds-analysis[psuu] ``` See [Getting Started](https://blockscience.github.io/gds-core/psuu/getting-started/index.md) for a full walkthrough. ## Credits Built on gds-sim by [BlockScience](https://block.science). # Getting Started ## Installation ``` uv add gds-psuu # or: pip install gds-analysis[psuu] ``` For Bayesian optimization (optional): ``` uv add "gds-psuu[bayesian]" # or: pip install "gds-analysis[psuu,bayesian]" ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First Parameter Sweep Define a `gds-sim` model, then sweep a parameter to find the best value: ``` from gds_sim import Model, StateUpdateBlock from gds_analysis.psuu import ( KPI, Continuous, GridSearchOptimizer, ParameterSpace, Sweep, final_value, mean_agg, ) # 1.
Define a growth model def growth_policy(state, params, **kw): return {"delta": state["population"] * params["growth_rate"]} def update_pop(state, params, *, signal=None, **kw): return ("population", state["population"] + signal["delta"]) model = Model( initial_state={"population": 100.0}, state_update_blocks=[ StateUpdateBlock( policies={"growth": growth_policy}, variables={"population": update_pop}, ) ], ) # 2. Define what to search space = ParameterSpace( params={"growth_rate": Continuous(min_val=0.01, max_val=0.2)} ) # 3. Define what to measure kpis = [ KPI( name="avg_final_pop", metric=final_value("population"), # per-run: final value aggregation=mean_agg, # cross-run: mean ), ] # 4. Run the sweep sweep = Sweep( model=model, space=space, kpis=kpis, optimizer=GridSearchOptimizer(n_steps=5), timesteps=10, runs=3, # 3 Monte Carlo runs per parameter point ) results = sweep.run() # 5. Inspect results best = results.best("avg_final_pop") print(f"Best growth_rate: {best.params['growth_rate']:.3f}") print(f"Best avg final pop: {best.scores['avg_final_pop']:.1f}") ``` ## Composable KPIs The key design is the **Metric + Aggregation = KPI** pattern: ``` from gds_analysis.psuu import ( KPI, final_value, trajectory_mean, max_value, mean_agg, std_agg, percentile_agg, probability_above, ) # Mean of final population across runs avg_final = KPI(name="avg_pop", metric=final_value("population"), aggregation=mean_agg) # Standard deviation of final population (measures uncertainty) std_final = KPI(name="std_pop", metric=final_value("population"), aggregation=std_agg) # 90th percentile of trajectory means p90_mean = KPI(name="p90_mean", metric=trajectory_mean("population"), aggregation=percentile_agg(90)) # Probability that max population exceeds 500 risk = KPI(name="boom_risk", metric=max_value("population"), aggregation=probability_above(500.0)) ``` **Metric** extracts a scalar from each run. **Aggregation** reduces the per-run values to a single score. 
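The pattern can also be seen without any gds-psuu machinery. Below is a dependency-free sketch (the per-run values and reducer definitions are illustrative stand-ins, not the real library objects) of how per-run scalars become single KPI scores:

```python
# Hypothetical per-run results: one final_value("population") per Monte Carlo run.
per_run_finals = [420.0, 455.0, 431.0]

# Cross-run reducers in the spirit of mean_agg and probability_above(...).
def mean_agg(vals: list[float]) -> float:
    # arithmetic mean across runs
    return sum(vals) / len(vals)

def probability_above(threshold: float):
    # fraction of runs whose metric exceeded the threshold
    return lambda vals: sum(1 for v in vals if v > threshold) / len(vals)

avg_final = mean_agg(per_run_finals)                  # one KPI score
boom_risk = probability_above(450.0)(per_run_finals)  # another KPI score
```

Any number of KPIs can share the same per-run metric values while aggregating them differently.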
If no aggregation is specified, `mean_agg` is used by default: ``` # These are equivalent: KPI(name="avg_pop", metric=final_value("population")) KPI(name="avg_pop", metric=final_value("population"), aggregation=mean_agg) ``` ## Per-Run Distributions Metric-based KPIs track the full distribution across Monte Carlo runs: ``` results = sweep.run() for ev in results.evaluations: dist = ev.distributions["avg_final_pop"] print(f" params={ev.params}, per_run={dist}") # e.g. per_run=[265.3, 265.3, 265.3] for deterministic model ``` ## Multiple Optimizers ``` from gds_analysis.psuu import GridSearchOptimizer, RandomSearchOptimizer # Exhaustive grid (good for 1-2 dimensions) grid = GridSearchOptimizer(n_steps=10) # 10 points per continuous dim # Random sampling (good for higher dimensions) rand = RandomSearchOptimizer(n_samples=50, seed=42) ``` For Bayesian optimization (requires `gds-psuu[bayesian]`): ``` from gds_analysis.psuu.optimizers.bayesian import BayesianOptimizer bayes = BayesianOptimizer(n_calls=30, target_kpi="avg_final_pop", seed=42) ``` ## Legacy KPI Support The older `fn`-based KPI interface still works: ``` from gds_analysis.psuu import KPI, final_state_mean # Legacy style (backwards compatible) kpi = KPI(name="pop", fn=lambda r: final_state_mean(r, "population")) ``` Legacy KPIs don't track per-run distributions -- use metric-based KPIs for full Monte Carlo awareness. 
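The difference between the two interfaces is only in what the callable sees. A dependency-free sketch (the dict standing in for a `Results` object is purely illustrative) shows why the metric-based path keeps a distribution while the legacy path collapses it immediately:

```python
# Stand-in for a Results object: final "population" value per Monte Carlo run.
# (Illustrative only -- real Metric functions receive gds-sim Results.)
fake_finals = {0: 265.3, 1: 270.1, 2: 259.8}

# Metric-style callable: (results, run) -> float, so per-run values survive.
metric_fn = lambda results, run: results[run]
per_run = [metric_fn(fake_finals, r) for r in sorted(fake_finals)]

# Legacy-style callable: results -> float, the whole reduction in one step.
legacy_fn = lambda results: sum(results.values()) / len(results)

# Both yield the same mean score, but only the metric path kept the distribution.
mean_score = sum(per_run) / len(per_run)
legacy_score = legacy_fn(fake_finals)
```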
## Next Steps - [Concepts](https://blockscience.github.io/gds-core/psuu/guide/concepts/index.md) -- Metric, Aggregation, KPI, and the full conceptual hierarchy - [Parameter Spaces](https://blockscience.github.io/gds-core/psuu/guide/spaces/index.md) -- dimensions, validation, and grid generation - [Optimizers](https://blockscience.github.io/gds-core/psuu/guide/optimizers/index.md) -- grid, random, and Bayesian search strategies - [API Reference](https://blockscience.github.io/gds-core/psuu/api/init/index.md) -- complete auto-generated API docs # Concepts This page explains the core abstractions in `gds-psuu` and how they compose. ## The Hierarchy ``` Parameter Point -> Simulation -> Results -> Metric -> Aggregation -> KPI ``` Each layer transforms data from the previous one. The sweep loop orchestrates the full pipeline across many parameter points. ______________________________________________________________________ ## Parameter Space A `ParameterSpace` defines what to search. Each dimension has a name and a type: | Dimension | Description | Grid behavior | | ------------------------------ | ------------------------- | ------------------------------ | | `Continuous(min_val, max_val)` | Real-valued range | `n_steps` evenly spaced points | | `Integer(min_val, max_val)` | Integer range (inclusive) | All integers in range | | `Discrete(values=(...))` | Explicit set of values | All values | ``` from gds_analysis.psuu import Continuous, Integer, Discrete, ParameterSpace space = ParameterSpace(params={ "learning_rate": Continuous(min_val=0.001, max_val=0.1), "hidden_layers": Integer(min_val=1, max_val=5), "activation": Discrete(values=("relu", "tanh", "sigmoid")), }) ``` Validation enforces `min_val < max_val` and at least one parameter. ______________________________________________________________________ ## Metric A `Metric` extracts a **single scalar from one simulation run**. It receives the full `Results` object and a run ID. 
``` from gds_analysis.psuu import Metric # Built-in factories from gds_analysis.psuu import final_value, trajectory_mean, max_value, min_value final_value("population") # value at last timestep trajectory_mean("population") # mean over all timesteps max_value("population") # maximum over all timesteps min_value("population") # minimum over all timesteps ``` Custom metrics: ``` Metric( name="range", fn=lambda results, run: ( max_value("x").fn(results, run) - min_value("x").fn(results, run) ), ) ``` The `MetricFn` signature is `(Results, int) -> float` where the int is the run ID. ______________________________________________________________________ ## Aggregation An `Aggregation` reduces a **list of per-run values into a single scalar**. It operates on `list[float]` and returns `float`. ``` from gds_analysis.psuu import mean_agg, std_agg, percentile_agg, probability_above, probability_below mean_agg # arithmetic mean std_agg # sample standard deviation percentile_agg(50) # median (50th percentile) percentile_agg(95) # 95th percentile probability_above(100.0) # fraction of runs > 100 probability_below(0.0) # fraction of runs < 0 (risk measure) ``` Custom aggregations: ``` from gds_analysis.psuu import Aggregation cv_agg = Aggregation( name="cv", fn=lambda vals: ( (sum((x - sum(vals)/len(vals))**2 for x in vals) / (len(vals)-1))**0.5 / (sum(vals)/len(vals)) if len(vals) > 1 and sum(vals) != 0 else 0.0 ), ) ``` ______________________________________________________________________ ## KPI A `KPI` composes a Metric and an Aggregation into a named score: ``` from gds_analysis.psuu import KPI, final_value, std_agg kpi = KPI( name="uncertainty", metric=final_value("population"), # per-run: final value aggregation=std_agg, # cross-run: standard deviation ) ``` If `aggregation` is omitted, `mean_agg` is used by default. 
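The percentile aggregations above interpolate linearly between sorted per-run values, mirroring the `percentile_agg` source shown later in this reference. A standalone sketch of that behavior:

```python
def percentile(p: float, vals: list[float]) -> float:
    """Linear interpolation between order statistics, as percentile_agg does."""
    if not vals:
        return 0.0
    s = sorted(vals)
    k = (p / 100.0) * (len(s) - 1)  # fractional rank
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    frac = k - lo
    return s[lo] + frac * (s[hi] - s[lo])

print(percentile(50, [4.0, 1.0, 3.0, 2.0]))  # median of four values: 2.5
```

With an even number of runs the median falls between two order statistics, which is why the 50th percentile of four values interpolates rather than picking one of them.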
### Per-run access Metric-based KPIs expose the full distribution: ``` results = simulation_results # from gds-sim per_run_values = kpi.per_run(results) # [val_run1, val_run2, ...] aggregated = kpi.compute(results) # single float ``` ### Legacy KPIs The older `fn`-based interface operates on the full `Results` at once: ``` from gds_analysis.psuu import KPI, final_state_mean kpi = KPI(name="pop", fn=lambda r: final_state_mean(r, "population")) ``` Legacy KPIs cannot use `per_run()` and don't produce distributions. Prefer metric-based KPIs for new code. ______________________________________________________________________ ## Evaluator The `Evaluator` bridges parameter points to scored KPIs: 1. Takes a parameter point `{"growth_rate": 0.05}` 1. Injects params into the `gds-sim` Model 1. Runs N Monte Carlo simulations 1. Computes each KPI on the results 1. Returns `EvaluationResult` with scores and distributions ``` from gds_analysis.psuu import Evaluator evaluator = Evaluator( base_model=model, kpis=[kpi1, kpi2], timesteps=100, runs=10, ) result = evaluator.evaluate({"growth_rate": 0.05}) # result.scores == {"kpi1_name": 42.0, "kpi2_name": 3.14} # result.distributions == {"kpi1_name": [per-run values...]} ``` ______________________________________________________________________ ## Optimizer An `Optimizer` implements the suggest/observe loop: | Optimizer | Strategy | When to use | | ---------------------------------------- | ---------------------------- | ----------------------------------- | | `GridSearchOptimizer(n_steps)` | Exhaustive cartesian product | 1-2 dimensions, need full coverage | | `RandomSearchOptimizer(n_samples, seed)` | Uniform random sampling | Higher dimensions, exploration | | `BayesianOptimizer(n_calls, target_kpi)` | Gaussian process surrogate | Expensive evaluations, optimization | All optimizers implement the same interface: ``` optimizer.setup(space, kpi_names) while not optimizer.is_exhausted(): params = optimizer.suggest() # ... 
evaluate ... optimizer.observe(params, scores) ``` ______________________________________________________________________ ## Sweep `Sweep` is the top-level orchestrator that connects everything: ``` from gds_analysis.psuu import Sweep sweep = Sweep( model=model, space=space, kpis=kpis, optimizer=optimizer, timesteps=100, runs=10, ) results = sweep.run() ``` ### SweepResults ``` results.evaluations # list[EvaluationResult] -- all evaluations results.summaries # list[EvaluationSummary] -- params + scores only results.best("kpi_name") # best evaluation for a KPI results.to_dataframe() # pandas DataFrame (requires pandas) ``` # Optimizers All optimizers implement the same `Optimizer` interface: `setup()`, `suggest()`, `observe()`, `is_exhausted()`. The `Sweep` class drives this loop automatically. ## GridSearchOptimizer Exhaustive evaluation of every point in a regular grid. ``` from gds_analysis.psuu import GridSearchOptimizer optimizer = GridSearchOptimizer(n_steps=10) ``` | Parameter | Type | Default | Description | | --------- | ----- | ------- | ------------------------------- | | `n_steps` | `int` | `5` | Points per continuous dimension | **Behavior:** Generates the full cartesian product via `ParameterSpace.grid_points()`, then evaluates each point exactly once. Does not adapt based on observed scores. **When to use:** Small parameter spaces (1-2 dimensions), need complete coverage, want reproducible results. **Total evaluations:** `n_steps^(n_continuous) * product(integer_ranges) * product(discrete_sizes)` ______________________________________________________________________ ## RandomSearchOptimizer Uniform random sampling across the parameter space. 
``` from gds_analysis.psuu import RandomSearchOptimizer optimizer = RandomSearchOptimizer(n_samples=50, seed=42) ``` | Parameter | Type | Default | Description | | ----------- | ------------- | ------- | -------------------------------- | | `n_samples` | `int` | `20` | Total parameter points to sample | | `seed` | `int \| None` | `None` | Random seed for reproducibility | **Behavior:** Samples each dimension independently -- `uniform(min, max)` for Continuous, `randint(min, max)` for Integer, `choice(values)` for Discrete. Does not adapt based on observed scores. **When to use:** Higher-dimensional spaces where grid search is infeasible, initial exploration before Bayesian optimization. ______________________________________________________________________ ## BayesianOptimizer Gaussian process surrogate model that learns from past evaluations. Optional dependency Requires `scikit-optimize`. Install with: `uv add "gds-psuu[bayesian]"` ``` from gds_analysis.psuu.optimizers.bayesian import BayesianOptimizer optimizer = BayesianOptimizer( n_calls=30, target_kpi="avg_final_pop", maximize=True, seed=42, ) ``` | Parameter | Type | Default | Description | | ------------ | ------------- | ------- | ------------------------------------------ | | `n_calls` | `int` | `20` | Total evaluations (initial + optimization) | | `target_kpi` | `str \| None` | `None` | KPI to optimize (defaults to first) | | `maximize` | `bool` | `True` | Maximize (True) or minimize (False) | | `seed` | `int \| None` | `None` | Random seed | **Behavior:** Starts with random exploration (`min(5, n_calls)` initial points), then uses a Gaussian process surrogate to balance exploration and exploitation. Optimizes a single target KPI. **When to use:** Expensive simulations where you want to find the optimum with fewer evaluations. Works best with continuous parameters. 
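For a concrete sense of budget, the grid-size formula from the GridSearchOptimizer section can be checked against an explicit cartesian product. The space below is hypothetical, chosen only to make the arithmetic visible:

```python
from itertools import product

# Hypothetical space: one Continuous, one Integer(1..5), one Discrete of 3 values.
n_steps = 5                               # grid resolution per continuous dimension
integer_values = range(1, 6)              # Integer(min_val=1, max_val=5) -> 5 values
discrete_values = ("relu", "tanh", "sigmoid")

# Grid search evaluates the full cartesian product exactly once:
grid_evals = len(list(product(range(n_steps), integer_values, discrete_values)))
# n_steps^(n_continuous) * product(integer_ranges) * product(discrete_sizes)
#          5^1           *          5             *          3            = 75

# Random and Bayesian search fix their budgets up front instead:
random_evals = 50  # n_samples
bayes_evals = 30   # n_calls
```

Grid cost multiplies across dimensions, which is why the guide recommends grid search only for 1-2 dimensions and fixed-budget strategies beyond that.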
______________________________________________________________________ ## Custom Optimizers Subclass `Optimizer` to implement your own search strategy: ``` from gds_analysis.psuu.optimizers.base import Optimizer from gds_analysis.psuu.space import ParameterSpace from gds_analysis.psuu.types import KPIScores, ParamPoint class MyOptimizer(Optimizer): def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: # Initialize search state ... def suggest(self) -> ParamPoint: # Return next parameter point to evaluate ... def observe(self, params: ParamPoint, scores: KPIScores) -> None: # Learn from evaluation results ... def is_exhausted(self) -> bool: # Return True when search is complete ... ``` # Parameter Spaces ## Dimension Types ### Continuous A real-valued range with inclusive bounds. ``` from gds_analysis.psuu import Continuous dim = Continuous(min_val=0.0, max_val=1.0) ``` | Field | Type | Description | | --------- | ------- | ----------------------- | | `min_val` | `float` | Lower bound (inclusive) | | `max_val` | `float` | Upper bound (inclusive) | Validation: `min_val < max_val`, both must be finite. Grid behavior: `n_steps` evenly spaced points from `min_val` to `max_val`. ______________________________________________________________________ ### Integer An integer range with inclusive bounds. ``` from gds_analysis.psuu import Integer dim = Integer(min_val=1, max_val=10) ``` | Field | Type | Description | | --------- | ----- | ----------------------- | | `min_val` | `int` | Lower bound (inclusive) | | `max_val` | `int` | Upper bound (inclusive) | Validation: `min_val < max_val`. Grid behavior: all integers from `min_val` to `max_val` (ignores `n_steps`). ______________________________________________________________________ ### Discrete An explicit set of allowed values (any hashable type). 
``` from gds_analysis.psuu import Discrete dim = Discrete(values=("adam", "sgd", "rmsprop")) ``` | Field | Type | Description | | -------- | ----------------- | -------------- | | `values` | `tuple[Any, ...]` | Allowed values | Validation: at least 1 value. Grid behavior: all values (ignores `n_steps`). ______________________________________________________________________ ## ParameterSpace Combines dimensions into a named parameter space: ``` from gds_analysis.psuu import Continuous, Integer, Discrete, ParameterSpace space = ParameterSpace(params={ "learning_rate": Continuous(min_val=0.001, max_val=0.1), "batch_size": Integer(min_val=16, max_val=128), "optimizer": Discrete(values=("adam", "sgd")), }) ``` ### Grid Generation ``` points = space.grid_points(n_steps=5) # Returns list of dicts, e.g.: # [ # {"learning_rate": 0.001, "batch_size": 16, "optimizer": "adam"}, # {"learning_rate": 0.001, "batch_size": 16, "optimizer": "sgd"}, # ... # ] ``` The total number of grid points is the cartesian product: `n_steps * (max_int - min_int + 1) * len(discrete_values)` For the example above: `5 * 113 * 2 = 1130` points. ### Properties | Property | Returns | Description | | ----------------- | ----------- | ------------------------------- | | `dimension_names` | `list[str]` | Ordered list of parameter names | ______________________________________________________________________ ## Connecting to GDS Parameter Schema When your system has a `GDSSpec` with a `ParameterSchema` (the declared parameter space, theta), you can connect it to the sweep so that the optimizer never silently explores values outside declared bounds or type constraints. 
### Creating a ParameterSpace from a Schema `ParameterSpace.from_parameter_schema()` automatically creates dimensions from the declared `ParameterDef` entries: - `float` with bounds becomes `Continuous` - `int` with bounds becomes `Integer` - Parameters without bounds raise `ValueError` (bounds are required for sweep) - Unsupported types raise `TypeError` ``` from gds import GDSSpec, typedef, ParameterDef from gds_analysis.psuu import ParameterSpace spec = GDSSpec(name="my_system") rate_type = typedef("Rate", float, constraint=lambda x: 0.0 <= x <= 1.0) spec.register_parameter(ParameterDef(name="growth_rate", typedef=rate_type, bounds=(0.01, 0.5))) space = ParameterSpace.from_parameter_schema(spec.parameter_schema) # space.params == {"growth_rate": Continuous(min_val=0.01, max_val=0.5)} ``` ### Validating an Existing Space If you build your `ParameterSpace` manually, you can validate it against the schema: ``` from gds_analysis.psuu import Continuous, ParameterSpace space = ParameterSpace(params={ "growth_rate": Continuous(min_val=-1.0, max_val=2.0), # exceeds bounds }) violations = space.validate_against_schema(spec.parameter_schema) for v in violations: print(f"[{v.violation_type}] {v.param}: {v.message}") ``` Violation types: | Type | Meaning | | --------------------- | --------------------------------------------------------------- | | `missing_from_schema` | Parameter is swept but not declared in the schema | | `out_of_bounds` | Sweep range exceeds declared bounds or fails typedef constraint | | `type_mismatch` | Dimension type does not match declared Python type | ### PSUU-001 Check The `check_parameter_space_compatibility()` function wraps validation into the GDS `Finding` pattern: ``` from gds_analysis.psuu import check_parameter_space_compatibility findings = check_parameter_space_compatibility(space, spec.parameter_schema) for f in findings: print(f"[{f.check_id}] {f.severity}: {f.message}") ``` ### Sweep Integration Pass `parameter_schema` to `Sweep` for 
automatic validation before the optimizer loop starts. If any `ERROR`-level violations are found, `run()` raises `ValueError`: ``` from gds_analysis.psuu import Sweep sweep = Sweep( model=sim_model, space=space, kpis=kpis, optimizer=optimizer, parameter_schema=spec.parameter_schema, # optional validation ) sweep.run() # raises ValueError if space violates schema ``` Note The `parameter_schema` field is optional. If omitted, no validation is performed and the sweep runs as before. Install `gds-framework` (or use the `validation` extra: `pip install "gds-analysis[psuu,validation]"`) to use these features. # gds_analysis.psuu.evaluation Evaluation bridge between parameter points and gds-sim. ## `EvaluationResult` Outcome of evaluating a single parameter point. Source code in `packages/gds-analysis/gds_analysis/psuu/evaluation.py` ``` @dataclass(frozen=True) class EvaluationResult: """Outcome of evaluating a single parameter point.""" params: ParamPoint scores: KPIScores results: Results run_count: int distributions: dict[str, list[float]] = field(default_factory=dict) """Per-run metric values for metric-based KPIs.""" ``` ### `distributions = field(default_factory=dict)` Per-run metric values for metric-based KPIs. ## `Evaluator` Bases: `BaseModel` Runs a gds-sim simulation for a given parameter point and scores KPIs. Source code in `packages/gds-analysis/gds_analysis/psuu/evaluation.py` ``` class Evaluator(BaseModel): """Runs a gds-sim simulation for a given parameter point and scores KPIs.""" model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) base_model: Model kpis: list[KPI] timesteps: int runs: int def evaluate(self, params: ParamPoint) -> EvaluationResult: """Evaluate a single parameter point. Injects params as singleton lists into the model, runs the simulation, and computes KPI scores. For metric-based KPIs, also records per-run distributions.
""" # Build params dict: each value as a singleton list for gds-sim sim_params: dict[str, list[Any]] = {k: [v] for k, v in params.items()} # Construct a new Model with the injected params model = Model( initial_state=dict(self.base_model.initial_state), state_update_blocks=list(self.base_model.state_update_blocks), params=sim_params, ) sim = Simulation(model=model, timesteps=self.timesteps, runs=self.runs) results = sim.run() scores: KPIScores = {} distributions: dict[str, list[float]] = {} for kpi in self.kpis: scores[kpi.name] = kpi.compute(results) if kpi.metric is not None: distributions[kpi.name] = kpi.per_run(results) return EvaluationResult( params=params, scores=scores, results=results, run_count=self.runs, distributions=distributions, ) ``` ### `evaluate(params)` Evaluate a single parameter point. Injects params as singleton lists into the model, runs the simulation, and computes KPI scores. For metric-based KPIs, also records per-run distributions. Source code in `packages/gds-analysis/gds_analysis/psuu/evaluation.py` ``` def evaluate(self, params: ParamPoint) -> EvaluationResult: """Evaluate a single parameter point. Injects params as singleton lists into the model, runs the simulation, and computes KPI scores. For metric-based KPIs, also records per-run distributions. 
""" # Build params dict: each value as a singleton list for gds-sim sim_params: dict[str, list[Any]] = {k: [v] for k, v in params.items()} # Construct a new Model with the injected params model = Model( initial_state=dict(self.base_model.initial_state), state_update_blocks=list(self.base_model.state_update_blocks), params=sim_params, ) sim = Simulation(model=model, timesteps=self.timesteps, runs=self.runs) results = sim.run() scores: KPIScores = {} distributions: dict[str, list[float]] = {} for kpi in self.kpis: scores[kpi.name] = kpi.compute(results) if kpi.metric is not None: distributions[kpi.name] = kpi.per_run(results) return EvaluationResult( params=params, scores=scores, results=results, run_count=self.runs, distributions=distributions, ) ``` # gds_analysis.psuu Public API -- top-level exports. Parameter space search under uncertainty (PSUU) for the GDS ecosystem. # gds_analysis.psuu.kpi KPI wrapper and legacy helper functions. KPI wrapper and helper functions. ## `KPI` Bases: `BaseModel` Named KPI backed by either a legacy fn or a Metric + Aggregation pair. Legacy usage (backwards compatible):: ``` KPI(name="avg_pop", fn=lambda r: final_state_mean(r, "population")) ``` Composable usage:: ``` KPI(name="avg_pop", metric=final_value("population"), aggregation=mean_agg) ``` Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` class KPI(BaseModel): """Named KPI backed by either a legacy fn or a Metric + Aggregation pair. 
Legacy usage (backwards compatible):: KPI(name="avg_pop", fn=lambda r: final_state_mean(r, "population")) Composable usage:: KPI(name="avg_pop", metric=final_value("population"), aggregation=mean_agg) """ model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) name: str fn: KPIFn | None = None metric: Metric | None = None aggregation: Aggregation | None = None @model_validator(mode="after") def _validate_specification(self) -> Self: has_fn = self.fn is not None has_metric = self.metric is not None if not has_fn and not has_metric: raise PsuuValidationError("KPI must have either 'fn' or 'metric' specified") if has_fn and has_metric: raise PsuuValidationError( "KPI cannot have both 'fn' and 'metric' specified" ) return self def compute(self, results: Results) -> float: """Compute the aggregated KPI score from results.""" if self.fn is not None: return self.fn(results) assert self.metric is not None agg = self.aggregation or mean_agg per_run = self.per_run(results) return agg.fn(per_run) def per_run(self, results: Results) -> list[float]: """Compute per-run metric values. Only available for metric-based KPIs.""" if self.metric is None: raise PsuuValidationError( "per_run() requires a metric-based KPI, not a legacy fn-based KPI" ) run_ids = _extract_run_ids(results) return [self.metric.fn(results, r) for r in run_ids] ``` ### `compute(results)` Compute the aggregated KPI score from results. Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` def compute(self, results: Results) -> float: """Compute the aggregated KPI score from results.""" if self.fn is not None: return self.fn(results) assert self.metric is not None agg = self.aggregation or mean_agg per_run = self.per_run(results) return agg.fn(per_run) ``` ### `per_run(results)` Compute per-run metric values. Only available for metric-based KPIs. 
Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` def per_run(self, results: Results) -> list[float]: """Compute per-run metric values. Only available for metric-based KPIs.""" if self.metric is None: raise PsuuValidationError( "per_run() requires a metric-based KPI, not a legacy fn-based KPI" ) run_ids = _extract_run_ids(results) return [self.metric.fn(results, r) for r in run_ids] ``` ## `final_state_mean(results, key)` Mean of a state variable's final-timestep values across all runs. Filters to the last timestep (max substep) for each run and averages. Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` def final_state_mean(results: Results, key: str) -> float: """Mean of a state variable's final-timestep values across all runs. Filters to the last timestep (max substep) for each run and averages. """ cols = results._trimmed_columns() timesteps = cols["timestep"] substeps = cols["substep"] runs = cols["run"] values = cols[key] n = results._size # Find max timestep max_t = 0 for i in range(n): t = timesteps[i] if t > max_t: max_t = t # Find max substep at max timestep max_s = 0 for i in range(n): if timesteps[i] == max_t: s = substeps[i] if s > max_s: max_s = s # Collect final values per run total = 0.0 count = 0 seen_runs: set[int] = set() for i in range(n): if timesteps[i] == max_t and substeps[i] == max_s: r = runs[i] if r not in seen_runs: seen_runs.add(r) total += float(values[i]) count += 1 if count == 0: return 0.0 return total / count ``` ## `final_state_std(results, key)` Std dev of a state variable's final-timestep values across all runs. 
Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` def final_state_std(results: Results, key: str) -> float: """Std dev of a state variable's final-timestep values across all runs.""" cols = results._trimmed_columns() timesteps = cols["timestep"] substeps = cols["substep"] runs = cols["run"] values = cols[key] n = results._size max_t = 0 for i in range(n): t = timesteps[i] if t > max_t: max_t = t max_s = 0 for i in range(n): if timesteps[i] == max_t: s = substeps[i] if s > max_s: max_s = s finals: list[float] = [] seen_runs: set[int] = set() for i in range(n): if timesteps[i] == max_t and substeps[i] == max_s: r = runs[i] if r not in seen_runs: seen_runs.add(r) finals.append(float(values[i])) if len(finals) < 2: return 0.0 mean = sum(finals) / len(finals) variance = sum((x - mean) ** 2 for x in finals) / (len(finals) - 1) return variance**0.5 ``` ## `time_average(results, key)` Mean of a state variable across all timesteps, substeps, and runs. Source code in `packages/gds-analysis/gds_analysis/psuu/kpi.py` ``` def time_average(results: Results, key: str) -> float: """Mean of a state variable across all timesteps, substeps, and runs.""" cols = results._trimmed_columns() values = cols[key] n = results._size if n == 0: return 0.0 total = sum(float(v) for v in values) return total / n ``` # gds_analysis.psuu.metric Metric and Aggregation primitives for composable KPI construction. ## `Metric` Bases: `BaseModel` Per-run scalar extracted from simulation output. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` class Metric(BaseModel): """Per-run scalar extracted from simulation output.""" model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) name: str fn: MetricFn ``` ## `Aggregation` Bases: `BaseModel` Combines per-run metric values across Monte Carlo runs.
Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` class Aggregation(BaseModel): """Combines per-run metric values across Monte Carlo runs.""" model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) name: str fn: AggregationFn ``` ## `final_value(key)` Metric: value of a state variable at the final timestep of a run. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def final_value(key: str) -> Metric: """Metric: value of a state variable at the final timestep of a run.""" return Metric( name=f"final_{key}", fn=lambda results, run, _key=key: _final_value_fn(results, run, _key), ) ``` ## `trajectory_mean(key)` Metric: mean of a state variable over time for a single run. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def trajectory_mean(key: str) -> Metric: """Metric: mean of a state variable over time for a single run.""" return Metric( name=f"mean_{key}", fn=lambda results, run, _key=key: _trajectory_mean_fn(results, run, _key), ) ``` ## `max_value(key)` Metric: maximum value of a state variable within a single run. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def max_value(key: str) -> Metric: """Metric: maximum value of a state variable within a single run.""" return Metric( name=f"max_{key}", fn=lambda results, run, _key=key: _max_value_fn(results, run, _key), ) ``` ## `min_value(key)` Metric: minimum value of a state variable within a single run. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def min_value(key: str) -> Metric: """Metric: minimum value of a state variable within a single run.""" return Metric( name=f"min_{key}", fn=lambda results, run, _key=key: _min_value_fn(results, run, _key), ) ``` ## `percentile_agg(p)` Aggregation: p-th percentile across runs. 
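The interpolation scheme is worth seeing in isolation: the fractional index is `k = (p / 100) * (n - 1)`, and the result is a linear blend of the two neighboring order statistics. This standalone function mirrors the body of `percentile_agg` shown below.

```python
# Linear-interpolation percentile as used by percentile_agg.

def percentile(vals: list[float], p: float) -> float:
    if not vals:
        return 0.0
    s = sorted(vals)
    k = (p / 100.0) * (len(s) - 1)
    lo = int(k)
    hi = min(lo + 1, len(s) - 1)
    frac = k - lo
    return s[lo] + frac * (s[hi] - s[lo])

print(percentile([1.0, 2.0, 3.0, 4.0], 50))   # k = 1.5 -> 2.5
print(percentile([1.0, 2.0, 3.0, 4.0], 100))  # k = 3.0 -> 4.0
```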
Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def percentile_agg(p: float) -> Aggregation: """Aggregation: p-th percentile across runs.""" def _fn(vals: list[float]) -> float: if not vals: return 0.0 s = sorted(vals) k = (p / 100.0) * (len(s) - 1) lo = int(k) hi = min(lo + 1, len(s) - 1) frac = k - lo return s[lo] + frac * (s[hi] - s[lo]) return Aggregation(name=f"p{p}", fn=_fn) ``` ## `probability_above(threshold)` Aggregation: fraction of runs where metric exceeds threshold. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def probability_above(threshold: float) -> Aggregation: """Aggregation: fraction of runs where metric exceeds threshold.""" return Aggregation( name=f"P(>{threshold})", fn=lambda vals: ( sum(1 for v in vals if v > threshold) / len(vals) if vals else 0.0 ), ) ``` ## `probability_below(threshold)` Aggregation: fraction of runs where metric is below threshold. Source code in `packages/gds-analysis/gds_analysis/psuu/metric.py` ``` def probability_below(threshold: float) -> Aggregation: """Aggregation: fraction of runs where metric is below threshold.""" return Aggregation( name=f"P(<{threshold})", fn=lambda vals: ( sum(1 for v in vals if v < threshold) / len(vals) if vals else 0.0 ), ) ``` # gds_analysis.psuu.optimizers Search strategy implementations. ## Base Abstract base class for optimizers. ## `Optimizer` Bases: `ABC` Base class for parameter search optimizers. Subclasses implement the suggest/observe loop. The optimizer is stateful and mutable — it tracks which points have been evaluated and uses that information to decide what to try next. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/base.py` ``` class Optimizer(ABC): """Base class for parameter search optimizers. Subclasses implement the suggest/observe loop. The optimizer is stateful and mutable — it tracks which points have been evaluated and uses that information to decide what to try next. 
""" @abstractmethod def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: """Initialize the optimizer with the search space and KPI names.""" @abstractmethod def suggest(self) -> ParamPoint: """Suggest the next parameter point to evaluate.""" @abstractmethod def observe(self, params: ParamPoint, scores: KPIScores) -> None: """Record the result of evaluating a parameter point.""" @abstractmethod def is_exhausted(self) -> bool: """Return True if no more suggestions are available.""" ``` ### `setup(space, kpi_names)` Initialize the optimizer with the search space and KPI names. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/base.py` ``` @abstractmethod def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: """Initialize the optimizer with the search space and KPI names.""" ``` ### `suggest()` Suggest the next parameter point to evaluate. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/base.py` ``` @abstractmethod def suggest(self) -> ParamPoint: """Suggest the next parameter point to evaluate.""" ``` ### `observe(params, scores)` Record the result of evaluating a parameter point. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/base.py` ``` @abstractmethod def observe(self, params: ParamPoint, scores: KPIScores) -> None: """Record the result of evaluating a parameter point.""" ``` ### `is_exhausted()` Return True if no more suggestions are available. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/base.py` ``` @abstractmethod def is_exhausted(self) -> bool: """Return True if no more suggestions are available.""" ``` ## Grid Search Grid search optimizer — exhaustive cartesian product search. ## `GridSearchOptimizer` Bases: `Optimizer` Evaluates every point in a regular grid over the parameter space. For Continuous dimensions, `n_steps` evenly spaced values are used. For Integer dimensions, all integers in [min, max] are used. 
For Discrete dimensions, all values are used. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/grid.py` ``` class GridSearchOptimizer(Optimizer): """Evaluates every point in a regular grid over the parameter space. For Continuous dimensions, ``n_steps`` evenly spaced values are used. For Integer dimensions, all integers in [min, max] are used. For Discrete dimensions, all values are used. """ def __init__(self, n_steps: int = 5) -> None: self._n_steps = n_steps self._grid: list[ParamPoint] = [] self._cursor: int = 0 def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: self._grid = space.grid_points(self._n_steps) self._cursor = 0 def suggest(self) -> ParamPoint: point = self._grid[self._cursor] self._cursor += 1 return point def observe(self, params: ParamPoint, scores: KPIScores) -> None: pass # Grid search doesn't adapt def is_exhausted(self) -> bool: return self._cursor >= len(self._grid) ``` ## Random Search Random search optimizer — uniform random sampling. ## `RandomSearchOptimizer` Bases: `Optimizer` Samples parameter points uniformly at random. Uses stdlib `random.Random` for reproducibility — no numpy required. When the parameter space has constraints, uses rejection sampling with a configurable retry limit. Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/random.py` ``` class RandomSearchOptimizer(Optimizer): """Samples parameter points uniformly at random. Uses stdlib ``random.Random`` for reproducibility — no numpy required. When the parameter space has constraints, uses rejection sampling with a configurable retry limit. 
""" def __init__(self, n_samples: int = 20, seed: int | None = None) -> None: self._n_samples = n_samples self._rng = random.Random(seed) self._space: ParameterSpace | None = None self._count: int = 0 def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: self._space = space self._count = 0 def _sample_point(self) -> ParamPoint: assert self._space is not None point: ParamPoint = {} for name, dim in self._space.params.items(): if isinstance(dim, Continuous): point[name] = self._rng.uniform(dim.min_val, dim.max_val) elif isinstance(dim, Integer): point[name] = self._rng.randint(dim.min_val, dim.max_val) elif isinstance(dim, Discrete): point[name] = self._rng.choice(dim.values) return point def suggest(self) -> ParamPoint: assert self._space is not None, "Call setup() before suggest()" if not self._space.constraints: point = self._sample_point() self._count += 1 return point for _ in range(_MAX_REJECTION_RETRIES): point = self._sample_point() if self._space.is_feasible(point): self._count += 1 return point raise PsuuSearchError( f"Could not find a feasible point after {_MAX_REJECTION_RETRIES} " "random samples. The feasible region may be too small." ) def observe(self, params: ParamPoint, scores: KPIScores) -> None: pass # Random search doesn't adapt def is_exhausted(self) -> bool: return self._count >= self._n_samples ``` ## Bayesian (optional) Bayesian optimizer — wraps optuna (optional dependency). ## `BayesianOptimizer` Bases: `Optimizer` Bayesian optimization using optuna's TPE sampler. Requires `optuna`. Install with:: ``` uv add gds-psuu[bayesian] ``` Optimizes a single target KPI (by default the first one registered). Source code in `packages/gds-analysis/gds_analysis/psuu/optimizers/bayesian.py` ``` class BayesianOptimizer(Optimizer): """Bayesian optimization using optuna's TPE sampler. Requires ``optuna``. Install with:: uv add gds-psuu[bayesian] Optimizes a single target KPI (by default the first one registered). 
""" def __init__( self, n_trials: int = 20, target_kpi: str | None = None, maximize: bool = True, seed: int | None = None, ) -> None: if not _HAS_OPTUNA: # pragma: no cover raise ImportError( "optuna is required for BayesianOptimizer. " "Install with: uv add gds-psuu[bayesian]" ) self._n_trials = n_trials self._target_kpi = target_kpi self._maximize = maximize self._seed = seed self._study: Any = None self._space: ParameterSpace | None = None self._param_names: list[str] = [] self._count: int = 0 self._current_trial: Any = None def setup(self, space: ParameterSpace, kpi_names: list[str]) -> None: if self._target_kpi is None: self._target_kpi = kpi_names[0] elif self._target_kpi not in kpi_names: raise PsuuSearchError( f"Target KPI '{self._target_kpi}' not found in {kpi_names}" ) self._space = space self._param_names = space.dimension_names sampler = optuna.samplers.TPESampler(seed=self._seed) direction = "maximize" if self._maximize else "minimize" optuna.logging.set_verbosity(optuna.logging.WARNING) self._study = optuna.create_study( direction=direction, sampler=sampler, ) self._count = 0 def suggest(self) -> ParamPoint: assert self._study is not None, "Call setup() before suggest()" assert self._space is not None self._current_trial = self._study.ask() point: ParamPoint = {} for name, dim in self._space.params.items(): if isinstance(dim, Continuous): point[name] = self._current_trial.suggest_float( name, dim.min_val, dim.max_val ) elif isinstance(dim, Integer): point[name] = self._current_trial.suggest_int( name, dim.min_val, dim.max_val ) elif isinstance(dim, Discrete): point[name] = self._current_trial.suggest_categorical( name, list(dim.values) ) return point def observe(self, params: ParamPoint, scores: KPIScores) -> None: assert self._study is not None assert self._target_kpi is not None assert self._current_trial is not None value = scores[self._target_kpi] self._study.tell(self._current_trial, value) self._current_trial = None self._count += 1 def 
is_exhausted(self) -> bool: return self._count >= self._n_trials ``` # gds_analysis.psuu.results Sweep results and summary types. Sweep results and summary types. ## `EvaluationSummary` Bases: `BaseModel` Summary of a single evaluation (without raw simulation data). Source code in `packages/gds-analysis/gds_analysis/psuu/results.py` ``` class EvaluationSummary(BaseModel): """Summary of a single evaluation (without raw simulation data).""" model_config = ConfigDict(frozen=True) params: ParamPoint scores: KPIScores ``` ## `SweepResults` Bases: `BaseModel` Container for all evaluation results from a sweep. Source code in `packages/gds-analysis/gds_analysis/psuu/results.py` ``` class SweepResults(BaseModel): """Container for all evaluation results from a sweep.""" model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) evaluations: list[EvaluationResult] kpi_names: list[str] optimizer_name: str @property def summaries(self) -> list[EvaluationSummary]: """Summaries without raw simulation data.""" return [ EvaluationSummary(params=e.params, scores=e.scores) for e in self.evaluations ] def best(self, kpi: str, *, maximize: bool = True) -> EvaluationSummary: """Return the evaluation with the best score for the given KPI. Args: kpi: Name of the KPI to optimize. maximize: If True, return the evaluation with the highest score. """ if not self.evaluations: raise ValueError("No evaluations to search") if kpi not in self.kpi_names: raise ValueError(f"KPI '{kpi}' not found in {self.kpi_names}") best_eval = max( self.evaluations, key=lambda e: e.scores[kpi] if maximize else -e.scores[kpi], ) return EvaluationSummary(params=best_eval.params, scores=best_eval.scores) def best_by_objective(self, objective: Objective) -> EvaluationSummary: """Return the evaluation with the best objective score. The objective reduces multiple KPI scores to a single scalar. Higher is better. 
""" if not self.evaluations: raise ValueError("No evaluations to search") best_eval = max( self.evaluations, key=lambda e: objective.score(e.scores), ) return EvaluationSummary(params=best_eval.params, scores=best_eval.scores) def to_dataframe(self) -> Any: """Convert to pandas DataFrame. Requires ``pandas`` installed.""" try: import pandas as pd # type: ignore[import-untyped] except ImportError as exc: # pragma: no cover raise ImportError( "pandas is required for to_dataframe(). " "Install with: pip install gds-psuu[pandas]" ) from exc rows: list[dict[str, Any]] = [] for ev in self.evaluations: row: dict[str, Any] = dict(ev.params) row.update(ev.scores) rows.append(row) return pd.DataFrame(rows) ``` ### `summaries` Summaries without raw simulation data. ### `best(kpi, *, maximize=True)` Return the evaluation with the best score for the given KPI. Parameters: | Name | Type | Description | Default | | ---------- | ------ | ------------------------------------------------------ | ---------- | | `kpi` | `str` | Name of the KPI to optimize. | *required* | | `maximize` | `bool` | If True, return the evaluation with the highest score. | `True` | Source code in `packages/gds-analysis/gds_analysis/psuu/results.py` ``` def best(self, kpi: str, *, maximize: bool = True) -> EvaluationSummary: """Return the evaluation with the best score for the given KPI. Args: kpi: Name of the KPI to optimize. maximize: If True, return the evaluation with the highest score. """ if not self.evaluations: raise ValueError("No evaluations to search") if kpi not in self.kpi_names: raise ValueError(f"KPI '{kpi}' not found in {self.kpi_names}") best_eval = max( self.evaluations, key=lambda e: e.scores[kpi] if maximize else -e.scores[kpi], ) return EvaluationSummary(params=best_eval.params, scores=best_eval.scores) ``` ### `best_by_objective(objective)` Return the evaluation with the best objective score. The objective reduces multiple KPI scores to a single scalar. Higher is better. 
Source code in `packages/gds-analysis/gds_analysis/psuu/results.py` ``` def best_by_objective(self, objective: Objective) -> EvaluationSummary: """Return the evaluation with the best objective score. The objective reduces multiple KPI scores to a single scalar. Higher is better. """ if not self.evaluations: raise ValueError("No evaluations to search") best_eval = max( self.evaluations, key=lambda e: objective.score(e.scores), ) return EvaluationSummary(params=best_eval.params, scores=best_eval.scores) ``` ### `to_dataframe()` Convert to pandas DataFrame. Requires `pandas` installed. Source code in `packages/gds-analysis/gds_analysis/psuu/results.py` ``` def to_dataframe(self) -> Any: """Convert to pandas DataFrame. Requires ``pandas`` installed.""" try: import pandas as pd # type: ignore[import-untyped] except ImportError as exc: # pragma: no cover raise ImportError( "pandas is required for to_dataframe(). " "Install with: pip install gds-psuu[pandas]" ) from exc rows: list[dict[str, Any]] = [] for ev in self.evaluations: row: dict[str, Any] = dict(ev.params) row.update(ev.scores) rows.append(row) return pd.DataFrame(rows) ``` # gds_analysis.psuu.space Parameter space definitions for search. Parameter space definitions for search. ## `Continuous` Bases: `BaseModel` A continuous parameter dimension with min/max bounds. 
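How a `Continuous` dimension is later discretized for grid search can be sketched up front: `n_steps` evenly spaced values from `min_val` to `max_val`, endpoints included, matching the step formula in `ParameterSpace.grid_points`.

```python
# Discretizing a Continuous dimension: n_steps evenly spaced values,
# step = (max_val - min_val) / (n_steps - 1), endpoints included.

def axis(min_val: float, max_val: float, n_steps: int) -> list[float]:
    if n_steps < 2:
        raise ValueError("n_steps must be >= 2 for Continuous dimensions")
    step = (max_val - min_val) / (n_steps - 1)
    return [min_val + i * step for i in range(n_steps)]

print(axis(0.0, 1.0, 5))  # -> [0.0, 0.25, 0.5, 0.75, 1.0]
```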
Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class Continuous(BaseModel): """A continuous parameter dimension with min/max bounds.""" model_config = ConfigDict(frozen=True) min_val: float max_val: float @model_validator(mode="after") def _validate_bounds(self) -> Self: if self.min_val >= self.max_val: raise PsuuValidationError( f"min_val ({self.min_val}) must be less than max_val ({self.max_val})" ) if not math.isfinite(self.min_val) or not math.isfinite(self.max_val): raise PsuuValidationError("Bounds must be finite") return self ``` ## `Integer` Bases: `BaseModel` An integer parameter dimension with min/max bounds (inclusive). Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class Integer(BaseModel): """An integer parameter dimension with min/max bounds (inclusive).""" model_config = ConfigDict(frozen=True) min_val: int max_val: int @model_validator(mode="after") def _validate_bounds(self) -> Self: if self.min_val >= self.max_val: raise PsuuValidationError( f"min_val ({self.min_val}) must be less than max_val ({self.max_val})" ) return self ``` ## `Discrete` Bases: `BaseModel` A discrete parameter dimension with explicit allowed values. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class Discrete(BaseModel): """A discrete parameter dimension with explicit allowed values.""" model_config = ConfigDict(frozen=True) values: tuple[Any, ...] @model_validator(mode="after") def _validate_values(self) -> Self: if len(self.values) < 1: raise PsuuValidationError("Discrete dimension must have at least 1 value") return self ``` ## `SchemaViolation` A single incompatibility between a sweep dimension and declared schema. 
Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` @dataclass(frozen=True) class SchemaViolation: """A single incompatibility between a sweep dimension and declared schema.""" param: str violation_type: str # "missing_from_schema", "out_of_bounds", "type_mismatch" message: str ``` ## `Constraint` Bases: `BaseModel`, `ABC` Base class for parameter space constraints. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class Constraint(BaseModel, ABC): """Base class for parameter space constraints.""" model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) @abstractmethod def is_feasible(self, point: ParamPoint) -> bool: """Return True if the point satisfies this constraint.""" ``` ### `is_feasible(point)` Return True if the point satisfies this constraint. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` @abstractmethod def is_feasible(self, point: ParamPoint) -> bool: """Return True if the point satisfies this constraint.""" ``` ## `LinearConstraint` Bases: `Constraint` Linear inequality constraint: sum(coeff_i * x_i) \<= bound. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class LinearConstraint(Constraint): """Linear inequality constraint: sum(coeff_i * x_i) <= bound.""" coefficients: dict[str, float] bound: float @model_validator(mode="after") def _validate_nonempty(self) -> Self: if not self.coefficients: raise PsuuValidationError( "LinearConstraint must have at least 1 coefficient" ) return self def is_feasible(self, point: ParamPoint) -> bool: total = sum(coeff * point[name] for name, coeff in self.coefficients.items()) return total <= self.bound ``` ## `FunctionalConstraint` Bases: `Constraint` Arbitrary feasibility predicate over a parameter point. 
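The feasibility semantics can be sketched without pydantic: a point is feasible iff every constraint accepts it. The linear check mirrors `LinearConstraint` (`sum(coeff_i * x_i) <= bound`); the bare lambda plays the role of a `FunctionalConstraint`.

```python
# Constraint feasibility as ParameterSpace applies it: all() over the
# constraint predicates. Names below are illustrative.

def linear_ok(point: dict[str, float], coeffs: dict[str, float], bound: float) -> bool:
    """Linear inequality: sum of coeff * value must not exceed bound."""
    return sum(c * point[name] for name, c in coeffs.items()) <= bound

constraints = [
    lambda p: linear_ok(p, {"alpha": 1.0, "beta": 1.0}, 1.0),  # alpha + beta <= 1
    lambda p: p["alpha"] > 0,                                  # functional predicate
]

def is_feasible(point: dict[str, float]) -> bool:
    return all(c(point) for c in constraints)

print(is_feasible({"alpha": 0.3, "beta": 0.5}))  # -> True
print(is_feasible({"alpha": 0.9, "beta": 0.5}))  # -> False (sum exceeds 1)
```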
Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class FunctionalConstraint(Constraint): """Arbitrary feasibility predicate over a parameter point.""" fn: Callable[[ParamPoint], bool] def is_feasible(self, point: ParamPoint) -> bool: return self.fn(point) ``` ## `ParameterSpace` Bases: `BaseModel` Defines the searchable parameter space as a mapping of named dimensions. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` class ParameterSpace(BaseModel): """Defines the searchable parameter space as a mapping of named dimensions.""" model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) params: dict[str, Dimension] constraints: tuple[Constraint, ...] = () @model_validator(mode="after") def _validate_nonempty(self) -> Self: if not self.params: raise PsuuValidationError("ParameterSpace must have at least 1 parameter") return self @model_validator(mode="after") def _validate_constraint_params(self) -> Self: param_names = set(self.params.keys()) for constraint in self.constraints: if isinstance(constraint, LinearConstraint): unknown = set(constraint.coefficients.keys()) - param_names if unknown: raise PsuuValidationError( f"LinearConstraint references unknown params: {unknown}" ) return self @property def dimension_names(self) -> list[str]: """Ordered list of parameter names.""" return list(self.params.keys()) def is_feasible(self, point: ParamPoint) -> bool: """Check if a parameter point satisfies all constraints.""" return all(c.is_feasible(point) for c in self.constraints) def grid_points(self, n_steps: int) -> list[ParamPoint]: """Generate a grid of parameter points. For Continuous: ``n_steps`` evenly spaced values between min and max. For Integer: all integers in [min_val, max_val] (ignores n_steps). For Discrete: all values. Points that violate constraints are excluded. 
""" axes: list[list[Any]] = [] for dim in self.params.values(): if isinstance(dim, Continuous): if n_steps < 2: raise PsuuValidationError( "n_steps must be >= 2 for Continuous dimensions" ) step = (dim.max_val - dim.min_val) / (n_steps - 1) axes.append([dim.min_val + i * step for i in range(n_steps)]) elif isinstance(dim, Integer): axes.append(list(range(dim.min_val, dim.max_val + 1))) elif isinstance(dim, Discrete): axes.append(list(dim.values)) names = self.dimension_names all_points = [ dict(zip(names, combo, strict=True)) for combo in itertools.product(*axes) ] if self.constraints: return [p for p in all_points if self.is_feasible(p)] return all_points @classmethod def from_parameter_schema(cls, schema: ParameterSchema) -> ParameterSpace: """Create a ParameterSpace from a GDS ParameterSchema. Maps ParameterDef entries to Dimensions: - ``float`` with bounds -> ``Continuous`` - ``int`` with bounds -> ``Integer`` - No bounds -> raises ``ValueError`` (bounds required for sweep) - Unsupported type -> raises ``TypeError`` Requires ``gds-framework`` to be installed. 
""" from gds.parameters import ParameterSchema as _Schema if not isinstance(schema, _Schema): raise TypeError(f"Expected ParameterSchema, got {type(schema).__name__}") params: dict[str, Dimension] = {} for name, pdef in schema.parameters.items(): if pdef.bounds is None: raise ValueError( f"Parameter {name!r} has no bounds declared — " f"cannot create sweep dimension without bounds" ) low, high = pdef.bounds py_type = pdef.typedef.python_type is_float = py_type is float or ( isinstance(py_type, type) and issubclass(py_type, float) ) is_int = py_type is int or ( isinstance(py_type, type) and issubclass(py_type, int) ) if is_float: params[name] = Continuous(min_val=float(low), max_val=float(high)) elif is_int: params[name] = Integer(min_val=int(low), max_val=int(high)) else: raise TypeError( f"Parameter {name!r} has type {py_type.__name__} — " f"only float and int are supported for automatic " f"dimension creation" ) return cls(params=params) def validate_against_schema(self, schema: ParameterSchema) -> list[SchemaViolation]: """Check that this space respects the declared parameter schema. Returns a list of violations. Empty list means the space is compatible with the schema. Requires ``gds-framework`` to be installed. 
""" from gds.parameters import ParameterSchema as _Schema if not isinstance(schema, _Schema): raise TypeError(f"Expected ParameterSchema, got {type(schema).__name__}") violations: list[SchemaViolation] = [] for name, dim in self.params.items(): # Check parameter exists in schema pdef = schema.parameters.get(name) if pdef is None: violations.append( SchemaViolation( param=name, violation_type="missing_from_schema", message=( f"Parameter {name!r} is swept but not " f"declared in the schema" ), ) ) continue # Check type compatibility py_type = pdef.typedef.python_type if isinstance(dim, Continuous) and py_type is not float: violations.append( SchemaViolation( param=name, violation_type="type_mismatch", message=( f"Parameter {name!r}: Continuous dimension " f"but declared type is {py_type.__name__}" ), ) ) elif isinstance(dim, Integer) and py_type is not int: violations.append( SchemaViolation( param=name, violation_type="type_mismatch", message=( f"Parameter {name!r}: Integer dimension " f"but declared type is {py_type.__name__}" ), ) ) # Check bounds compatibility if pdef.bounds is not None and isinstance(dim, (Continuous, Integer)): schema_low, schema_high = pdef.bounds if dim.min_val < schema_low: violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep min " f"{dim.min_val} < declared lower " f"bound {schema_low}" ), ) ) if dim.max_val > schema_high: violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep max " f"{dim.max_val} > declared upper " f"bound {schema_high}" ), ) ) # Check typedef constraint at sweep boundaries if pdef.typedef.constraint is not None and isinstance( dim, (Continuous, Integer) ): if not pdef.typedef.check_value(dim.min_val): violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep min " f"{dim.min_val} fails typedef constraint" ), ) ) if not 
pdef.typedef.check_value(dim.max_val): violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep max " f"{dim.max_val} fails typedef constraint" ), ) ) return violations ``` ### `dimension_names` Ordered list of parameter names. ### `is_feasible(point)` Check if a parameter point satisfies all constraints. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` def is_feasible(self, point: ParamPoint) -> bool: """Check if a parameter point satisfies all constraints.""" return all(c.is_feasible(point) for c in self.constraints) ``` ### `grid_points(n_steps)` Generate a grid of parameter points. For Continuous: `n_steps` evenly spaced values between min and max. For Integer: all integers in [min_val, max_val] (ignores n_steps). For Discrete: all values. Points that violate constraints are excluded. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` def grid_points(self, n_steps: int) -> list[ParamPoint]: """Generate a grid of parameter points. For Continuous: ``n_steps`` evenly spaced values between min and max. For Integer: all integers in [min_val, max_val] (ignores n_steps). For Discrete: all values. Points that violate constraints are excluded. 
""" axes: list[list[Any]] = [] for dim in self.params.values(): if isinstance(dim, Continuous): if n_steps < 2: raise PsuuValidationError( "n_steps must be >= 2 for Continuous dimensions" ) step = (dim.max_val - dim.min_val) / (n_steps - 1) axes.append([dim.min_val + i * step for i in range(n_steps)]) elif isinstance(dim, Integer): axes.append(list(range(dim.min_val, dim.max_val + 1))) elif isinstance(dim, Discrete): axes.append(list(dim.values)) names = self.dimension_names all_points = [ dict(zip(names, combo, strict=True)) for combo in itertools.product(*axes) ] if self.constraints: return [p for p in all_points if self.is_feasible(p)] return all_points ``` ### `from_parameter_schema(schema)` Create a ParameterSpace from a GDS ParameterSchema. Maps ParameterDef entries to Dimensions: - `float` with bounds -> `Continuous` - `int` with bounds -> `Integer` - No bounds -> raises `ValueError` (bounds required for sweep) - Unsupported type -> raises `TypeError` Requires `gds-framework` to be installed. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` @classmethod def from_parameter_schema(cls, schema: ParameterSchema) -> ParameterSpace: """Create a ParameterSpace from a GDS ParameterSchema. Maps ParameterDef entries to Dimensions: - ``float`` with bounds -> ``Continuous`` - ``int`` with bounds -> ``Integer`` - No bounds -> raises ``ValueError`` (bounds required for sweep) - Unsupported type -> raises ``TypeError`` Requires ``gds-framework`` to be installed. 
""" from gds.parameters import ParameterSchema as _Schema if not isinstance(schema, _Schema): raise TypeError(f"Expected ParameterSchema, got {type(schema).__name__}") params: dict[str, Dimension] = {} for name, pdef in schema.parameters.items(): if pdef.bounds is None: raise ValueError( f"Parameter {name!r} has no bounds declared — " f"cannot create sweep dimension without bounds" ) low, high = pdef.bounds py_type = pdef.typedef.python_type is_float = py_type is float or ( isinstance(py_type, type) and issubclass(py_type, float) ) is_int = py_type is int or ( isinstance(py_type, type) and issubclass(py_type, int) ) if is_float: params[name] = Continuous(min_val=float(low), max_val=float(high)) elif is_int: params[name] = Integer(min_val=int(low), max_val=int(high)) else: raise TypeError( f"Parameter {name!r} has type {py_type.__name__} — " f"only float and int are supported for automatic " f"dimension creation" ) return cls(params=params) ``` ### `validate_against_schema(schema)` Check that this space respects the declared parameter schema. Returns a list of violations. Empty list means the space is compatible with the schema. Requires `gds-framework` to be installed. Source code in `packages/gds-analysis/gds_analysis/psuu/space.py` ``` def validate_against_schema(self, schema: ParameterSchema) -> list[SchemaViolation]: """Check that this space respects the declared parameter schema. Returns a list of violations. Empty list means the space is compatible with the schema. Requires ``gds-framework`` to be installed. 
""" from gds.parameters import ParameterSchema as _Schema if not isinstance(schema, _Schema): raise TypeError(f"Expected ParameterSchema, got {type(schema).__name__}") violations: list[SchemaViolation] = [] for name, dim in self.params.items(): # Check parameter exists in schema pdef = schema.parameters.get(name) if pdef is None: violations.append( SchemaViolation( param=name, violation_type="missing_from_schema", message=( f"Parameter {name!r} is swept but not " f"declared in the schema" ), ) ) continue # Check type compatibility py_type = pdef.typedef.python_type if isinstance(dim, Continuous) and py_type is not float: violations.append( SchemaViolation( param=name, violation_type="type_mismatch", message=( f"Parameter {name!r}: Continuous dimension " f"but declared type is {py_type.__name__}" ), ) ) elif isinstance(dim, Integer) and py_type is not int: violations.append( SchemaViolation( param=name, violation_type="type_mismatch", message=( f"Parameter {name!r}: Integer dimension " f"but declared type is {py_type.__name__}" ), ) ) # Check bounds compatibility if pdef.bounds is not None and isinstance(dim, (Continuous, Integer)): schema_low, schema_high = pdef.bounds if dim.min_val < schema_low: violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep min " f"{dim.min_val} < declared lower " f"bound {schema_low}" ), ) ) if dim.max_val > schema_high: violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep max " f"{dim.max_val} > declared upper " f"bound {schema_high}" ), ) ) # Check typedef constraint at sweep boundaries if pdef.typedef.constraint is not None and isinstance( dim, (Continuous, Integer) ): if not pdef.typedef.check_value(dim.min_val): violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep min " f"{dim.min_val} fails typedef constraint" ), ) ) if not 
pdef.typedef.check_value(dim.max_val): violations.append( SchemaViolation( param=name, violation_type="out_of_bounds", message=( f"Parameter {name!r}: sweep max " f"{dim.max_val} fails typedef constraint" ), ) ) return violations ``` # gds_analysis.psuu.sweep Sweep orchestrator -- the main entry point for parameter search. ## `Sweep` Bases: `BaseModel` Orchestrates parameter space search. Drives the optimizer suggest/observe loop, delegating evaluation to the Evaluator which bridges to gds-sim. If `parameter_schema` is provided, the sweep validates the parameter space against the declared schema before starting. Validation errors raise `ValueError`. Source code in `packages/gds-analysis/gds_analysis/psuu/sweep.py` ``` class Sweep(BaseModel): """Orchestrates parameter space search. Drives the optimizer suggest/observe loop, delegating evaluation to the Evaluator which bridges to gds-sim. If ``parameter_schema`` is provided, the sweep validates the parameter space against the declared schema before starting. Validation errors raise ``ValueError``. 
""" model_config = ConfigDict(arbitrary_types_allowed=True, frozen=True) model: Model space: ParameterSpace kpis: list[KPI] optimizer: Optimizer objective: Objective | None = None timesteps: int = 100 runs: int = 1 parameter_schema: Any = None def run(self) -> SweepResults: """Execute the sweep and return results.""" if self.parameter_schema is not None: self._validate_schema() kpi_names = [k.name for k in self.kpis] self.optimizer.setup(self.space, kpi_names) evaluator = Evaluator( base_model=self.model, kpis=list(self.kpis), timesteps=self.timesteps, runs=self.runs, ) evaluations: list[EvaluationResult] = [] while not self.optimizer.is_exhausted(): params = self.optimizer.suggest() result = evaluator.evaluate(params) self.optimizer.observe(params, result.scores) evaluations.append(result) return SweepResults( evaluations=evaluations, kpi_names=kpi_names, optimizer_name=type(self.optimizer).__name__, ) def _validate_schema(self) -> None: """Validate the parameter space against the declared schema.""" from gds.verification.findings import Severity from gds_analysis.psuu.checks import check_parameter_space_compatibility findings = check_parameter_space_compatibility( self.space, self.parameter_schema ) errors = [f for f in findings if f.severity == Severity.ERROR] if errors: messages = "; ".join(f.message for f in errors) raise ValueError(f"Parameter space violates declared schema: {messages}") ``` ### `run()` Execute the sweep and return results. 
Source code in `packages/gds-analysis/gds_analysis/psuu/sweep.py` ``` def run(self) -> SweepResults: """Execute the sweep and return results.""" if self.parameter_schema is not None: self._validate_schema() kpi_names = [k.name for k in self.kpis] self.optimizer.setup(self.space, kpi_names) evaluator = Evaluator( base_model=self.model, kpis=list(self.kpis), timesteps=self.timesteps, runs=self.runs, ) evaluations: list[EvaluationResult] = [] while not self.optimizer.is_exhausted(): params = self.optimizer.suggest() result = evaluator.evaluate(params) self.optimizer.observe(params, result.scores) evaluations.append(result) return SweepResults( evaluations=evaluations, kpi_names=kpi_names, optimizer_name=type(self.optimizer).__name__, ) ``` # Continuous-Time (gds-continuous) # gds-continuous **Continuous-time ODE integration engine** -- the continuous-time counterpart to `gds-sim`. ## What is this? `gds-continuous` provides an ODE simulation engine for continuous-time dynamical systems. It follows the same standalone architectural pattern as `gds-sim` -- minimal dependencies, Pydantic models, columnar results -- but integrates SciPy's ODE solvers instead of discrete timestep iteration. - **`ODEModel`** -- declares state variables, initial conditions, and a right-hand side function `dx/dt = f(t, x, params)` - **`ODESimulation`** -- configures time span, solver method, tolerances, and evaluation points - **`ODEResults`** -- columnar storage of time series with named state access - **6 solver methods** -- `RK45`, `RK23`, `DOP853`, `Radau`, `BDF`, `LSODA` (all via `scipy.integrate.solve_ivp`) - **Zero GDS dependency** -- standalone package, same as `gds-sim` ## Architecture ``` scipy + numpy (optional deps) | +-- gds-continuous (uv add gds-continuous[scipy]) | | ODE engine: ODEModel, ODESimulation, ODEResults. | 6 solver backends via scipy.integrate.solve_ivp. | +-- Your application | | Concrete ODE models, parameter studies, | phase portraits, trajectory analysis. 
``` ## Relationship to gds-sim | | gds-sim | gds-continuous | | ---------------- | ---------------------------- | --------------------------- | | **Time** | Discrete timesteps | Continuous `t_span` | | **Update rule** | `f(state, params) -> state` | `dx/dt = f(t, x, params)` | | **Solver** | Direct iteration | SciPy `solve_ivp` | | **Results** | `Results` (timestep-indexed) | `ODEResults` (time-indexed) | | **Dependencies** | pydantic only | pydantic + scipy + numpy | Both are standalone engines with no `gds-framework` dependency. They can be used independently or bridged via `gds-analysis`. ## Solver Methods | Method | Type | Best for | | -------- | ------------------------------ | -------------------------------- | | `RK45` | Explicit Runge-Kutta (default) | General non-stiff problems | | `RK23` | Explicit Runge-Kutta | Low-accuracy requirements | | `DOP853` | Explicit Runge-Kutta | High-accuracy non-stiff problems | | `Radau` | Implicit Runge-Kutta | Stiff problems | | `BDF` | Implicit multi-step | Stiff problems | | `LSODA` | Automatic stiff/non-stiff | Unknown stiffness | ## Quick Start ``` uv add "gds-continuous[scipy]" ``` See [Getting Started](https://blockscience.github.io/gds-core/continuous/getting-started/index.md) for a full walkthrough. ## Credits Built by [BlockScience](https://block.science). # Getting Started ## Installation ``` uv add "gds-continuous[scipy]" ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First ODE: Exponential Decay Model the simplest continuous-time system: exponential decay `dx/dt = -kx`. ``` from gds_continuous import ODEModel, ODESimulation, ODEResults # 1. Define the right-hand side: dx/dt = f(t, x, params) def decay_rhs(t, state, params): k = params["k"] return {"x": -k * state["x"]} # 2. Build the model model = ODEModel( state_names=["x"], initial_state={"x": 10.0}, rhs=decay_rhs, params={"k": 0.5}, ) # 3. 
Configure and run the simulation sim = ODESimulation(model=model, t_span=(0.0, 10.0), solver="RK45") results: ODEResults = sim.run() # 4. Inspect results print(f"t = {results.t[-1]:.1f}, x = {results['x'][-1]:.4f}") # t = 10.0, x = 0.0674 ``` ## Plotting Results ``` import matplotlib.pyplot as plt plt.plot(results.t, results["x"]) plt.xlabel("Time") plt.ylabel("x(t)") plt.title("Exponential Decay: dx/dt = -0.5x") plt.grid(True) plt.show() ``` ## A Two-State System: Lotka-Volterra Model predator-prey dynamics with coupled ODEs: ``` from gds_continuous import ODEModel, ODESimulation def lotka_volterra(t, state, params): x, y = state["prey"], state["predator"] a, b, c, d = params["a"], params["b"], params["c"], params["d"] return { "prey": a * x - b * x * y, "predator": c * x * y - d * y, } model = ODEModel( state_names=["prey", "predator"], initial_state={"prey": 10.0, "predator": 5.0}, rhs=lotka_volterra, params={"a": 1.1, "b": 0.4, "c": 0.1, "d": 0.4}, ) sim = ODESimulation(model=model, t_span=(0.0, 50.0), solver="RK45") results = sim.run() ``` ## Parameter Sweep Compare different decay rates by running multiple simulations: ``` import matplotlib.pyplot as plt from gds_continuous import ODEModel, ODESimulation def decay_rhs(t, state, params): return {"x": -params["k"] * state["x"]} fig, ax = plt.subplots() for k in [0.1, 0.3, 0.5, 1.0, 2.0]: model = ODEModel( state_names=["x"], initial_state={"x": 10.0}, rhs=decay_rhs, params={"k": k}, ) results = ODESimulation(model=model, t_span=(0.0, 10.0), solver="RK45").run() ax.plot(results.t, results["x"], label=f"k={k}") ax.set_xlabel("Time") ax.set_ylabel("x(t)") ax.legend() ax.set_title("Exponential Decay: Parameter Sweep") plt.show() ``` ## Choosing a Solver For most problems, the default `RK45` works well. 
Switch solvers when needed: ``` # Stiff system -- use an implicit solver sim = ODESimulation(model=model, t_span=(0.0, 10.0), solver="Radau") # Unknown stiffness -- let LSODA auto-detect sim = ODESimulation(model=model, t_span=(0.0, 10.0), solver="LSODA") # High accuracy -- use DOP853 with tight tolerances sim = ODESimulation(model=model, t_span=(0.0, 10.0), solver="DOP853", rtol=1e-10, atol=1e-12) ``` ## Next Steps - [Overview](https://blockscience.github.io/gds-core/continuous/index.md) -- solver comparison table and architecture - [Symbolic Math](https://blockscience.github.io/gds-core/symbolic/index.md) -- generate ODE right-hand sides from symbolic equations - [Analysis](https://blockscience.github.io/gds-core/analysis/index.md) -- bridge GDS specifications to continuous-time simulation # Symbolic Math (gds-domains) # gds-symbolic **SymPy bridge for gds-control** -- symbolic state equations, automatic linearization, and ODE code generation. ## What is this? `gds-symbolic` extends `gds-control`'s `ControlModel` with symbolic mathematics. Instead of writing numerical right-hand side functions by hand, you declare state and output equations as symbolic expressions and let the compiler do the rest. - **`StateEquation`** -- symbolic expression for `dx/dt` (e.g., `"-k * x + b * u"`) - **`OutputEquation`** -- symbolic expression for sensor output `y` (e.g., `"x + noise"`) - **`compile_to_ode()`** -- lambdifies symbolic equations into a callable `ODEFunction` compatible with `gds-continuous` - **`linearize()`** -- computes Jacobian matrices (A, B, C, D) at an operating point - **Safe expression parsing** -- uses `sympy.parsing.sympy_parser.parse_expr`, never `eval` ## Architecture ``` gds-control (pip install gds-domains) | | State-space control DSL: State, Input, Sensor, Controller. | +-- gds-symbolic (uv add gds-symbolic[sympy]) | | Symbolic layer: StateEquation, OutputEquation, | compile_to_ode(), linearize(). 
| +-- gds-continuous (optional integration) | | ODE simulation engine: ODEModel, ODESimulation. ``` ## Key Types | Type | Purpose | | ---------------------- | --------------------------------------------------- | | `StateEquation` | Symbolic `dx_i/dt = expr(x, u, params)` | | `OutputEquation` | Symbolic `y_i = expr(x, u, params)` | | `SymbolicControlModel` | Extends `ControlModel` with symbolic equations | | `ODEFunction` | Lambdified callable: `f(t, x, params) -> dx/dt` | | `LinearSystem` | Matrices `(A, B, C, D)` from Jacobian linearization | ## How It Works ``` Symbolic expressions (strings) | v parse_expr() --> SymPy Expr objects | v compile_to_ode() --> ODEFunction (lambdified, numpy-backed) | | v v linearize() gds-continuous ODEModel | v LinearSystem(A, B, C, D) --> eigenvalue analysis, controllability, etc. ``` All expression parsing uses `sympy.parsing.sympy_parser.parse_expr` with a restricted transformation set -- arbitrary code execution is not possible. ## Quick Start ``` uv add "gds-symbolic[sympy]" ``` See [Getting Started](https://blockscience.github.io/gds-core/symbolic/getting-started/index.md) for a full walkthrough. ## Credits Built on [gds-control](https://blockscience.github.io/gds-core/control/index.md) by [BlockScience](https://block.science). # Getting Started ## Installation ``` uv add "gds-symbolic[sympy]" ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## Your First Symbolic Model Define a damped harmonic oscillator symbolically: two state variables (position and velocity), one input (external force). ``` from gds_domains.symbolic import ( SymbolicControlModel, StateEquation, OutputEquation, compile_to_ode, ) # 1. 
Declare symbolic state equations # dx1/dt = x2 (velocity) # dx2/dt = -k*x1 - c*x2 + u (acceleration with damping) model = SymbolicControlModel( name="DampedOscillator", state_equations=[ StateEquation(state="x1", expr="x2"), StateEquation(state="x2", expr="-k * x1 - c * x2 + u"), ], output_equations=[ OutputEquation(output="position", expr="x1"), ], parameters={"k": 4.0, "c": 0.5}, ) # 2. Compile to a callable ODE function ode_fn = compile_to_ode(model) # 3. Evaluate at a point dx = ode_fn(t=0.0, state={"x1": 1.0, "x2": 0.0}, params={"k": 4.0, "c": 0.5, "u": 0.0}) print(dx) # {"x1": 0.0, "x2": -4.0} ``` ## Integration with gds-continuous Plug the compiled ODE function directly into `gds-continuous`: ``` from gds_continuous import ODEModel, ODESimulation ode_model = ODEModel( state_names=["x1", "x2"], initial_state={"x1": 1.0, "x2": 0.0}, rhs=ode_fn, params={"k": 4.0, "c": 0.5, "u": 0.0}, ) sim = ODESimulation(model=ode_model, t_span=(0.0, 20.0), solver="RK45") results = sim.run() import matplotlib.pyplot as plt plt.plot(results.t, results["x1"], label="position") plt.plot(results.t, results["x2"], label="velocity") plt.legend() plt.title("Damped Harmonic Oscillator") plt.xlabel("Time") plt.grid(True) plt.show() ``` ## Linearization Compute Jacobian matrices at an operating point to get the standard state-space form `(A, B, C, D)`: ``` from gds_domains.symbolic import linearize # Linearize around the equilibrium (x1=0, x2=0, u=0) lin = linearize( model, operating_point={"x1": 0.0, "x2": 0.0}, input_point={"u": 0.0}, ) print("A =", lin.A) # [[ 0. 1.], [-4. -0.5]] print("B =", lin.B) # [[0.], [1.]] print("C =", lin.C) # [[1. 
0.]] print("D =", lin.D) # [[0.]] ``` The `LinearSystem` object holds NumPy arrays for each matrix: ``` import numpy as np # Check eigenvalues for stability eigenvalues = np.linalg.eigvals(lin.A) print(f"Eigenvalues: {eigenvalues}") print(f"Stable: {all(e.real < 0 for e in eigenvalues)}") ``` ## Nonlinear Example: Van der Pol Oscillator A classic nonlinear system where linearization reveals local stability: ``` from gds_domains.symbolic import SymbolicControlModel, StateEquation, linearize vdp = SymbolicControlModel( name="VanDerPol", state_equations=[ StateEquation(state="x1", expr="x2"), StateEquation(state="x2", expr="mu * (1 - x1**2) * x2 - x1"), ], parameters={"mu": 1.0}, ) # Linearize at the origin lin = linearize(vdp, operating_point={"x1": 0.0, "x2": 0.0}) print("A =", lin.A) # [[0, 1], [-1, mu]] -- unstable for mu > 0 ``` ## Next Steps - [Symbolic Overview](https://blockscience.github.io/gds-core/symbolic/index.md) -- architecture and key types - [Continuous-Time](https://blockscience.github.io/gds-core/continuous/index.md) -- ODE simulation engine for running compiled models - [Control](https://blockscience.github.io/gds-core/control/index.md) -- the underlying control DSL # Analysis (gds-analysis) # gds-analysis **Bridge from GDS structural specifications to runtime simulation and analysis.** ## What is this? `gds-analysis` closes the gap between `gds-framework`'s structural annotations (AdmissibleInputConstraint, TransitionSignature) and `gds-sim`'s runtime engine. It provides the behavioral layer that turns verified specifications into executable models. 
- **`spec_to_model()`** -- adapter that converts a `GDSSpec` + behavioral functions into a `gds_sim.Model` - **`guarded_policy()`** -- wraps a policy function with `AdmissibleInputConstraint` enforcement at runtime - **`reachable_set()`** -- computes the reachable set R(x) from an initial state by exploring the transition graph (Paper Def 4.1) - **`reachable_graph()`** -- returns the full state transition graph as an adjacency structure - **`configuration_space()`** -- finds the largest SCC of the reachability graph (Paper Def 4.2) - **`trajectory_distances()`** -- computes metric distances along a trajectory for convergence analysis ## Architecture ``` gds-framework gds-sim | | | GDSSpec, entities, | Model, StateUpdateBlock, | blocks, constraints | Simulation, Results | | +-------+ gds-analysis +-------+ | | | spec_to_model(), guarded_policy(), | reachable_set(), trajectory_distances() | +-- Your application | | Verified specs -> executable simulations, | reachability analysis, convergence proofs. ``` ## Key Functions | Function | Input | Output | | ----------------------------------------------------- | ------------------------------- | ---------------- | | `spec_to_model(spec, policies, sufs, ...)` | `GDSSpec` + dict of callables | `gds_sim.Model` | | `guarded_policy(policy_fn, constraint, fallback)` | callable + predicate + fallback | guarded callable | | `reachable_set(spec, model, state, input_samples)` | spec + model + state + inputs | `list[dict]` | | `reachable_graph(spec, model, states, input_samples)` | spec + model + states + inputs | adjacency dict | | `configuration_space(graph)` | adjacency dict | largest SCC | | `trajectory_distances(results, metric)` | `Results` + distance fn | `list[float]` | ## The Behavioral Gap GDS specifications are structural -- they declare *what* blocks exist, how they wire together, and what constraints hold. But they do not contain *behavioral* functions (policies, state update functions). 
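To make the gap concrete, here is a minimal pure-Python sketch -- not the gds API; `spec`, `policies`, and `bind` are hypothetical stand-ins -- showing that a structural declaration only names blocks, while execution requires binding a callable to each name:

```python
# Minimal sketch of the structural/behavioral split.  All names here
# (spec, policies, bind) are hypothetical -- this is NOT the gds API.

# Structural (R1): what a spec declares -- names and wiring, no behavior.
spec = {
    "blocks": ["thermometer", "PID"],
    "state_vars": ["temperature"],
}

# Behavioral (R3): user-supplied callables, one per declared block.
policies = {
    "thermometer": lambda state, params: {"reading": state["temperature"]},
    "PID": lambda state, params: {
        "command": params["Kp"] * (params["setpoint"] - state["temperature"])
    },
}

def bind(spec, policies):
    """Pair each declared block with its behavioral function, or fail loudly."""
    missing = [b for b in spec["blocks"] if b not in policies]
    if missing:
        raise ValueError(f"no behavior supplied for blocks: {missing}")
    return {b: policies[b] for b in spec["blocks"]}

bound = bind(spec, policies)
print(bound["PID"]({"temperature": 20.0}, {"Kp": 0.3, "setpoint": 22.0}))
# -> {'command': 0.6}
```

The point of the sketch is the failure mode: a spec with an unbound block is structurally valid but not executable, which is exactly the gap `spec_to_model()` closes.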
The adapter pattern separates structural (R1) from behavioral (R3): - **Structural (from GDSSpec)**: block topology, wiring, role assignments, entity/variable declarations - **Behavioral (from user)**: policy functions, state update functions, initial conditions - **Bridge**: `spec_to_model()` wires user-supplied callables into the structural skeleton ## Constraint Enforcement `guarded_policy()` wraps a policy function so that `AdmissibleInputConstraint` predicates are checked at every timestep. If a constraint is violated, the guard invokes the fallback function or raises `ConstraintViolationError` with the failing constraint's name and the offending input values. ## Quick Start ``` uv add gds-analysis ``` See [Getting Started](https://blockscience.github.io/gds-core/analysis/getting-started/index.md) for a full walkthrough. ## Credits Built on [gds-framework](https://blockscience.github.io/gds-core/framework/index.md) and [gds-sim](https://pypi.org/project/gds-sim/) by [BlockScience](https://block.science). # Getting Started ## Installation ``` uv add gds-analysis ``` For development (monorepo): ``` git clone https://github.com/BlockScience/gds-core.git cd gds-core uv sync --all-packages ``` ## From Specification to Simulation The typical workflow: build a GDS specification from a domain model, supply behavioral functions, then simulate. ``` from gds_domains.control import ( State, Input, Sensor, Controller, ControlModel, compile_model, ) from gds_analysis import spec_to_model # 1. Build a GDS specification from a control model model = ControlModel( name="Thermostat", states=[State(name="temperature", initial=20.0)], inputs=[Input(name="setpoint")], sensors=[Sensor(name="thermometer", observes=["temperature"])], controllers=[ Controller(name="PID", reads=["thermometer", "setpoint"], drives=["temperature"]), ], ) spec = compile_model(model) # 2. 
Supply behavioral functions (R3 -- not in the spec) sim_model = spec_to_model( spec, policies={ "thermometer": lambda state, params, **kw: {"reading": state["temperature"]}, "PID": lambda state, params, **kw: { "command": params["Kp"] * (params["setpoint"] - state["temperature"]) }, }, sufs={ "temperature": lambda state, params, signal=None, **kw: ( "temperature", state["temperature"] + signal["command"] * 0.1, ), }, initial_state={"temperature": 20.0}, params={"setpoint": 22.0, "Kp": 0.3}, ) # 3. Run via gds-sim from gds_sim import Simulation results = Simulation(model=sim_model, timesteps=50, runs=1).run() print(f"Final temperature: {results['temperature'][-1]:.1f}") ``` ## Guarded Policies Enforce `AdmissibleInputConstraint` at runtime: ``` from gds_analysis import guarded_policy # Wrap a policy to enforce admissibility safe_policy = guarded_policy( policy_fn=my_policy, constraint=lambda state, action: action["power"] <= state["max_power"], fallback=lambda state, params, **kw: {"power": 0.0}, ) ``` If the constraint predicate returns `False`, the fallback function is called instead. Without a fallback, `ConstraintViolationError` is raised. 
## Computing Reachable Sets Explore what states are reachable from an initial condition: ``` from gds_analysis import reachable_set, reachable_graph, configuration_space # R(x) = set of states reachable in one step from x reached = reachable_set( spec, sim_model, state={"temperature": 20.0}, input_samples=[ {"command": 0.0}, {"command": 1.0}, {"command": -1.0}, ], ) print(f"Reachable states: {len(reached)}") # Build graph over sampled states and extract configuration space graph = reachable_graph(spec, sim_model, states=sampled_states, input_samples=inputs) x_c = configuration_space(graph) # largest SCC -- mutually reachable states ``` ## Trajectory Analysis Measure convergence along a simulation trajectory: ``` from gds_analysis import trajectory_distances distances = trajectory_distances( results, metric=lambda s1, s2: abs(s1["temperature"] - s2["temperature"]), ) # Check convergence: distances should decrease print(f"Initial distance: {distances[0]:.3f}") print(f"Final distance: {distances[-1]:.3f}") ``` ## Next Steps - [Analysis Overview](https://blockscience.github.io/gds-core/analysis/index.md) -- architecture and key functions - [Framework](https://blockscience.github.io/gds-core/framework/index.md) -- GDS specification and structural annotations - [PSUU](https://blockscience.github.io/gds-core/psuu/index.md) -- systematic parameter exploration over simulations # Case Studies # Case Studies Real-world projects built on the GDS ecosystem, demonstrating how compositional specifications translate into interactive applications, research tools, and educational platforms. 
| Project | Domain | GDS Packages Used | Description | | ----------------------------------------------------------------------------------------------- | ----------- | ------------------------------------- | --------------------------------------------------------------------------------------- | | **[Axelrod Tournament](https://blockscience.github.io/gds-core/case-studies/axelrod/index.md)** | Game Theory | gds-games, gds-sim, gds-psuu, gds-viz | Interactive exploration of the iterated Prisoner's Dilemma through six analytical views | # Axelrod Tournament **One model, many views** — an interactive exploration of Axelrod's iterated Prisoner's Dilemma tournament, built on the GDS ecosystem. [:octicons-link-external-16: Live Site](https://blockscience.github.io/gds-axelrod/)   [:octicons-mark-github-16: Source](https://github.com/BlockScience/gds-axelrod) ______________________________________________________________________ ## Overview [gds-axelrod](https://github.com/BlockScience/gds-axelrod) demonstrates how a single OGS game specification can be projected through six distinct analytical lenses — from narrative storytelling to formal mathematical decomposition to interactive parameter exploration — without simplification or compromise. The project is a concrete realization of the **specification-as-interoperability-layer** pattern described in the [Interoperability Guide](https://blockscience.github.io/gds-core/guides/interoperability/index.md): one compositional model serves as the single source of truth, and multiple independent tools consume it for different purposes. 
## Architecture The project splits into two tiers: ``` Pipeline (Python) Site (Vite/JavaScript) ┌──────────────────────┐ ┌──────────────────────────┐ │ OGS game definition │──export──→ │ Canvas Petri dish viz │ │ gds-sim population │ (JSON) │ Mermaid diagrams │ │ gds-psuu sweeps │ │ Narrative chapters │ │ Nash/dominance calc │ │ Pyodide PSUU sandbox │ └──────────────────────┘ └──────────────────────────┘ ``` **Pipeline**: Python data generation using `gds-games`, `gds-sim`, `gds-psuu`, and `gds-viz`. Produces JSON artifacts consumed by the frontend. **Site**: Vite-based JavaScript frontend with Canvas rendering, responsive chapter navigation, and browser-side Python execution via Pyodide. ## Six Showcase Views Each page presents the same underlying Prisoner's Dilemma model through a different analytical lens: | View | What It Shows | GDS Package | | -------------------- | ------------------------------------------------------ | --------------------------------- | | **Story** | Narrative chapters with interactive sandbox simulation | Strategy definitions from OGS | | **Formal Structure** | Canonical `h = f . 
g` decomposition | `GDSSpec` + `project_canonical()` | | **Visualizations** | Mermaid diagrams across 6 view types | `gds-viz` on `SystemIR` | | **Simulation** | Population trajectory tracking over generations | `gds-sim` | | **Nash Analysis** | Equilibria and dominance calculations | `PatternIR` from `gds-games` | | **PSUU** | Interactive parameter space exploration | `gds-psuu` via Pyodide | ## GDS Ecosystem Integration gds-axelrod exercises four GDS packages together, demonstrating the composability of the ecosystem: - **gds-domains** (`gds_domains.games`) — defines the game as an OGS pattern, compiles to `PatternIR` and `GDSSpec` - **gds-viz** (`gds_viz`) — renders Mermaid diagrams from the compiled `SystemIR` - **gds-sim** (`gds_sim`) — runs population dynamics simulation over iterated tournament rounds - **gds-analysis** (`gds_analysis.psuu`) — parameter sweeps compiled to WebAssembly via Pyodide for in-browser execution ## Key Patterns Demonstrated ### Specification as Single Source of Truth The OGS game definition is written once. Every view — visualization, simulation, equilibrium analysis, parameter exploration — derives from the same specification. No view has a private copy of the model. ### Thin Projections Each analytical tool is a thin projection over the specification: - `PatternIR` → Nash equilibria via Nashpy - `PatternIR` → payoff matrix → tournament simulation - `SystemIR` → Mermaid diagrams via gds-viz - `GDSSpec` → canonical decomposition ### Browser-Side Computation The PSUU page compiles Python (gds-sim + gds-psuu) to WebAssembly via Pyodide, enabling interactive parameter exploration without a backend server. 
## Related - [Interoperability Guide](https://blockscience.github.io/gds-core/guides/interoperability/index.md) — the specification-as-interoperability-layer pattern - [Evolution of Trust](https://blockscience.github.io/gds-core/examples/examples/evolution-of-trust/index.md) — the in-repo tutorial model that gds-axelrod builds upon - [Prisoner's Dilemma](https://blockscience.github.io/gds-core/examples/examples/prisoners-dilemma/index.md) — the base GDS framework version - [View Stratification](https://blockscience.github.io/gds-core/guides/view-stratification/index.md) — the theoretical basis for "one model, many views" # Design & Research # Assurance Claims and Residual Gaps This document explicitly states what `verify()` does and does not prove, maps each check to an assurance layer, lists residual verification obligations by domain, and provides a verification passport template for practitioners. A practitioner who sees all checks pass should know exactly what guarantees they hold -- and what evidence they still need to collect. ______________________________________________________________________ ## 1. The Verification Pyramid GDS verification is organized in layers. Each layer depends on the layers below it. The framework currently implements the bottom two layers; the upper layers require external evidence. ``` /\ / \ Full V&V (external evidence required) /----\ / \ Behavioral (trajectory predicates -- future T2-2) /--------\ / \ Semantic (SC-001..SC-010 on GDSSpec) /------------\ / \ Structural (G-001..G-006 on SystemIR) /----------------\ ``` **Structural** checks validate the composition algebra: the wiring graph is a well-formed mathematical object with compatible ports, no dangling references, and acyclic forward flow. **Semantic** checks validate the specification: state coverage is complete, updates are deterministic, references resolve, and the canonical decomposition `h = f . g` is well-defined. 
**Behavioral** checks would validate trajectory properties: invariants hold under execution, states remain bounded, goals are eventually reached. This layer does not exist yet (planned as T2-2). **Full V&V** requires evidence that the framework cannot produce: simulation results, formal proofs, physical testing, or domain expert review. ______________________________________________________________________ ## 2. Check-to-Layer Mapping | Layer | Checks | What They Prove | Operates On | | ------------------------ | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------- | | Structural (topology) | G-001..G-006 | Wiring graph is well-formed: type-compatible, complete signatures, acyclic forward flow, no dangling references | `SystemIR` | | Semantic (specification) | SC-001..SC-009, SC-010 | Spec is internally consistent: complete state coverage, deterministic updates, reachable signals, valid references, canonical wellformedness, ControlAction pathway separation | `GDSSpec` | | Behavioral (trajectory) | None yet | Would prove: trajectory invariants hold under execution | Future (T2-2) | | Full assurance | -- | Requires simulation evidence, formal proofs, or physical testing | External | ### Structural checks (Layer 0) These operate on `SystemIR` and are run automatically by `verify(system)`. 
| Check | Property | | ----- | ---------------------------------------------------------------------------- | | G-001 | Covariant wiring labels are token-subsets of source output or target input | | G-002 | Every block has non-degenerate interface (at least one input and one output) | | G-003 | No direction flag contradictions; contravariant port-slot matching | | G-004 | Wiring endpoints reference blocks or inputs that exist | | G-005 | Stack wiring labels match both source output and target input | | G-006 | Forward (covariant) flow graph is a directed acyclic graph | ### Semantic checks (Layer 1) These operate on `GDSSpec` and are called individually. | Check | Property | | ------ | -------------------------------------------------------------------------- | | SC-001 | Every state variable is updated by at least one mechanism | | SC-002 | No variable updated by multiple mechanisms in the same wiring | | SC-003 | Signal path exists between two named blocks (reachability) | | SC-004 | Wire space references resolve to registered spaces | | SC-005 | Block `params_used` match registered parameter names | | SC-006 | At least one mechanism exists (state transition f is non-empty) | | SC-007 | At least one state variable exists (state space X is non-empty) | | SC-008 | Admissibility constraints reference valid blocks and variables | | SC-009 | Transition signatures reference valid mechanisms and variables | | SC-010 | ControlAction outputs do not route back to Policy or BoundaryAction blocks | ______________________________________________________________________ ## 3. What the Framework Does NOT Prove A `verify()` pass establishes structural well-formedness and specification consistency. It does **not** establish any of the following properties: **Behavioral safety** -- no state reaches an unsafe region. Requires simulation or formal proof. A structurally valid system can still drive state variables to dangerous values. 
**Liveness** -- the system eventually reaches a goal state. Requires temporal logic model checking or bounded simulation. A well-formed spec says nothing about whether desired states are ever attained. **Stability** -- trajectories converge or remain bounded. Requires Lyapunov analysis or simulation. This is the primary concern of the gds-control DSL domain and gds-continuous integration. **Conservation** -- quantities are preserved across transitions. Requires trajectory invariant checks (flow balance audits). This is the primary concern of the gds-stockflow DSL domain, where stock levels should satisfy `d(Stock)/dt = sum(inflows) - sum(outflows)`. **Optimality** -- decisions maximize or minimize an objective. Requires optimization analysis. Verification checks that blocks are wired correctly, not that the policies they implement are optimal. **Incentive compatibility** -- agents' equilibrium strategies align with desired outcomes. Requires Nash equilibrium computation. This is the primary concern of the gds-games DSL domain, which provides nashpy integration for this purpose. **Convergence** -- iterative processes terminate or approach a limit. Requires convergence analysis or fixed-point computation. Feedback loops are structurally validated (G-006 checks acyclicity of covariant flow), but convergence of the dynamics they represent is not assessed. **Adequacy to purpose** -- the model correctly represents the real-world system it is intended to describe. Requires domain expert validation, physical testing, and stakeholder review. This is fundamentally outside any framework's scope. ______________________________________________________________________ ## 4. Residual Verification Obligations For each property that `verify()` cannot establish, the table below identifies what evidence is needed and which layer of the ecosystem is responsible. 
| Property | Required Evidence | Responsible Layer | | ----------------------- | --------------------------------------------------- | -------------------------------- | | Stability | Simulation + Lyapunov analysis | gds-control DSL + gds-continuous | | Conservation | Trajectory invariant checks (flow balance) | gds-stockflow DSL + gds-sim | | Incentive compatibility | Nash equilibrium computation | gds-games DSL (nashpy) | | Safety | Behavioral predicates on reachable states | Future T2-2 + gds-analysis | | Liveness | Temporal logic model checking or bounded simulation | Future (not planned) | | Convergence | Fixed-point analysis or bounded iteration testing | Domain-specific | | Optimality | Objective function evaluation over trajectories | Domain-specific + gds-psuu | | Adequacy | Domain expert review, physical testing | Outside framework scope | The key takeaway: passing all 16 checks (G-001..G-006, SC-001..SC-010) establishes that the specification is a well-formed, internally consistent mathematical object. It says nothing about whether that object faithfully models reality or behaves safely when executed. ______________________________________________________________________ ## 5. Verification Passport Template The following template provides a one-page assessment format for any GDS-specified system. Copy and fill it in for each system you verify. 
``` # Verification Passport: [System Name] ## System Identity - **GDSSpec name:** [name] - **Version:** [version/commit hash] - **DSL:** [which DSL, if applicable] - **Date:** [assessment date] ## Structural Verification (Layer 0) - [ ] G-001 through G-006: [PASS/FAIL] - **SystemIR compiled from:** [composition tree description] - **Findings:** [count] errors, [count] warnings ## Semantic Verification (Layer 1) - [ ] SC-001 through SC-010: [PASS/FAIL] - **Canonical form:** [formula() output] - **State space dimension:** [|X|] - **Findings:** [count] errors, [count] warnings ## Claims Supported by Framework Checks Based on passing structural and semantic verification: - Wiring topology is well-formed (no type mismatches, no dangling references) - State variables have complete, deterministic update coverage - Canonical decomposition h = f . g is well-defined - [Additional claims based on specific checks passed] ## Residual Obligations (NOT covered by framework) | Property | Status | Evidence | |----------|--------|----------| | Stability | [ ] Verified / [ ] Not assessed | [method + results] | | Safety | [ ] Verified / [ ] Not assessed | [method + results] | | Conservation | [ ] Verified / [ ] Not assessed | [method + results] | | Incentive compatibility | [ ] Verified / [ ] Not assessed | [method + results] | | Convergence | [ ] Verified / [ ] Not assessed | [method + results] | | Optimality | [ ] Verified / [ ] Not assessed | [method + results] | | Adequacy | [ ] Verified / [ ] Not assessed | [method + results] | ## Sign-off - **Structural verification by:** [automated -- gds-framework v___] - **Behavioral evidence by:** [person/system] - **Domain adequacy by:** [domain expert] ``` ______________________________________________________________________ ## 6. 
Cross-References - [Verification Check Specifications](https://blockscience.github.io/gds-core/framework/design/check-specifications/index.md) -- formal property statements, invariant connections, and soundness conditions for all 15 core checks - [Traceability Matrix](https://blockscience.github.io/gds-core/framework/design/traceability-matrix/index.md) -- mapping from checks to test cases and code locations - [Verification Check Catalog](https://blockscience.github.io/gds-core/framework/guide/verification/index.md) -- user-facing reference with examples for every check - [Controller-Plant Duality](https://blockscience.github.io/gds-core/framework/design/controller-plant-duality/index.md) -- design rationale for SC-010 (ControlAction pathway separation) - T2-2 (behavioral verification layer) -- planned future work for trajectory predicate checking via gds-analysis and gds-sim - **Verification humility doctrine (MD-4)** -- the principle that structural verification is necessary but not sufficient, and that the framework must never overstate its assurance claims. Verification proves well-formedness of the mathematical object, not correctness of the model it represents. # Verification Check Specifications This document defines the formal property statement for each of the 15 core verification checks in `gds-framework`. Every check is stated as a predicate on the IR or specification model so that `verify()` results can be interpreted unambiguously and traced to the composition algebra or canonical form. ## Two-Layer Verification Architecture GDS verification is split across two independent layers that mirror the framework's two-layer design: **Layer 0 -- Structural Checks (G-001 through G-006)** operate on `SystemIR`, the flat intermediate representation produced by `compile_system()`. They validate properties of the composition algebra: port compatibility, interface completeness, direction consistency, referential integrity, sequential type safety, and acyclicity. 
These checks know nothing about GDS semantics -- they enforce the well-definedness of the block graph as a mathematical object. **Layer 1 -- Semantic Checks (SC-001 through SC-009)** operate on `GDSSpec`, the specification-level registry. They validate properties that require knowledge of entities, state variables, block roles, parameter declarations, admissibility constraints, transition signatures, and the canonical decomposition `h = f . g`. These checks ensure that the specification is internally consistent and that the canonical form is non-degenerate. The two layers are independent: you can run structural checks without building a `GDSSpec`, and you can run semantic checks without compiling to `SystemIR`. In practice, a well-formed model passes both. ## Notation Throughout this document: | Symbol | Meaning | | ----------- | ------------------------------------------------------------------------------------------- | | `B` | Set of blocks in `SystemIR` or `GDSSpec` | | `W` | Set of wirings | | `E` | Set of entities | | `X` | State space (union of all entity variables) | | `tokens(s)` | Token decomposition of port name string `s` | | `sig(b)` | 4-tuple signature `(forward_in, forward_out, backward_in, backward_out)` of block `b` | | `h = f . 
g` | Canonical GDS decomposition: `g` is the observation/policy map, `f` is the state transition | | `M` | Set of Mechanism blocks | | `P` | Set of registered parameters in `ParameterSchema` | ## Summary Table | Code | Name | Layer | Severity | Operates on | Property | | ------ | ----------------------------- | ----- | -------- | ----------- | -------------------------------------------------------------------------- | | G-001 | Domain/Codomain Matching | 0 | ERROR | `SystemIR` | Covariant wiring labels are token-subsets of source output or target input | | G-002 | Signature Completeness | 0 | ERROR | `SystemIR` | Every block has non-degenerate interface (input and output) | | G-003 | Direction Consistency | 0 | ERROR | `SystemIR` | No flag contradictions; contravariant port-slot matching | | G-004 | Dangling Wirings | 0 | ERROR | `SystemIR` | Wiring endpoints are referentially valid | | G-005 | Sequential Type Compatibility | 0 | ERROR | `SystemIR` | Stack wiring labels match BOTH source output AND target input | | G-006 | Covariant Acyclicity | 0 | ERROR | `SystemIR` | Forward flow graph is a DAG | | SC-001 | Completeness | 1 | WARNING | `GDSSpec` | Every state variable is updated by at least one mechanism | | SC-002 | Determinism | 1 | ERROR | `GDSSpec` | No variable updated by multiple mechanisms in same wiring | | SC-003 | Reachability | 1 | WARNING | `GDSSpec` | Signal path exists between two named blocks | | SC-004 | Type Safety | 1 | ERROR | `GDSSpec` | Wire space references resolve to registered spaces | | SC-005 | Parameter References | 1 | ERROR | `GDSSpec` | Block `params_used` match registered parameter names | | SC-006 | Canonical Wellformedness (f) | 1 | WARNING | `GDSSpec` | At least one mechanism exists | | SC-007 | Canonical Wellformedness (X) | 1 | WARNING | `GDSSpec` | At least one state variable exists | | SC-008 | Admissibility References | 1 | ERROR | `GDSSpec` | Admissibility constraints reference valid blocks and variables | | SC-009 | 
Transition Reads | 1 | ERROR | `GDSSpec` | Transition signatures reference valid mechanisms and variables | ______________________________________________________________________ ## Layer 0 -- Structural Checks (G-001 through G-006) These checks operate on `SystemIR` and validate the composition algebra's structural well-definedness. They are run automatically by `verify(system)`. ______________________________________________________________________ ### G-001 -- Domain/Codomain Matching **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** Let `W_cov = {w in W : w.direction = COVARIANT}`. For every `w in W_cov` where both `w.source` and `w.target` are in `B`: ``` tokens(w.label) <= tokens(sig(w.source).forward_out) OR tokens(w.label) <= tokens(sig(w.target).forward_in) ``` where `<=` denotes the token-subset relation. Additionally, both `sig(w.source).forward_out` and `sig(w.target).forward_in` must be non-empty (otherwise the wiring cannot be verified). **Invariant Enforced:** Type safety in the covariant (forward) channel of the token algebra. In the composition `A >> B`, signals flowing from A to B must reference ports that actually exist on at least one side. This is a necessary condition for the composition `A ; B` to be well-typed in the block algebra. **Failure Semantics:** A MISMATCH means the wiring label references tokens that exist on neither the source's output nor the target's input. The composition is structurally ill-typed: signals are routed to non-existent ports. An empty-port failure (source or target has no forward ports) means the block is structurally incapable of participating in covariant flow. **Soundness Conditions:** A pass guarantees that every covariant wiring's label is consistent with at least one endpoint's port declaration. This is a necessary but not sufficient condition for composition well-typedness -- G-005 provides the stronger bilateral condition for sequential composition. 
G-001 does not check contravariant wirings (see G-003). **Algorithm:** For each covariant wiring, retrieve both endpoint signatures from a `{block.name: block.signature}` lookup. Apply `tokens_subset(label, port)` to source `forward_out` and target `forward_in`. Report MISMATCH if neither subset relation holds. ______________________________________________________________________ ### G-002 -- Signature Completeness **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** For every block `b in B`: ``` has_output(b) = (sig(b).forward_out != "" OR sig(b).backward_out != "") has_input(b) = (sig(b).forward_in != "" OR sig(b).backward_in != "") ``` If `b.block_type = "boundary"` (BoundaryAction): ``` has_output(b) = True ``` Otherwise: ``` has_input(b) AND has_output(b) = True ``` **Invariant Enforced:** Non-degeneracy of block interfaces. A block with no outputs cannot contribute signals to any downstream composition. A block with no inputs (unless it is a BoundaryAction, which models exogenous signals) cannot receive signals and is structurally isolated. This ensures every block participates meaningfully in the composition graph. **Failure Semantics:** A block with empty input and output is completely isolated -- it cannot appear in any valid composition. The block algebra requires every composable element to have at least one port on each side (with BoundaryAction exempted from the input requirement by definition). **Soundness Conditions:** A pass guarantees every block has at least a minimal interface. Note that BoundaryActions legitimately have no inputs (they inject exogenous signals), and terminal Mechanisms may have no forward outputs (they only write state). The BoundaryAction exemption is built into the check. **Algorithm:** For each block, inspect all four signature slots. Track whether at least one input slot and one output slot is non-empty. Apply the BoundaryAction exemption based on `block_type == "boundary"`. 
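The G-002 predicate with its BoundaryAction exemption can be sketched as a small standalone function. This is an illustrative sketch, not the framework's implementation: the `Block` dataclass and `check_signature_completeness` helper here are hypothetical stand-ins for the `SystemIR` block records and the real checker.

```python
from dataclasses import dataclass

@dataclass
class Block:
    # Illustrative stand-in for a SystemIR block record.
    name: str
    block_type: str            # e.g. "boundary", "policy", "mechanism"
    forward_in: str = ""
    forward_out: str = ""
    backward_in: str = ""
    backward_out: str = ""

def check_signature_completeness(blocks):
    """G-002 sketch: flag blocks with a degenerate interface.

    BoundaryActions only need an output (they inject exogenous signals);
    every other block needs at least one input and one output slot.
    """
    errors = []
    for b in blocks:
        has_output = b.forward_out != "" or b.backward_out != ""
        has_input = b.forward_in != "" or b.backward_in != ""
        if b.block_type == "boundary":
            if not has_output:
                errors.append(f"G-002: boundary block '{b.name}' has no output")
        elif not (has_input and has_output):
            errors.append(f"G-002: block '{b.name}' has a degenerate interface")
    return errors
```

Note that the exemption is keyed on `block_type == "boundary"`, mirroring the algorithm description above.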
______________________________________________________________________ ### G-003 -- Direction Consistency **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** Two sub-properties: **(A) Flag Consistency.** For every wiring `w in W`: ``` NOT (w.direction = COVARIANT AND w.is_feedback) NOT (w.direction = CONTRAVARIANT AND w.is_temporal) ``` The first conjunction is a contradiction because feedback flow is inherently contravariant (backward within a timestep). The second is a contradiction because temporal flow is inherently covariant (forward across timesteps). **(B) Contravariant Port-Slot Matching.** For every `w in W` where `w.direction = CONTRAVARIANT` and both endpoints are blocks: ``` (sig(w.source).backward_out != "" OR sig(w.target).backward_in != "") AND (tokens(w.label) <= tokens(sig(w.source).backward_out) OR tokens(w.label) <= tokens(sig(w.target).backward_in)) ``` **Invariant Enforced:** Bidirectional flow discipline. The composition algebra distinguishes covariant (forward) and contravariant (backward) channels. G-003 ensures that (a) the direction/feedback/temporal flags are mutually consistent, and (b) contravariant wirings are token-compatible with the backward ports, completing the type-safety story that G-001 begins for the covariant side. Together, G-001 and G-003 establish that every wiring is compatible with the appropriate channel of its endpoint signatures. **Failure Semantics:** A flag contradiction means the wiring's metadata is internally inconsistent -- it cannot be both covariant and feedback, or both contravariant and temporal. A contravariant port mismatch means backward signals are routed to non-existent backward ports. Both are structural errors that make the composition algebra ill-defined. **Soundness Conditions:** A pass on (A) guarantees no flag contradictions. A pass on (B) guarantees contravariant wirings match backward port declarations. 
Together with G-001, this covers all four directional port-matching cases. Wirings with non-block endpoints (e.g., InputIR) are skipped -- G-004 handles dangling references. **Algorithm:** For each wiring: (A) check the two forbidden flag combinations; (B) if contravariant and both endpoints are blocks, verify that at least one backward port is non-empty, then apply `tokens_subset` to backward_out and backward_in. ______________________________________________________________________ ### G-004 -- Dangling Wirings **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** Let `N = {b.name : b in B} UNION {i.name : i in inputs}` be the set of all recognized endpoint names. For every wiring `w in W`: ``` w.source in N AND w.target in N ``` **Invariant Enforced:** Referential integrity of the wiring graph. Every wiring must connect two known endpoints. A dangling reference (source or target not in the block/input set) means the wiring points to a non-existent component -- either a typo, a missing block, or an incomplete composition. **Failure Semantics:** A dangling wiring makes the system graph structurally incomplete. The composition cannot be evaluated because at least one endpoint does not exist. This is a hard structural error. **Soundness Conditions:** A pass guarantees all wiring endpoints resolve to known blocks or inputs. This check does not validate that the connected blocks are type-compatible (G-001 and G-003 handle that) or that the graph is connected (SC-003 handles that). **Algorithm:** Build the set of known names from `system.blocks` and `system.inputs`. For each wiring, check membership of `source` and `target` in this set. 
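The referential-integrity pass above can be sketched with plain name lists standing in for `system.blocks` and `system.inputs` (the function name and data shapes are illustrative, not the real API):

```python
def check_dangling_wirings(block_names, input_names, wirings):
    """G-004 sketch: every wiring endpoint must name a known block or input.

    `wirings` is a list of (source, target) name pairs -- an illustrative
    stand-in for the SystemIR wiring records described above.
    """
    known = set(block_names) | set(input_names)
    return [f"G-004: dangling wiring {src!r} -> {tgt!r}"
            for src, tgt in wirings
            if src not in known or tgt not in known]
```

A typo in either endpoint name surfaces immediately as a hard structural error rather than a silently dropped signal.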
______________________________________________________________________ ### G-005 -- Sequential Type Compatibility **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** Let `W_seq = {w in W : w.direction = COVARIANT, NOT w.is_temporal, w.source in B, w.target in B}`. For every `w in W_seq` where both `sig(w.source).forward_out != ""` and `sig(w.target).forward_in != ""`: ``` tokens(w.label) <= tokens(sig(w.source).forward_out) AND tokens(w.label) <= tokens(sig(w.target).forward_in) ``` **Invariant Enforced:** Bilateral type safety for sequential (stack) composition. While G-001 requires the wiring label to match *at least one* side, G-005 requires it to match *both* sides. This is the stronger condition needed for sequential composition `A >> B` to be well-typed: the output of A and the input of B must agree on the signal being passed. This directly ensures that in the composition `A ; B`, the codomain of A is compatible with the domain of B. **Failure Semantics:** A type mismatch means the sequential composition has a type gap: A produces a signal that B does not expect (or vice versa). The composition `A ; B` is ill-typed. Unlike G-001, which permits unilateral matching, G-005 requires bilateral agreement. This is the key check for ensuring the `>>` operator is sound. **Soundness Conditions:** A pass guarantees bilateral token-subset compatibility for all non-temporal covariant wirings between blocks. Wirings to InputIR endpoints are excluded (they represent system-level inputs, not block-to-block compositions). Temporal wirings are excluded (they are cross-timestep, not within-timestep sequential). If either endpoint has an empty forward port, the wiring is silently skipped (G-001 catches this case). **Algorithm:** For each qualifying wiring, apply `tokens_subset(label, src_out)` and `tokens_subset(label, tgt_in)`. Both must hold for compatibility. Report type mismatch if either fails. 
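The bilateral condition can be sketched as follows. The tokenization used here (splitting port-name strings on whitespace) is an assumption for illustration; the framework's actual `tokens(s)` decomposition may differ, but the subset logic is the same either way.

```python
def tokens(s: str) -> set:
    # Assumed tokenization for illustration: whitespace-separated tokens.
    return set(s.split())

def tokens_subset(label: str, port: str) -> bool:
    return tokens(label) <= tokens(port)

def sequentially_compatible(label, src_forward_out, tgt_forward_in):
    """G-005 sketch: the wiring label must match BOTH endpoints.

    Empty forward ports are silently skipped here, as in the spec above
    (G-001 is responsible for reporting that case).
    """
    if not src_forward_out or not tgt_forward_in:
        return True
    return (tokens_subset(label, src_forward_out)
            and tokens_subset(label, tgt_forward_in))
```

Contrast with G-001: a label matching only the source's output would pass the unilateral check but fail this bilateral one.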
______________________________________________________________________ ### G-006 -- Covariant Acyclicity **Type:** Structural (Layer 0) **Severity:** ERROR **Operates on:** SystemIR **Property Statement:** Let `G_cov = (V, E_cov)` where: ``` V = {b.name : b in B} E_cov = {(w.source, w.target) : w in W, w.direction = COVARIANT, NOT w.is_temporal} ``` `G_cov` is acyclic (a directed acyclic graph). **Invariant Enforced:** Well-definedness of the within-timestep computation order. In the canonical form `h = f . g`, the composition must define a function, not an implicit equation. A cycle in the covariant flow graph means Block A depends on Block B which depends on Block A within the same timestep -- an algebraic loop with no well-defined evaluation order. Temporal wirings (which introduce delay) and contravariant wirings (which flow backward by design) are excluded because they do not create within-timestep algebraic dependencies. **Failure Semantics:** A cycle means the system has an algebraic loop: the canonical form `h = f . g` cannot be evaluated as a function because the computation has circular dependencies. The system requires a fixed-point solver or the cycle must be broken by introducing temporal delay (`.loop()`). This is a critical structural error. **Soundness Conditions:** A pass guarantees that the covariant, non-temporal flow graph admits a topological ordering, which is necessary for `h = f . g` to be evaluated as a sequential function composition. This does not guarantee that the temporal dynamics are well-defined (that requires analysis of the full system including temporal loops). **Algorithm:** Build an adjacency list from covariant, non-temporal wirings between blocks. Run DFS-based cycle detection using three-color marking (WHITE/GRAY/BLACK). A back edge (encounter of a GRAY node) indicates a cycle. Report the cycle path. 
______________________________________________________________________ ## Layer 1 -- Semantic Checks (SC-001 through SC-009) These checks operate on `GDSSpec` and validate domain properties that require knowledge of entities, block roles, parameters, and the canonical decomposition. They are called individually, not through `verify()`. ______________________________________________________________________ ### SC-001 -- Completeness **Type:** Semantic (Layer 1) **Severity:** WARNING **Operates on:** GDSSpec **Property Statement:** Let `U = {(e, v) : m in M, (e, v) in m.updates}` be the set of all (entity, variable) pairs updated by some mechanism. For every entity `e in E` and every variable `v in e.variables`: ``` (e.name, v) in U ``` In other words: the mechanism update map is surjective onto the state variable set. Every declared state variable has at least one mechanism that updates it. **Invariant Enforced:** Surjectivity of mechanism coverage onto the state space X. In the canonical form `h = f . g`, the state transition function `f` must be defined on all of X. An orphan variable (one never updated) means `f` is partial -- part of the state space is unreachable by the dynamics. This is almost always a specification error (a declared variable that was forgotten in the mechanism wiring). **Failure Semantics:** An orphan state variable will never change from its initial value. The state transition `f` does not cover it. If this is intentional (e.g., a constant parameter encoded as state), the warning can be accepted. Otherwise, the specification is incomplete. **Soundness Conditions:** A pass guarantees every declared state variable has at least one mechanism listing it in `updates`. This does not verify that the mechanism's logic actually modifies the variable at runtime -- only that the structural declaration exists. **Algorithm:** Collect all `(entity, variable)` pairs from all Mechanism `.updates` fields into a set. 
Iterate all entity variables and check membership. ______________________________________________________________________ ### SC-002 -- Determinism **Type:** Semantic (Layer 1) **Severity:** ERROR **Operates on:** GDSSpec **Property Statement:** For every wiring `w` in the spec and every (entity, variable) pair `(e, v)`: ``` |{m in w.block_names : m is Mechanism, (e, v) in m.updates}| <= 1 ``` Within any single wiring (composition), at most one mechanism may declare an update to a given state variable. **Invariant Enforced:** Functional determinism of state updates. In the canonical form `h = f . g`, the state transition `f` must be a function (single-valued), not a multi-valued relation. If two mechanisms in the same wiring both update the same variable, the result is ambiguous -- the final state depends on unspecified execution order. This makes `f` non-deterministic, which violates the GDS requirement that `f: X -> X` is a well-defined function. **Failure Semantics:** A write conflict means the state transition for that variable is ambiguous. Two mechanisms racing to update the same state variable within the same composition produce an undefined result. The canonical form `f` is not a function. This is a hard specification error. **Soundness Conditions:** A pass guarantees no write conflicts within any single wiring. This does not prevent the same variable from being updated by different mechanisms in different wirings (which is valid -- different compositions can have different update paths). The check is scoped to individual wirings because each wiring represents a single composition that executes as a unit. **Algorithm:** For each wiring, build a map from `(entity, variable)` to the list of mechanisms that update it. Report any entry with more than one mechanism. 
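The per-wiring conflict map can be sketched as below. The input shape (a dict from wiring name to per-mechanism update lists) is an illustrative stand-in for the `GDSSpec` structures described above.

```python
from collections import defaultdict

def check_determinism(wiring_mechanisms):
    """SC-002 sketch: within one wiring, at most one mechanism may update
    each (entity, variable) pair.

    `wiring_mechanisms` maps wiring name -> {mechanism: [(entity, var), ...]}
    (an assumed shape for illustration).
    """
    errors = []
    for wiring, mechs in wiring_mechanisms.items():
        writers = defaultdict(list)
        for mech, updates in mechs.items():
            for pair in updates:
                writers[pair].append(mech)
        for (entity, var), ms in writers.items():
            if len(ms) > 1:
                errors.append(
                    f"SC-002: {wiring}: {entity}.{var} written by {sorted(ms)}")
    return errors
```

Note the scoping: the same variable written by different mechanisms in *different* wirings raises nothing, matching the soundness conditions above.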
______________________________________________________________________ ### SC-003 -- Reachability **Type:** Semantic (Layer 1) **Severity:** WARNING **Operates on:** GDSSpec **Property Statement:** Given blocks `from_block` and `to_block`, let `G_wire = (V_wire, E_wire)` where: ``` V_wire = UNION({wire.source, wire.target} : wire in w.wires, w in spec.wirings) E_wire = {(wire.source, wire.target) : wire in w.wires, w in spec.wirings} ``` There exists a directed path in `G_wire` from `from_block` to `to_block`. **Invariant Enforced:** Signal reachability in the wiring graph. Maps to the GDS attainability correspondence: can a boundary input ultimately influence a state update? Unreachable blocks indicate disconnected subgraphs in the composition, which means the specification has structurally isolated components. **Failure Semantics:** An unreachable pair means signals from `from_block` cannot influence `to_block` through any chain of wirings. This may indicate a missing wiring, a disconnected subgraph, or an intentional isolation boundary. WARNING severity because some disconnection may be by design (independent subsystems). **Soundness Conditions:** A pass guarantees the existence of a directed path. This is a structural reachability property -- it does not guarantee that signals are actually propagated at runtime (that depends on block behavior). The check uses the `Wire` declarations in `SpecWiring`, not the compiled `SystemIR` wirings. Unlike other semantic checks, SC-003 requires explicit `from_block` and `to_block` arguments and is not called automatically. **Algorithm:** Build an adjacency list from all `Wire` declarations across all `SpecWiring` instances. Run BFS from `from_block`. Report whether `to_block` is visited. 
______________________________________________________________________ ### SC-004 -- Type Safety **Type:** Semantic (Layer 1) **Severity:** ERROR **Operates on:** GDSSpec **Property Statement:** For every wiring `w` in the spec and every wire `wire in w.wires`: ``` wire.space != "" IMPLIES wire.space in spec.spaces ``` Every non-empty space reference on a wire must resolve to a registered Space in the specification. **Invariant Enforced:** Referential integrity of space declarations on wiring channels. Spaces define the typed data domains that signals carry between blocks. An unregistered space means the data channel is undefined -- the system references a type that does not exist. This is necessary for the spec to be self-consistent and for downstream tools (simulation, analysis) that rely on space definitions. **Failure Semantics:** An unregistered space reference means the wire's data domain is undefined. Downstream consumers (simulation bridges, OWL export) cannot resolve the channel type. This is a hard specification error. **Soundness Conditions:** A pass guarantees all wire space references resolve to registered Spaces. This check validates referential integrity only -- it does not verify that the Space's TypeDef fields are compatible with the connected blocks' port types (that would require cross-referencing the token-based and TypeDef-based type systems, which is not currently implemented). **Algorithm:** For each wire in each wiring, if `wire.space` is non-empty, check membership in `spec.spaces`. Report any unregistered reference. ______________________________________________________________________ ### SC-005 -- Parameter References **Type:** Semantic (Layer 1) **Severity:** ERROR **Operates on:** GDSSpec **Property Statement:** Let `P_names = spec.parameter_schema.names()` be the set of registered parameter names. 
For every block `b in B` that implements `HasParams`: ``` {p : p in b.params_used} <= P_names ``` Every parameter referenced by a block must be registered in the spec's parameter schema. **Invariant Enforced:** Referential integrity of parameter declarations. Parameters (Theta) are structural metadata that parameterize block behavior. If a block declares that it uses a parameter but that parameter is not registered, downstream tools (PSUU, simulation) cannot resolve the reference. In the canonical form, parameters condition the transition function `f(x; theta)` -- an unresolved parameter means `theta` is partially undefined. **Failure Semantics:** An unresolved parameter reference means the block uses a parameter that does not exist in the spec. The parameter space Theta is incomplete. This is a hard specification error that will cause failures in any tool that tries to bind parameter values. **Soundness Conditions:** A pass guarantees all `params_used` entries resolve to registered `ParameterDef` objects. This does not validate that the parameter's TypeDef is compatible with how the block uses it (runtime concern, not structural). **Algorithm:** Retrieve registered parameter names from `spec.parameter_schema`. For each block implementing `HasParams`, check that every entry in `params_used` is in the registered set. ______________________________________________________________________ ### SC-006 -- Canonical Wellformedness (f) **Type:** Semantic (Layer 1) **Severity:** WARNING **Operates on:** GDSSpec **Property Statement:** Let `canonical = project_canonical(spec)`. Then: ``` |canonical.mechanism_blocks| >= 1 ``` The state transition function `f` in the canonical decomposition `h = f . g` must contain at least one mechanism. **Invariant Enforced:** Non-degeneracy of the state transition. In the canonical form `h = f . g`, `f: X -> X` is the state transition function implemented by mechanisms. 
If there are no mechanisms, `f` is the empty function -- the system has no state dynamics. The canonical form degenerates to `h = g` (pure observation with no state update). **Failure Semantics:** An empty `f` means the system cannot update state. This may be intentional for pure policy compositions or game-theoretic specifications that model strategy selection without state dynamics. WARNING severity reflects this ambiguity. **Soundness Conditions:** A pass guarantees at least one mechanism exists. This does not validate that the mechanisms form a coherent transition function (SC-001 and SC-002 address coverage and determinism). **Algorithm:** Call `project_canonical(spec)` and check whether `mechanism_blocks` is non-empty. ______________________________________________________________________ ### SC-007 -- Canonical Wellformedness (X) **Type:** Semantic (Layer 1) **Severity:** WARNING **Operates on:** GDSSpec **Property Statement:** Let `canonical = project_canonical(spec)`. Then: ``` |canonical.state_variables| >= 1 ``` The state space X must contain at least one variable. **Invariant Enforced:** Non-degeneracy of the state space. In the canonical form `h = f . g`, the domain and codomain of `f` is the state space `X = PRODUCT(e.variables : e in E)`. If X is empty (no entities with variables), the canonical form has no state to transition over. The system is stateless. **Failure Semantics:** An empty X means there is no state for `f` to act on. Like SC-006, this may be intentional for stateless compositions. WARNING severity reflects this ambiguity. **Soundness Conditions:** A pass guarantees at least one state variable exists in the canonical projection. This does not validate that the state space is well-typed (Space and TypeDef consistency is a separate concern). **Algorithm:** Call `project_canonical(spec)` and check whether `state_variables` is non-empty. Note: SC-006 and SC-007 are both produced by the same function `check_canonical_wellformedness()`. 
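Both non-degeneracy findings can be sketched as one function, mirroring the shared implementation noted above. The name and list-based inputs here are illustrative, not the real `check_canonical_wellformedness()` signature:

```python
def canonical_wellformedness_sketch(mechanism_blocks, state_variables):
    """SC-006/SC-007 sketch: both checks emit WARNINGs, not ERRORs,
    because an empty f or an empty X may be intentional (pure policy
    compositions, stateless game-theoretic specs)."""
    findings = []
    if not mechanism_blocks:
        findings.append(("SC-006", "WARNING",
                         "no mechanisms: f is empty, h degenerates to g"))
    if not state_variables:
        findings.append(("SC-007", "WARNING",
                         "no state variables: state space X is empty"))
    return findings
```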
______________________________________________________________________ ### SC-008 -- Admissibility References **Type:** Semantic (Layer 1) **Severity:** ERROR **Operates on:** GDSSpec **Property Statement:** For every registered `AdmissibleInputConstraint` `ac`: ``` (1) ac.boundary_block in spec.blocks (2) spec.blocks[ac.boundary_block] is BoundaryAction (3) For all (e, v) in ac.depends_on: e in spec.entities AND v in spec.entities[e].variables ``` Every admissibility constraint must reference an existing BoundaryAction and valid (entity, variable) pairs. **Invariant Enforced:** Referential integrity of admissibility declarations. Admissibility constraints define the input space restrictions on BoundaryActions -- they specify which exogenous inputs are admissible given the current state. An invalid reference (non-existent block, wrong block type, non-existent entity or variable) means the constraint cannot be evaluated. In the canonical form, admissibility constrains the domain of `g` (the observation/policy map that includes boundary inputs). **Failure Semantics:** An invalid reference means the admissibility constraint is structurally broken. It references a block that does not exist, is not a BoundaryAction, or depends on state variables that are not declared. Downstream tools that evaluate admissibility (simulation, analysis) will fail. **Soundness Conditions:** A pass guarantees all structural references in admissibility constraints are valid. This does not verify that the constraint predicate is logically satisfiable or that the referenced BoundaryAction's interface is compatible with the constraint -- those are runtime concerns. **Algorithm:** For each `AdmissibleInputConstraint`, check: (1) `boundary_block` exists in `spec.blocks`; (2) it is a `BoundaryAction` instance; (3) each `(entity, var)` in `depends_on` resolves to a registered entity with that variable. 
______________________________________________________________________ ### SC-009 -- Transition Reads **Type:** Semantic (Layer 1) **Severity:** ERROR **Operates on:** GDSSpec **Property Statement:** For every registered `TransitionSignature` `ts`: ``` (1) ts.mechanism in spec.blocks (2) spec.blocks[ts.mechanism] is Mechanism (3) For all (e, v) in ts.reads: e in spec.entities AND v in spec.entities[e].variables (4) For all b in ts.depends_on_blocks: b in spec.blocks ``` Every transition signature must reference an existing Mechanism, valid (entity, variable) read pairs, and valid block dependencies. **Invariant Enforced:** Referential integrity of transition signature declarations. Transition signatures describe the read dependencies of each mechanism -- which state variables it reads and which blocks it depends on. In the canonical form, this metadata describes the input dependencies of the state transition function `f`. An invalid reference means the dependency graph of `f` is structurally broken. **Failure Semantics:** An invalid reference means the transition signature points to non-existent components. The mechanism may not exist, the read variables may not be declared, or the dependency blocks may not be registered. Downstream tools that use transition signatures for dependency analysis (reachability, causal ordering) will produce incorrect results. **Soundness Conditions:** A pass guarantees all structural references in transition signatures are valid. This does not verify that the declared reads are consistent with the mechanism's actual runtime behavior or that the dependency graph is complete. **Algorithm:** For each `TransitionSignature`, check: (1) `mechanism` exists in `spec.blocks`; (2) it is a `Mechanism` instance; (3) each `(entity, var)` in `reads` resolves to a registered entity with that variable; (4) each block in `depends_on_blocks` exists in `spec.blocks`. 
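The referential-integrity checks (SC-008 and SC-009 share the same shape) reduce to dictionary lookups over the registered blocks and entities. A minimal sketch of the SC-009 algorithm, using toy dict-based spec data rather than the gds-framework classes:

```python
# Illustrative sketch of SC-009: every TransitionSignature must name a
# registered Mechanism, valid (entity, variable) read pairs, and registered
# dependency blocks. Spec layout and names are assumptions for the example.

def check_transition_reads(spec):
    """Collect SC-009 errors for every registered transition signature."""
    errors = []
    for ts in spec["transition_signatures"]:
        mech = ts["mechanism"]
        if mech not in spec["blocks"]:
            errors.append(f"SC-009: mechanism {mech!r} is not registered")
        elif spec["blocks"][mech] != "Mechanism":
            errors.append(f"SC-009: {mech!r} is not a Mechanism")
        for entity, var in ts["reads"]:
            if var not in spec["entities"].get(entity, ()):
                errors.append(f"SC-009: unknown read ({entity!r}, {var!r})")
        for dep in ts["depends_on_blocks"]:
            if dep not in spec["blocks"]:
                errors.append(f"SC-009: unknown dependency block {dep!r}")
    return errors

spec = {
    "blocks": {"Heater": "Mechanism", "PID": "Policy"},
    "entities": {"Room": ("temperature",)},
    "transition_signatures": [
        {"mechanism": "Heater", "reads": [("Room", "temperature")],
         "depends_on_blocks": ["PID"]},
    ],
}
assert check_transition_reads(spec) == []          # all references resolve
spec["transition_signatures"][0]["mechanism"] = "Ghost"
assert len(check_transition_reads(spec)) == 1      # dangling mechanism reference
```

Replacing `TransitionSignature` with `AdmissibleInputConstraint` and `Mechanism` with `BoundaryAction` gives the analogous SC-008 check.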
______________________________________________________________________ ## What the Checks Do NOT Prove The 15 checks above validate *structural well-formedness* of the specification. They ensure the model is internally consistent, the composition algebra is well-typed, and the canonical form `h = f . g` is non-degenerate. They do NOT prove any of the following: **Behavioral correctness.** The checks do not verify that the system *does the right thing*. A thermostat model can pass all 15 checks and still have inverted control logic (heating when it should cool). Correctness requires behavioral specifications (pre/post conditions, temporal logic) that are outside the scope of structural verification. **Safety properties.** The checks do not prove that the system avoids bad states. "The temperature never exceeds 100C" is a safety property that requires invariant analysis over the state space, not structural checks on the specification graph. **Liveness properties.** The checks do not prove that the system eventually reaches a desired state. "The system eventually stabilizes" is a liveness property that requires temporal logic or Lyapunov analysis. **Stability.** The checks do not analyze the dynamical stability of the system. Eigenvalue analysis, Lyapunov functions, and bifurcation analysis are the domain of `gds-continuous` and `gds-symbolic`, not structural verification. **Completeness of the specification itself.** SC-001 checks that every variable has a mechanism, but it does not check whether the specification captures all relevant aspects of the real system. The map is not the territory. **Semantic equivalence.** The checks do not prove that two specifications are equivalent, or that a specification correctly implements a higher-level requirement. Bisimulation and refinement checking are research-level concerns (see `research/verification-plan.md`). 
This aligns with the **verification humility doctrine**: structural verification establishes necessary conditions for a well-formed specification, but it does not establish sufficient conditions for correctness. The checks are a foundation for further analysis (simulation, formal methods, domain review), not a substitute for it. # Controller-Plant Duality and Perspective Inversion > Design note documenting the formal duality between the controller and plant perspectives at every `>>` composition boundary, and the role of `ControlAction` as the output map `y = C(x, d)`. ______________________________________________________________________ ## The Duality Statement At every `>>` (sequential composition) boundary, one system's **ControlAction output** is isomorphic to the next system's **BoundaryAction input**: ``` System A System B ┌─────────────────────┐ ┌─────────────────────┐ │ BoundaryAction (z) │ │ BoundaryAction (z) │ │ Policy d = g(x,z) │ │ Policy d = g(x,z) │ │ Mechanism x'=f(x,d)│ │ Mechanism x'=f(x,d)│ │ ControlAction y=C │ ──(y)──> │ │ └─────────────────────┘ └─────────────────────┘ output y receives y as z ``` What System A calls its **output** (`ControlAction.forward_out`), System B receives as its **exogenous input** (`BoundaryAction.forward_out` into the system). The signal is the same; only the perspective changes. This duality is fundamental to compositional modeling: every system boundary is simultaneously an output boundary (from inside) and an input boundary (from outside). 
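The duality statement above is ultimately a claim about tokens: what A emits as `ControlAction.forward_out` is, character for character, what B consumes as its exogenous input. A tiny sketch (illustrative only, not the gds-framework API) of the token overlap that a `>>` boundary requires:

```python
# Illustrative sketch of the controller-plant duality at a ">>" boundary:
# System A's ControlAction output tokens are the same tokens System B
# receives as BoundaryAction input -- only the perspective changes.

def stack_boundary(a_outputs, b_inputs):
    """Return tokens that cross the '>>' boundary (A's y becomes B's z)."""
    shared = [t for t in a_outputs if t in b_inputs]
    if not shared:
        raise ValueError("no token overlap: '>>' composition has nothing to wire")
    return shared

room_outputs = ["Room Temperature"]       # System A: ControlAction output y
optimizer_inputs = ["Room Temperature"]   # System B: exogenous input z
assert stack_boundary(room_outputs, optimizer_inputs) == ["Room Temperature"]
```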
______________________________________________________________________ ## Role Naming: Two Perspectives | Role | Inside (plant) perspective | Outside (controller) perspective | | ------------------ | ------------------------------------------ | ------------------------------------------------ | | **BoundaryAction** | Exogenous input z the system conditions on | Output from a previous system acting on this one | | **Policy** | Decision d = g(x, z) | Internal -- opaque to outside | | **Mechanism** | State update x' = f(x, d) | Internal dynamics -- opaque to outside | | **ControlAction** | Output map y = C(x, d) | Action this system exerts on the next system | The inside perspective sees `BoundaryAction` as "things that happen to me" and `ControlAction` as "what I emit." The outside perspective sees the same pair as "what I do to the next system" and "what the previous system did to me." ______________________________________________________________________ ## Port Direction: Two Perspectives | Port direction | Inside perspective | Outside perspective | | -------------- | --------------------------------- | ----------------------------------- | | `forward_out` | What this system emits (y) | Control action on the next block | | `forward_in` | What this system receives (z) | Output from the previous block | | `backward_out` | Cost/utility this system produces | Constraint signal to previous block | | `backward_in` | Cost/utility this system receives | Constraint signal from next block | The forward channel carries state-dependent signals. The backward channel carries evaluative or constraint signals (costs, utilities, feasibility). Both channels exhibit the same duality at `>>` boundaries. 
______________________________________________________________________ ## Canonical Form with Output Map The full canonical form with the output map is: ``` d = g(x, z) -- input map (Policy) x' = f(x, d) -- state transition (Mechanism) y = C(x, d) -- output map (ControlAction) ``` Where: - **X** = state space (Entity variables) - **Z** = exogenous signal space (BoundaryAction outputs) - **D** = decision space (Policy outputs) - **Y** = output space (ControlAction outputs) - **h = f . g** produces the state transition `x' = f(x, g(x, z))` - **C** produces the observable output `y` that crosses the system boundary The state transition `h` and the output map `C` are independent: `h` determines what the system *becomes*, while `C` determines what the system *emits*. Both depend on the current state and decision, but they produce different things (next state vs. output signal). ______________________________________________________________________ ## Example: Thermostat from Both Perspectives ### Inside (plant) perspective -- "I am the room" ``` # What happens TO the room sensor = BoundaryAction("Sensor", forward_out=["Temperature"]) # z: sensed temperature setpoint = BoundaryAction("Setpoint", forward_out=["Target Temp"]) # z: desired temperature # Room's internal logic controller = Policy("PID Controller", forward_in=["Temperature", "Target Temp"], forward_out=["Heater Command"]) # d = g(x, z) heater = Mechanism("Heater Dynamics", forward_in=["Heater Command"], updates=[("Room", "temperature")]) # x' = f(x, d) # What the room EMITS to the next system output = ControlAction("Temperature Output", forward_in=["Temperature"], forward_out=["Room Temperature"]) # y = C(x, d) ``` ### Outside (controller) perspective -- "I am the building manager" From outside, the room is a single compositional unit: - It receives `Sensor` and `Setpoint` signals (I provide these) - It emits `Room Temperature` (I observe this) - Its internal PID controller and heater dynamics are opaque to me 
When I compose the room with an HVAC optimizer via `>>`, the room's `ControlAction` output (`Room Temperature`) becomes the optimizer's `BoundaryAction` input. ______________________________________________________________________ ## Connection to SC-010: Why C Must Not Feed g SC-010 enforces that ControlAction outputs do not route back to Policy or BoundaryAction blocks within the same system. The formal reason: 1. **g** (the input map) transforms `(x, z)` into decisions `d` 1. **C** (the output map) transforms `(x, d)` into observable output `y` 1. If `y` feeds back into `g`, then `g` depends on `C` which depends on `g` -- creating an algebraic loop within a single evaluation This conflates the output map with the input map, breaking the canonical separation `h = f . g`. The output map C is meant to produce signals that cross the system boundary at `>>` composition points, not signals that circulate internally. ControlAction outputs **may** feed Mechanism blocks (the state dynamics `f` can legitimately depend on output observations). They **must not** feed Policy or BoundaryAction blocks (the input map `g`). If internal feedback is needed, use Policy-to-Policy wiring within the `g` pathway, or use `.loop()` / `.feedback()` for temporal/backward recurrence. ______________________________________________________________________ ## Connection to Future Work - **T0-4 (Temporal agnosticism)**: The duality holds regardless of whether composition is discrete-time, continuous-time, or event-driven. The `>>` boundary is a structural fact, not a temporal one. - **T1-3 (Disturbance inputs)**: A future extension may distinguish between controllable exogenous inputs (reference signals) and uncontrollable exogenous inputs (disturbances). Both enter through BoundaryAction, but their controllability status differs. This refinement would add metadata to BoundaryAction without changing the duality structure.
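Putting the pieces together, the canonical triple `d = g(x, z)`, `x' = f(x, d)`, `y = C(x, d)` and the SC-010 routing rule can be sketched in plain Python. This is an illustrative toy, not the gds-framework runtime; the bang-bang policy and heater constants are assumptions for the example:

```python
# Minimal runnable sketch of the canonical triple from this note.
# Note that y is computed from (x, d) and crosses the boundary -- it is
# never fed back into g, which is exactly what SC-010 enforces.

def g(x, z):
    """Input map (Policy): heater on iff the room is below target."""
    return 1.0 if x["temperature"] < z["target"] else 0.0

def f(x, d):
    """State transition (Mechanism): heating minus a fixed ambient loss."""
    return {"temperature": x["temperature"] + 0.5 * d - 0.1}

def C(x, d):
    """Output map (ControlAction): emit the observable room temperature."""
    return x["temperature"]

x, z = {"temperature": 18.0}, {"target": 21.0}
d = g(x, z)          # decision from current state and exogenous input
y = C(x, d)          # output y crosses the '>>' boundary
x = f(x, d)          # h = f . g: next state
assert (d, y) == (1.0, 18.0) and abs(x["temperature"] - 18.4) < 1e-9
```

The sketch makes the independence of `h` and `C` concrete: `f(x, g(x, z))` determines what the system becomes, while `C(x, d)` determines what it emits.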
______________________________________________________________________ ## Summary The controller-plant duality is not a design choice -- it is an emergent property of compositional systems. Any system that can be composed with `>>` necessarily has an inside (what it does to itself) and an outside (what it does to its neighbors). The four GDS roles map cleanly to both perspectives, and the `ControlAction` role is the formal carrier of the output map that bridges them. # Disturbance Formalization: The U_c / W Partition ## Motivation The standard GDS canonical form treats all BoundaryAction outputs as a single exogenous signal space Z: ``` g : X x Z -> D (policy / input map) f : X x D -> X (state transition) h = f . g (composition) ``` In practice, not all exogenous inputs pass through the decision layer g. Some inputs -- wind gusts, sensor noise, market shocks -- enter the state dynamics f directly, bypassing any policy or control logic. These are **disturbances**. ## The Partition We split Z into two disjoint subspaces: | Space | Name | Description | | ------- | ----------------- | ---------------------------------------------------- | | **U_c** | Controlled inputs | Exogenous signals that feed the policy map g | | **W** | Disturbances | Exogenous signals that bypass g and enter f directly | The extended canonical form becomes: ``` g : X x U_c -> D (policy / input map) f : X x D x W -> X (state transition with disturbances) h = f(-, g(-, -), -) (composition) ``` When W is empty, this reduces to the standard form. ## Tagging Convention Disturbance inputs are declared by tagging a `BoundaryAction` block with `role="disturbance"`: ``` import gds wind = gds.BoundaryAction( name="Wind Gust", interface=gds.interface(forward_out=["Force"]), tags={"role": "disturbance"}, ) ``` The tag is semantic metadata -- it does not change block construction, composition, or compilation. It only affects: 1. 
**Canonical projection**: `project_canonical()` partitions BoundaryAction ports into `input_ports` (U_c) and `disturbance_ports` (W). 1. **DST-001 verification**: checks that disturbance-tagged blocks are not wired to Policy blocks. ## DST-001: Disturbance Routing Check **Invariant**: No component of W appears in the domain of g. A disturbance-tagged BoundaryAction must route to Mechanism blocks (the f pathway), never to Policy blocks (the g pathway). Routing a disturbance through Policy would mean the controller can observe and act on it, which contradicts its classification as a disturbance. | Wiring target | Allowed? | Rationale | | -------------- | ------------------ | --------------------------------------- | | Mechanism | Yes | Disturbance enters f directly | | Policy | No (DST-001 ERROR) | Would place W in domain of g | | ControlAction | Yes | Output map C may depend on disturbances | | BoundaryAction | N/A | BoundaryActions have no forward_in | ## Modeling Guidelines ### When to tag as disturbance Tag a BoundaryAction as a disturbance when: - The input represents noise, perturbation, or environmental forcing - No controller in the system observes or reacts to this specific input - The input affects state dynamics directly (e.g., wind on a drone, noise on a sensor reading, demand shock on inventory) ### When NOT to tag as disturbance Do not tag as disturbance when: - The input is observed by a sensor and fed to a controller - The input represents a setpoint, reference signal, or user command - A policy block explicitly takes this input as part of its decision ### Example: Thermostat with Wind Disturbance ``` import gds # Controlled input -- feeds the policy setpoint = gds.BoundaryAction( name="Setpoint", interface=gds.interface(forward_out=["Target Temperature"]), ) # Disturbance -- bypasses policy, enters mechanism directly wind = gds.BoundaryAction( name="Wind", interface=gds.interface(forward_out=["Heat Loss"]), tags={"role": "disturbance"}, ) controller 
= gds.Policy( name="PID Controller", interface=gds.interface( forward_in=["Target Temperature"], forward_out=["Heater Command"], ), ) heater = gds.Mechanism( name="Room Dynamics", interface=gds.interface(forward_in=["Heater Command", "Heat Loss"]), updates=[("Room", "temperature")], ) ``` The canonical projection will show: - `input_ports`: `[("Setpoint", "Target Temperature")]` (U_c) - `disturbance_ports`: `[("Wind", "Heat Loss")]` (W) - `formula()`: `h : X -> X (h = f . g); f : X x D x W -> X` ## Relationship to Existing Theory The U_c / W partition is a **semantic layer extension** -- it does not change the underlying composition algebra or compilation pipeline. The partition is derived purely from tags at canonical projection time. This aligns with control theory's standard plant model: ``` x_{t+1} = f(x_t, u_t, w_t) ``` where u_t is the control input and w_t is the disturbance. The GDS framework makes this distinction explicit and verifiable through DST-001. # Temporal Agnosticism of the Core Algebra ## Invariant Statement > The composition algebra of gds-framework is temporally agnostic. The flag `is_temporal=True` on a wiring asserts structural recurrence and nothing else. No time model is implied or required by the core. The canonical form h = f . g is an atemporal map. Time models are DSL-layer declarations. ## Three-Layer Temporal Stack ``` Layer 0 — gds-framework (core) is_temporal=True encodes structural recurrence only: "a temporal boundary exists here." No commitment to what model of time governs that boundary. h = f . g is a single atemporal map application. The algebra is silent on discrete steps, continuous flow, and events.
Layer 1 — DSL (ExecutionContract) The DSL declares what "temporal boundary" means for its domain: discrete — boundary is a discrete index step continuous — boundary is a continuous-time interval event — boundary is triggered by a discrete event atemporal — boundary carries no time semantics (e.g., OGS iterated games) This is the DSL author's commitment, not the core's. Layer 2 — Simulation (SolverInterface / runner) Required only to execute a specification. A solver or runner instantiates the time model concretely. The choice of solver (RK4, event queue, discrete stepper) is simulation-layer, not specification-layer. Specification and verification are fully valid without a solver. ``` ## Proof by Inspection of Composition Operators None of the four composition operators introduce a time model. They operate on structural interfaces only. **StackComposition (`>>`)** chains `forward_out` to `forward_in` by token overlap. No temporal concept is referenced. The validator checks token sets, not time indices. **ParallelComposition (`|`)** concatenates interfaces. No validation is performed between left and right. Time is not mentioned. **FeedbackLoop (`.feedback()`)** routes `backward_out` to `backward_in` within a single evaluation. The wiring direction is `CONTRAVARIANT`. No time model is assumed -- "within a single evaluation" means "before the current map application completes," not "within a discrete timestep." **TemporalLoop (`.loop()`)** routes `forward_out` to `forward_in` across evaluations. The wiring direction must be `COVARIANT`. The `is_temporal` flag is set on the resulting `WiringIR` edges. This flag tells the acyclicity checker (G-006) to exclude those edges from the covariant DAG constraint. It carries no semantic content about what kind of time governs the recurrence -- it is a structural recurrence marker only. The `is_temporal` flag exists so that G-006 can distinguish "this edge closes a recurrence" from "this edge creates an illegal cycle." 
That is its entire semantics at Layer 0. ## The OGS Existence Proof OGS (Open Game Specification) iterated games compile and verify correctly with temporal wirings but no time model. The canonical form degenerates to h = g with X = empty, f = empty. This is not a special case -- it is an existence proof that the algebra is genuinely time-agnostic. An OGS `IteratedGame` wraps a `OneShot` game with `.loop()` temporal wirings that carry strategy and payoff signals across rounds. The word "round" here is a game-theoretic concept, not a time concept. No discrete-time index, no continuous-time interval, and no event trigger is declared at the core level. The temporal wirings assert only: "there is structural recurrence here." This demonstrates that the core algebra supports recurrence patterns that have nothing to do with physical time. The `.loop()` operator is about structural feedback topology, not about clocks. ## ExecutionContract Time Model Table | time_domain | Meaning at DSL layer | Example DSL | | ----------- | ----------------------------------------------- | -------------------------- | | discrete | Temporal boundary is a discrete index step | gds-stockflow, gds-control | | continuous | Temporal boundary is a continuous-time interval | gds-continuous | | event | Temporal boundary is triggered by an event | (future) | | atemporal | Temporal boundary carries no time semantics | gds-games (OGS) | ## Vocabulary Guide When writing documentation for the core framework (Layer 0), use temporally neutral language: | Avoid (core docs) | Prefer | | ------------------------------- | ---------------------------------------------------- | | "within a single timestep" | "within a single evaluation" | | "across timesteps" | "across temporal boundaries" | | "next step" / "next timestep" | "subsequent application" or "recurrence" | | "time t" / "t+1" subscripts | "evaluation k" / "k+1" (or omit indices) | | "trajectory x_0, x_1, ..."
| "sequence of states under repeated application of h" | | "iteration" (implying counting) | "recurrence" or "repeated application" | DSL-layer exception DSL-layer docs (gds-stockflow, gds-control, gds-continuous, gds-games) **may** use time-specific language because they have declared a time model via their `ExecutionContract`. The vocabulary guide applies only to core framework documentation. # Execution Semantics ## The Three-Layer Temporal Stack GDS separates temporal concerns into three layers: | Layer | Responsibility | Where | | ---------------------- | ----------------------------------------- | ------------------------------------ | | **Core algebra** | Temporally agnostic composition operators | `gds-framework` blocks, compiler, IR | | **DSL declaration** | Declares what "time" means for the domain | `ExecutionContract` on `GDSSpec` | | **Simulation runtime** | Advances state through time | `gds-sim`, `gds-continuous` | The core algebra (Layer 0) carries no intrinsic notion of time. `TemporalLoop` names a structural boundary between evaluation steps, but the algebra does not define what "step" means -- that is the DSL's job. ## ExecutionContract `ExecutionContract` is a frozen dataclass attached to `GDSSpec` as an optional field. 
It declares the time model that the DSL commits to: ``` from gds.execution import ExecutionContract contract = ExecutionContract( time_domain="discrete", # discrete | continuous | event | atemporal synchrony="synchronous", # synchronous | asynchronous (discrete only) observation_delay=0, # 0 = Moore, 1 = one-step delay (discrete only) update_ordering="Moore", # Moore | Mealy (discrete only) ) ``` ### Fields | Field | Type | Default | Meaning | | ------------------- | ------- | --------------- | ----------------------------------------------- | | `time_domain` | Literal | *required* | What kind of temporal boundary the DSL declares | | `synchrony` | Literal | `"synchronous"` | For discrete only: sync or async state updates | | `observation_delay` | int | `0` | For discrete only: observation delay in steps | | `update_ordering` | Literal | `"Moore"` | For discrete only: Moore or Mealy semantics | **Validation:** Fields `synchrony`, `observation_delay`, and `update_ordering` are only meaningful for `time_domain="discrete"`. Setting non-default values with any other time domain raises `ValueError`. ### Compatibility Two contracts are compatible (can be composed) when: - They share the same `time_domain`, **or** - At least one is `atemporal` (universal donor -- composes with anything) ``` discrete = ExecutionContract(time_domain="discrete") atemporal = ExecutionContract(time_domain="atemporal") assert discrete.is_compatible_with(atemporal) # True assert atemporal.is_compatible_with(discrete) # True ``` ### Optional Attachment A `GDSSpec` without an `ExecutionContract` is valid for all structural and semantic verification. The contract is required only when connecting a spec to a simulation engine. ``` spec = GDSSpec(name="my-system") # ... register types, blocks, wirings ... 
spec.execution_contract = ExecutionContract(time_domain="discrete") ``` ## Moore Discrete-Time Semantics The default discrete contract (`synchronous / 0 / Moore`) corresponds to the classical Moore machine: - **Synchronous:** All state variables update simultaneously at each step - **Observation delay = 0:** Output depends on current state only (not inputs) - **Moore ordering:** Observation happens before decision within each step This is the natural semantics for stock-flow models, control systems, and state machines -- all DSLs that use `.loop()` for temporal recurrence. ## DSL Contract Mapping | DSL | time_domain | synchrony | update_ordering | Notes | | -------------------------------------------------- | ----------- | ----------- | --------------- | ------------------------------ | | gds-stockflow | discrete | synchronous | Moore | Accumulation semantics | | gds-control | discrete | synchronous | Moore | Sensor-controller-plant | | gds-games | atemporal | -- | -- | Round iteration, no time model | | gds-software (state machine) | discrete | synchronous | Moore | State transitions | | gds-software (DFD, C4, component, ERD, dependency) | atemporal | -- | -- | Structural diagrams | | gds-business (CLD) | discrete | synchronous | Moore | Causal feedback | | gds-business (SCN) | discrete | synchronous | Moore | Supply chain flows | | gds-business (VSM) | atemporal | -- | -- | Value stream mapping | ## Verification `SC-011` (`check_execution_contract_compatibility`) validates the contract: - **No contract:** INFO -- spec is valid for structural verification only - **Valid contract:** INFO -- reports the declared time model - **Invalid contract:** ERROR -- inconsistent field values (defensive, normally caught by `__post_init__`) ## What Is Not Covered - **Continuous-time (ODE):** `time_domain="continuous"` is defined but no DSL emits it yet. This will connect to `gds-continuous` when the spec-to-sim bridge is extended (T2-4). 
- **Event-driven:** `time_domain="event"` is reserved for future event-based DSLs. - **Asynchronous updates:** `synchrony="asynchronous"` is defined but no DSL emits it yet. This enables agent-based models where entities update at different rates. - **Mealy semantics:** `update_ordering="Mealy"` is defined for systems where output depends on both state and current input. # Verification Check Traceability Matrix This document maps formal verification requirements (from [Check Specifications](https://blockscience.github.io/gds-core/framework/design/check-specifications/index.md)) to their test implementations. ## Matrix | Requirement | Test File | Test Class/Method | Coverage | | ----------- | -------------------- | --------------------------- | --------------------------------------- | | G-001 | test_verification.py | TestG001 | Domain/codomain matching pass/fail | | G-002 | test_verification.py | TestG002 | Signature completeness pass/fail | | G-003 | test_verification.py | TestG003 | Direction consistency pass/fail | | G-004 | test_verification.py | TestG004 | Dangling wirings pass/fail | | G-005 | test_verification.py | TestG005 | Sequential type compatibility pass/fail | | G-006 | test_verification.py | TestG006 | Covariant acyclicity pass/fail | | SC-001 | test_spec_checks.py | TestCompleteness | Orphan variable detection | | SC-002 | test_spec_checks.py | TestDeterminism | Write conflict detection | | SC-003 | test_spec_checks.py | TestReachability | Signal path queries | | SC-004 | test_spec_checks.py | TestTypeSafety | Wire-space consistency | | SC-005 | test_spec_checks.py | TestParameterReferences | Parameter resolution pass/fail | | SC-006 | test_spec_checks.py | TestCanonicalWellformedness | Non-empty f (mechanism exists) | | SC-007 | test_spec_checks.py | TestCanonicalWellformedness | Non-empty X (entity exists) | | SC-008 | test_spec_checks.py | TestAdmissibilityReferences | Constraint reference validation | | SC-009 | test_spec_checks.py | 
TestTransitionReads | Transition signature validation | ## Running Requirement-Traced Tests ``` # Run all requirement-traced tests uv run --package gds-framework pytest packages/gds-framework/tests -v -m "requirement" # Run tests for a specific requirement uv run --package gds-framework pytest packages/gds-framework/tests -v -m "requirement('G-006')" ``` ## Coverage Gaps As of this document's creation, all 15 core checks have at least one positive and one negative test case. Domain-specific checks (CS-xxx, SF-xxx, T-xxx, S-xxx, etc.) are tested in their respective packages but are not yet covered by this traceability matrix. # gds-core: A Clean Python Package for Typed GDS Specifications > Design document synthesizing Generalized Dynamical Systems theory, MSML, and BDP-lib into a minimal, composable Python package. ______________________________________________________________________ ## 1. What is a Generalized Dynamical System? ### 1.1 The Core Formalism A Generalized Dynamical System (GDS), as formalized by Roxin in the 1960s and extended by Zargham & Shorish (2022), is a pair **{h, X}** where: - **X** is the **state space** — but unlike classical dynamical systems, X can be *any* data structure, not just ℝⁿ. It can be records, graphs, token balances, governance configurations, agent populations — anything. - **h** is a **transition mapping** X → X, where the space of such mappings is **closed under composition**. The key extension over classical dynamical systems: GDS doesn't assume vector spaces. 
The mapping h can incorporate: - **Admissible inputs U** — what actions are currently allowed given the state - **Constraints** — invariants that must hold across transitions - **Multiple agents** with different action sets - **Policies** that select from feasible actions From the paper: *"a data structure is mapped to itself and the space of such mappings is closed under composition."* ### 1.2 Why GDS Matters GDS nests several well-known modeling frameworks into one: - **Optimal control** — state + control inputs + constraints - **System dynamics** — stocks, flows, feedback loops - **Agent-based models** — heterogeneous actors with strategies - **Network dynamics** — topology-dependent state evolution The practical power: you can specify a complex socio-technical system — a DAO's treasury management, a bonding curve, an insurance contract — as a single formal object that admits questions about: - **Reachability** — "Can the system get to state Y from state X?" - **Admissibility** — "Is this action allowed given the current state?" - **Controllability** — "Can we steer the system toward desired outcomes?" ### 1.3 Block Diagram Representation The block diagram representation (from Zargham & Shorish's "Block Diagrams for Categorical Cybernetics") makes GDS implementable as software: - **Blocks** are typed functions: `domain → codomain` - **Spaces** are the types flowing between blocks - **Wiring** is composition: connecting block outputs to inputs The composed wiring *is* the transition function h. The spaces flowing through wires carry typed data that can be verified at specification time — before any code runs. This is the bridge from math to engineering. 
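The "blocks are typed functions, wiring is composition" idea can be made concrete in a few lines. A hypothetical sketch -- the `Block` class, `wire` function, and space names are assumptions for illustration, not the gds-core API:

```python
# Illustrative sketch: blocks are typed functions (domain -> codomain),
# and sequential wiring is function composition that is type-checked at
# specification time, before any code runs.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Block:
    domain: str                  # input space name
    codomain: str                # output space name
    fn: Callable[[Any], Any]

def wire(a: "Block", b: "Block") -> "Block":
    """Sequential composition b . a, rejected if the spaces do not match."""
    if a.codomain != b.domain:
        raise TypeError(f"cannot wire {a.codomain} -> {b.domain}")
    return Block(a.domain, b.codomain, lambda x: b.fn(a.fn(x)))

sense = Block("RoomState", "Temperature", lambda s: s["temp"])
decide = Block("Temperature", "HeaterCommand", lambda t: t < 20.0)
h = wire(sense, decide)          # the composed wiring IS the transition map
assert h.fn({"temp": 18.0}) is True
```

Mismatched spaces fail at composition time (`wire` raises `TypeError`), which is the spec-time guarantee the section describes.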
### 1.4 GDS Concept → Software Mapping | GDS Math | Software Concept | Python Class | | ---------------------------- | ------------------------------------------- | ----------------------------------- | | X (state space) | Product of entity states | `Entity` + `StateVariable` | | U (admissible inputs) | Exogenous signals | `BoundaryAction` | | h: X → X (transition) | Composed block wiring | `Wiring` | | Action spaces | Typed data flowing between blocks | `Space` | | Type structure | Constrained Python type | `TypeDef` | | Mechanism | State-writing function | `Mechanism` | | Policy / Decision | Signal-routing logic | `Policy` | | Constraints | Invariants on blocks | `Block.constraints` | | Reachability | Transitive wiring closure | `SpecVerifier` | | Admissibility | Domain satisfaction check | `SpecVerifier` | | Attainability correspondence | Possible next-states from current state | `SpecVerifier.check_reachability()` | | Configuration space | State subspace satisfying conservation laws | `SpecVerifier.check_conservation()` | ______________________________________________________________________ ## 2. Prior Art: MSML & BDP-lib ### 2.1 MSML (math-spec-mapping) *888 commits, 58 Python files, JSON-spec-first approach.* MSML was designed as an end-to-end specification-to-simulation tool — and in that role it delivered considerable value: 1. JSON-based spec → parse → validate → report pipeline gives trackability via git 1. Block subtypes (BoundaryAction, Policy, Mechanism, ControlAction) map cleanly to GDS roles 1. Transmission channels (action + state update) make data flow explicit 1. Obsidian report generation for stakeholder communication is genuinely useful 1. Parameter crawling — tracing which params affect which blocks — is a killer feature 1. 
Composite blocks (Stack, Parallel, Split) for wiring composition match GDS composition **Where our goals diverge:** MSML was built to serve a full pipeline from specification through rendering to cadCAD execution, which led to natural design choices for that use case — JSON-first authoring for language-agnostic specs, integrated Mermaid rendering, a central `MathSpec` coordinator. Our goal is different: a lightweight, composable library focused purely on typed specification and verification. This means we make different trade-offs: - **Python-native authoring** instead of JSON-first — optimizing for the Python developer workflow - **Runtime type constraints** instead of metadata labels — catching errors at spec-time - **Separated concerns** — spec, rendering, and execution as independent packages - **Formal verification** — completeness, determinism, and reachability checks that go beyond structural validation ### 2.2 BDP-lib (Block Diagram Protocol) *161 commits, JSON-schema protocol, language-agnostic design.* BDP introduced an elegant conceptual framework that we build on: 1. **Clean 2×2 conceptual framework** — Abstract/Concrete × Structure/Behavior: | | Abstract | Concrete | | ------------- | -------- | --------- | | **Structure** | Space | Wire | | **Behavior** | Block | Processor | 1. Space (abstract structure) vs Wire (concrete structure) distinction is the right ontology 1. Block (abstract behavior) vs Processor (concrete behavior) — templates vs instances 1. Protocol/client separation — schema is language-agnostic, implementations can vary 1. Validation rules: referential integrity, single-input ports, connectivity checks **Where our goals diverge:** BDP was designed as a general-purpose block diagram protocol — language-agnostic and domain-neutral. That generality is a strength for its intended purpose, but our needs are more specific. 
We need domain-aware primitives (state entities, parameters, GDS block roles) and semantic validation (type matching, reachability) that go beyond structural connectivity checks. We also prioritize Python-native ergonomics over protocol-level interoperability. ### 2.3 The Synthesis `gds-framework` builds on the strengths of both projects: **BDP's layered architecture** (abstract/concrete separation) combined with **MSML's domain knowledge** (GDS block roles, state entities, parameter tracking). We add **Python-native classes** with real type constraints, **bidirectional composition** from categorical cybernetics, and a **formal verification layer** — all in a focused library that separates specification from rendering and execution. ______________________________________________________________________ ## 3. Proposed Design: gds-core ### 3.1 Design Principles **1. Python-first, JSON-optional** Users define specs in Python with real types and IDE autocomplete. JSON serialization is an export format, not the authoring format. **2. Types that bite** `TypeDef` carries runtime constraints (not just labels). `Space.validate()` actually checks data against its schema. Errors are caught at spec-time, before simulation. **3. Spec ≠ Rendering ≠ Execution** gds-core is ONLY types, classes, and verification. No Mermaid. No cadCAD. No Obsidian. Those become separate packages that consume a `GDSSpec` object. **4. Composition is first-class** `StackWiring`, `ParallelWiring`, `SplitWiring` are explicit. Domain/codomain compatibility is checked at composition time, not after the fact. **5. Verification ladder** Level 1: structural (references exist). Level 2: type-flow (spaces match across wires). Level 3: semantic (conservation, determinism, reachability). Each level builds on the last. **6. Building on prior art** Adopt BDP's abstract/concrete × structure/behavior framing. Adopt MSML's GDS-specific block subtypes. Add Python-native authoring with runtime type constraints. 
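Principle 2 ("types that bite") can be shown in a condensed, standalone sketch. The `TypeDef` and `Space` below are stripped-down stand-ins for the full classes specified in Section 4, keeping only the constraint-checking essentials.

```python
# Condensed sketch of principle 2: types carry runtime constraints,
# and a space validates data against its schema at spec time,
# returning a list of error strings (empty list = valid). These are
# stripped-down stand-ins for the Section 4 TypeDef/Space classes.
from typing import Any, Callable, Dict, Optional

class TypeDef:
    def __init__(self, name: str, python_type: type,
                 constraint: Optional[Callable[[Any], bool]] = None):
        self.name = name
        self.python_type = python_type
        self.constraint = constraint

    def validate(self, value: Any) -> bool:
        # Check the Python type first, then the runtime constraint.
        return isinstance(value, self.python_type) and (
            self.constraint is None or self.constraint(value)
        )

class Space:
    def __init__(self, name: str, schema: Dict[str, TypeDef]):
        self.name = name
        self.schema = schema

    def validate(self, data: dict) -> list[str]:
        errors = []
        for field, typedef in self.schema.items():
            if field not in data:
                errors.append(f"Missing field: {field}")
            elif not typedef.validate(data[field]):
                errors.append(f"{field}: expected {typedef.name}, got {data[field]!r}")
        return errors

Probability = TypeDef("Probability", float, lambda x: 0.0 <= x <= 1.0)
Signal = Space("InfectionSignal", {"rate": Probability})

print(Signal.validate({"rate": 0.3}))  # valid: empty error list
print(Signal.validate({"rate": 1.7}))  # constraint violation caught at spec time
```

A label-only type system would accept `1.7` as a probability; a constrained one rejects it before any simulation runs. The full `Space.validate()` in Section 4.2 additionally flags unexpected fields.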
### 3.2 Package Structure ``` gds-core/ ├── gds_core/ │ ├── __init__.py # Public API │ ├── types.py # TypeDef, built-in types │ ├── spaces.py # Space, EMPTY, TERMINAL │ ├── blocks.py # Block, BoundaryAction, Policy, │ │ # Mechanism, ControlAction │ ├── state.py # Entity, StateVariable │ ├── wiring.py # Wire, Wiring, Stack/Parallel/Split │ ├── spec.py # GDSSpec (registration + basic validation) │ ├── verify.py # SpecVerifier (higher-order checks) │ ├── query.py # Dependency graph, reachability, impact analysis │ └── serialize.py # to/from JSON, to/from dict ├── tests/ │ ├── test_types.py │ ├── test_spaces.py │ ├── test_blocks.py │ ├── test_wiring.py │ ├── test_spec.py │ └── test_verify.py ├── examples/ │ ├── predator_prey.py │ ├── bonding_curve.py │ └── insurance_contract.py └── pyproject.toml ``` ### 3.3 Layer Separation | Layer | Package | Depends on | | ----------------- | ---------------------- | ------------------------- | | **Specification** | **gds-core (this)** | **Nothing (stdlib only)** | | Visualization | gds-viz (separate) | gds-core | | Simulation | gds-sim (separate) | gds-core | | Reports | gds-reports (separate) | gds-core, gds-viz | | cadCAD bridge | gds-cadcad (separate) | gds-core, cadcad | The spec layer has **zero dependencies**. This is a hard constraint. ______________________________________________________________________ ## 4. Full Type & Class API ### 4.1 types.py — TypeDef and Built-in Types ``` from typing import Any, Optional, Callable class TypeDef: """A named, constrained type — the atom of the type system. Used in spaces and state, it carries runtime-checkable constraints beyond metadata labels. 
""" def __init__( self, name: str, python_type: type, description: str = "", constraint: Optional[Callable[[Any], bool]] = None, units: Optional[str] = None, ): self.name = name self.python_type = python_type self.description = description self.constraint = constraint self.units = units def validate(self, value: Any) -> bool: """Check if a value satisfies this type definition.""" if not isinstance(value, self.python_type): return False if self.constraint and not self.constraint(value): return False return True def __repr__(self): return f"TypeDef({self.name}: {self.python_type.__name__})" def __eq__(self, other): return isinstance(other, TypeDef) and self.name == other.name def __hash__(self): return hash(self.name) # ── Built-in types ────────────────────────────────────────── Probability = TypeDef( "Probability", float, constraint=lambda x: 0.0 <= x <= 1.0, description="A value in [0, 1]", ) NonNegativeFloat = TypeDef( "NonNegativeFloat", float, constraint=lambda x: x >= 0, ) PositiveInt = TypeDef( "PositiveInt", int, constraint=lambda x: x > 0, ) TokenAmount = TypeDef( "TokenAmount", float, constraint=lambda x: x >= 0, units="tokens", ) AgentID = TypeDef("AgentID", str) Timestamp = TypeDef( "Timestamp", float, constraint=lambda x: x >= 0, units="seconds", ) ``` ### 4.2 spaces.py — Typed Product Spaces ``` from typing import Dict from .types import TypeDef class Space: """A typed product space — defines the shape of signals flowing between blocks. In BDP terms: this is Abstract Structure. In GDS terms: these are the action spaces / signal spaces. Fields are TypeDef instances, so validation is enforced at the type level — data flowing through a wire is checked against its schema. """ def __init__( self, name: str, schema: Dict[str, TypeDef], description: str = "", ): self.name = name self.schema = schema # {field_name: TypeDef} self.description = description def validate(self, data: dict) -> list[str]: """Validate a data dict against this space's schema. 
Returns list of error strings (empty = valid). """ errors = [] for field_name, typedef in self.schema.items(): if field_name not in data: errors.append(f"Missing field: {field_name}") elif not typedef.validate(data[field_name]): errors.append( f"{field_name}: expected {typedef.name}, " f"got {type(data[field_name]).__name__} " f"with value {data[field_name]!r}" ) extra_fields = set(data.keys()) - set(self.schema.keys()) if extra_fields: errors.append(f"Unexpected fields: {extra_fields}") return errors def is_compatible(self, other: "Space") -> bool: """Check if another space has the same structure (field names and types).""" if set(self.schema.keys()) != set(other.schema.keys()): return False return all( self.schema[k] == other.schema[k] for k in self.schema ) def __repr__(self): fields = ", ".join(f"{k}: {v.name}" for k, v in self.schema.items()) return f"Space({self.name} {{ {fields} }})" def __eq__(self, other): return isinstance(other, Space) and self.name == other.name def __hash__(self): return hash(self.name) # ── Sentinel spaces ──────────────────────────────────────── EMPTY = Space("∅", {}, "No data flows through this port") TERMINAL = Space("⊤", {}, "Signal terminates here (state write)") ``` ### 4.3 blocks.py — Abstract Behavior with GDS Roles ``` from typing import Tuple, Optional from .spaces import Space, TERMINAL class Block: """Abstract behavioral specification — a typed function signature. In BDP terms: Abstract Behavior. In GDS terms: a component of the transition function h. A Block declares what goes in (domain), what comes out (codomain), what parameters it reads, and what constraints it must satisfy. Blocks do NOT hold implementations — they are pure type-level declarations. 
""" kind = "generic" def __init__( self, name: str, domain: Tuple[Space, ...], codomain: Tuple[Space, ...], description: str = "", params_used: Optional[list[str]] = None, constraints: Optional[list[str]] = None, ): self.name = name self.domain = domain self.codomain = codomain self.description = description self.params_used = params_used or [] self.constraints = constraints or [] def signature(self) -> str: """Human-readable type signature.""" d = " × ".join(s.name for s in self.domain) c = " × ".join(s.name for s in self.codomain) return f"{self.name}: {d} → {c}" def __repr__(self): return f"<{self.kind}: {self.signature()}>" def __eq__(self, other): return isinstance(other, Block) and self.name == other.name def __hash__(self): return hash(self.name) class BoundaryAction(Block): """Exogenous input — enters the system from outside. In GDS terms: this is part of the admissible input set U. Boundary actions model external agents, oracles, user inputs, environmental signals — anything the system doesn't control. """ kind = "boundary" def __init__(self, name, domain, codomain, description="", params_used=None, constraints=None, options: Optional[list[str]] = None): super().__init__(name, domain, codomain, description, params_used, constraints) self.options = options or [] # Named behavioral variants class ControlAction(Block): """Endogenous control — reads state, emits control signals. These are internal feedback loops: the system observing itself and generating signals that influence downstream policy/mechanism blocks. """ kind = "control" def __init__(self, name, domain, codomain, description="", params_used=None, constraints=None, options: Optional[list[str]] = None): super().__init__(name, domain, codomain, description, params_used, constraints) self.options = options or [] class Policy(Block): """Decision logic — maps signals to mechanism inputs. Policies select from feasible actions. They may have multiple named options for A/B testing or scenario analysis. 
In GDS terms: policies implement the decision mapping d = g(x, z) within the canonical form h = f ∘ g. """ kind = "policy" def __init__(self, name, domain, codomain, description="", params_used=None, constraints=None, options: Optional[list[str]] = None): super().__init__(name, domain, codomain, description, params_used, constraints) self.options = options or [] class Mechanism(Block): """State update — the only block type that writes to state. Codomain is always TERMINAL because mechanisms don't pass signals forward; they write to entity state variables. In GDS terms: mechanisms are the atomic state transitions that compose into h. """ kind = "mechanism" def __init__( self, name: str, domain: Tuple[Space, ...], description: str = "", params_used: Optional[list[str]] = None, constraints: Optional[list[str]] = None, updates: Optional[list[tuple[str, str]]] = None, ): super().__init__(name, domain, (TERMINAL,), description, params_used, constraints) self.updates = updates or [] # updates is a list of (entity_name, variable_name) pairs ``` ### 4.4 state.py — Entities and State Variables ``` from typing import Optional from .types import TypeDef class StateVariable: """A single typed variable within an entity's state. Each variable has a TypeDef (with runtime constraints), a human-readable description, and an optional math symbol. """ def __init__( self, name: str, typedef: TypeDef, description: str = "", symbol: Optional[str] = None, ): self.name = name self.typedef = typedef self.description = description self.symbol = symbol or name def validate(self, value) -> bool: return self.typedef.validate(value) def __repr__(self): return f"StateVar({self.name}: {self.typedef.name})" class Entity: """A named component of the system state. In GDS terms, the full state space X is the product of all entity state spaces: X = Entity_1.state × Entity_2.state × ... 
× Entity_n.state Entities correspond to actors, resources, registries — anything that persists across temporal boundaries and has mutable state. """ def __init__( self, name: str, variables: list[StateVariable], description: str = "", ): self.name = name self.variables = {v.name: v for v in variables} self.description = description def validate_state(self, data: dict) -> list[str]: """Validate a state snapshot for this entity.""" errors = [] for vname, var in self.variables.items(): if vname not in data: errors.append(f"{self.name}.{vname}: missing") elif not var.validate(data[vname]): errors.append(f"{self.name}.{vname}: type/constraint violation") return errors def __repr__(self): vars_str = ", ".join(self.variables.keys()) return f"Entity({self.name} {{ {vars_str} }})" ``` ### 4.5 wiring.py — Composition ``` from typing import Tuple, Optional from .spaces import Space, EMPTY, TERMINAL from .blocks import Block class Wire: """Concrete connection between block ports. In BDP terms: Concrete Structure. A wire carries data of a specific Space type from one block's codomain port to another block's domain port. """ def __init__( self, source: str, target: str, space: Space, optional: bool = False, ): self.source = source # block name (codomain side) self.target = target # block name (domain side) self.space = space self.optional = optional def __repr__(self): opt = " (optional)" if self.optional else "" return f"Wire({self.source} --[{self.space.name}]--> {self.target}{opt})" class Wiring: """A composed system of blocks connected by wires. In BDP terms: this combines Processors (block instances) and Wires (connections) into a system. In GDS terms: this is a particular composition of h. 
""" def __init__( self, name: str, blocks: list[Block], wires: list[Wire], description: str = "", ): self.name = name self.blocks = {b.name: b for b in blocks} self.wires = wires self.description = description def validate_wiring(self) -> list[str]: """Check that all wires connect valid blocks with matching spaces.""" errors = [] for w in self.wires: if w.source not in self.blocks: errors.append(f"Wire source '{w.source}' not in blocks") if w.target not in self.blocks: errors.append(f"Wire target '{w.target}' not in blocks") if w.source in self.blocks and w.target in self.blocks: src = self.blocks[w.source] tgt = self.blocks[w.target] if w.space not in src.codomain: errors.append( f"Wire space '{w.space.name}' not in " f"{w.source}'s codomain" ) if w.space not in tgt.domain: errors.append( f"Wire space '{w.space.name}' not in " f"{w.target}'s domain" ) return errors @property def external_domain(self) -> Tuple[Space, ...]: """Infer unwired inputs (external boundary of this wiring).""" wired_targets = set() for w in self.wires: wired_targets.add((w.target, w.space.name)) external = [] for b in self.blocks.values(): for s in b.domain: if s not in (EMPTY, TERMINAL): if (b.name, s.name) not in wired_targets: external.append(s) return tuple(external) @property def external_codomain(self) -> Tuple[Space, ...]: """Infer unwired outputs (external boundary of this wiring).""" wired_sources = set() for w in self.wires: wired_sources.add((w.source, w.space.name)) external = [] for b in self.blocks.values(): for s in b.codomain: if s not in (EMPTY, TERMINAL): if (b.name, s.name) not in wired_sources: external.append(s) return tuple(external) def __repr__(self): return f"Wiring({self.name}: {len(self.blocks)} blocks, {len(self.wires)} wires)" class StackWiring(Wiring): """Sequential composition: A → B → C. Each block's codomain must match the next block's domain. Wires are auto-generated from the sequence. 
""" def __init__(self, name: str, sequence: list[Block], description: str = ""): wires = [] for a, b in zip(sequence[:-1], sequence[1:]): # Match codomain of a to domain of b for s in a.codomain: if s not in (EMPTY, TERMINAL) and s in b.domain: wires.append(Wire(a.name, b.name, s)) super().__init__(name, sequence, wires, description) self.sequence = sequence def validate_wiring(self) -> list[str]: errors = super().validate_wiring() # Additional check: sequential domain/codomain matching for a, b in zip(self.sequence[:-1], self.sequence[1:]): a_out = [s for s in a.codomain if s not in (EMPTY, TERMINAL)] b_in = [s for s in b.domain if s not in (EMPTY, TERMINAL)] if a_out != b_in: errors.append( f"Stack mismatch: {a.name} outputs {[s.name for s in a_out]} " f"but {b.name} expects {[s.name for s in b_in]}" ) return errors class ParallelWiring(Wiring): """Parallel composition: A ∥ B. Independent blocks running simultaneously. Domain is the union of all component domains; codomain is the union of all component codomains. """ def __init__(self, name: str, components: list[Block], description: str = ""): super().__init__(name, components, [], description) self.components = components class SplitWiring(Wiring): """Branching composition: one input fans out to multiple paths. A single source block's output is consumed by multiple downstream blocks. 
""" def __init__(self, name: str, source: Block, targets: list[Block], description: str = ""): wires = [] for t in targets: for s in source.codomain: if s not in (EMPTY, TERMINAL) and s in t.domain: wires.append(Wire(source.name, t.name, s)) all_blocks = [source] + targets super().__init__(name, all_blocks, wires, description) self.source = source self.targets = targets ``` ### 4.6 spec.py — The GDS Specification Object ``` from typing import Dict from .types import TypeDef from .spaces import Space from .blocks import Block, Mechanism from .state import Entity, StateVariable from .wiring import Wiring class GDSSpec: """Complete Generalized Dynamical System specification. Mathematically: GDS = {h, X} where X = state space (product of entity states) h = transition map (composed from wirings) This class holds the full typed specification and validates structural integrity. It does NOT render, simulate, or export. That is the job of separate packages. GDSSpec handles registration and structural validation only. Rendering, simulation, and export are separate concerns. 
""" def __init__(self, name: str, description: str = ""): self.name = name self.description = description self.types: Dict[str, TypeDef] = {} self.spaces: Dict[str, Space] = {} self.entities: Dict[str, Entity] = {} self.blocks: Dict[str, Block] = {} self.wirings: Dict[str, Wiring] = {} self.parameters: Dict[str, TypeDef] = {} # ── Registration ──────────────────────────────────────── def register_type(self, t: TypeDef) -> "GDSSpec": assert t.name not in self.types, f"Type '{t.name}' already registered" self.types[t.name] = t return self # chainable def register_space(self, s: Space) -> "GDSSpec": assert s.name not in self.spaces, f"Space '{s.name}' already registered" self.spaces[s.name] = s return self def register_entity(self, e: Entity) -> "GDSSpec": assert e.name not in self.entities, f"Entity '{e.name}' already registered" self.entities[e.name] = e return self def register_block(self, b: Block) -> "GDSSpec": assert b.name not in self.blocks, f"Block '{b.name}' already registered" self.blocks[b.name] = b return self def register_wiring(self, w: Wiring) -> "GDSSpec": assert w.name not in self.wirings, f"Wiring '{w.name}' already registered" self.wirings[w.name] = w return self def register_parameter(self, name: str, typedef: TypeDef) -> "GDSSpec": assert name not in self.parameters, f"Parameter '{name}' already registered" self.parameters[name] = typedef return self # ── Validation ────────────────────────────────────────── def validate(self) -> list[str]: """Full structural validation. 
Returns list of error strings.""" errors = [] errors += self._validate_space_types() errors += self._validate_block_spaces() errors += self._validate_wiring_compatibility() errors += self._validate_mechanism_updates() errors += self._validate_param_references() return errors def _validate_space_types(self) -> list[str]: """Every TypeDef used in a Space is registered.""" errors = [] for space in self.spaces.values(): for field_name, typedef in space.schema.items(): if typedef.name not in self.types: errors.append( f"Space '{space.name}' field '{field_name}' uses " f"unregistered type '{typedef.name}'" ) return errors def _validate_block_spaces(self) -> list[str]: """Every Space referenced by a block is registered.""" errors = [] for block in self.blocks.values(): for s in list(block.domain) + list(block.codomain): if s.name not in self.spaces and s.name not in ("∅", "⊤"): errors.append( f"Block '{block.name}' references " f"unregistered space '{s.name}'" ) return errors def _validate_wiring_compatibility(self) -> list[str]: """All wirings have structurally valid connections.""" errors = [] for wiring in self.wirings.values(): errors += wiring.validate_wiring() return errors def _validate_mechanism_updates(self) -> list[str]: """Mechanisms only update existing entity variables.""" errors = [] for block in self.blocks.values(): if isinstance(block, Mechanism): for entity_name, var_name in block.updates: if entity_name not in self.entities: errors.append( f"Mechanism '{block.name}' updates " f"unknown entity '{entity_name}'" ) elif var_name not in self.entities[entity_name].variables: errors.append( f"Mechanism '{block.name}' updates " f"unknown variable '{entity_name}.{var_name}'" ) return errors def _validate_param_references(self) -> list[str]: """All parameter references in blocks are registered.""" errors = [] for block in self.blocks.values(): for param in block.params_used: if param not in self.parameters: errors.append( f"Block '{block.name}' references " 
f"unregistered parameter '{param}'" ) return errors ``` ### 4.7 verify.py — Higher-Order GDS Properties ``` from .spec import GDSSpec from .blocks import Mechanism, BoundaryAction, ControlAction, Policy class SpecVerifier: """Verifies higher-order structural properties of a GDSSpec. These correspond to GDS-theoretic properties like admissibility, reachability, and conservation — checked at the specification level (before any simulation runs). Formal verification at the specification level, before any simulation. """ def __init__(self, spec: GDSSpec): self.spec = spec def check_completeness(self) -> list[str]: """Every entity variable is updated by at least one mechanism. (No orphan state variables that can never change.) """ errors = [] all_updates = set() for block in self.spec.blocks.values(): if isinstance(block, Mechanism): for entity_name, var_name in block.updates: all_updates.add((entity_name, var_name)) for entity in self.spec.entities.values(): for var_name in entity.variables: if (entity.name, var_name) not in all_updates: errors.append( f"Orphan variable: {entity.name}.{var_name} " f"is never updated by any mechanism" ) return errors def check_determinism(self) -> list[str]: """Within each wiring, no two mechanisms update the same variable. (Write conflict detection.) """ errors = [] for wiring in self.spec.wirings.values(): update_map = {} # (entity, var) -> list of mechanism names for bname, block in wiring.blocks.items(): if isinstance(block, Mechanism): for entity_name, var_name in block.updates: key = (entity_name, var_name) if key not in update_map: update_map[key] = [] update_map[key].append(bname) for (ename, vname), mechs in update_map.items(): if len(mechs) > 1: errors.append( f"Write conflict in wiring '{wiring.name}': " f"{ename}.{vname} updated by {mechs}" ) return errors def check_reachability(self, from_block: str, to_block: str) -> bool: """Can signals reach from block A to block B through wiring? 
(Maps to GDS attainability correspondence.) """ # Build adjacency from all wirings adj = {} for wiring in self.spec.wirings.values(): for wire in wiring.wires: if wire.source not in adj: adj[wire.source] = set() adj[wire.source].add(wire.target) # BFS visited = set() queue = [from_block] while queue: current = queue.pop(0) if current == to_block: return True if current in visited: continue visited.add(current) queue.extend(adj.get(current, set())) return False def check_admissibility(self, wiring_name: str) -> list[str]: """All blocks in a wiring have their domain requirements satisfied. No dangling inputs — every non-empty domain port is either wired or is an external input. (Maps to GDS admissible inputs U.) """ errors = [] wiring = self.spec.wirings[wiring_name] wired_inputs = set() for w in wiring.wires: wired_inputs.add((w.target, w.space.name)) for bname, block in wiring.blocks.items(): for space in block.domain: if space.name in ("∅", "⊤"): continue if (bname, space.name) not in wired_inputs: # This is an external input — that's fine, # but it should be noted pass # Could flag as "requires external input" return errors def check_type_safety(self) -> list[str]: """Full type-flow analysis through all wirings. Every wire's space matches source codomain and target domain at the field level (not just by name). 
""" errors = [] for wiring in self.spec.wirings.values(): for wire in wiring.wires: src = wiring.blocks.get(wire.source) tgt = wiring.blocks.get(wire.target) if src and tgt: # Check space compatibility at field level src_spaces = {s.name: s for s in src.codomain} tgt_spaces = {s.name: s for s in tgt.domain} if wire.space.name in src_spaces and wire.space.name in tgt_spaces: s1 = src_spaces[wire.space.name] s2 = tgt_spaces[wire.space.name] if not s1.is_compatible(s2): errors.append( f"Type mismatch on wire {wire}: " f"source space and target space " f"have different schemas" ) return errors def blocks_affecting(self, entity: str, variable: str) -> list[str]: """Which blocks can transitively affect this variable? Generalized transitive impact analysis. """ # Direct: mechanisms that update this variable direct = [] for bname, block in self.spec.blocks.items(): if isinstance(block, Mechanism): if (entity, variable) in block.updates: direct.append(bname) # Transitive: anything that can reach those mechanisms all_affecting = set(direct) for mech_name in direct: for bname in self.spec.blocks: if self.check_reachability(bname, mech_name): all_affecting.add(bname) return list(all_affecting) def report(self) -> dict: """Run all structural checks, return a summary.""" return { "completeness": self.check_completeness(), "determinism": self.check_determinism(), "type_safety": self.check_type_safety(), "spec_validation": self.spec.validate(), } ``` ### 4.8 query.py — Dependency Analysis ``` from .spec import GDSSpec from .blocks import Mechanism, BoundaryAction, ControlAction, Policy class SpecQuery: """Query engine for exploring GDSSpec structure. A clean query API for exploring spec structure — parameter mapping, dependency graphs, and transitive impact analysis. 
""" def __init__(self, spec: GDSSpec): self.spec = spec def param_to_blocks(self) -> dict[str, list[str]]: """Map each parameter to the blocks that use it.""" mapping = {p: [] for p in self.spec.parameters} for bname, block in self.spec.blocks.items(): for param in block.params_used: if param in mapping: mapping[param].append(bname) return mapping def block_to_params(self) -> dict[str, list[str]]: """Map each block to the parameters it uses.""" return { bname: list(block.params_used) for bname, block in self.spec.blocks.items() } def entity_update_map(self) -> dict[str, dict[str, list[str]]]: """Map entity → variable → list of mechanisms that update it.""" result = {} for ename, entity in self.spec.entities.items(): result[ename] = {vname: [] for vname in entity.variables} for bname, block in self.spec.blocks.items(): if isinstance(block, Mechanism): for ename, vname in block.updates: if ename in result and vname in result[ename]: result[ename][vname].append(bname) return result def dependency_graph(self) -> dict[str, set[str]]: """Full block dependency DAG (who feeds whom).""" adj = {} for wiring in self.spec.wirings.values(): for wire in wiring.wires: if wire.source not in adj: adj[wire.source] = set() adj[wire.source].add(wire.target) return adj def blocks_by_kind(self) -> dict[str, list[str]]: """Group blocks by their GDS role.""" result = { "boundary": [], "control": [], "policy": [], "mechanism": [], "generic": [], } for bname, block in self.spec.blocks.items(): result[block.kind].append(bname) return result def spaces_used_by(self, block_name: str) -> dict[str, list[str]]: """Which spaces does a block consume and produce?""" block = self.spec.blocks[block_name] return { "domain": [s.name for s in block.domain], "codomain": [s.name for s in block.codomain], } ``` ### 4.9 serialize.py — JSON Round-Trip ``` import json from .spec import GDSSpec from .types import TypeDef from .spaces import Space from .blocks import Block, BoundaryAction, ControlAction, Policy, 
Mechanism from .state import Entity, StateVariable from .wiring import Wiring, Wire def spec_to_dict(spec: GDSSpec) -> dict: """Serialize a GDSSpec to a plain dict (JSON-compatible).""" return { "name": spec.name, "description": spec.description, "types": { name: { "name": t.name, "python_type": t.python_type.__name__, "description": t.description, "units": t.units, # Note: constraint functions are not serializable } for name, t in spec.types.items() }, "spaces": { name: { "name": s.name, "schema": { fname: tdef.name for fname, tdef in s.schema.items() }, "description": s.description, } for name, s in spec.spaces.items() }, "entities": { name: { "name": e.name, "description": e.description, "variables": { vname: { "name": v.name, "type": v.typedef.name, "description": v.description, "symbol": v.symbol, } for vname, v in e.variables.items() }, } for name, e in spec.entities.items() }, "blocks": { name: _block_to_dict(b) for name, b in spec.blocks.items() }, "wirings": { name: { "name": w.name, "description": w.description, "blocks": list(w.blocks.keys()), "wires": [ { "source": wire.source, "target": wire.target, "space": wire.space.name, "optional": wire.optional, } for wire in w.wires ], } for name, w in spec.wirings.items() }, "parameters": { name: {"name": t.name, "type": t.python_type.__name__} for name, t in spec.parameters.items() }, } def _block_to_dict(b: Block) -> dict: d = { "name": b.name, "kind": b.kind, "domain": [s.name for s in b.domain], "codomain": [s.name for s in b.codomain], "description": b.description, "params_used": b.params_used, "constraints": b.constraints, } if isinstance(b, Mechanism): d["updates"] = b.updates if hasattr(b, "options"): d["options"] = b.options return d def spec_to_json(spec: GDSSpec, indent: int = 2) -> str: """Serialize to JSON string.""" return json.dumps(spec_to_dict(spec), indent=indent) # Loading from dict/JSON would reconstruct GDSSpec objects, # but constraint functions need to be re-attached by the user # (they 
aren't serializable). This is by design — JSON is # an interchange format, not the source of truth. ``` ______________________________________________________________________ ## 5. Example Usage: Predator-Prey ``` from gds_core.types import TypeDef, NonNegativeFloat, PositiveInt from gds_core.spaces import Space, EMPTY from gds_core.blocks import BoundaryAction, Policy, Mechanism from gds_core.state import Entity, StateVariable from gds_core.wiring import StackWiring from gds_core.spec import GDSSpec from gds_core.verify import SpecVerifier # ── Types ── Population = TypeDef("Population", int, constraint=lambda x: x >= 0) Rate = TypeDef("Rate", float, constraint=lambda x: x > 0) # ── Spaces ── PreySignal = Space("PreySignal", {"prey_count": Population}) PredatorSignal = Space("PredatorSignal", {"predator_count": Population}) HuntResult = Space("HuntResult", { "prey_eaten": Population, "predators_fed": Population, }) # ── State ── prey = Entity("Prey", [ StateVariable("population", Population, symbol="N"), ]) predator = Entity("Predator", [ StateVariable("population", Population, symbol="P"), ]) # ── Blocks ── observe = Policy( "Observe Populations", domain=(EMPTY,), codomain=(PreySignal, PredatorSignal), params_used=["birth_rate", "death_rate"], ) hunt = Policy( "Hunt Prey", domain=(PreySignal, PredatorSignal), codomain=(HuntResult,), params_used=["hunt_efficiency"], options=["lotka_volterra", "ratio_dependent"], ) update_prey = Mechanism( "Update Prey Population", domain=(HuntResult,), params_used=["birth_rate"], updates=[("Prey", "population")], ) update_predator = Mechanism( "Update Predator Population", domain=(HuntResult,), params_used=["death_rate"], updates=[("Predator", "population")], ) # ── Wiring ── hunt_wiring = StackWiring( "Hunt Cycle", sequence=[observe, hunt, update_prey], # simplified ) # ── Spec ── spec = GDSSpec("Predator-Prey Model") for t in [Population, Rate]: spec.register_type(t) for s in [PreySignal, PredatorSignal, HuntResult]: 
spec.register_space(s) for e in [prey, predator]: spec.register_entity(e) for b in [observe, hunt, update_prey, update_predator]: spec.register_block(b) spec.register_wiring(hunt_wiring) for p in ["birth_rate", "death_rate", "hunt_efficiency"]: spec.register_parameter(p, Rate) # ── Validate & Verify ── errors = spec.validate() print(f"Validation errors: {errors}") verifier = SpecVerifier(spec) report = verifier.report() print(f"Completeness: {report['completeness']}") print(f"Determinism: {report['determinism']}") print(f"Type safety: {report['type_safety']}") ``` ______________________________________________________________________ ## 6. What gds-core Does NOT Do These are explicitly out of scope for the core package and should live in separate packages: | Concern | Why it's separate | Candidate package | | --------------------------- | ------------------------------- | ----------------- | | Mermaid diagram rendering | Presentation, not specification | `gds-viz` | | Obsidian vault generation | Report format, not core logic | `gds-reports` | | cadCAD model generation | Execution engine coupling | `gds-cadcad` | | Simulation execution | Runtime, not design-time | `gds-sim` | | Web UI / React frontend | Client concern (BDP's insight) | `gds-studio` | | PDF/LaTeX report generation | Output format | `gds-reports` | The core package should be importable with **zero dependencies** and usable in a Jupyter notebook, a CI pipeline, or as the backbone of any of the above tools. # GDS v0.2 Architecture Design Document ## Parameter Typing, Canonical Projection, and Tag Metadata ______________________________________________________________________ ## 1. Executive Summary GDS v0.2 extends the foundational dynamical system framework with **structural ontology only** — no execution semantics, no rendering. This preserves GDS as a declarative specification layer while enabling canonical formalization. 
| Feature | Purpose | Layer | | ------------------------ | ------------------------------------------------------ | -------------------- | | **Parameter Typing** | Formal declaration of Θ as distinct from state X | gds-framework (core) | | **Canonical Projection** | Pure structural derivation of `h: X → X` decomposition | gds-framework (core) | | **Tag Metadata** | Inert semantic annotations for downstream consumers | gds-framework (core) | **Key Architectural Decisions:** 1. **GDS is structural ontology, not behavioral engine** — No execution, simulation, optimization, or rendering 1. **Parameters define Θ at specification level only** — GDS does not define how Θ is sampled, assigned, or optimized 1. **Canonical projection is mandatory and pure** — Always derivable from SystemIR; never authoritative 1. **Tags are semantically neutral** — Metadata only; stripped at compile time; never affect verification or composition 1. **Rendering belongs in gds-viz** — All Mermaid, LaTeX, and diagram generation is out of scope for gds-framework **Boundary Constraint:** > GDS parameters define configuration space Θ at the specification level only. GDS does not define how Θ is sampled, assigned, or optimized. Execution engines must interpret Θ. ______________________________________________________________________ ## 2. 
Mathematical Foundation ### 2.1 Core Dynamical System The canonical GDS object: ``` h : X → X ``` With explicit decomposition: ``` h = f ∘ g ``` Where: - **X** — State space (from Entities) - **Z** — Exogenous signal space (from BoundaryActions) - **D** — Decision space (outputs of Policies) - **g** — Policy mapping: X × Z → D - **f** — State transition: X × D → X ### 2.2 Parameter Space Extension Parameters define configuration space Θ structurally: ``` h_θ : X → X where θ ∈ Θ ``` Θ is metadata at the specification level: - Typed but not bound to values in GDS - Referenced by blocks but not interpreted by GDS - Available for canonical projection and downstream consumers - Execution semantics delegated to domain engines ### 2.3 Invariants - **State (X)** is the only mutable component during execution - **Parameters (Θ)** are typed references, not values — GDS defines their schema, not their binding - **Canonical projection** derives structure without execution - **Tags** are inert metadata, never affecting structure, composition, or verification ______________________________________________________________________ ## 3. Parameter System Design (Structural Only) ### 3.1 Core Classes ``` from typing import Any, Callable from pydantic import BaseModel, Field, ConfigDict class ParameterDef(BaseModel): """ Schema definition for a single parameter. Defines Θ structurally — types and constraints only. No values, no binding, no execution semantics. """ model_config = ConfigDict(frozen=True, arbitrary_types_allowed=True) name: str typedef: TypeDef description: str = "" bounds: tuple[Any, Any] | None = None # Structural constraint class ParameterSchema(BaseModel): """ Defines the parameter space Θ at specification level. Immutable registry of parameter definitions. GDS does not interpret values — only validates structural references. 
""" model_config = ConfigDict(frozen=True) parameters: dict[str, ParameterDef] = Field(default_factory=dict) def add(self, param: ParameterDef) -> "ParameterSchema": """Return new schema with added parameter (immutable).""" if param.name in self.parameters: raise ValueError(f"Parameter '{param.name}' already exists") new_params = dict(self.parameters) new_params[param.name] = param return self.model_copy(update={"parameters": new_params}) def get(self, name: str) -> ParameterDef: return self.parameters[name] def names(self) -> set[str]: return set(self.parameters.keys()) def validate_references(self, ref_names: set[str]) -> list[str]: """Validate that all referenced parameter names exist in schema.""" errors = [] for name in ref_names: if name not in self.parameters: errors.append(f"Referenced parameter '{name}' not defined in schema") return errors ``` ### 3.2 Integration with Existing Classes ``` class GDSSpec(BaseModel): """Extended to include parameter schema at specification level.""" # ... existing fields ... # NEW: Parameter schema registry (structural only) parameter_schema: ParameterSchema = Field(default_factory=ParameterSchema) def register_parameter(self, param: ParameterDef) -> "GDSSpec": """Register a parameter definition (returns new instance).""" new_schema = self.parameter_schema.add(param) return self.model_copy(update={"parameter_schema": new_schema}) class Mechanism(Block): """Mechanisms can reference parameters from the spec.""" # ... existing fields ... # NEW: Parameter names this mechanism references (structural only) parameters: tuple[str, ...] = () class SystemIR(BaseModel): """Compiled system includes aggregated parameter schema.""" # ... existing fields ... 
# NEW: All parameters from composed blocks (structural registry) parameter_schema: ParameterSchema = Field(default_factory=ParameterSchema) ``` ### 3.3 Verification Check: Parameter References A new verification check validates parameter reference integrity: ``` def check_parameter_references(system: SystemIR) -> list[Finding]: """ PARAM-001: All parameter references in Mechanisms resolve to definitions in the ParameterSchema. Plugs into the existing verify() system alongside G-001..G-006 and SC-001..SC-004. """ findings = [] for block in system.blocks: if isinstance(block, Mechanism): for param_name in block.parameters: if param_name not in system.parameter_schema.names(): findings.append(Finding( severity=Severity.ERROR, code="PARAM-001", message=( f"Mechanism '{block.name}' references unknown " f"parameter '{param_name}'" ), )) return findings ``` ### 3.4 Use Case Example ``` # 1. Define parameter schema (structural only) beta_param = ParameterDef( name="infection_rate", typedef=TypeDef(float), description="Probability of infection per contact", bounds=(0.0, 1.0), ) spec = GDSSpec( name="SIR Model", blocks=[...], parameter_schema=ParameterSchema().add(beta_param), ) # 2. Mechanism references parameter by name infection_mech = Mechanism( name="Infection", interface=Interface(forward_in=(port("contacts"),)), updates=[("Population", "infected")], parameters=("infection_rate",), # Structural reference only ) # 3. Compile validates references system = compile_system(spec) # Raises if "infection_rate" not defined in schema # 4. Canonical projection includes parameter schema canonical = project_canonical(system) # canonical.Theta contains the parameter schema # 5. Domain package interprets parameters for execution # (GDS does not define execution semantics) ``` ______________________________________________________________________ ## 4. 
Canonical Projection

### 4.1 Purpose

The canonical projection derives the formal GDS structure — X, Θ, U, D, g, f — from compiled SystemIR. It is:

- **Pure:** deterministic, stateless, no side effects
- **Derived:** always computable from SystemIR, never stored separately
- **Not authoritative:** SystemIR is ground truth; the projection is a read-only view
- **Cacheable:** same input always produces same output

### 4.2 Data Model

```
class CanonicalGDS(BaseModel):
    """
    Canonical projection of SystemIR to formal GDS structure.

    Pure derivation — always computable, never authoritative.
    SystemIR remains ground truth.
    """
    model_config = ConfigDict(frozen=True)

    # Spaces
    X: ProductSpace          # State space (from Entities)
    Theta: ParameterSchema   # Parameter space (schema only)
    U: ProductSpace          # Exogenous input space (from BoundaryAction outputs)
    D: ProductSpace          # Decision space (from Policy outputs)

    # Structural decomposition
    policy_blocks: tuple[str, ...]     # Block names composing g
    mechanism_blocks: tuple[str, ...]  # Block names composing f
    mechanism_order: tuple[str, ...]   # Topological execution order

    # Structure metadata
    is_temporal: bool  # True if system has temporal loops
```

**Design note:** `CanonicalGDS` holds **references to blocks by name**, not the blocks themselves. The spaces (X, U, D) are derived product spaces. There are no `PolicyMapping` or `StateTransition` wrapper classes — those are unnecessary abstractions over data already in SystemIR.

### 4.3 Derivation Algorithm

```
def project_canonical(system: SystemIR) -> CanonicalGDS:
    """
    Pure function: SystemIR → CanonicalGDS

    Deterministic, stateless, cached. Never mutates SystemIR.
    """
    # 1. X = product of all Entity state variable spaces
    X = _derive_state_space(system)

    # 2. Θ = parameter schema (pass-through)
    Theta = system.parameter_schema

    # 3. U = product of all BoundaryAction forward_out spaces
    U = _derive_input_space(system)

    # 4. D = product of all Policy forward_out spaces
    D = _derive_decision_space(system)

    # 5.
Identify policy and mechanism blocks
    policy_blocks = tuple(
        b.name for b in system.blocks
        if isinstance(b, (BoundaryAction, Policy))
    )
    mechanism_blocks = tuple(
        b.name for b in system.blocks
        if isinstance(b, Mechanism)
    )

    # 6. Topological sort of mechanisms by wiring dependencies
    mechanism_order = _topological_sort_mechanisms(
        mechanism_blocks, system.wirings
    )

    # 7. Temporal structure
    is_temporal = any(w.is_temporal for w in system.wirings)

    return CanonicalGDS(
        X=X,
        Theta=Theta,
        U=U,
        D=D,
        policy_blocks=policy_blocks,
        mechanism_blocks=mechanism_blocks,
        mechanism_order=mechanism_order,
        is_temporal=is_temporal,
    )


def _derive_state_space(system: SystemIR) -> ProductSpace:
    """X = product of all Entity state variable spaces."""
    return ProductSpace([
        sv.space
        for entity in system.entities
        for sv in entity.state_variables
    ])


def _derive_input_space(system: SystemIR) -> ProductSpace:
    """U = product of all BoundaryAction forward_out spaces."""
    return ProductSpace([
        block.interface.forward_out_space
        for block in system.blocks
        if isinstance(block, BoundaryAction)
    ])


def _derive_decision_space(system: SystemIR) -> ProductSpace:
    """D = product of all Policy forward_out spaces."""
    return ProductSpace([
        block.interface.forward_out_space
        for block in system.blocks
        if isinstance(block, Policy)
    ])
```

### 4.4 Verification Check: Canonical Well-formedness

```
def check_canonical_wellformedness(system: SystemIR) -> list[Finding]:
    """
    CANON-001: Canonical projection is structurally valid.
    Validates:
    - At least one mechanism exists (f is non-empty)
    - State space X is non-empty (entities with variables exist)
    - All mechanism parameter references resolve
    """
    findings = []
    canonical = project_canonical(system)

    if not canonical.mechanism_blocks:
        findings.append(Finding(
            severity=Severity.WARNING,
            code="CANON-001",
            message="No mechanisms found — state transition f is empty",
        ))

    if not canonical.X.components:
        findings.append(Finding(
            severity=Severity.WARNING,
            code="CANON-002",
            message="State space X is empty — no entity variables defined",
        ))

    return findings
```

______________________________________________________________________

## 5. Tag Metadata

### 5.1 Core Design

Tags are a minimal `dict[str, str]` field on spec-layer objects. They carry no semantics within gds-framework — they exist for downstream consumers (visualization, documentation, domain packages).

```
class Tagged(BaseModel):
    """
    Mixin providing inert semantic tags.

    Tags never affect compilation, verification, or composition.
    They are stripped at compile time and do not appear in SystemIR.
""" tags: dict[str, str] = Field(default_factory=dict) def with_tag(self, key: str, value: str) -> "Tagged": """Return new instance with added tag.""" new_tags = dict(self.tags) new_tags[key] = value return self.model_copy(update={"tags": new_tags}) def with_tags(self, **tags: str) -> "Tagged": """Return new instance with multiple tags added.""" new_tags = dict(self.tags) new_tags.update(tags) return self.model_copy(update={"tags": new_tags}) def has_tag(self, key: str, value: str | None = None) -> bool: """Check if tag exists (and optionally has specific value).""" if key not in self.tags: return False if value is not None: return self.tags[key] == value return True def get_tag(self, key: str, default: str | None = None) -> str | None: """Get tag value or default.""" return self.tags.get(key, default) ``` Applied to existing classes: ``` class Block(Tagged): # Blocks support tagging class Entity(Tagged): # Entities support tagging class GDSSpec(Tagged): # Specifications support tagging ``` ### 5.2 Compile-Time Stripping Tags are **not preserved** in SystemIR. The compilation pipeline strips them: ``` def compile_system(spec: GDSSpec, name: str = "system") -> SystemIR: """ Tags stripped during compilation. SystemIR has no tags field — semantic neutrality enforced structurally. """ compiled_blocks = [ block.model_copy(update={"tags": {}}) for block in spec.blocks ] compiled_entities = [ entity.model_copy(update={"tags": {}}) for entity in spec.entities ] return SystemIR( blocks=compiled_blocks, entities=compiled_entities, # ... other fields, no tags ... 
) ``` ### 5.3 Intended Usage Tags are consumed by **downstream packages**, not by gds-framework itself: ``` # Spec author annotates blocks sensor = AtomicBlock( name="Temperature Sensor", interface=Interface(forward_out=(port("Temperature"),)), ).with_tags( **{"control.role": "sensor", "control.loop": "outer"} ) controller = AtomicBlock( name="PID Controller", interface=Interface( forward_in=(port("Temperature"),), forward_out=(port("Command"),), ), ).with_tags( **{"control.role": "controller", "control.loop": "outer"} ) # gds-framework: tags have no effect on composition or verification system = sensor >> controller # Works identically with or without tags # gds-viz (separate package): reads tags for architecture diagrams # diagram = architecture_view(spec, group_by="control.loop") # Domain package: reads tags for domain-specific validation # findings = control_domain.validate_tags(spec) ``` **What gds-framework does NOT provide for tags:** - No tag styling (`TagStyle`, CSS, colors) - No tag conventions (`AGENT_CONVENTIONS`, `CONTROL_CONVENTIONS`) - No tag validation (`DomainPackage.validate_tags()`) - No tag-based rendering (`spec_to_architecture_mermaid()`) All of these belong in gds-viz or domain packages. ______________________________________________________________________ ## 6. Strict Boundaries & Invariants ### 6.1 GDS as Structural Ontology **Invariant:** GDS defines structure, not behavior. 
| In scope (gds-framework) | Out of scope (domain/viz packages) | | --------------------------------- | ---------------------------------- | | Type definitions and constraints | Parameter value assignment | | Block interfaces and composition | Execution and simulation | | Parameter schema (Θ structure) | Parameter binding (θ ∈ Θ) | | Canonical projection (data model) | Mermaid/LaTeX rendering | | Tag data field | Tag styling and conventions | | Structural verification | Domain-specific validation | ### 6.2 Spec ≠ Rendering ≠ Execution This is the foundational separation from gds_deepdive.md, **strictly enforced** in v0.2: ``` gds-framework (this package) ├── Types, Spaces, Entities, Blocks ├── Composition operators (>>, |, .feedback(), .loop()) ├── Compilation → SystemIR ├── Verification (G-001..G-006, SC-001..SC-004, PARAM-001, CANON-001) ├── Canonical projection → CanonicalGDS (data model) ├── Parameter schema (structural typing) └── Tag metadata (inert dict[str, str]) gds-viz (separate package, consumes GDSSpec and CanonicalGDS) ├── canonical_to_mermaid(canonical: CanonicalGDS) → str ├── spec_to_architecture_mermaid(spec: GDSSpec) → str ├── system_to_mermaid(system: SystemIR) → str ├── TagStyle, tag-based styling ├── LaTeX rendering └── All diagram generation Domain packages (consume GDSSpec, SystemIR, CanonicalGDS) ├── Parameter value assignment (ParameterAssignment) ├── Stochastic mechanisms (Ω modeling) ├── Execution engines (simulate()) ├── Domain-specific verification ├── Tag conventions and validation └── Optimization, analysis, etc. ``` ### 6.3 Canonical Projection Purity **Invariant:** `project_canonical` is a pure function. - Deterministic: same SystemIR → same CanonicalGDS - Stateless: no side effects - Read-only: never modifies SystemIR - Cacheable: `@lru_cache` safe - Not authoritative: SystemIR is ground truth ### 6.4 Parameter Boundary Rule **Invariant:** GDS parameters define Θ at specification level only. 
- `ParameterDef` contains type and structural constraints only - No `ParameterAssignment` in gds-framework - No binding logic in gds-framework - Domain packages handle value assignment and interpretation - If a value changes during execution, it is **state**, not a parameter ### 6.5 Tag Isolation **Invariant:** Tags never affect compilation, verification, or composition. - Tags stripped at compile time — SystemIR has no tags field - Verification checks never read tags - Composition operators ignore tags - Tag content is opaque to gds-framework (just `dict[str, str]`) ______________________________________________________________________ ## 7. Migration from v0.1 ### 7.1 Backward Compatibility **Guarantees:** - All existing models work unchanged - `parameter_schema` defaults to empty `ParameterSchema()` - `parameters` on Mechanism defaults to `()` - `tags` defaults to empty `{}` - All 244 existing tests continue to pass - No breaking changes to composition algebra, verification, or IR ### 7.2 Incremental Adoption **Phase 1: No changes required** ``` # Existing v0.1 code works unchanged system = (sensor >> controller >> plant).feedback([...]) ir = compile_system("Thermostat", system) report = verify(ir) ``` **Phase 2: Add parameter schema** ``` spec = GDSSpec( name="My Model", blocks=[...], parameter_schema=ParameterSchema() .add(ParameterDef(name="alpha", typedef=TypeDef(float))) .add(ParameterDef(name="beta", typedef=TypeDef(float), bounds=(0, 1))), ) mechanism = Mechanism( ..., parameters=("alpha", "beta"), # Structural references ) ``` **Phase 3: Use canonical projection** ``` ir = compile_system(spec) canonical = project_canonical(ir) # canonical.X, canonical.Theta, canonical.U, canonical.D available # for downstream consumers (gds-viz, domain packages) ``` **Phase 4: Add tags** ``` block = AtomicBlock( name="Agent A", interface=Interface(...), ).with_tags(**{"agent.role": "decision_maker", "agent.id": "alice"}) # Tags available on spec objects, stripped at 
compile time ``` ______________________________________________________________________ ## 8. Monorepo Structure: gds-viz ### 8.1 Why Monorepo Visual validation is essential for spec development — you need to *see* your model to know if it's correct. But rendering code doesn't belong in the core specification package. Solution: **gds-viz lives in the same repo as gds-framework**, as a separate package with its own `pyproject.toml`. Both packages share CI, versioning, and development workflow, but have independent install targets and a one-way dependency. ### 8.2 Repository Layout ``` gds-framework/ # repo root ├── pyproject.toml # core: gds-framework on PyPI ├── gds/ # core source (specification layer) │ ├── __init__.py │ ├── blocks/ │ ├── compiler/ │ ├── ir/ │ ├── types/ │ ├── verification/ │ ├── spec.py │ ├── spaces.py │ ├── state.py │ ├── query.py │ └── serialize.py ├── tests/ # core tests ├── packages/ │ └── gds-viz/ # viz: gds-viz on PyPI │ ├── pyproject.toml # depends on gds-framework │ ├── gds_viz/ │ │ ├── __init__.py │ │ ├── mermaid.py # system_to_mermaid, block_to_mermaid (migrated from gds/visualization.py) │ │ ├── canonical.py # canonical_to_mermaid (new: canonical GDS diagrams) │ │ ├── architecture.py # spec_to_architecture_mermaid (new: tag-based views) │ │ └── styles.py # TagStyle, DEFAULT_TAG_STYLES, domain conventions │ └── tests/ ├── examples/ # shared examples (use both packages) └── docs/ ``` ### 8.3 Workspace Configuration Root `pyproject.toml` adds: ``` [tool.uv.workspace] members = ["packages/*"] ``` `packages/gds-viz/pyproject.toml`: ``` [project] name = "gds-viz" version = "0.1.0" description = "Visualization utilities for GDS specifications" requires-python = ">=3.12" dependencies = [ "gds-framework>=0.2.0", ] [build-system] requires = ["hatchling"] build-backend = "hatchling.build" [tool.hatch.build.targets.wheel] packages = ["gds_viz"] ``` ### 8.4 Dependency Direction (Enforced) ``` gds-framework ←──depends── gds-viz │ │ │ NEVER imports 
from │ imports from gds.* │ gds_viz.* │ ▼ ▼ SystemIR, CanonicalGDS → Mermaid strings, LaTeX, diagrams GDSSpec (with tags) → Architecture-aware views ``` **Rule:** `gds/` never imports from `gds_viz/`. This is enforced by the package boundary — gds-framework has no dependency on gds-viz. ### 8.5 What Lives Where | Function | Current location | v0.2 location | Notes | | -------------------------------- | ---------------------- | ------------------------- | ----------------------------------- | | `system_to_mermaid()` | `gds/visualization.py` | `gds_viz/mermaid.py` | Migrated from core | | `block_to_mermaid()` | `gds/visualization.py` | `gds_viz/mermaid.py` | Migrated from core | | `canonical_to_mermaid()` | (doesn't exist) | `gds_viz/canonical.py` | New: renders CanonicalGDS | | `spec_to_architecture_mermaid()` | (doesn't exist) | `gds_viz/architecture.py` | New: tag-based grouping | | `TagStyle`, styling | (doesn't exist) | `gds_viz/styles.py` | New: tag visual conventions | | Domain tag conventions | (doesn't exist) | `gds_viz/styles.py` | New: AGENT/CONTROL/GAME conventions | ### 8.6 Migration of Existing Visualization Code The existing `gds/visualization.py` (200 lines, tested) moves to gds-viz with a deprecation shim: ``` # gds/visualization.py (v0.2 — deprecation shim) """Deprecated: use gds_viz instead. This module will be removed in v0.3. """ import warnings def system_to_mermaid(*args, **kwargs): warnings.warn( "gds.visualization is deprecated. Install gds-viz and use " "gds_viz.system_to_mermaid instead.", DeprecationWarning, stacklevel=2, ) from gds_viz.mermaid import system_to_mermaid as _impl return _impl(*args, **kwargs) def block_to_mermaid(*args, **kwargs): warnings.warn( "gds.visualization is deprecated. 
Install gds-viz and use " "gds_viz.block_to_mermaid instead.", DeprecationWarning, stacklevel=2, ) from gds_viz.mermaid import block_to_mermaid as _impl return _impl(*args, **kwargs) ``` This preserves backward compatibility while guiding users to the new package. ### 8.7 Developer Workflow ``` # Install both packages in development mode uv sync # installs gds-framework cd packages/gds-viz && uv sync # installs gds-viz (with editable gds-framework) # Or from repo root with workspace uv sync --all-packages # installs everything # Run core tests uv run pytest tests/ -v # Run viz tests uv run pytest packages/gds-viz/tests/ -v # Validate a model visually uv run python -c " from examples.sir_epidemic.model import build_system from gds_viz import system_to_mermaid print(system_to_mermaid(build_system())) " ``` ### 8.8 gds-viz Scope gds-viz consumes data models from gds-framework and produces visual output: **Inputs (from gds-framework):** - `SystemIR` — compiled block graph - `CanonicalGDS` — formal GDS projection - `GDSSpec` — specification with tags - `Block` — composition tree (pre-compilation) **Outputs (rendering):** - Mermaid flowchart strings - LaTeX mathematical notation (future) - Architecture-aware diagrams grouped by tags - Canonical GDS diagrams (X, Θ, U, D, g, f) **gds-viz owns:** - All Mermaid generation code - Tag styling (`TagStyle`, CSS, colors, shapes) - Domain tag conventions (`AGENT_CONVENTIONS`, `CONTROL_CONVENTIONS`, etc.) - Tag validation against conventions - Architecture view logic (grouping by tag, subgraph layout) - Hierarchy rendering ______________________________________________________________________ ## 9. 
New Verification Checks v0.2 adds verification checks to the existing pluggable system: | Code | Name | Severity | Description | | --------- | ----------------------------- | -------- | --------------------------------------------------------------------------- | | PARAM-001 | Parameter reference integrity | ERROR | All `Mechanism.parameters` entries resolve to `ParameterSchema` definitions | | CANON-001 | Empty state transition | WARNING | No mechanisms found — f is trivially empty | | CANON-002 | Empty state space | WARNING | No entity variables — X is trivially empty | These plug into the existing `verify(system, checks=None)` infrastructure alongside G-001..G-006 and SC-001..SC-004. ______________________________________________________________________ ## 10. Summary | Component | What gds-framework provides | What gds-viz provides | | ------------------------ | -------------------------------------------------------------- | -------------------------------------------- | | **Parameters** | `ParameterDef`, `ParameterSchema` — structural typing of Θ | (n/a) | | **Canonical Projection** | `CanonicalGDS` data model, `project_canonical()` pure function | `canonical_to_mermaid()` rendering | | **Tags** | `dict[str, str]` on Block/Entity/GDSSpec, stripped at compile | Tag styling, conventions, architecture views | | **Structural viz** | (n/a — migrated out) | `system_to_mermaid()`, `block_to_mermaid()` | **Explicitly NOT in gds-framework v0.2:** - Parameter value assignment / binding (domain package concern) - Execution / simulation semantics (domain package concern) - Stochastic mechanisms / Ω modeling (domain package concern) - Mermaid / LaTeX / diagram generation (gds-viz concern) - Tag styling, conventions, validation (gds-viz / domain package concern) **Implementation order:** Phase 1 — Core features (gds-framework): 1. `ParameterDef` and `ParameterSchema` classes 1. `parameters: tuple[str, ...]` field on `Mechanism` 1. 
`parameter_schema` field on `GDSSpec` and `SystemIR` 1. `PARAM-001` verification check 1. `ProductSpace` class (for canonical projection) 1. `CanonicalGDS` data model 1. `project_canonical()` derivation function 1. `CANON-001`, `CANON-002` verification checks 1. `Tagged` mixin on Block, Entity, GDSSpec 1. Compile-time tag stripping in compilation pipeline 1. Tests for all of the above Phase 2 — Visualization package (gds-viz): 12. Set up `packages/gds-viz/` with pyproject.toml and uv workspace 13. Migrate `gds/visualization.py` → `gds_viz/mermaid.py` 14. Add deprecation shim in `gds/visualization.py` 15. Implement `gds_viz/canonical.py` — canonical GDS diagrams 16. Implement `gds_viz/architecture.py` — tag-based architecture views 17. Implement `gds_viz/styles.py` — TagStyle, domain conventions 18. Migrate and update visualization tests 19. Update examples to import from `gds_viz` # Research Boundaries and Open Questions > Design note documenting the architectural boundary between structural compositional modeling (validated) and dynamical execution/analysis (next frontier). Written after the third independent DSL (gds-control) compiled cleanly to GDSSpec with no canonical modifications. 
______________________________________________________________________

## Status: What Has Been Validated

Three independent DSLs now compile to the same algebraic core:

| DSL             | Domain          | Decision layer (g)                              | Update layer (f)        | Canonical                               |
| --------------- | --------------- | ----------------------------------------------- | ----------------------- | --------------------------------------- |
| gds-stockflow   | System dynamics | Auxiliaries + Flows                             | Accumulation mechanisms | Clean                                   |
| gds-control     | Control theory  | Sensors + Controllers                           | Plant dynamics          | Clean                                   |
| gds-games (OGS) | Game theory     | All games (observation → decision → evaluation) | ∅ (no state update)     | Clean — via `compile_pattern_to_spec()` |

All three reduce to the same canonical form without modification:

```
d  = g(x, z)
x' = f(x, d)
```

Key structural facts:

- Canonical `h = f ∘ g` has survived three domains with no extensions required.
- No DSL compiler emits `ControlAction` blocks -- all non-state-updating blocks map to `Policy`. The `ControlAction` role serves as the output map `y = C(x, d)` for explicit inter-system composition at `>>` boundaries.
- Role partition (boundary, policy, mechanism) is complete and disjoint in every case.
- Cross-built equivalence (DSL-compiled vs hand-built) has been verified at Spec, Canonical, and SystemIR levels for all validated DSLs.
- OGS canonical validation confirms `f = ∅`, `X = ∅` — compositional game theory is a **degenerate dynamical system** where `h = g` (pure policy, no state transition). See [RQ3](#research-question-3-ogs-as-degenerate-dynamical-system) below.

A canonical composition pattern has emerged across DSLs:

```
(peripheral observers | exogenous inputs)
    >> (decision logic)
    >> (state dynamics)
    .loop(state → observers)
```

This motif appears in system dynamics, state-space control, and (structurally) reinforcement learning. It is not prescribed by the algebra — it is a convergent pattern discovered through independent DSL development.
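The domain-agnostic claim can be illustrated with plain Python, independent of any gds API. In the sketch below, `step`, the policy functions, and the update functions are hypothetical stand-ins (not gds-framework code); it instantiates the same `d = g(x, z)`, `x' = f(x, d)` skeleton once for a stock-flow accumulation and once for a proportional controller:

```python
# Minimal illustration of the canonical form d = g(x, z); x' = f(x, d).
# `step`, `g`, and `f` here are hypothetical stand-ins, not gds-framework API.

def step(x, z, g, f):
    """One synchronous update of the canonical GDS skeleton h = f of g."""
    d = g(x, z)      # decision layer (policies)
    return f(x, d)   # update layer (mechanisms)

# Stock-flow reading: the "decision" is a net flow; the update accumulates it.
def flow_policy(x, z):
    return z["inflow"] - 0.1 * x["stock"]     # net flow = inflow - outflow

def accumulate(x, d):
    return {"stock": x["stock"] + d}          # stock += flow * dt (dt = 1)

# Control reading: the "decision" is a command; the update is plant dynamics.
def p_controller(x, z):
    return 0.5 * (z["setpoint"] - x["temp"])  # proportional error feedback

def plant(x, d):
    return {"temp": x["temp"] + d}            # first-order plant, dt = 1

sf = step({"stock": 100.0}, {"inflow": 5.0}, flow_policy, accumulate)
ct = step({"temp": 18.0}, {"setpoint": 21.0}, p_controller, plant)
print(sf)  # {'stock': 95.0}  (100 + (5 - 10))
print(ct)  # {'temp': 19.5}   (18 + 0.5 * 3)
```

Only the bodies of `g` and `f` differ between domains; the composition skeleton is identical, which is what the table above asserts structurally.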
______________________________________________________________________

## Research Question 1: MIMO Semantics in a Compositional Dynamical Substrate

### Background

The current architecture represents multi-input multi-output (MIMO) systems structurally as collections of scalar ports. Cross-coupling is encoded inside block-local semantics (e.g., update functions), not in the wiring topology.

For example in gds-control:

- Each state variable is its own Entity.
- Each controller output is a separate port.
- Each dynamics mechanism reads multiple scalar control ports.
- Coupling (e.g., matrix A or B terms) is embedded inside `f`.

This is sufficient for structural modeling and canonical decomposition. However, classical control theory treats `x ∈ R^n`, `u ∈ R^m`, `y ∈ R^p` as vector spaces with explicit matrix semantics. This raises a fundamental architectural question.

### The Question

**Should MIMO structure remain decomposed into scalar channels (structural MIMO), or should vector-valued spaces become first-class citizens in the type system (algebraic MIMO)?**

### Option A — Structural MIMO (Current Design)

Each dimension is modeled as an independent scalar port. Vector structure emerges from parallel composition.

**Properties:**

- Canonical remains dimension-agnostic
- TypeDef and Space remain scalar
- Coupling lives in block-local semantics
- Dimensionality is implicit (count of states, inputs, etc.)

**Advantages:**

- Minimal extension to Layer 0
- Canonical remains purely structural
- DSLs remain lightweight
- Works across stockflow, games, and control without special treatment

**Limitations:**

- No static dimension checking
- Cannot extract A, B, C, D matrices directly from structure
- No structural controllability/observability analysis
- No rank-based reasoning
- Numerical coupling invisible at IR level

**Interpretation:** This treats GDS as a structural substrate, not a linear algebra system.
### Option B — First-Class Vector Spaces Introduce vector-valued spaces with explicit dimensionality: ``` StateSpace(n) InputSpace(m) OutputSpace(p) ``` Ports carry structured types, not scalars. **Properties:** - Dimensionality becomes explicit - Wiring validates dimension compatibility - Canonical operates over vector-valued X and U - Matrix structure potentially extractable **Advantages:** - Enables structural controllability tests - Enables matrix extraction - Enables symbolic linearization - Closer alignment with classical control theory **Costs:** - Type system complexity increases - Cross-DSL consistency must be preserved - Potential leakage of numerical semantics into structural core - Requires careful integration with canonical ### Deeper Structural Question Is GDS intended to be: 1. A compositional topology algebra (structure only), or 1. A compositional linear-algebra-aware modeling language? If (1), structural MIMO is sufficient. If (2), vector semantics become necessary. ### Possible Hybrid Approach - Keep scalar structural core (Layer 0 unchanged) - Add optional dimensional metadata to spaces - Build matrix extraction as a canonical post-processing tool (Layer 4) This preserves architectural purity while enabling analysis. The metadata would be inert — stripped at compile time like tags — but available to projection tools that know how to interpret it. ### Current Recommendation Stay with structural MIMO. The scalar decomposition is correct for the current purpose (structural modeling and canonical decomposition). Vector semantics should be explored only when a concrete analysis tool (e.g., structural controllability) demonstrates that scalar decomposition is genuinely insufficient, not merely inconvenient. ______________________________________________________________________ ## Research Question 2: What Does a Timestep Mean Across DSLs? ### Background Temporal recurrence is represented structurally via `.loop()` and temporal wirings. 
This operator is used in multiple DSLs with different semantic intentions: | DSL | Temporal Meaning | What `.loop()` Represents | | --------- | ----------------- | --------------------------------------------------- | | StockFlow | State persistence | Stock level at t feeds auxiliaries at t+1 | | Control | State observation | Plant state at t feeds sensors at t+1 | | OGS | Round iteration | Decisions at round t feed observations at round t+1 | At the IR level, all temporal wirings are identical: ``` source → target (temporal, covariant) ``` Canonical treats recurrence purely algebraically — `x' = f(x, g(x, z))` — without encoding evaluation scheduling, delay, or sampling semantics. This is correct structurally. But it is incomplete for execution. ### The Question **What is the formal meaning of a timestep in GDS, and should execution semantics be standardized across DSLs?** ### Current Implicit Assumption All current DSLs assume synchronous discrete-time execution (Moore-style): 1. Compute `d = g(x[t], z[t])` 1. Compute `x[t+1] = f(x[t], d)` 1. All observation and control occur within one step ### Where This Breaks Down Different domains could legitimately interpret `.loop()` differently: | Domain | Temporal Interpretation | | --------------- | ------------------------------------- | | StockFlow | Accumulation (state += flow * dt) | | Control | Sampling (sensor reads current state) | | Delayed control | `x[t-1]` feeds controller, not `x[t]` | | Hybrid systems | Mode-dependent recurrence | | Continuous-time | Integration over dt | The algebra does not distinguish these. The structural fact "information flows from state output to sensor input across timesteps" is the same in all cases. The semantic question "is this observation delayed?" is invisible at the IR level. ### The Core Tension `.loop()` encodes **structural recurrence** but not **scheduling semantics**. 
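A small sketch makes the tension concrete: the same structural recurrence (state output feeds the decision layer across timesteps) yields different trajectories under zero-delay versus one-step-delay scheduling. All names here are illustrative stand-ins, not gds API.

```python
def f(x, d):
    # State update: accumulate the decision.
    return x + d

def g(x_observed):
    # Decision from the observed state: push toward 3, then stop.
    return 1 if x_observed < 3 else 0

def run(steps, delay):
    """delay=0: the sensor reads the current state;
    delay=1: it reads the one-step-old state."""
    history = [0]
    for _ in range(steps):
        observed = history[-1 - delay] if len(history) > delay else history[0]
        history.append(f(history[-1], g(observed)))
    return history

# Identical wiring, different timestep semantics, different trajectories:
run(4, delay=0)  # -> [0, 1, 2, 3, 3]
run(4, delay=1)  # -> [0, 1, 2, 3, 4]  (stale observation causes overshoot)
```

The IR records only "state feeds observer across timesteps"; whether the overshoot occurs is decided entirely by the scheduling choice, which the IR does not encode.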
If simulation is introduced, the following questions must be answered:

- Is temporal wiring zero-delay or one-step delay?
- Are updates synchronous or staged?
- Does observation occur before or after state update?
- Is the timestep uniform across all temporal wirings?

Without explicit execution semantics, different DSLs may assume incompatible timestep meanings while sharing the same IR.

### Option A — Canonical Execution Model

Define execution directly from canonical:

```
d = g(x, z)        # observation + decision
x_next = f(x, d)   # state update
```

All DSLs must conform to this synchronous discrete-time semantics. A timestep is always: observe, decide, update. No delays. No staging.

**Advantages:**

- Minimal
- Uniform
- Canonical becomes directly executable

**Limitations:**

- Cannot express delayed observation without additional state variables
- Continuous-time requires external discretization
- Hybrid timing needs extensions beyond `.loop()`

### Option B — Execution Semantics Layer

Introduce an explicit execution contract as metadata, not as part of the IR:

```
from dataclasses import dataclass

@dataclass(frozen=True)
class ExecutionSemantics:
    synchronous: bool = True
    observation_delay: int = 0
    integration_scheme: str = "explicit"
```

Keep IR structural. Attach semantics externally. Each DSL declares its own execution contract. A simulation harness reads the contract and dispatches accordingly.

**Advantages:**

- Clean separation of structure and dynamics
- Supports multiple scheduling regimes
- Preserves canonical purity
- DSLs remain composable at the structural level even with different execution semantics

**Costs:**

- Additional abstraction layer
- Increased conceptual surface area
- Cross-DSL simulation becomes a compatibility question rather than a guarantee

### Deeper Question

Is GDS:

1. A structural modeling algebra only?
1. Or a full dynamical execution framework?
If (1), temporal semantics remain external and domain-local (consistent with the current architecture principle: "Simulation is domain-local"). If (2), a principled shared timestep model must be defined. ### Current Recommendation Temporal semantics should remain external. The architecture document already states: "The protocol provides no execution semantics." This is correct. The right approach is: 1. Keep `.loop()` as purely structural recurrence (no scheduling meaning at Layer 0). 1. Each DSL defines its own execution contract if/when it adds simulation. 1. A shared discrete-time runner (if built) operates on canonical form and assumes synchronous Moore semantics as the default. 1. DSLs that need different timing (delays, continuous, hybrid) declare it explicitly and are not required to be cross-simulatable. ______________________________________________________________________ ## Research Question 3: OGS as Degenerate Dynamical System ### Finding Canonical projection of OGS patterns produces: ``` X = ∅ (no state variables — games have no persistent entities) U = inputs (PatternInput → BoundaryAction) D = all game forward_out ports g = all games (observation → decision → evaluation) f = ∅ (no mechanisms — games don't update state) ``` The canonical decomposition reduces to `h = g`. There is no state transition. The system is **pure policy**. This is not a failure of the projection — it is the correct structural characterization of compositional game theory within GDS. ### Why X = ∅ Is Expected Games compute equilibria. They do not write to persistent state variables. The game-theoretic objects (strategies, utilities, coutilities) flow through the composition as signals, not as state updates. Even corecursive loops (repeated games) carry information forward as observations, not as entity mutations. In category-theoretic terms: open games are morphisms in a symmetric monoidal category with feedback. They are maps, not state machines. 
The "state" of a repeated game is the sequence of past plays — which in OGS is modeled as observations flowing through the composition (the History game), not as Entity variables.

### Why f = ∅ Is Semantically Correct

No OGS game type performs a state update:

| Game Type | Port Structure | Role |
| --------------------- | ------------------------------- | ------------------------------ |
| DecisionGame | (X,Y,R,S) → full 4-port | Policy — strategic choice |
| CovariantFunction | (X,Y) → forward only | Policy — observation transform |
| ContravariantFunction | (R,S) → backward only | Policy — utility transform |
| DeletionGame | (X,∅) → discard | Policy — information loss |
| DuplicationGame | (X, X×X) → broadcast | Policy — information copy |
| CounitGame | (X,∅,∅,X) → future conditioning | Policy — temporal reference |

All six map to `Policy`. None updates an Entity. Therefore `f` is empty and the mechanism layer is vacuous.

### The Spectrum of Canonical Dimensionality

Three domains now provide three distinct points on the canonical spectrum:

| Domain | \|X\| | \|f\| | \|g\| | Canonical Form | Interpretation |
| --- | --- | --- | --- | --- | --- |
| OGS (games) | 0 | 0 | all | `h = g` | Stateless — pure equilibrium computation |
| Control | n | n | sensors + controllers | `h = f ∘ g` | Full — observation, decision, state update |
| StockFlow | n | n | auxiliaries + flows | `h = f ∘ g` | State-dominant — accumulation dynamics |

This reveals that `h = f ∘ g` is not merely "a decomposition of dynamical systems." It is a **transition calculus** that gracefully degenerates:

- When `f = ∅`: the system is pure policy (games, decision logic, signal processing)
- When `g` is thin: the system is state-dominant (accumulation, diffusion)
- When both are substantial: the system is a full feedback dynamical system

The unifying abstraction is `(x, u) ↦ x'` with varying dimensionality of X. All three domains are specializations of this map.
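The degenerate end of the spectrum can be sketched directly in the canonical vocabulary: with `X = ∅` and `f = ∅`, a step is just a policy evaluation. The tit-for-tat-style rule below is a hypothetical illustration, not OGS API.

```python
def g(observation):
    # Pure policy: map an observation to a decision.
    # No state variable is read or written.
    return "defect" if observation["opponent_last"] == "defect" else "cooperate"

def h(observation):
    # With f = ∅ there is nothing to update: the transition map collapses to g.
    return g(observation)

h({"opponent_last": "cooperate"})  # -> "cooperate"
h({"opponent_last": "defect"})     # -> "defect"
```

Any "memory" (here, the opponent's last move) arrives as an observation flowing through the composition, mirroring how the History game carries past plays forward.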
### Structural Gap That Was Bridged OGS originally had no path to canonical: 1. OGS blocks subclassed `OpenGame(Block)`, not GDS roles (`Policy`/`Mechanism`/`BoundaryAction`) 1. OGS produced `PatternIR → SystemIR`, never `GDSSpec` 1. `project_canonical()` classifies blocks via `isinstance` against role classes The bridge (`compile_pattern_to_spec()`) resolves this by: - Mapping all atomic games to `Policy` blocks (preserving their GDS Interface) - Mapping `PatternInput` to `BoundaryAction` - Resolving flows via the existing compiler, then registering as `SpecWiring` This is a parallel path — `PatternIR` remains for OGS-specific tooling (reports, visualization, game-theoretic vocabulary). The bridge enables canonical projection without replacing the existing pipeline. ### Implication for PatternIR `PatternIR` is no longer required for semantic correctness. Its remaining justifications: 1. **Report generation** — Jinja2 templates reference `OpenGameIR` fields (game_type, signature as X/Y/R/S) 1. **Game-theoretic vocabulary** — `FlowType.OBSERVATION` vs `FlowType.UTILITY_COUTILITY` carries domain meaning 1. **Visualization** — Mermaid generators use game-specific metadata These are view-layer concerns (Layer 4). Whether to consolidate `PatternIR` into `GDSSpec` + metadata is a refactoring question, not a correctness question. The bridge proves they produce equivalent canonical results. ______________________________________________________________________ ## Research Question 4: Cross-Lens Analysis — When Equilibrium and Reachability Disagree ### Background With six DSLs compiling to GDSSpec, the framework now supports two independent analytical lenses on the same system: 1. **Game-theoretic lens** (via PatternIR) — equilibria, incentive compatibility, strategic structure, utility propagation 1. **Dynamical lens** (via GDSSpec/CanonicalGDS) — reachability, controllability, stability, state-space structure These lenses are orthogonal. 
Neither subsumes the other: - Game equilibrium does not imply dynamical stability (a Nash equilibrium can be an unstable fixed point) - Dynamical stability does not imply strategic optimality (a stable attractor can be Pareto-dominated) - Reachability does not imply incentive compatibility (a reachable state may require irrational agent behavior) ### The Question **When the two lenses disagree for a concrete system, what does that disagreement mean — and which lens, if either, should be treated as normative?** ### Why Neither Lens Can Be Normative If the game-theoretic lens is normative ("redesign dynamics to enforce equilibrium"), you assume the equilibrium concept is correct for the domain. But Nash equilibria can be dynamically unstable, Pareto-dominated, or unreachable from feasible initial conditions. If the dynamical lens is normative ("redesign incentives to force stability"), you assume the target attractor is desirable. But stable attractors can be socially inefficient or represent lock-in traps. ### The Architectural Answer GDS is a **diagnostic instrument**, not a normative engine. The framework's value is in surfacing the disagreement. When equilibrium and reachability conflict, that conflict is information: - "Your incentive design has unintended dynamical consequences" (equilibrium exists but is unreachable) - "Your dynamics have unintended strategic consequences" (stable point exists but is not an equilibrium) The modeler resolves the tension using domain knowledge. The framework provides the structured vocabulary to state the problem precisely. ### Implications for Architecture This means the two-lens architecture must remain genuinely parallel: ``` Pattern ├─ PatternIR → game-theoretic analysis (equilibria, incentives) └─ GDSSpec → dynamical analysis (reachability, stability) ``` Neither representation should absorb the other. 
If canonical were extended to encode equilibrium concepts, or if PatternIR were extended to encode reachability, the lenses would collapse and the diagnostic power would be lost.

The correct architectural move is to build **cross-lens queries** — analyses that take both representations as input and report on their (dis)agreement:

- "Is this Nash equilibrium a stable fixed point of the state dynamics?"
- "Is this stable attractor consistent with individual rationality?"
- "Does this reachable state satisfy incentive compatibility?"

These are research-level questions that require both lenses simultaneously.

### Connection to Timestep Semantics (RQ2)

Cross-lens disagreement can also arise from implicit timestep incompatibility. If the game-theoretic lens assumes simultaneous play but the dynamical lens assumes sequential evaluation, "equilibrium" and "stability" may refer to different execution models operating on the same structural specification.

This reinforces the RQ2 recommendation: temporal semantics must remain explicit and domain-local. Cross-lens analysis must verify that both lenses assume compatible execution semantics before comparing their conclusions.

### Trigger

This question becomes concrete when:

| Trigger | What It Reveals |
| -------------------------------------------------------- | ---------------------------------------------------------------------- |
| Building a game-theoretic + dynamical co-analysis tool | Whether the two lenses can be queried simultaneously |
| A concrete system where equilibrium ≠ stable fixed point | Whether the framework can express the disagreement |
| Mechanism design applications | Whether the framework supports prescriptive (not just descriptive) use |
| Lean/formal verification exports | Whether canonical's analytical lossiness causes proof gaps |

### Current Recommendation

Do not attempt to resolve the tension architecturally. Keep the lenses parallel.
Build cross-lens analysis as a separate concern that consumes both representations. The framework's role is to make the question askable, not to answer it. ______________________________________________________________________ ## Strategic Assessment These questions mark the boundary between: - **Structural compositional modeling** — validated by six DSLs, canonical proven stable - **Dynamical execution and control-theoretic analysis** — the next frontier They are the first genuine architectural fork points after validating canonical centrality. ### What This Means for Development Priority Neither question requires immediate resolution. Both are triggered by concrete future work: | Trigger | Research Question Activated | | ------------------------------------------------------------- | --------------------------- | | Building a structural controllability analyzer | RQ1 (MIMO semantics) | | Building a shared simulation harness | RQ2 (timestep semantics) | | Adding a continuous-time DSL | RQ1 + RQ2 | | Adding a hybrid systems DSL | RQ1 + RQ2 | | Extracting state-space matrices (A, B, C, D) | RQ1 | | Consolidating OGS PatternIR into GDSSpec | RQ3 (refactoring decision) | | Adding a stateless DSL (signal processing, Bayesian networks) | RQ3 (validates X=∅ pattern) | Until one of these triggers occurs, the current architecture is complete and correct for its stated purpose: structural compositional modeling with formal verification and canonical decomposition. ### The Stability Claim After three independent domains with three distinct canonical profiles (`h = g`, `h = f ∘ g` full, `h = f ∘ g` state-dominant): - The composition algebra (Layer 0) is validated and should not change. - The canonical projection (`h = f ∘ g`) is correctly minimal — and gracefully degenerates when `f = ∅`. - The role system (Boundary, Policy, Mechanism) covers all three domains without `ControlAction`. - The type/space system handles semantic separation across all three domains. 
- The temporal loop pattern is structurally uniform and semantically adequate for structural modeling. - Cross-built equivalence holds at Spec, Canonical, and SystemIR levels for all validated DSLs. The canonical form `(x, u) ↦ x'` with varying dimensionality of X now functions as a **unified transition calculus** — not merely a decomposition of dynamical systems, but a typed algebra of transition structure that absorbs stateless (games), stateful (control), and state-dominant (stockflow) formalisms under one composition substrate. Further DSLs (signal processing, compartmental models, queueing networks) should compile to this same substrate without architectural changes. If they don't, that is a signal that the boundary has been reached — not that the architecture needs extension. # Paper vs. Implementation: Gap Analysis > Systematic comparison of the GDS software implementation against Zargham & Shorish (2022), *Generalized Dynamical Systems Part I: Foundations* (DOI: 10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc). > > Purpose: identify what the software faithfully implements, what it extends beyond the paper, and what paper concepts remain unimplemented. Concludes with a concrete bridge proposal. ______________________________________________________________________ ## 1. Core Object Mapping ### 1.1 Faithfully Implemented | Paper (Section 2) | Notation | Software | Notes | | ------------------------------ | ------------------------------------------------ | --------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | | State Space (Def 2.1) | X | `Entity` + `StateVariable`; X = product of all entity variables | Product structure is explicit | | State (Def 2.2) | x in X | Dict of entity -> variable -> value | At runtime only (gds-sim) | | Trajectory (Def 2.3) | x_0, x_1, ... 
| gds-sim trajectory execution | Deferred to simulation package |
| Input Space (Def 2.4) | U (paper) / Z (codebase) | `BoundaryAction.forward_out` ports | Structural only. Codebase uses Z for exogenous signals to avoid conflation with the paper's u (selected action). |
| Input (Def 2.4) | u in U (paper) / z in Z (codebase) | Signal on boundary port | At runtime only |
| State Update Map (Def 2.6) | f : X x U_x -> X | `Mechanism` blocks with `updates` field | Structural skeleton only -- f_struct (which entity/variable) is captured, f_behav (the function) is not stored |
| Input Map (Def 2.8) | g : X -> U_x (paper) / g : X x Z -> D (codebase) | `Policy` blocks | Same: structural identity only. Codebase interposes explicit decision space D and exogenous signals Z. |
| State Transition Map (Def 2.9) | h = f\|\_x . g | `project_canonical()` computes formula | |
| GDS (Def 2.10) | {h, X} | `CanonicalGDS` dataclass | Faithful for structural identity |

### 1.2 Structurally Implemented (Steps 1-2 of Bridge Proposal)

| Paper (Section 2) | Notation | Software | Notes |
| --- | --- | --- | --- |
| Admissible Input Space (Def 2.5) | U_x subset U | `AdmissibleInputConstraint` — dependency graph (which state variables constrain which inputs) | Structural skeleton (R1). The actual constraint predicate is R3/lossy, same as TypeDef.constraint. SC-008 validates references. |
| Restricted State Update Map (Def 2.7) | f\|\_x : U_x -> X | `TransitionSignature` — read dependencies (which state variables a mechanism reads) | |

### 1.3 Not Implemented

| Paper (Section 2-4) | Notation | What It Does | Why It Matters |
| --- | --- | --- | --- |
| Admissible Input Map (Def 2.5) | U : X -> P(U) | The actual function computing the admissible input set | R3 — requires runtime evaluation. The structural dependency graph is captured (see 1.2), but the computation is not. |
| Metric on State Space (Asm 3.2) | d_X : X x X -> R | Distance between states | Required for contingent derivative, reachability rate |
| Attainability Correspondence (Def 3.1) | F : X x R+ x R+ => X | Set of states reachable at time t from (x_0, t_0) | Foundation for reachability and controllability |
| Contingent Derivative (Def 3.3) | D'F(x_0, t_0, t) | Generalized rate of change (set-valued) | Connects trajectories to input maps; enables existence proofs |
| Constraint Set (Asm 3.5) | C(x, t; g) subset X | Compact, convex set restricting contingent derivatives | Required for Theorem 3.6 (existence of solutions) |
| Existence of Solutions (Thm 3.6) | D'F = C(x_0, t_0) | Conditions under which a trajectory exists | The paper's core analytical result |
| Reachable Set (Def 4.1) | R(x) = union{f(x,u)} | Immediately reachable states from x | Foundation for configuration space and controllability |
| Configuration Space (Def 4.2) | X_C subset X | Mutually reachable connected component | Characterizes the "live" portion of state space |
| Local Controllability (Thm 4.4) | 0-controllable from eta_0 | Conditions for steering to equilibrium | Engineering design guarantee |
| Observability / Design Invariants (Sec 4.4) | P(x_i) = TRUE | Properties that should hold along trajectories | Design verification (invariant checking) |

### 1.4 Software Extensions Beyond the Paper

| Software Concept | Purpose | Paper Status |
| --- | --- | --- |
| Composition algebra (`>>`, `\|`, `.feedback()`, `.loop()`) | Build h from typed, composable blocks | No counterpart |
| Bidirectional interfaces (F_in, F_out, B_in, B_out) | Contravariant/covariant flow directions | Paper has unidirectional f, g |
| Four-role partition (Boundary, Policy, Mechanism, ControlAction) | Typed block classification | Paper has monolithic f and g |
| Token-based type system | Structural auto-wiring via port name matching | No counterpart |
| Parameters Theta (ParameterDef, ParameterSchema) | Explicit configuration space | Paper alludes to "factors that change h" but never formalizes |
| Decision space D | Intermediate space between g and f | Paper's g maps X -> U_x directly |
| Verification checks (G-001..G-006, SC-001..SC-009) | Structural validation | Paper assumes well-formed h |
| Compiler pipeline (flatten -> wire -> hierarchy) | Build IR from composition tree | No counterpart |
| SystemIR intermediate representation | Flat, inspectable system graph | No counterpart |
| Domain DSLs (stockflow, control, games, software, business) | Domain-specific compilation to GDS | No counterpart |
| Spaces (typed product spaces) | Signal spaces between blocks | Paper has U (input space) only |

______________________________________________________________________

## 2. Structural Divergences

### 2.1 The Canonical Form Signature

**Paper:**

```
g : X -> U_x           (input map: state -> admissible input)
f : X x U_x -> X       (state update: state x input -> next state)
h(x) = f(x, g(x))      (autonomous after g is fixed)
```

**Software:**

```
g : X x Z -> D         (policy: state x exogenous signals -> decisions)
f : X x D -> X         (mechanism: state x decisions -> next state)
h(x) = f(x, g(x, z))   (not autonomous -- exogenous Z remains)
```

Key differences:

1. **The software interposes a decision space D.** The paper's g selects directly from the input space U. The software's g maps to a separate decision space D, distinct from U. This adds an explicit "observation -> decision" decomposition that the paper leaves inside g.
1. **The software's h is not autonomous.** The paper's h(x) = f(x, g(x)) is a function of state alone -- once g is chosen, the system is autonomous. The software's canonical form retains the exogenous inputs Z, making h a function of both state and environment.
1. **Admissible input restriction is absent.** The paper's g maps to U_x (state-dependent admissible subset). The software's BoundaryAction produces inputs unconditionally -- U_x = U for all x.

### 2.2 The Role Decomposition

The paper has two maps (f and g) with no further internal structure. The software decomposes these into four block roles:

```
Paper g --> Software BoundaryAction + ControlAction + Policy
Paper f --> Software Mechanism
```

The ControlAction role (endogenous observation) has no paper analog. It represents an internal decomposition of the paper's g into an observation stage feeding a decision stage. This is an engineering design, not a mathematical distinction from the paper.

### 2.3 What the Paper Assumes That the Software Must Build

The paper says "let h : X -> X be a state transition map" and proceeds to analyze its properties.
The software must answer: *how do you construct h from components?* The entire composition algebra -- the block tree, the operators, the compiler, the wiring system, the token-based type matching -- is the software's answer to this question. It is the primary contribution of the implementation relative to the paper. ______________________________________________________________________ ## 3. Analytical Machinery Gap The paper devotes approximately 40% of its content (Sections 3-4) to analytical machinery that the software does not implement. This machinery falls into two categories: ### 3.1 Contingent Representation (Paper Section 3) **What it provides:** Given a GDS {h, X} and initial conditions (x_0, t_0), the contingent derivative D'F characterizes the set of directions in which the system can evolve. Under regularity conditions (Assumption 3.5: constraint set C is compact, convex, continuous), Theorem 3.6 guarantees the existence of a solution trajectory. **Why it matters:** This is the paper's mechanism for proving that a GDS specification is *realizable* -- that trajectories satisfying the constraints actually exist. Without it, a GDSSpec is a structural blueprint with no guarantee that it corresponds to a well-defined dynamical process. **Software status:** Entirely absent. The software can verify structural invariants (all variables updated, no cycles, references valid) but cannot determine whether the specified dynamics admit a trajectory. ### 3.2 Differential Inclusion Representation (Paper Section 4) **What it provides:** The reachable set R(x) = union{f(x,u) : u in U_x} defines what's immediately reachable from x. The configuration space X_C is the mutually reachable connected component. Local controllability (Theorem 4.4) gives conditions under which the system can be steered to equilibrium. **Why it matters:** These are the tools for engineering design verification: - Can the system reach a desired operating point? 
- Can it be steered back after perturbation? - Is the reachable set consistent with safety constraints? **Software status:** SC-003 (reachability) checks structural graph reachability ("can signals propagate from block A to block B through wirings"). This is topological, not dynamical. The paper's reachability asks: "given concrete state x, which states x' can the system reach under some input sequence?" -- a fundamentally different question. ### 3.3 The Remaining Gap The structural skeletons of U_x and f|\_x are now captured (AdmissibleInputConstraint and TransitionSignature). What remains is the analytical machinery that *uses* these structures: the metric on X, the reachable set R(x), the configuration space X_C, and the contingent derivative. These require runtime evaluation of f and g — they are behavioral (R3), not structural. ______________________________________________________________________ ## 4. Bridge Proposal The gap between paper and implementation can be bridged incrementally. Each step adds analytical capability while preserving the existing structural core. Steps are ordered by dependency and increasing difficulty. ### Step 1: Admissible Input Map U_x -- IMPLEMENTED **What:** State-dependent input constraints on the specification. **Paper reference:** Definition 2.5 -- U : X -> P(U). 
**Implementation:** `gds.AdmissibleInputConstraint` (frozen Pydantic model in `gds/constraints.py`): ``` from gds import AdmissibleInputConstraint spec.register_admissibility( AdmissibleInputConstraint( name="balance_limit", boundary_block="market_order", depends_on=[("agent", "balance")], constraint=lambda state, u: u["quantity"] <= state["agent"]["balance"], description="Cannot sell more than owned balance" ) ) ``` **What was delivered:** - SC-008 (`check_admissibility_references`): validates boundary block exists, is a BoundaryAction, depends_on references valid (entity, variable) pairs - `CanonicalGDS.admissibility_map`: populated by `project_canonical()` - `SpecQuery.admissibility_dependency_map()`: boundary -> state variable deps - OWL export/import with BNode-based tuple reification for depends_on - SHACL shapes for structural validation - Round-trip test (constraint callable is lossy, structural fields preserved) - Keyed by `name` (not `boundary_block`) to allow multiple constraints per BoundaryAction **Structural vs. behavioral split:** - U_x_struct: the dependency relation (boundary -> state variables) -- R1 - U_x_behav: the actual constraint function -- R3 (same as TypeDef.constraint) ### Step 2: Restricted State Update Map f|\_x -- IMPLEMENTED **What:** Mechanism read dependencies (which state variables a mechanism reads). **Paper reference:** Definition 2.7 -- f|\_x : U_x -> X. 
**Implementation:** `gds.TransitionSignature` (frozen Pydantic model in `gds/constraints.py`): ``` from gds import TransitionSignature spec.register_transition_signature( TransitionSignature( mechanism="Heater", reads=[("Room", "temperature"), ("Environment", "outdoor_temp")], depends_on_blocks=["Controller"], preserves_invariant="energy conservation" ) ) ``` **What was delivered:** - SC-009 (`check_transition_reads`): validates mechanism exists, is a Mechanism, reads references valid (entity, variable) pairs, depends_on_blocks references registered blocks - `CanonicalGDS.read_map`: populated by `project_canonical()` - `SpecQuery.mechanism_read_map()`, `SpecQuery.variable_readers()` - OWL export/import with BNode-based tuple reification for reads - SHACL shapes for structural validation - Round-trip test (structural fields preserved) - `writes` deliberately omitted -- `Mechanism.updates` already tracks those - One signature per mechanism (intentional simplification) ### Step 3: Metric on State Space **What:** Equip X with a distance function, enabling the notion of "how far" states are from each other. **Paper reference:** Assumption 3.2 -- d_X : X x X -> R, a metric. **Implemented** in `gds-framework` (`constraints.py`): ``` class StateMetric(BaseModel, frozen=True): name: str variables: list[tuple[str, str]] # (entity, variable) pairs metric_type: str = "" # annotation: "euclidean", etc. distance: Callable[[Any, Any], float] | None = None # R3 lossy description: str = "" ``` Runtime analysis in `gds-analysis` (`metrics.py`): ``` trajectory_distances(spec, trajectory, metric_name=None) -> dict[str, list[float]] ``` **Impact:** - Enables Delta_x = d_X(x+, x) -- rate of change between successive states - Foundation for reachable set computation (Step 5) - Foundation for contingent derivative (Step 6) **Prerequisite:** Runtime state representation (gds-sim integration). **Structural vs. behavioral:** The metric itself is R3 (arbitrary callable). 
The declaration that "these state variables participate in the metric" is R1. The metric's properties (e.g., "Euclidean", "Hamming") could be R2 if annotated as metadata.

### Step 4: Reachable Set R(x)

**What:** Given a concrete state x and the state update map f, compute the set of immediately reachable next states.

**Paper reference:** Definition 4.1 -- R(x) = union\_{u in U_x} {f(x, u)}.

**Implemented** in `gds-analysis` (`reachability.py`):

```
def reachable_set(
    model: Model,
    state: dict[str, Any],
    *,
    input_samples: list[dict[str, Any]],
    state_key: str | None = None,
    exhaustive: bool = False,
    float_tolerance: float | None = None,
) -> ReachabilityResult:
```

For discrete input spaces, pass exhaustive samples with `exhaustive=True`. For continuous spaces, results are approximate (no coverage guarantee). Returns `ReachabilityResult` with `states`, `n_samples`, `n_distinct`, `is_exhaustive` metadata.

### Step 5: Configuration Space X_C

**What:** The mutually reachable connected component of the state space -- the set of states from which any other state in X_C is reachable.

**Paper reference:** Definition 4.2 -- X_C subset X such that for each pair x_0, x in X_C, there exists a reachable sequence from x_0 to x.

**Implemented** in `gds-analysis` (`reachability.py`):

```
def reachable_graph(model, initial_states, *, input_samples, max_depth, ...) -> dict

def configuration_space(graph: dict) -> list[set]:
    """Iterative Tarjan SCC. Returns SCCs sorted by size (largest = X_C)."""
```

`reachable_graph` builds the adjacency dict via BFS; `configuration_space` finds strongly connected components. For discrete systems with exhaustive input samples, the result is exact.

**Impact:**

- Answers "is the target state reachable from the initial condition?"
- Identifies disconnected components (states the system can never reach)
- Foundation for controllability analysis

**Prerequisite:** Step 4 (reachable set).
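The SCC computation behind `configuration_space()` can be illustrated without the gds-analysis API. A minimal sketch, using Kosaraju's algorithm in place of the package's iterative Tarjan, on an illustrative three-state transition graph:

```python
def _postorder(graph, start, visited, out):
    """Iterative DFS that appends nodes in finishing order."""
    stack = [(start, iter(graph.get(start, ())))]
    visited.add(start)
    while stack:
        node, it = stack[-1]
        nxt = next((w for w in it if w not in visited), None)
        if nxt is None:
            stack.pop()
            out.append(node)
        else:
            visited.add(nxt)
            stack.append((nxt, iter(graph.get(nxt, ()))))

def strongly_connected_components(graph):
    """Kosaraju SCC over a {state: [next_states]} adjacency dict.
    Largest component first -- the analogue of X_C."""
    visited, order = set(), []
    for v in graph:
        if v not in visited:
            _postorder(graph, v, visited, order)
    transpose = {}
    for v, ws in graph.items():
        transpose.setdefault(v, [])
        for w in ws:
            transpose.setdefault(w, []).append(v)
    components, seen = [], set()
    for v in reversed(order):
        if v not in seen:
            comp = []
            _postorder(transpose, v, seen, comp)
            components.append(set(comp))
    return sorted(components, key=len, reverse=True)

# "A" and "B" are mutually reachable; "C" is an absorbing state.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["C"]}
components = strongly_connected_components(graph)
# components[0] is the largest mutually reachable set: {"A", "B"}
```

With exhaustive input samples on a discrete system, the partition this produces is exact, matching the claim above; the package's iterative Tarjan variant computes the same partition in a single pass.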
### Step 6: Contingent Derivative (Research Frontier) **What:** The generalized derivative that characterizes the set of directions in which the system can evolve, given constraints. **Paper reference:** Definition 3.3, Theorem 3.6. **Why this is hard:** The contingent derivative requires: 1. A metric on X (Step 3) 1. The attainability correspondence F (requires iterating R(x) over time) 1. Convergence analysis (sequences converging to x_0 with rate limit) 1. The constraint set C(x, t; g) to be compact, convex, continuous This is the paper's deepest analytical contribution and the hardest to implement. It may be more appropriate as a separate analytical package (e.g., gds-analysis) rather than part of gds-framework. **Concrete approach:** For the special case of discrete-time systems with finite input spaces: - The contingent derivative reduces to the set of finite differences Delta_x / Delta_t for all admissible transitions - Compactness of C is automatic (finite set) - Convexity may or may not hold (depends on the transitions) - Continuity must be checked numerically For continuous state spaces: - Requires interval arithmetic or symbolic differentiation - Significantly harder; likely requires external tools (e.g., sympy, scipy, or a dedicated reachability library like CORA or JuliaReach) **Prerequisite:** Steps 3-5. Likely a separate package. ### Step 7: Controllability Analysis (Research Frontier) **What:** Conditions under which the system can be steered from any state in a neighborhood to a target state. **Paper reference:** Theorem 4.4 (local controllability). **Requires:** - Reachable set R(x) with metric (Steps 3-4) - The boundary mapping partial_R to be Lipschitzian (numerical check) - Closed, convex values (property of R) - A controllable closed, strictly convex reachable process near target This is the most advanced analytical capability in the paper and would represent a significant research contribution if implemented. 
It connects GDS to classical control theory results (controllability, observability) in a generalized setting.

**Suggested approach:** Start with the linear case (where controllability reduces to rank conditions on matrices) and generalize incrementally. This connects directly to RQ1 (MIMO semantics) in research-boundaries.md.

______________________________________________________________________

## 5. Dependency Graph

```
Step 1: AdmissibleInputConstraint (U_x declaration)  -- DONE (gds-framework)
Step 2: TransitionSignature (f|_x declaration)       -- DONE (gds-framework)
   |
   v
Step 3: StateMetric (d_X on X)                       -- DONE (gds-framework)
   |
   v
Step 4: Reachable Set R(x)                           -- DONE (gds-analysis)
   |
   v
Step 5: Configuration Space X_C                      -- DONE (gds-analysis)
   |
   v
Step 6: Contingent Derivative D'F                    -- research frontier
Step 7: Local Controllability                        -- research frontier
```

Steps 1-3 are structural annotations in gds-framework with full OWL/SHACL support in gds-owl. Steps 4-5 are runtime analysis in gds-analysis, which bridges gds-framework to gds-sim. Steps 6-7 are research-level and may require external tooling (SymPy, JuliaReach).

______________________________________________________________________

## 6. Package Placement

| Step | Where | Status |
| ------------------- | ------------------------------------------------------ | -------------------------------------------------------------- |
| 1 (U_x) | gds-framework (constraints.py, spec.py) | **Done** — SC-008, OWL, SHACL |
| 2 (f\|_x signature) | gds-framework (constraints.py, spec.py, canonical.py) | **Done** — SC-009, OWL, SHACL |
| 3 (metric) | gds-framework (constraints.py, spec.py) | **Done** — StateMetric, OWL, SHACL |
| 4 (R(x)) | gds-analysis (reachability.py) | **Done** — `reachable_set()`, `ReachabilityResult` |
| 5 (X_C) | gds-analysis (reachability.py) | **Done** — `configuration_space()` (iterative Tarjan SCC) |
| 5b (B(T)) | gds-analysis (backward_reachability.py) | **Done** — `backward_reachable_set()` + `extract_isochrones()` |
| 6 (D'F) | gds-analysis (future) | Research frontier |
| 7 (controllability) | gds-analysis (future) | Research frontier |

The `gds-analysis` package depends on both `gds-framework` (for GDSSpec, CanonicalGDS) and `gds-sim` (for state evaluation), sitting at the top of the dependency graph:

```
gds-framework <-- gds-sim <-- gds-analysis
      ^                            |
      |                            |
      +----------------------------+
```

______________________________________________________________________

## 7. What Does Not Need Bridging

Some paper concepts are intentionally absent for good architectural reasons:

1. ~~**Continuous-time dynamics (xdot = f(x(t)))**~~ -- **Now implemented.** The `gds-continuous` package provides an ODE engine (RK45, Radau, BDF, etc.) with event detection and parameter sweeps. `gds-analysis` uses it for backward reachability. `gds-symbolic` compiles symbolic ODEs to `gds-continuous` callables via lambdify. The discrete/continuous split is bridged, not absent.
1. **The full attainability correspondence F as an axiomatic foundation** -- The paper notes (Section 3.2) that Roxin's original work defined GDS via the attainability correspondence. The software defines GDS via the composition algebra instead.
These are equivalent starting points that lead to different tooling. 1. **Convexity requirements on C(x, t; g)** -- The paper's existence theorem (Theorem 3.6) requires convexity. Many real systems (discrete decisions, combinatorial action spaces) violate this. The software should not impose convexity as a requirement -- it should report when convexity holds (as metadata) and note when existence theorems apply. # Semantic Web Integration: What We Learned A team summary of GDS + OWL/SHACL/SPARQL integration via `gds-owl`. ## The Short Version We can export **85% of a GDS specification** to Turtle/RDF files and import it back losslessly. The 15% we lose is Python callables (transition functions, constraint predicates, distance functions). This is a mathematical certainty, not a gap we can close. ## What Gets Exported (R1 -- Fully Representable) Everything structural round-trips perfectly through Turtle: | GDS Concept | RDF Representation | Validated By | | -------------------------------------------- | ----------------------------- | ----------------- | | Block names, roles, interfaces | OWL classes + properties | SHACL shapes | | Port names and type tokens | Literals on Port nodes | SHACL datatype | | Wiring topology (who connects to whom) | Wire nodes with source/target | SHACL cardinality | | Entity/StateVariable declarations | Entity + StateVariable nodes | SHACL | | TypeDef (name, python_type, units) | TypeDef node + properties | SHACL | | Space fields | SpaceField blank nodes | SHACL | | Parameter schema (names, types, bounds) | ParameterDef nodes | SHACL | | Mechanism update targets (what writes where) | UpdateMapEntry nodes | SHACL | | Admissibility dependencies (what reads what) | AdmissibilityDep nodes | SHACL | | Transition read dependencies | TransitionReadEntry nodes | SHACL | | State metric variable declarations | MetricVariableEntry nodes | SHACL | | Canonical decomposition (h = f . 
g) | CanonicalGDS node | SHACL | | Verification findings | Finding nodes | SHACL | **13 SHACL shapes** enforce structural correctness on the RDF graph. **7 SPARQL query templates** enable cross-node analysis (blocks by role, dependency paths, entity update maps, parameter impact, verification summaries). ## What Requires SPARQL (R2 -- Structurally Representable) Some properties can't be checked by SHACL alone (which validates individual nodes) but CAN be checked by SPARQL queries over the full graph: | Property | SPARQL Feature | Why SHACL Can't | | ------------------------ | ------------------------- | ------------------------------------ | | Acyclicity (G-006) | Transitive closure (`p+`) | No path traversal in SHACL-core | | Completeness (SC-001) | `FILTER NOT EXISTS` | No "for all X, exists Y" | | Determinism (SC-002) | `GROUP BY` + `HAVING` | No cross-node aggregation | | Dangling wirings (G-004) | `FILTER NOT EXISTS` | Name existence, not class membership | These all terminate (SPARQL over finite graphs always does) and are decidable. ## What Cannot Be Exported (R3 -- Not Representable) These are **fundamentally** non-exportable. Not a tooling gap -- a mathematical impossibility (Rice's theorem for callables, computational class separation for string processing): | GDS Concept | Why R3 | What Happens on Export | | ---------------------------------------------- | ------------------------------- | ------------------------------------------------------------ | | `TypeDef.constraint` (e.g. 
`lambda x: x >= 0`) | Arbitrary Python callable | Exported as boolean flag `hasConstraint`; imported as `None` | | `f_behav` (transition functions) | Arbitrary computation | Not stored in GDSSpec -- user responsibility | | `AdmissibleInputConstraint.constraint` | Arbitrary callable | Exported as boolean flag; imported as `None` | | `StateMetric.distance` | Arbitrary callable | Exported as boolean flag; imported as `None` | | Auto-wiring token computation | Multi-pass string processing | Results exported (WiringIR edges); process is not | | Construction validation | Python `@model_validator` logic | Structural result preserved; validation logic is not | **Key insight:** The *results* of R3 computation are always R1. Auto-wiring produces WiringIR edges (R1). Validation produces pass/fail (R1). Only the *process* is lost. ## The Boundary in One Sentence > **You can represent everything about a system except what its programs actually do.** The canonical decomposition `h = f . g` makes this boundary explicit: `g` (topology) and `f_struct` (update targets) are fully representable; `f_behav` (how state actually changes) is not. ## Practical Implications ### What You Can Do With the Turtle Export 1. **Share specs between tools** -- any RDF-aware tool (Protege, GraphDB, Neo4j via neosemantics) can import a GDS spec 1. **Validate specs without Python** -- SHACL processors (TopBraid, pySHACL) can check structural correctness 1. **Query specs with SPARQL** -- find all mechanisms that update a given entity, trace dependency paths, check acyclicity 1. **Version and diff specs** -- Turtle is text, diffs are meaningful 1. **Cross-ecosystem interop** -- other OWL ontologies can reference GDS classes/properties ### What You Cannot Do 1. **Run simulations from Turtle** -- you need the Python callables back 1. **Verify behavioral properties** -- "does this mechanism converge?" requires executing `f_behav` 1. 
**Reproduce auto-wiring** -- the token overlap computation can't run in SPARQL ### Round-Trip Fidelity Tested with property-based testing (Hypothesis): 100 random GDSSpecs generated, exported to Turtle, parsed back, reimported. All structural fields survive. Known lossy fields: - `TypeDef.constraint` -> `None` - `TypeDef.python_type` -> falls back to `str` for non-builtin types - `AdmissibleInputConstraint.constraint` -> `None` - `StateMetric.distance` -> `None` - Port/wire ordering -> set-based (RDF is unordered) - Blank node identity -> content-based comparison, not node ID ## Numbers | Metric | Count | | ----------------------------------------- | ------- | | R1 concepts (fully representable) | 13 | | R2 concepts (SPARQL-needed) | 3 | | R3 concepts (not representable) | 7 | | SHACL shapes | 18 | | SPARQL templates | 7 | | Verification checks expressible in SHACL | 6 of 15 | | Verification checks expressible in SPARQL | 6 more | | Checks requiring Python | 2 of 15 | | Round-trip PBT tests | 26 | | Random specs tested | ~2,600 | ## Paper Alignment The structural/behavioral split is a **framework design choice**, not a paper requirement. The GDS paper (Zargham & Shorish 2022) defines `U: X -> P(U)` as a single map; we split it into `U_struct` (dependency graph, R1) and `U_behav` (constraint predicate, R3) for ontological engineering. Same for `StateMetric` and `TransitionSignature`. The canonical decomposition `h = f . g` IS faithful to the paper. ## Open Question: Promoting Common Constraints to R2 Zargham's feedback: *"We can probably classify them as two different kinds of predicates -- those associated with the model structure (owl/shacl/sparql) and those associated with the runtime."* Currently all `TypeDef.constraint` callables are treated as R3 (lossy). 
But many common constraints ARE expressible in SHACL: - `lambda x: x >= 0` --> `sh:minInclusive 0` - `lambda x: 0 <= x <= 1` --> `sh:minInclusive 0` + `sh:maxInclusive 1` - `lambda x: x in {-1, 0, 1}` --> `sh:in (-1 0 1)` A constraint classifier could promote these from R3 to R2, making them round-trippable through Turtle. The general case (arbitrary callable) remains R3. See #152 for the design proposal. ## Files - `packages/gds-owl/` -- the full export/import/SHACL/SPARQL implementation - `docs/research/formal-representability.md` -- the 800-line formal analysis - `docs/research/verification/r3-undecidability.md` -- proofs for the R3 boundary - `docs/research/verification/representability-proof.md` -- R1/R2 decidability + partition independence # Representability Analysis: GDS in OWL/SHACL/SPARQL A design rationale document classifying which GDS concepts can and cannot be represented in semantic web formalisms, grounded in the compositionality-temporality boundary and the canonical decomposition h = f ∘ g. ______________________________________________________________________ ## Overview ### The representation boundary is h = f ∘ g The GDS canonical decomposition h = f ∘ g is not just mathematical notation — it is the exact line where formal representation changes character: - **g** (policy mapping): which blocks connect to what, in what roles, through what wires. Fully representable — by design, GDSSpec stores no behavioral content here. - **f_struct** (update map): "Mechanism M updates Entity E variable V." Fully representable — a finite relation. - **f_behav** (transition function): "Given state x and decisions d, compute new state x'." Not representable — arbitrary computation. Everything to the left of f_behav is topology. Everything at f_behav and beyond is computation. OWL/SHACL/SPARQL live on the topology side. Python lives on both sides. 
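The cut between topology and computation can be seen in miniature. In this sketch, plain dicts stand in for GDSSpec objects; `gds-core:updatesEntry` is the property name used elsewhere in this document, while `gds-core:hasTransition` is a hypothetical flag invented here for illustration, not gds-owl vocabulary:

```python
# Illustrative only: a mechanism as a plain dict, not the framework's model.
mechanism = {
    "name": "Heater",
    "updates": [("Room", "temperature")],  # f_struct: a finite, exportable relation
    "transition": lambda state, d: {**state, "temperature": state["temperature"] + d},  # f_behav
}

def export_structural_triples(mech):
    """Emit (subject, predicate, object) triples for everything except callables."""
    triples = [(mech["name"], "rdf:type", "gds-core:Mechanism")]
    for entity, variable in mech["updates"]:
        triples.append((mech["name"], "gds-core:updatesEntry", f"{entity}.{variable}"))
    # The callable itself cannot cross the boundary; only its existence can.
    triples.append((mech["name"], "gds-core:hasTransition", mech["transition"] is not None))
    return triples
```

Everything on the `updates` side survives a round-trip; the lambda comes back as `None`, the same loss pattern described for `TypeDef.constraint`.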
### Composition: structure preserved, process lost

The four composition operators (>>, |, feedback, loop) and their resulting trees survive OWL round-trip perfectly. You can reconstruct exactly how a system was assembled.

The *process* of composition — auto-wiring via token overlap, port matching, construction-time validation — requires Python string computation that SPARQL cannot replicate. But this gap is **moot in practice**: gds-owl materializes both the tokens (as `typeToken` literals) and the wired connections (as explicit `WiringIR` edges) during export. The RDF consumer never needs to recompute what Python already computed.

This reveals a general design principle: **materialize computation results as data before export, and the representation gap closes for practical purposes.**

### Temporality: structure preserved, semantics lost

A `TemporalLoop` (physical state at t feeds sensors at t+1) and a `CorecursiveLoop` (decisions at round t feed observations at round t+1) have identical RDF representation: covariant wiring from inner.forward_out to inner.forward_in with an exit_condition string. OWL captures "there is a loop" but not "what kind of time this loop means." The interpretation requires knowing which DSL compiled the system — that is metadata (preserved via `gds-ir:sourceLabel`), not topology.

State evolution itself — computing x\_{t+1} = f(x_t, g(x_t, u_t)) — is fundamentally not representable. You need a runtime, period.
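To make "you need a runtime" concrete, here is a minimal discrete-time sketch: a toy thermostat in which all names and constants are illustrative, not the gds-sim API:

```python
def g(x, u):
    """Policy: map state + exogenous input to a decision (heater on/off)."""
    return 1.0 if x["temperature"] < u["setpoint"] else 0.0

def f(x, d):
    """Mechanism: apply the decision, with a constant heat-loss term."""
    return {"temperature": x["temperature"] + 0.5 * d - 0.1}

def evolve(x0, inputs):
    """x_{t+1} = f(x_t, g(x_t, u_t)): the step no RDF export can replay."""
    xs = [x0]
    for u in inputs:
        xs.append(f(xs[-1], g(xs[-1], u)))
    return xs

traj = evolve({"temperature": 18.0}, [{"setpoint": 20.0}] * 3)
# traj climbs 18.0 -> 18.4 -> 18.8 -> 19.2 while below the setpoint
```

The wiring (g feeds f, f feeds the next step) is fully exportable structure; the three lines of arithmetic inside `f` and `g` are exactly the f_behav content that is not.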
### The data/computation duality The same pattern recurs at every level of GDS: | Data (representable) | Computation (not representable) | | ----------------------------------------------- | ------------------------------------------------------- | | Token sets on ports | The `tokenize()` function that produces them | | Wired connections | The auto-wiring process that discovers them | | Constraint bounds (0 \<= x \<= 1) | Arbitrary `Callable[[Any], bool]` constraints | | Update map (M updates E.V) | Transition function (how V changes) | | Read map (M reads E.V) | Actual data flow at runtime | | Admissibility deps (B depends on E.V) | Admissibility predicate (is input legal given state?) | | Equilibrium structure (which games compose how) | Equilibrium computation (finding Nash equilibria) | | Wiring graph topology (can A reach B?) | Signal propagation (does A's output actually affect B?) | If you materialize computation results as data before crossing the boundary, the gap shrinks to what genuinely requires a runtime: simulation, constraint evaluation, and equilibrium solving. ### The validation stack The three semantic web formalisms serve architecturally distinct roles — a validation stack, not a containment chain: | Layer | Formalism | Role | Example | | ----------------- | ---------- | ---------------------------------- | ----------------------------------------------------------- | | Vocabulary | OWL | Defines what things *are* | "A Mechanism is a kind of AtomicBlock" | | Local constraints | SHACL-core | Validates individual nodes | "Every Mechanism must update >= 1 state variable" | | Graph patterns | SPARQL | Validates cross-node relationships | "No two mechanisms update the same (entity, variable) pair" | | Computation | Python | Evaluates functions, evolves state | "Given x=0.5, f(x) = 0.7" | Each step adds expressiveness and loses decidability guarantees. The R1/R2/R3 tier system in this document maps directly onto this stack. 
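The SPARQL row of the validation stack ("no two mechanisms update the same (entity, variable) pair", check SC-002) is concrete enough to mimic in a few lines. A pure-Python analogue of the SPARQL GROUP BY + HAVING pattern, run over illustrative update-map triples (the tuple encoding here is not the gds-owl serialization):

```python
from collections import Counter

# Illustrative exported triples: (mechanism, property, (entity, variable)).
triples = [
    ("Heater", "updatesEntry", ("Room", "temperature")),
    ("Cooler", "updatesEntry", ("Room", "temperature")),  # two writers: a violation
    ("Vent",   "updatesEntry", ("Room", "humidity")),
]

def determinism_violations(triples):
    """Find (entity, variable) pairs written by more than one mechanism.
    This is the cross-node aggregation SHACL-core lacks; in SPARQL it is
    GROUP BY ?target HAVING (COUNT(?mech) > 1)."""
    counts = Counter(obj for _, pred, obj in triples if pred == "updatesEntry")
    return {target for target, n in counts.items() if n > 1}
```

No node-local shape can express this, because the violation is a property of the pair of triples, not of either node alone.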
### Architectural consequences 1. **RDF is a viable structural interchange format.** Of 15 verification checks, 6 are SHACL-expressible, 6 more with SPARQL, only 2 genuinely need Python. The structural skeleton carries the vast majority of system information. 1. **Games are naturally ontological.** When h = g (no state, no f), the GDSSpec projection is lossless. Games are morphisms between spaces, not state machines. Game composition maps cleanly to OWL because it is all structure. 1. **Dynamical systems degrade gracefully.** Each mechanism contributes one representable fact (what it updates) and one non-representable fact (how it computes). The structural skeleton is always complete; what degrades is the fraction of total content it represents. 1. **The canonical form is architecturally load-bearing.** By separating "what connects to what" (g) from "what the connections compute" (f_behav), GDS provides a clean cut point for partial representation, cross-tool interop, and formal reasoning. The representability boundary is Rice's theorem applied to system specifications: you can represent everything about a system except what its programs actually do. The canonical decomposition h = f ∘ g makes this boundary explicit and exploitable. > **Paper alignment note.** Rice's theorem applies here because the *software implementation* uses arbitrary Python callables for f_behav. The paper's mathematical proofs (Theorem 3.6, existence) assume the constraint set is compact, convex, and continuous (Assumption 3.5) — a much more restricted class than Turing-complete programs. The R3 boundary reflects the implementation's scope, not the paper's mathematical scope. ______________________________________________________________________ ## 1. Preliminaries ### 1.1 GDS Formal Objects **Definition 1.1 (Composition Algebra).** The GDS composition algebra is a tuple (Block, >>, |, fb, loop) with operations inspired by symmetric monoidal categories with feedback. 
The operations satisfy the expected algebraic properties (associativity of >> and |, commutativity of |) by construction, but the full categorical axioms (interchange law, coherence conditions, traced monoidal structure for feedback) have not been formally verified. > **Paper alignment note.** The foundational paper (Zargham & Shorish 2022) defines GDS via standard function composition h(x) = f(x, g(x)) and does not mandate categorical structure. The paper explicitly contrasts ACT (Applied Category Theory) with GDS, noting ACT "can be difficult to implement computationally." The categorical semantics here are a *framework design choice* for compositionality, not a mathematical requirement of the paper's GDS definition. The components are: - **Objects** are Interfaces: I = (F_in, F_out, B_in, B_out), each a tuple of Ports - **Morphisms** are Blocks: typed components with bidirectional interfaces - **>>** (sequential): first ; second with token-overlap validation - **|** (parallel): left tensor right, no shared wires - **fb** (feedback): contravariant backward flow within evaluation - **loop** (temporal): covariant forward flow across temporal boundaries **Definition 1.2 (Token System).** Port names carry structural type information via tokenization: ``` tokenize : PortName -> P(Token) Split on {' + ', ', '}, then lowercase each part. 
"Temperature + Setpoint" |-> {"temperature", "setpoint"} "Heater Command" |-> {"heater command"} ``` Token overlap is the auto-wiring predicate: ``` compatible(p1, p2) := tokenize(p1.name) ∩ tokenize(p2.name) != empty ``` **Definition 1.3 (GDSSpec).** A specification is an 9-tuple: ``` S = (T, Sp, E, B, W, Theta, A, Sig) T : Name -> TypeDef (type registry) Sp : Name -> Space (typed product spaces) E : Name -> Entity (state holders with typed variables) B : Name -> Block (typed compositional blocks) W : Name -> SpecWiring (named compositions with explicit wires) Theta : Name -> ParameterDef (configuration space) A : Name -> AdmissibleInputConstraint (state-dependent input constraints) Sig : MechName -> TransitionSignature (mechanism read dependencies) ``` While presented as an 9-tuple, these components are cross-referencing: blocks reference types, wirings reference blocks, entities reference types, admissibility constraints reference boundary blocks and entity variables, transition signatures reference mechanisms and entity variables. GDSSpec is more precisely a labeled graph of registries with typed edges. 
**Definition 1.4 (Canonical Decomposition).** The projection pi : GDSSpec -> CanonicalGDS yields:

```
C = (X, U, D, Theta, g, f, h, A_deps, R_deps)

X = product_{(e,v) in E} TypeDef(e.variables[v])     state space
U = {(b, p) : b in B_boundary, p in b.forward_out}   exogenous signal space
D = {(b, p) : b in B_policy, p in b.forward_out}     decision space

g : X x U -> D                                       policy mapping
f : X x D -> X                                       state transition
h_theta : X -> X where h = f ∘ g                     composed transition

A_deps = {(name, {(e,v)}) : ac in A}                 admissibility dependencies
R_deps = {(mech, {(e,v)}) : sig in Sig}              mechanism read dependencies
```

**Definition 1.5 (Role Partition).** Blocks partition into disjoint roles:

```
B = B_boundary disjoint-union B_control disjoint-union B_policy disjoint-union B_mechanism

B_boundary  : forward_in = empty                  (exogenous input)
B_mechanism : backward_in = backward_out = empty  (state update)
B_policy    : no structural constraints           (decision logic)
B_control   : no structural constraints           (endogenous feedback)
```

**Definition 1.6 (TypeDef).** A type definition carries two levels:

```
TypeDef = (name, python_type, constraint, units)

python_type : type                   (language-level type object)
constraint  : Optional[Any -> bool]  (runtime validation predicate)
```

The constraint field admits arbitrary Callable — this is Turing-complete.

### 1.2 Semantic Web Formal Objects

**Definition 1.7 (OWL DL).** OWL DL is based on the description logic SROIQ(D). It provides class-level entailment under the **open-world assumption** (OWA): absence of a statement does not imply its negation.

- **Class declarations**: C, with subsumption C1 sqsubseteq C2
- **Object properties**: R : C1 -> C2 (binary relations between individuals)
- **Datatype properties**: R : C -> Literal (attributes with XSD types)
- **Restrictions**: cardinality (min/max), value constraints, disjointness

Key property: **every entailment query terminates** (decidable).
**Definition 1.8 (SHACL).** The Shapes Constraint Language validates RDF graphs against declared shapes under the **closed-world assumption** (CWA): the graph is taken as complete, and missing data counts as a violation. - **Node shapes**: target a class, constrain its properties - **Property shapes**: cardinality (sh:minCount, sh:maxCount), datatype, class membership - **SPARQL-based constraints**: sh:sparql embeds SELECT queries as validators SHACL is not a reasoning system — it validates data, not entailment. **Definition 1.9 (SPARQL 1.1).** A query language for pattern matching and aggregation over RDF graphs: - **Property paths**: transitive closure (p+), alternatives (p1|p2) - **Negation**: FILTER NOT EXISTS { pattern } - **Aggregation**: GROUP BY, HAVING, COUNT - **Graph patterns**: triple patterns with variables, OPTIONAL, UNION Key limitation: **no mutable state, no unbounded recursion, no string computation** beyond regex matching. **Remark 1.10 (Complementary formalisms).** OWL, SHACL, and SPARQL solve different problems under different assumptions: - OWL DL: class-level entailment (OWA, monotonic) - SHACL: graph shape validation (CWA, non-monotonic) - SPARQL: graph pattern queries with aggregation and negation They do not form a simple containment chain. However, for the specific purpose of **enforcing constraints on GDS-exported RDF graphs**, we distinguish three tiers of validation expressiveness: ``` SHACL-core (node/property shapes) < SPARQL graph patterns < Turing-complete ``` OWL defines the vocabulary (classes, properties, subsumption). SHACL-core — node shapes, property shapes with cardinality/datatype/class constraints, but *without* sh:sparql — validates individual nodes against local constraints. SPARQL graph patterns (standalone or embedded in SHACL via sh:sparql) can express cross-node patterns: negation-as-failure, transitive closure, aggregation. None can express arbitrary computation. 
This three-level ordering directly motivates the R1/R2/R3 tiers in Definition 2.2: R1 maps to OWL + SHACL-core, R2 maps to SPARQL, R3 exceeds all three formalisms. ______________________________________________________________________ ## 2. Representability Classification **Definition 2.1 (Representation Function).** Let rho be the mapping from GDS concepts to RDF graphs implemented by gds-owl's export functions: ``` rho_spec : GDSSpec -> Graph (spec_to_graph) rho_ir : SystemIR -> Graph (system_ir_to_graph) rho_can : CanonicalGDS -> Graph (canonical_to_graph) rho_ver : VerificationReport -> Graph (report_to_graph) ``` And rho^{-1} the inverse mapping (import functions): ``` rho^{-1}_spec : Graph -> GDSSpec (graph_to_spec) rho^{-1}_ir : Graph -> SystemIR (graph_to_system_ir) rho^{-1}_can : Graph -> CanonicalGDS (graph_to_canonical) rho^{-1}_ver : Graph -> VerificationReport (graph_to_report) ``` **Remark 2.1 (Bijectivity caveats).** rho is injective on structural fields but not surjective onto all possible RDF graphs (only well-formed GDS graphs are in the image). rho^{-1} is a left inverse on structural fields: rho^{-1}(rho(c)) =\_struct c. Three edge cases weaken strict bijectivity: 1. **Ordering**: RDF multi-valued properties are unordered. Port lists and wire lists may return in different order after round-trip. Equality is set-based, not sequence-based. 1. **Blank nodes**: Space fields and update map entries use RDF blank nodes. These have no stable identity across serializations. Structural equality compares by content (field name + type), not by node identity. 1. **Lossy fields**: TypeDef.constraint and AdmissibleInputConstraint.constraint are always None after import. TypeDef.python_type falls back to `str` for types not in the built-in map. These are documented R3 losses, not bijectivity failures. 
The round-trip suite (test_roundtrip.py: TestSpecRoundTrip, TestSystemIRRoundTrip, TestCanonicalRoundTrip, TestReportRoundTrip) verifies structural equality under these conventions for all four rho/rho^{-1} pairs. **Definition 2.2 (Representability Tiers).** A GDS concept c belongs to: **R1 (Fully representable)** if rho^{-1}(rho(c)) is structurally equal to c. The round-trip preserves all fields. Invariants on c are expressible as OWL class/property structure or SHACL cardinality/class shapes. **R2 (Structurally representable)** if rho preserves identity, topology, and classification, but loses behavioral content. Validation requires SPARQL graph pattern queries (negation-as-failure, transitive closure, aggregation) that exceed SHACL's node/property shape expressiveness. The behavioral projection, if any, is not representable. **R3 (Not representable)** if no finite OWL/SHACL/SPARQL expression can capture the concept. The gap is fundamental — it follows from: - **Rice's theorem**: any non-trivial semantic property of programs is undecidable - **The halting problem**: arbitrary Callable may not terminate - **Computational class separation**: string parsing and temporal execution exceed the expressiveness of all three formalisms ______________________________________________________________________ ## 3. Layer 0 Representability: Composition Algebra **Property 3.1 (Composition Tree is R1).** The block composition tree — including all 5 block types (AtomicBlock, StackComposition, ParallelComposition, FeedbackLoop, TemporalLoop) with their structural fields — is fully representable in OWL. 
*Argument.* The representation function rho maps:

```
AtomicBlock         |-> gds-core:AtomicBlock (owl:Class)
StackComposition    |-> gds-core:StackComposition + first, second (owl:ObjectProperty)
ParallelComposition |-> gds-core:ParallelComposition + left, right
FeedbackLoop        |-> gds-core:FeedbackLoop + inner
TemporalLoop        |-> gds-core:TemporalLoop + inner
```

The OWL class hierarchy mirrors the Python class hierarchy. The `first`, `second`, `left`, `right`, `inner` object properties capture the tree structure. The round-trip test `test_roundtrip.py::TestSystemIRRoundTrip` verifies structural equality after Graph -> Turtle -> Graph -> Pydantic.

The interface (F_in, F_out, B_in, B_out) is represented via four object properties (hasForwardIn, hasForwardOut, hasBackwardIn, hasBackwardOut) each pointing to Port individuals with portName and typeToken datatype properties. Port ordering within a direction may differ (RDF is unordered), but the *set* of ports is preserved.

**Property 3.2 (Token Data R1, Auto-Wiring Process R3).** The materialized token data (the frozenset of strings on each Port) is R1. The auto-wiring process that uses tokenize() to discover connections is R3.

*Argument.* Each Port stores `type_tokens: frozenset[str]`. These are exported as multiple `gds-core:typeToken` literals per Port individual. The round-trip preserves the token set exactly (unordered collection -> multi-valued RDF property -> unordered collection). Since gds-owl already materializes tokens during export, the RDF consumer never needs to run tokenize(). The tokens are data, not computation.

The R3 classification applies specifically to **auto-wiring as a process**: discovering which ports should connect by computing token overlap from port name strings. This requires the tokenize() function (string splitting + lowercasing).
While SPARQL CONSTRUCT can generate new triples from pattern matches, it cannot generate an unbounded number of new nodes from a single string value — the split points must be known at query-write time. Since GDS port names use variable numbers of delimiters, a fixed SPARQL query cannot handle all cases. In practice this is a moot point: the *wired connections* are exported as explicit WiringIR edges (R1). Only the *process of discovering them* is not replicable. **Property 3.3 (Block Roles are R1).** The role partition B = B_boundary disjoint-union B_control disjoint-union B_policy disjoint-union B_mechanism is fully representable as OWL disjoint union classes (owl:disjointUnionOf). *Argument.* Each role maps to an OWL class (gds-core:BoundaryAction, gds-core:Policy, gds-core:Mechanism, gds-core:ControlAction), all declared as subclasses of gds-core:AtomicBlock. Role-specific structural constraints are SHACL-expressible: - BoundaryAction: `sh:maxCount 0` on `hasForwardIn` (no forward inputs) - Mechanism: `sh:maxCount 0` on `hasBackwardIn` and `hasBackwardOut` - Mechanism: `sh:minCount 1` on `updatesEntry` (must update state) **Proposition 3.4 (Operators: Structure R1, Validation R3).** The composition operators `>>`, `|`, `.feedback()`, `.loop()` are R1 as *structure* (the resulting tree is preserved in RDF) but R3 as *process* (the validation logic run during construction cannot be replicated). Specifically: - `>>` validates token overlap between first.forward_out and second.forward_in — requires tokenize() (R3) - `.loop()` enforces COVARIANT-only on temporal_wiring — the flag check is R1 (SHACL on direction property), but port matching uses tokens (R3) ______________________________________________________________________ ## 4. Layer 1 Representability: Specification Framework **Property 4.1 (GDSSpec Structure is R1).** The 9-tuple S = (T, Sp, E, B, W, Theta, A, Sig) round-trips through OWL losslessly for all structural fields. 
*Argument.* Each component maps to an OWL class with named properties: ``` TypeDef |-> gds-core:TypeDef + name, pythonType, units, hasConstraint Space |-> gds-core:Space + name, description, hasField -> SpaceField Entity |-> gds-core:Entity + name, description, hasVariable -> StateVariable Block |-> gds-core:{role class} + name, kind, hasInterface, usesParameter, ... SpecWiring |-> gds-core:SpecWiring + name, wiringBlock, hasWire -> Wire ParameterDef |-> gds-core:ParameterDef + name, paramType, lowerBound, upperBound AdmissibleInputConstraint |-> gds-core:AdmissibleInputConstraint + name, constrainsBoundary, hasDependency -> AdmissibilityDep TransitionSignature |-> gds-core:TransitionSignature + signatureForMechanism, hasReadEntry -> TransitionReadEntry ``` The `test_roundtrip.py::TestSpecRoundTrip` suite verifies: types, spaces, entities, blocks (with role, params, updates), parameters, wirings, admissibility constraints, and transition signatures all survive the round-trip. Documented exceptions: TypeDef.constraint (Property 4.2) and AdmissibleInputConstraint.constraint (Property 4.5) — both lossy for the same reason (arbitrary Callable). **Property 4.2 (Constraint Predicates).** The constraints used in practice across all GDS DSLs (numeric bounds, non-negativity, probability ranges) are expressible in SHACL (`sh:minInclusive`, `sh:maxInclusive`). This covers Probability, NonNegativeFloat, PositiveInt, and most GDS built-in types. These specific constraints are R2. The general case — TypeDef.constraint : Optional\[Callable\[[Any], bool\]\] — is R3. By Rice's theorem, any non-trivial semantic property of such functions is undecidable: - Given two constraints c1, c2, the question "do c1 and c2 accept the same values?" is undecidable (equivalence of arbitrary programs) - Given a constraint c, the question "does c accept any value?" is undecidable (non-emptiness of the accepted set) OWL DL is decidable (SROIQ). 
SHACL with SPARQL constraints is decidable on finite graphs. Neither can embed an undecidable problem. This theoretical limit rarely applies to real GDS specifications, where constraints are simple numeric bounds. **Observation 4.3 (Policy Mapping g is R1 by Design).** By design, GDSSpec stores no behavioral content for policy blocks. A Policy block is defined by what it connects to (interface, wiring position, parameter dependencies), not by what it computes. Consequently, the policy mapping g in the canonical form h = f ∘ g is fully characterized by structural fields, all of which are R1 (Property 3.1, 3.3, 4.1). This is a design decision, not a mathematical necessity — one could imagine a framework that attaches executable policy functions to blocks. GDS deliberately does not, keeping the specification layer structural. **Property 4.4 (State Transition f Decomposes).** The state transition f decomposes as a tuple f = ⟨f_struct, f_read, f_behav⟩ where: ``` f_struct : B_mechanism -> P(E x V) The explicit write mapping from mechanisms to state variables. "Mechanism M updates Entity E variable V." This is a finite relation — R1. (Stored in Mechanism.updates.) f_read : B_mechanism -> P(E x V) The explicit read mapping from mechanisms to state variables. "Mechanism M reads Entity E variable V to compute its update." This is a finite relation — R1. (Stored in TransitionSignature.reads.) f_behav : X x D -> X The endomorphism on the state space parameterized by decisions. "Given current state x and decisions d, compute next state x'." This is an arbitrary computable function — R3. ``` Together, f_struct and f_read provide a complete structural data-flow picture of each mechanism: what it reads and what it writes. Only f_behav — the function that transforms reads into writes — remains R3. The composed system h = f ∘ g inherits: the structural decomposition (which blocks compose into h, via what wirings) is R1. 
The execution semantics (what h actually computes given inputs) is R3. **Property 4.5 (Admissible Input Constraints follow the f_struct/f_behav pattern).** An AdmissibleInputConstraint (Paper Def 2.5: U_x) decomposes as: > **Paper alignment note.** The paper defines the Admissible Input Map as a single function U: X -> P(U) (Def 2.5) with no structural/behavioral decomposition. The split below into U_x_struct (dependency graph) and U_x_behav (constraint predicate) is a *framework design choice* for ontological representation, enabling the dependency graph to be serialized as R1 while the predicate remains R3. ``` U_x_struct : A -> P(E x V) The dependency relation: "BoundaryAction B's admissible outputs depend on Entity E variable V." This is a finite relation — R1. U_x_behav : (state, input) -> bool The actual admissibility predicate: "is this input admissible given this state?" This is an arbitrary Callable — R3. ``` The structural part (name, boundary_block, depends_on) round-trips through OWL. The constraint callable is exported as a boolean `admissibilityHasConstraint` flag (present/absent) and imported as None — the same pattern as TypeDef.constraint. SC-008 validates that the structural references are well-formed (boundary block exists and is a BoundaryAction, depends_on references valid entity.variable pairs). **Property 4.6 (Transition Signatures follow the same pattern).** A TransitionSignature (Paper Def 2.7: f|\_x) provides: > **Paper alignment note.** The paper defines f|\_x : U_x -> X (Def 2.7) as a single restricted map. The decomposition into f_read (which variables are read) and f_block_deps (which blocks feed this mechanism) is a *framework design choice* to capture data-flow dependencies structurally. ``` f_read : Sig -> P(E x V) The read dependency relation: "Mechanism M reads Entity E variable V." This is a finite relation — R1. f_block_deps : Sig -> P(B) Which upstream blocks feed this mechanism. This is a finite relation — R1. 
``` Combined with the existing update_map (f_struct: which variables a mechanism *writes*), TransitionSignature completes the structural picture: now both reads and writes of every mechanism are declared. SC-009 validates that the structural references are well-formed. The actual transition function (what M computes from its reads to produce new values for its writes) remains R3 — it is an arbitrary computable function, never stored in GDSSpec. ______________________________________________________________________ ## 5. Verification Check Classification Each of the 15 GDS verification checks is classified by whether SHACL/SPARQL can express it on the exported RDF graph, with practical impact noted. ### 5.1 Generic Checks (on SystemIR) | Check | Property | Tier | Justification | Practical Impact | | --------- | ----------------------------- | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------- | | **G-001** | Domain/codomain matching | **R3** | Requires tokenize() — string splitting computation | Low: wired connections already exported as explicit edges | | **G-002** | Signature completeness | **R1** | Cardinality check on signature fields. SHACL sh:minCount. | Covered by SHACL | | **G-003** | Direction consistency | **R1** (flags) / **R3** (ports) | Flag contradiction is boolean — SHACL expressible. Port matching uses tokens (R3). | Flags covered; port check deferred to Python | | **G-004** | Dangling wirings | **R2** | WiringIR source/target are string literals (datatype properties), not object property references. Checking that a string name appears in the set of BlockIR names requires SPARQL negation-as-failure on string matching. 
Unlike SC-005 where `usesParameter` is an object property. | Expressible via SPARQL | | **G-005** | Sequential type compatibility | **R3** | Same tokenize() requirement as G-001 | Low: same mitigation as G-001 | | **G-006** | Covariant acyclicity (DAG) | **R2** | Cycle detection = self-reachability under transitive closure on materialized covariant edges. SPARQL: `ASK { ?v gds-ir:covariantSuccessor+ ?v }`. Requires materializing the filtered edge relation (direction="covariant" and is_temporal=false) first. | Expressible with preprocessing | ### 5.2 Semantic Checks (on GDSSpec) | Check | Property | Tier | Justification | Practical Impact | | ---------- | --------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------- | | **SC-001** | Completeness | **R2** | SPARQL: LEFT JOIN Entity.variables with Mechanism.updatesEntry, FILTER NOT EXISTS for orphans. | Expressible | | **SC-002** | Determinism | **R2** | SPARQL: GROUP BY (entity, variable) within wiring, HAVING COUNT(mechanism) > 1. | Expressible | | **SC-003** | Reachability | **R2** | SPARQL property paths on the wiring graph. Note: follows directed wiring edges (wireSource -> wireTarget), respecting flow direction. | Expressible | | **SC-004** | Type safety | **R2** | Wire.space is a string literal; checking membership in the set of registered Space names requires SPARQL, not SHACL sh:class (which works on object properties, as in SC-005). | Expressible via SPARQL | | **SC-005** | Parameter references | **R1** | SHACL sh:class on usesParameter targets. Already implemented in gds-owl shacl.py. | Covered by SHACL | | **SC-006** | f non-empty | **R1** | Equivalent to SHACL `sh:qualifiedMinCount 1` with `sh:qualifiedValueShape [sh:class gds-core:Mechanism]` on the spec node. 
(SPARQL illustration: `ASK { ?m a gds-core:Mechanism }`) | Covered by SHACL-core | | **SC-007** | X non-empty | **R1** | Same pattern: SHACL `sh:qualifiedMinCount 1` for StateVariable. (SPARQL illustration: `ASK { ?sv a gds-core:StateVariable }`) | Covered by SHACL-core | | **SC-008** | Admissibility references | **R1** | SHACL: `constrainsBoundary` must target a `BoundaryAction` (sh:class). Dependency entries (AdmissibilityDep) validated structurally. | Covered by SHACL | | **SC-009** | Transition read consistency | **R1** | SHACL: `signatureForMechanism` must target a `Mechanism` (sh:class). Read entries (TransitionReadEntry) validated structurally. | Covered by SHACL | ### 5.3 Summary ``` R1 (SHACL-core): G-002, SC-005, SC-006, SC-007, SC-008, SC-009 = 6 R2 (SPARQL): G-004, G-006, SC-001, SC-002, SC-003, SC-004 = 6 R3 (Python-only): G-001, G-005 = 2 Mixed (R1 + R3): G-003 (flag check R1, port matching R3) = 1 ``` The R1/R2 boundary is mechanically determined: R1 = expressible in SHACL-core (no sh:sparql), R2 = requires SPARQL graph patterns. The R3 checks share a single root cause: **token-based port name matching requires string computation that exceeds SPARQL's value space operations**. In practice, this is mitigated by materializing tokens during export — the connections themselves are always R1 as explicit wiring edges. ______________________________________________________________________ ## 6. 
Classification Summary

**Definition 6.1 (Structural/Behavioral Partition).** We define:

```
G_struct = { composition tree, block interfaces, role partition,
             wiring topology, update targets, parameter schema,
             space/entity structure, canonical form metadata,
             admissibility dependency graph (U_x_struct),
             transition read dependencies (f_read) }

G_behav  = { transition functions (f_behav), constraint predicates,
             admissibility predicates (U_x_behav), auto-wiring process,
             construction-time validation, scheduling/execution semantics }
```

**Consistency Check 6.1.** The structural/behavioral partition we define aligns exactly with the R1+R2 / R3 classification. This is a consistency property of our taxonomy, not an independent mathematical result — we defined G_struct and G_behav to capture what is and isn't representable. By exhaustive classification in Sections 3-5:

G_struct concepts and their tiers:

- Composition tree: R1 (Property 3.1)
- Block interfaces: R1 (Property 3.1)
- Role partition: R1 (Property 3.3)
- Wiring topology: R1 (Property 4.1)
- Update targets: R1 (Property 4.4, f_struct)
- Parameter schema: R1 (Property 4.1)
- Space/entity structure: R1 (Property 4.1)
- Admissibility dependency graph (U_x_struct): R1 (Property 4.5)
- Transition read dependencies (f_read): R1 (Property 4.6)
- State metric variable declarations (d_X_struct): R1 (Assumption 3.2) [*]
- Acyclicity: R2 (Section 5.1, G-006)
- Completeness/determinism: R2 (Section 5.2, SC-001, SC-002)
- Reference validation (dangling wirings): R2 (Section 5.1, G-004)

G_behav concepts and their tiers:

- Transition functions: R3 (Property 4.4, f_behav)
- Constraint predicates: R3 (Property 4.2, general case)
- Admissibility predicates (U_x_behav): R3 (Property 4.5)
- State metric distance callable (d_X_behav): R3 (Assumption 3.2) [*]
- Auto-wiring process: R3 (Property 3.2)

> [*] *Paper alignment note.* The paper defines d_X : X x X -> R (Assumption 3.2) as a single metric with no structural/behavioral
decomposition. The split into variable declarations (R1) and distance callable (R3) follows the same framework pattern as Properties 4.5-4.6 — an ontological design choice, not a paper requirement.

- Construction validation: R3 (Proposition 3.4)
- Scheduling semantics: R3 (not stored in GDSSpec — external)

No G_struct concept is R3. No G_behav concept is R1 or R2.

**Property 6.2 (Canonical Form as Representability Boundary).** In the decomposition h = f ∘ g:

```
g is entirely in G_struct     (R1, by Observation 4.3)
f = ⟨f_struct, f_behav⟩       (R1 + R3, by Property 4.4)
h = structural skeleton + behavioral core
```

The canonical form cleanly separates what ontological formalisms can express (g, f_struct) from what requires a runtime (f_behav).

**Corollary 6.3 (GDSSpec Projection of Games is Fully Representable).** When h = g (the OGS case: X = empty, f = empty), the GDSSpec-level structure is fully representable. The OGS canonical bridge (spec_bridge.py) maps all atomic games to Policy blocks, producing h = g with no behavioral f component. By Observation 4.3, g is entirely R1. Note: game-theoretic behavioral content — payoff functions, utility computation, equilibrium strategies — resides in OpenGame subclass methods and external solvers, outside GDSSpec scope, and is therefore R3. The corollary applies to the specification-level projection, not to full game analysis.

**Corollary 6.4 (Dynamical Systems Degrade Gracefully).** For systems with h = f ∘ g where f != empty, the structural skeleton (g + f_struct) is always complete in OWL. Each mechanism adds one update target to f_struct (R1) and one transition function to f_behav (R3). The "what" is never lost — only the "how."

**Remark 6.5 (TemporalLoop vs CorecursiveLoop in OWL).** OWL cannot distinguish a temporal loop (physical state persistence, e.g., control systems) from a corecursive loop (strategic message threading, e.g., repeated games).
CorecursiveLoop (defined in gds-games as `ogs.dsl.composition.CorecursiveLoop`, a TemporalLoop subclass for repeated game semantics) shares identical structural representation: both use covariant wiring from inner.forward_out to inner.forward_in with an exit_condition string. The semantic difference — "state at t feeds sensors at t+1" vs "decisions at round t feed observations at round t+1" — is an interpretation, not topology. In practice this is benign: gds-owl preserves the DSL source label (gds-ir:sourceLabel on SystemIR), so consumers can recover which DSL compiled the system and interpret temporal wirings accordingly. ______________________________________________________________________ ## 7. Analysis Domain Classification Each type of analysis on a GDS specification maps to a representability tier based on what it requires: ### 7.1 R1: Fully Expressible (OWL Classes + Properties) | Analysis | Nature | GDS Implementation | Why R1 | | -------------------------------------------- | --------------------- | ---------------------------------------- | ---------------------------- | | What connects to what | Static topology | SpecQuery.dependency_graph() | Wiring graph is R1 | | How blocks compose | Static structure | HierarchyNodeIR tree | Composition tree is R1 | | Which blocks are which roles | Static classification | project_canonical() partition | Role partition is R1 | | Which params affect which blocks | Static dependency | SpecQuery.param_to_blocks() | usesParameter relation is R1 | | Which state variables constrain which inputs | Static dependency | SpecQuery.admissibility_dependency_map() | U_x_struct is R1 | | Which state variables does a mechanism read | Static dependency | SpecQuery.mechanism_read_map() | f_read is R1 | | Game classification | Static strategic | PatternIR game_type field | Metadata on blocks, R1 | ### 7.2 R2: SPARQL-Expressible (Graph Queries + Aggregation) | Analysis | Nature | GDS Implementation | Why R2 | | 
------------------------------------------ | ----------------------- | ------------------ | -------------------------------------------- | | Is the wiring graph acyclic? | Structural invariant | G-006 (DFS) | Transitive self-reachability on finite graph | | Does every state variable have an updater? | Structural invariant | SC-001 | Left-join with negation | | Are there write conflicts? | Structural invariant | SC-002 | Group-by with count > 1 | | Are all references valid? | Structural invariant | G-004, SC-004 | Reference validation | | Can block A reach block B? | Structural reachability | SC-003 | Property path on wiring graph | ### 7.3 R3: Python-Only (Requires Runtime) | Analysis | Nature | GDS Implementation | Why R3 | | -------------------------- | ------------------ | ------------------------------- | ------------------------------------------ | | State evolution over time | Dynamic temporal | gds-sim execution | Requires evaluating f repeatedly | | Constraint satisfaction | Dynamic behavioral | TypeDef.constraint() | General case: Rice's theorem | | Auto-wiring computation | Dynamic structural | tokenize() + overlap | String parsing exceeds SPARQL | | Actual signal propagation | Dynamic behavioral | simulation with concrete values | Requires computing g(x, z) | | Scheduling/delay semantics | Dynamic temporal | execution model | Not stored in GDS — external | | Equilibrium computation | Dynamic strategic | game solvers | Computing Nash equilibria is PPAD-complete | Note the distinction: **equilibrium structure** (which games exist, how they compose) is R1. **Equilibrium computation** (finding the actual equilibrium strategies) is R3. This parallels the f_struct / f_behav split: the structure of the analysis is representable; the computation of the analysis is not. ______________________________________________________________________ ## 8. 
Five Formal Correspondences ### Correspondence 1: Static Topology \<-> OWL Class/Property Hierarchy ``` rho : (blocks, wirings, interfaces, ports) <-> OWL individuals + object properties ``` R1. The composition tree, wiring graph, and port structure map to OWL individuals connected by named object properties. ### Correspondence 2: Structural Invariants \<-> SHACL Shapes + SPARQL Queries ``` {G-002, G-004, G-006, SC-001..SC-009} <-> SHACL + SPARQL ``` R1 or R2 depending on the check. SHACL-core captures cardinality and class-membership constraints (6 checks: G-002, SC-005..SC-009). SPARQL captures graph-pattern queries requiring negation, transitivity, aggregation, or cross-node string matching (6 checks). The 2 remaining checks (G-001, G-005) require tokenization. G-003 splits: flag check R1, port matching R3. ### Correspondence 3: Dynamic Behavior \<-> Python Runtime Only ``` {TypeDef.constraint (general), f_behav, auto-wiring, scheduling} <-> Python ``` R3. Fundamental. These require Turing-complete computation. The boundary is Rice's theorem (for predicates) and computational class separation (for string parsing and temporal execution). ### Correspondence 4: Equilibrium Structure \<-> Naturally Structural ``` h = g (OGS canonical form) <-> GDSSpec projection is lossless ``` R1 for the specification-level projection. When a system has no state (X = empty, f = empty), its GDSSpec is purely compositional. Game-theoretic behavioral content (payoff functions, equilibrium solvers) is outside GDSSpec and therefore R3. ### Correspondence 5: Reachability \<-> Structural Part R2, Dynamical Part R3 ``` Structural reachability : "can signals reach from A to B?" -> R2 (SPARQL property paths) Dynamical reachability : "does signal actually propagate?" -> R3 (requires evaluating g and f) ``` The structural question asks about the *topology* of the wiring graph. SPARQL property paths (`?a successor+ ?b`) answer this on finite graphs. 
The dynamical question asks about *actual propagation* given concrete state values and policy functions — this requires executing the system. # Representation Gap: Pydantic vs OWL/RDF ## The Core Insight Python (Pydantic) and OWL/RDF are not in a hierarchy — they are **complementary representation systems** with different strengths. The bidirectional round-trip in `gds-owl` proves they overlap almost completely, but the small gap between them is revealing. ## What Each Representation Captures ### What OWL/RDF captures that Python doesn't | Capability | OWL/RDF | Python/Pydantic | | ------------------------ | -------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- | | **Cross-system linking** | Native. A GDSSpec in one graph can reference entities in another via URIs. | Requires custom serialization + shared registries. | | **Open-world reasoning** | OWL reasoners can infer facts not explicitly stated (e.g., "if X is a Mechanism and X updatesEntry Y, then X affects Entity Z"). | Closed-world only. You must write the inference logic yourself. | | **Schema evolution** | Add new properties without breaking existing consumers. Unknown triples are simply ignored. | Adding a field to a frozen Pydantic model is a breaking change. | | **Federated queries** | SPARQL can query across multiple GDS specs in a single query, even from different sources. | Requires loading all specs into memory and writing custom join logic. | | **Provenance** | PROV-O gives audit trails for free (who created this spec, when, derived from what). | Must be implemented manually. | | **Self-describing data** | A Turtle file contains its own schema context via prefixes and class declarations. | A JSON file requires external schema knowledge to interpret. 
|

### What Python captures that OWL/RDF doesn't

| Capability | Python/Pydantic | OWL/RDF |
| -------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- |
| **Constraint functions** | `TypeDef.constraint = lambda x: 0 <= x <= 1` — a runtime predicate that validates actual data values. | Cannot represent arbitrary predicates. Can document them as annotations, but cannot execute them. |
| **Composition operators** | `sensor >> controller >> heater` — the `>>`, `\|`, `.feedback()`, `.loop()` DSL is Python syntax. | Can represent the resulting composition tree, but not the operators or their construction-time behavior. |
| **Construction-time validation** | `@model_validator(mode="after")` enforces invariants the instant a model is created. | SHACL validates after the fact, not during construction. Invalid data can exist in a graph. |
| **Type-level computation** | Token-based auto-wiring: `"Temperature + Setpoint"` splits on `+`, lowercases, and checks set overlap. This is a runtime computation. | Can store the resulting tokens as RDF lists, but cannot compute the tokenization. |
| **IDE ergonomics** | Autocomplete, type checking, refactoring, debugging. The Python type system is a development tool. | Protege exists, but the tooling ecosystem is smaller and less integrated with modern dev workflows. |
| **Performance** | Pydantic model construction: microseconds. | rdflib graph construction: 10-100x slower. SHACL validation via pyshacl: significantly slower than `@model_validator`. |

## The Lossy Fields (Documented)

These are the specific fields lost during round-trip. Each reveals a category boundary:

### 1.
`TypeDef.constraint` — Runtime Predicate ``` # Python: executable constraint Temperature = TypeDef( name="Temperature", python_type=float, constraint=lambda x: -273.15 <= x <= 1000.0, # physically meaningful range units="celsius", ) # RDF: can only record that a constraint exists # gds-core:hasConstraint "true"^^xsd:boolean ``` **Why it's lossy**: A Python `Callable[[Any], bool]` is Turing-complete. OWL DL is decidable. You cannot embed an arbitrary program in an ontology and have it remain decidable. **Workaround**: Export the constraint as a human-readable annotation (`rdfs:comment`), or as a SHACL `sh:pattern` / `sh:minInclusive` / `sh:maxInclusive` for simple numeric bounds. Complex predicates require linking to the source code via `rdfs:seeAlso`. ### 2. `TypeDef.python_type` — Language-Specific Type ``` # Python: actual runtime type TypeDef(name="Temperature", python_type=float) # RDF: string representation # gds-core:pythonType "float"^^xsd:string ``` **Why it's lossy**: `float` is a Python concept. OWL has `xsd:float`, `xsd:double`, etc., but the mapping isn't 1:1 (Python `float` is IEEE 754 double-precision, which maps to `xsd:double`, not `xsd:float`). For round-trip, we map common type names back via a lookup table, but custom types (e.g., `numpy.float64`) would need a registry. **Impact**: Low. The built-in type map covers `float`, `int`, `str`, `bool` — which account for all current GDS usage. ### 3. Composition Tree — Structural vs Behavioral ``` # Python: live composition with operators system = (sensor | observer) >> controller >> heater system = system.feedback(wiring=[...]) # RDF: can represent the resulting tree # :system gds-core:first :sensor_observer_parallel . # :system gds-core:second :controller_heater_stack . ``` **Why it's partially lossy**: The RDF graph captures the *structure* of the composition tree (what blocks are composed how), but not the *process* of building it. 
The `>>` operator includes validation logic (token overlap checking) that runs at construction time. This validation is captured in SHACL shapes, but the dynamic dispatch and error messages are Python-specific.

**Impact**: None for GDSSpec export (blocks are already composed). Only relevant if you wanted to *construct* a composition from RDF, which would require a builder that re-applies the validation logic.

## Why OWL/SHACL Can't Store "What Things Do"

The intuition that this is circuit diagrams vs circuit simulations is almost exactly right. But the deeper reason is worth understanding, because it's not an engineering limitation. It's a mathematical one.

### The Decidability Trade-off

OWL is based on **Description Logic** (OWL DL specifically uses SROIQ). Description Logics are fragments of first-order logic that are deliberately restricted so that:

1. **Every query terminates.** Ask "is X a subclass of Y?" and you are guaranteed an answer in finite time.
2. **Consistency is checkable.** Ask "can this ontology ever contain a contradiction?" and you get a definitive yes/no.
3. **Classification is automatic.** The reasoner can infer the complete class hierarchy without human guidance.

These guarantees come at a cost: you cannot express arbitrary computation. The moment you allow unrestricted recursion, loops, or Turing-complete predicates, you lose decidability — some queries would run forever.

A Python `lambda x: 0 <= x <= 1` is trivial, but the type signature `Callable[[Any], bool]` admits *any* computable function, including ones that don't halt. OWL cannot embed that and remain OWL.
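The practical consequence is the export pattern described earlier: simple declared bounds become SHACL facets, while an opaque callable can only be recorded as a presence flag. A minimal sketch of that dichotomy, using a hypothetical `export_constraint` helper (not the gds-owl API):

```python
from typing import Any, Callable, Optional

def export_constraint(name: str,
                      callable_constraint: Optional[Callable[[Any], bool]],
                      bounds: Optional[tuple[float, float]] = None) -> str:
    """Emit Turtle for a TypeDef's constraint (hypothetical helper).

    Declared numeric bounds become SHACL facets; an opaque callable can
    only be recorded as a boolean presence flag, never as executable logic.
    """
    if bounds is not None:
        lo, hi = bounds
        return (f":{name}Shape a sh:PropertyShape ;\n"
                f"    sh:path :value ;\n"
                f"    sh:minInclusive {lo} ;\n"
                f"    sh:maxInclusive {hi} .")
    if callable_constraint is not None:
        return f':{name} gds-core:hasConstraint "true"^^xsd:boolean .'
    return ""
```

The branch structure is the point: the decidable branch carries real validation semantics into SHACL, the undecidable branch degrades to metadata.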
### The Circuit Analogy (Refined) | | Circuit Diagram | Circuit Simulation | | -------------------- | ---------------------------------------------- | ------------------------------------------------------ | | **Analog in GDS** | OWL ontology + RDF instance data | Python Pydantic models + runtime | | **What it captures** | Components, connections, topology, constraints | Voltage, current, timing, behavior over time | | **Can answer** | "Is this resistor connected to ground?" | "What voltage appears at node 3 at t=5ms?" | | **Cannot answer** | "What happens when I flip this switch?" | "Is this the only valid topology?" (needs the diagram) | This is correct, but the analogy goes deeper: **A circuit diagram is a specification. A simulation is an execution.** You can derive a simulation from a diagram (given initial conditions and a solver), but you cannot derive the diagram from a simulation (infinitely many circuits could produce the same waveform). Similarly: - **OWL/RDF is specification.** It says what types exist, how blocks connect, what constraints hold. - **Python is execution.** It actually validates data, composes blocks, runs the token-overlap algorithm. You can derive the RDF from the Python (that's what `spec_to_graph()` does). You can mostly derive the Python from the RDF (that's what `graph_to_spec()` does). But the execution semantics — the `lambda`, the `>>` operator's validation logic, the `@model_validator` — live only in the runtime. 
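The asymmetry can be made concrete with a toy round-trip. This is a simplified model of the convention, not the actual `spec_to_graph()` / `graph_to_spec()` implementation: the callable collapses to a flag on export and comes back as `None` on import, while every structural field survives.

```python
# Toy model of the documented lossy round-trip; field names mirror the
# TypeDef example but the export/import helpers are illustrative only.
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass(frozen=True)
class TypeDef:
    name: str
    python_type: str
    units: Optional[str] = None
    constraint: Optional[Callable[[Any], bool]] = None

def to_triples(t: TypeDef) -> dict:
    # Structural fields export as literals; the callable collapses to a flag.
    return {"name": t.name, "pythonType": t.python_type,
            "units": t.units, "hasConstraint": t.constraint is not None}

def from_triples(d: dict) -> TypeDef:
    # The flag cannot be re-inflated into a predicate: constraint comes back None.
    return TypeDef(name=d["name"], python_type=d["pythonType"], units=d["units"])

temp = TypeDef("Temperature", "float", "celsius",
               constraint=lambda x: -273.15 <= x <= 1000.0)
back = from_triples(to_triples(temp))
```

Structural equality holds modulo exactly one field, which is the R1/R3 boundary in miniature.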
### Three Levels of "Knowing" This maps to a well-known hierarchy in formal systems: | Level | What it captures | GDS example | Formalism | | -------------- | -------------------------------- | ------------------------------------------------------------------ | -------------------------- | | **Syntax** | Structure, names, connections | Block names, port names, wiring topology | RDF triples | | **Semantics** | Meaning, types, constraints | "Temperature is a float in celsius", "Mechanism must update state" | OWL classes + SHACL shapes | | **Pragmatics** | Behavior, computation, execution | `constraint=lambda x: x >= 0`, `>>` auto-wiring by token overlap | Python runtime | OWL lives at levels 1 and 2. Python lives at all three. The gap is level 3 — and it's the same gap that separates every declarative specification language from every imperative programming language. It's not a bug in OWL. It's the price of decidability. ### SHACL Narrows the Gap (But Doesn't Close It) SHACL pushes closer to behavior than OWL alone: ``` # SHACL can express: "temperature must be between -273.15 and 1000" :TemperatureConstraint a sh:NodeShape ; sh:property [ sh:path :value ; sh:minInclusive -273.15 ; sh:maxInclusive 1000.0 ; ] . ``` This covers many real GDS constraints. But SHACL's `sh:sparql` constraints, while powerful, are still not Turing-complete — SPARQL queries always terminate on finite graphs. You cannot write a SHACL shape that says "validate this value by running an arbitrary Python function." SWRL (Semantic Web Rule Language) gets even closer — it can express Horn-clause rules. But it still can't express negation-as-failure, higher-order functions, or stateful computation. The boundary is fundamental: **decidable formalisms cannot embed undecidable computation**. This is not a limitation of OWL's design. It's a consequence of the halting problem. 
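The termination guarantee is easy to see for the graph-pattern side of the boundary: any traversal over a finite edge set halts. A sketch of a G-006-style cycle check in the spirit of the SPARQL pattern `ASK { ?v covariantSuccessor+ ?v }` (illustrative, not the gds-framework implementation):

```python
from collections import deque

def reachable(edges: set[tuple[str, str]], start: str) -> set[str]:
    """Nodes reachable from `start`; always terminates on a finite graph."""
    adj: dict[str, list[str]] = {}
    for s, t in edges:
        adj.setdefault(s, []).append(t)
    seen: set[str] = set()
    frontier = deque([start])
    while frontier:
        for nxt in adj.get(frontier.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def has_cycle(edges: set[tuple[str, str]]) -> bool:
    """A cycle exists iff some node can reach itself via transitive closure."""
    nodes = {n for e in edges for n in e}
    return any(n in reachable(edges, n) for n in nodes)

# A feedback wiring: sensor -> controller -> heater -> sensor
wiring = {("sensor", "controller"), ("controller", "heater"), ("heater", "sensor")}
```

Every step shrinks the remaining work over a finite node set, so the query terminates by construction. An arbitrary `Callable` constraint offers no such guarantee.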
### What This Means in Practice For GDS specifically, the practical impact is small: - **95% of GDS structure** round-trips perfectly through RDF - **Most constraints** are simple numeric bounds expressible in SHACL - **The composition tree** is fully captured as structure - **Only `Callable` predicates** and language-specific types are truly lost The circuit analogy holds: you design the circuit (OWL), you simulate it (Python), and the design document captures everything except the electrons moving through the wires. ## The GDS Compositionality-Temporality Boundary Is the Same Boundary GDS already discovered this gap internally — between game-theoretic composition and dynamical systems composition. The OWL representation gap is the same boundary, seen from the outside. ### What GDS Found The canonical spectrum across five domains revealed a structural divide: | Domain | dim(X) | dim(f) | Form | Character | | ------------------------ | ------ | ------ | --------- | ------------------------------------- | | OGS (games) | 0 | 0 | h = g | Stateless — pure maps | | Control | n | n | h = f . g | Stateful — observation + state update | | StockFlow | n | n | h = f . g | Stateful — accumulation dynamics | | Software (DFD/SM/C4/ERD) | 0 or n | 0 or n | varies | Diagram-dependent | | Business (CLD/SCN/VSM) | 0 or n | 0 or n | varies | Domain-dependent | Games compute equilibria. They don't write to persistent state. Even corecursive loops (repeated games) carry information forward as *observations*, not as *entity mutations*. In category-theoretic terms: open games are morphisms in a symmetric monoidal category with feedback. They are maps, not machines. Control and stock-flow systems are the opposite. They have state variables (X), state update functions (f), and the temporal loop carries physical state forward across timesteps. Both use the **same structural composition operators** (`>>`, `|`, `.feedback()`, `.loop()`). The algebra is identical. 
The semantics are orthogonal.

### OWL Lives on the Game-Theory Side of This Boundary

This is the key insight: **OWL/RDF is inherently atemporal**. An RDF graph is a set of (subject, predicate, object) triples — relations between things. There is no built-in notion of "before and after," "state at time t," or "update."

This means OWL naturally represents the compositional/structural side of GDS (the `g` in `h = f . g`) far better than the temporal/behavioral side (the `f`):

| GDS Component | Nature | OWL Fit |
| ------------------------------------- | --------------------------------------------------- | ------------------------------------------------------------ |
| **g** (policy, observation, decision) | Structural mapping — signals in, signals out | Excellent. Object properties capture flow topology. |
| **f** (state update, mechanism) | Temporal mutation — state at t becomes state at t+1 | Partial. Can describe *what* f updates, but not *how*. |
| **Composition tree** (`>>`, `\|`) | Structural nesting | Excellent. Fully captured as structure. |
| **FeedbackLoop** (.feedback()) | Within-timestep backward flow | Good. Structural — just backward edges. |
| **TemporalLoop** (.loop()) | Across-timestep forward recurrence | Structural part captured, temporal semantics lost. |
| **CorecursiveLoop** (OGS) | Across-round strategic iteration | Same structure as TemporalLoop — OWL can't distinguish them. |

The last row is the critical one: **OWL cannot distinguish a corecursive game loop from a temporal state loop**, because the distinction is semantic (what does iteration *mean*?), not structural (how are the wires connected?).

This is exactly the same problem GDS faced at Layer 0. The composition algebra treats `TemporalLoop` and `CorecursiveLoop` identically — same wiring pattern, same structural validation. The difference is domain semantics, which lives in the DSL layer (Layer 1+), not in the algebra.
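The indistinguishability claim can be demonstrated in miniature: both loop kinds flatten to the same triples, so the distinction must be carried out-of-band. A toy encoding — in gds the out-of-band annotation lives in the dual IR stack, not in a dict like this:

```python
# Structural (OWL-visible) triples for a one-block recurrence.
# The wiring pattern is the same regardless of what iteration *means*.
WIRING = frozenset({
    ("block", "forwardOut", "port:x"),
    ("port:x", "carriesAcrossBoundaryTo", "block"),
})

temporal_loop = {
    "wiring": WIRING,
    "semantics": "physical state persists across timesteps",
}
corecursive_loop = {
    "wiring": WIRING,
    "semantics": "strategic observations thread across rounds",
}

# An RDF export sees only the wiring; the semantics key has no
# structural counterpart, so the two loops serialize identically.
assert temporal_loop["wiring"] == corecursive_loop["wiring"]
assert temporal_loop["semantics"] != corecursive_loop["semantics"]
```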
### The Three-Way Isomorphism ``` Game-theoretic composition ←→ OWL/RDF representation (atemporal, structural, (atemporal, structural, maps between spaces) relations between entities) Dynamical systems execution ←→ Python runtime (temporal, behavioral, (temporal, behavioral, state evolving over time) computation producing results) ``` Games and ontologies are both **declarative**: they describe what things are and how they relate. Dynamical systems and programs are both **imperative**: they describe what happens over time. GDS bridges these two worlds with the canonical form `h = f . g`: - `g` is the declarative part (composable, structural, OWL-friendly) - `f` is the imperative part (state-updating, temporal, Python-native) - `h` is the complete system (both sides unified) The round-trip gap in gds-owl is precisely the `f` side leaking through. ### Why OGS Round-Trips Better Than Control This predicts something testable: **OGS specifications should round-trip through OWL with less information loss than control or stock-flow specifications**, because OGS is `h = g` (purely structural/compositional, no state update semantics to lose). 
And indeed: - OGS blocks are all Policy — no `Mechanism.updates` to reify - OGS has no Entity/StateVariable — no state space to encode - The corecursive loop is structurally identical to a temporal loop — no semantic distinction is lost because there was no temporal semantics to begin with - The canonical form `h = g` maps directly to "all blocks are related by composition" — which is exactly what OWL expresses Control and stock-flow systems lose the `f` semantics: - `Mechanism.updates = [("Room", "temperature")]` becomes a reified triple that says *what* gets updated, but not *how* (the state transition function itself is a Python callable) - The temporal loop says "state feeds back" structurally, but not "with what delay" or "under what scheduling semantics" ### What This Means for the Research Questions The GDS research boundaries document (research-boundaries.md) identified three key open questions. Each maps directly to the OWL representation gap: **RQ1 (MIMO semantics)**: Should vector-valued spaces become first-class? - OWL impact: Vector spaces are harder to represent than scalar ports. RDF naturally represents named relations, not ordered tuples. This is a structural limitation shared by both the composition algebra and OWL. **RQ2 (What does a timestep mean?)**: Different domains interpret `.loop()` differently. - OWL impact: This is *exactly* the gap. OWL captures the loop *structure* but not the loop *semantics*. A temporal loop in control (physical state persistence) and a corecursive loop in OGS (strategic message threading) are the same OWL triples. The distinction requires domain-specific annotation — which is what the dual IR stack (PatternIR + SystemIR) already provides in Python. **RQ3 (OGS as degenerate dynamical system)**: Is X=0, f=0, h=g a valid GDS? - OWL impact: Yes, and it's the *best-represented* case. A system with no state variables and no mechanisms is purely compositional — which is the part of GDS that OWL captures perfectly. 
The "degenerate" case is actually the one where OWL and Pydantic representations are isomorphic. ### The Circuit Analogy (Revisited) The earlier analogy — circuit diagrams vs circuit simulations — now sharpens: | | Circuit Diagram | Schematic + Netlist | SPICE Simulation | | --------------- | ---------------------------- | ------------------------------------------ | ---------------------------- | | GDS analog | OWL ontology | Composition algebra (Layer 0) | Python runtime (gds-sim) | | Games analog | Strategy profile description | Game tree | Equilibrium solver | | Dynamics analog | Block diagram | State-space model | ODE integrator | | Captures | Topology + component types | Topology + port typing + composition rules | Behavior over time | | Misses | Behavior, timing | Execution semantics | Often loses global structure | OWL is the diagram. The composition algebra is the netlist. Python is the simulator. Games live naturally in the diagram/netlist. Dynamics need the simulator. ## What This Means for the "Ontology-First" Future The gap analysis suggests a three-tier architecture: ``` Tier 1: OWL Ontology (schema) - Class hierarchy, property definitions - SHACL shapes for structural validation - SPARQL queries for analysis → Source of truth for: what things ARE Tier 2: Python DSL (behavior) - Composition operators (>>, |, .feedback(), .loop()) - Runtime constraint predicates - Construction-time validation → Source of truth for: what things DO Tier 3: Instance Data (both) - Pydantic models ↔ RDF graphs (round-trip proven) - Either format can be the serialization layer → The overlap zone where both representations agree ``` The key insight: **you don't have to choose one**. The ontology defines the vocabulary and structural rules. Python defines the computational behavior. Instance data lives in both and can be translated freely. 
This is analogous to how SQL databases work: the schema (DDL) defines structure, application code defines behavior, and data lives in both the database and application memory. Nobody argues that SQL "stores more" than Python or vice versa — they serve different roles. ## Practical Implications ### When to use OWL/RDF - Publishing a GDS specification for external consumption - Querying across multiple specifications simultaneously - Linking GDS specs to external ontologies (FIBO, ArchiMate, PROV-O) - Archiving specifications with self-describing metadata - Running structural validation without Python installed ### When to use Pydantic - Building and composing specifications interactively - Running constraint validation on actual data values - Leveraging IDE tooling (autocomplete, type checking) - Performance-sensitive operations (construction, validation) - Anything involving the composition DSL (`>>`, `|`, `.feedback()`, `.loop()`) ### When to use both - Development workflow: build in Python, export to RDF for publication - Verification: SHACL for structural checks, Python for runtime checks - Cross-system analysis: export multiple specs to RDF, query with SPARQL - Round-trip: start from RDF (e.g., Protege-edited), import to Python for computation # GDS Formal Verification Plan ## Context The GDS composition algebra `(Block, >>, |, fb, loop)` and its canonical decomposition `h = f . g` are the structural foundation for five domain DSLs. Three claims in [formal-representability.md](https://blockscience.github.io/gds-core/research/formal-representability/index.md) remain unverified: 1. **Categorical axioms** -- interchange law, coherence conditions, and traced monoidal structure for feedback have not been formally proved (Def 1.1) 1. **R1/R2/R3 taxonomy** -- acknowledged as a designed classification, not an independently derived mathematical result (Consistency Check 6.1) 1. 
**Round-trip bijectivity** -- `rho^{-1}(rho(c)) =_struct c` holds by test suite, but not by proof; blank node isomorphism and ordering are handled ad-hoc This plan establishes a phased verification roadmap with strict dependency ordering. Each phase produces artifacts that the next phase depends on. ______________________________________________________________________ ## Phase 1: Composition Algebra (Categorical Semantics) **Goal:** Prove the GDS block algebra is a symmetric monoidal category with traced structure. ### 1a. Interchange Law Prove: `(g >> f) | (j >> h) = (g | j) >> (f | h)` for all suitably typed blocks. This is the gate. If it fails, parallel and sequential composition are order-dependent, and the diagrammatic syntax is ambiguous. **Approach:** Property-based testing first (Hypothesis), then mechanized proof. - **Hypothesis (Python):** Generate random `AtomicBlock` instances with compatible interfaces, compose both ways, assert structural equality of the resulting `ComposedBlock` interface signatures. - **Coq (ViCAR):** Formalize `Interface` as objects, `Block` as morphisms, `>>` as composition, `|` as tensor. Prove interchange via ViCAR's automated rewriting tactics for string diagrams. ### 1b. Mac Lane Coherence Prove: any two well-typed morphisms built from structural isomorphisms (associator, unitor, braiding) and identity via `>>` and `|` are equal. **Approach:** Coq/ViCAR. Once the interchange law is mechanized, coherence follows from the standard construction. ViCAR provides this for symmetric monoidal categories out of the box once the algebra is instantiated. ### 1c. 
Traced Monoidal Structure Prove the Joyal-Street-Verity axioms for `fb` (contravariant, within-evaluation) and `loop` (covariant, across-boundary): | Axiom | Statement | | ----------- | ---------------------------------------- | | Vanishing | Tracing over the unit object is identity | | Superposing | Trace commutes with parallel composition | | Yanking | Tracing the braiding yields identity | | Sliding | Morphisms slide around the feedback loop | | Tightening | Trace of a tensor = sequential traces | **Approach:** Coq with Interaction Trees library. The two feedback operators have different variance, so they require separate trace instances: - `fb`: contravariant trace (backward ports looped within evaluation) - `loop`: covariant trace (forward ports carried across temporal boundaries) Hasegawa's correspondence (Conway fixed-point \<-> categorical trace) validates that `fb` computes within-timestep fixed points soundly. ### 1d. Token System Formalization The auto-wiring predicate (token overlap) must be formalized as part of the categorical structure. Tokens define when `>>` is well-typed: sequential composition requires `overlap(out_tokens(f), in_tokens(g))`. **Approach:** Formalize token sets as a preorder on port names. Show that token overlap induces a well-defined composition predicate compatible with the monoidal structure. ### Artifacts | Artifact | Location | Format | | ---------------------------- | --------------------------------------------------------- | ---------- | | Hypothesis interchange tests | `packages/gds-framework/tests/test_algebra_properties.py` | Python | | Coq formalization | `docs/research/verification/coq/` | Coq (`.v`) | | Proof summary | `docs/research/verification/proofs.md` | Markdown | ### Dependencies None -- this is the foundation. 
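The Phase 1a interchange check can be prototyped even without Hypothesis by modeling blocks as plain functions: `>>` as function composition, `|` as pairing on tuples, and random sampling in place of generated strategies. A toy model of the algebra, not the gds `Block` classes:

```python
import random

def seq(g, f):
    """Sequential composition g >> f (apply g, then f)."""
    return lambda x: f(g(x))

def par(f, h):
    """Parallel composition f | h, acting componentwise on pairs."""
    return lambda xy: (f(xy[0]), h(xy[1]))

def affine():
    """A random affine 'block' on floats."""
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    return lambda x: a * x + b

for _ in range(100):
    g, f, j, h = affine(), affine(), affine(), affine()
    lhs = par(seq(g, f), seq(j, h))   # (g >> f) | (j >> h)
    rhs = seq(par(g, j), par(f, h))   # (g | j) >> (f | h)
    x = (random.uniform(-10, 10), random.uniform(-10, 10))
    assert lhs(x) == rhs(x)           # interchange law holds pointwise
```

Hypothesis would replace the `random` sampling with shrinking generators over `AtomicBlock` interfaces, which is what Phase 1a proposes.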
______________________________________________________________________ ## Phase 2: Representability Boundaries (R1/R2/R3 as Theorem) **Goal:** Elevate the R1/R2/R3 classification from taxonomy to theorem. ### 2a. R3 Undecidability Reduction Prove: for any non-trivial property P of `f_behav`, determining whether P holds is undecidable. **Approach:** Standard reduction from the halting problem. The formal representability doc already states the argument (Properties 4.2, 4.4); this phase writes it as a proper proof. - `TypeDef.constraint: Callable[[Any], bool]` -- equivalence of two constraints is undecidable (Rice's theorem) - `f_behav: (x, u) -> x'` -- any non-trivial semantic property of the transition function is undecidable **Format:** Pen-and-paper proof in LaTeX, optionally mechanized. ### 2b. R1/R2 Decidability Bounds Prove: all concepts classified as R1 are expressible in SROIQ(D) + SHACL-core, and all R2 concepts are expressible in SPARQL 1.1. **Approach:** Constructive -- the gds-owl export functions are the witness. For each R1 concept, show the OWL class/property structure is a valid SROIQ axiom. For each R2 concept, show the SPARQL query terminates and correctly validates the property. The existing SHACL shapes (`gds_interchange.owl.shacl`) and SPARQL queries (`gds_interchange.owl.sparql`) serve as constructive evidence. The proof obligation is to show they are *complete* for their respective tiers. ### 2c. Partition Independence Prove: the alignment between `G_struct/G_behav` and R1+R2/R3 is not tautological. This is the hardest part. The formal-representability doc acknowledges this is "a consistency property of our taxonomy, not an independent mathematical result." 
To strengthen it: - Define `G_struct` and `G_behav` independently of representability (e.g., via the canonical decomposition: `g` and `f_struct` are structural, `f_behav` is behavioral) - Define R1/R2/R3 independently via expressiveness of SROIQ/SPARQL - Show the two partitions coincide **Approach:** Formal argument. May remain a conjecture if the definitions are inherently coupled. ### Artifacts | Artifact | Location | Format | | --------------------------- | ------------------------------------------------------ | ---------------- | | R3 undecidability proof | `docs/research/verification/r3-undecidability.md` | Markdown + LaTeX | | R1/R2 completeness argument | `docs/research/verification/representability-proof.md` | Markdown | ### Dependencies Phase 1 (the topology `g` must be unambiguous before separating it from `f_behav`). ______________________________________________________________________ ## Phase 3: Round-Trip Fidelity (Property-Based Testing) **Goal:** Strengthen round-trip correctness from fixture-based tests to property-based testing with random generation. ### 3a. Hypothesis Strategies for GDSSpec Write Hypothesis strategies that generate valid, random `GDSSpec` instances: ``` strategy: GDSSpec = draw(types: dict[str, TypeDef]) # random type names + python_type + draw(spaces: dict[str, Space]) # fields referencing drawn types + draw(entities: dict[str, Entity]) # variables referencing drawn spaces + draw(blocks: dict[str, AtomicBlock]) # role-partitioned, valid interfaces + draw(wirings: dict[str, SpecWiring]) # source/target referencing drawn blocks + draw(params: ParameterSchema) # optional parameter defs ``` Constraints on generation: - Block interfaces must have valid port names (tokenizable) - Wirings must reference existing blocks - Mechanism.updates must reference existing entity variables - R3 fields (constraint callables) set to `None` -- they are lossy by design ### 3b. 
Structural Equality Checks The round-trip assertion `rho^{-1}(rho(c)) =_struct c` requires: - **Set-based comparison** for ports, wires, blocks (RDF triples are unordered) - **Content-based blank node matching** (field name + type, not node ID) - **R3 field exclusion** (constraints, python_type fallback to `str`) Implement a `structural_eq(spec1, spec2) -> bool` helper that handles these. ### 3c. SHACL/SPARQL Validation Gate Before reimporting, validate the exported RDF against: - SHACL shapes (R1 tier) -- all node/property constraints pass - SPARQL queries (R2 tier) -- no structural violations detected Only graphs passing both gates are reimported and compared. ### 3d. Extend to SystemIR, CanonicalGDS, VerificationReport The existing `test_roundtrip.py` covers all four `rho/rho^{-1}` pairs with the thermostat fixture. Extend each with Hypothesis strategies: | Round-trip | Current tests | PBT target | | -------------------- | ---------------- | ------------------------------------------- | | `GDSSpec` | 11 fixture tests | Random spec generation | | `SystemIR` | 4 fixture tests | Random IR from compiled specs | | `CanonicalGDS` | 2 fixture tests | Random canonical from `project_canonical()` | | `VerificationReport` | 2 fixture tests | Random reports with varied findings | ### Artifacts | Artifact | Location | Format | | -------------------------- | ---------------------------------------------- | ------ | | Hypothesis strategies | `packages/gds-owl/tests/strategies.py` | Python | | PBT round-trip tests | `packages/gds-owl/tests/test_roundtrip_pbt.py` | Python | | Structural equality helper | `packages/gds-owl/tests/helpers.py` | Python | ### Dependencies Phase 2 (must know which fields are structural vs lossy before writing equality checks). ______________________________________________________________________ ## Phase 4: Dynamical Invariants (Future) **Goal:** Verify existence and controllability properties from Zargham & Shorish (2022). ### 4a. 
Existence of Solutions (Paper Theorem 3.6) Prove: if the constraint set `C(x, t; g)` is compact, convex, and continuous, then an attainability correspondence exists. This is a runtime/R3 concern -- it requires evaluating `f` on concrete state. The `gds-analysis` package now exists with reachability (`reachable_set`, `configuration_space`, `backward_reachable_set`) but existence proofs require additional analytical machinery beyond trajectory sampling. ### 4b. Local Controllability (Paper Theorem 4.4) Prove: under Lipschitzian correspondence with closed convex values, the system is 0-controllable from a neighborhood around equilibrium. ### 4c. Connection to Bridge Proposal Steps 3-7 of the bridge proposal in [paper-implementation-gap.md](https://blockscience.github.io/gds-core/research/paper-implementation-gap/index.md) map to this phase. Steps 1-5 are complete (AdmissibleInputConstraint, TransitionSignature, StateMetric, reachable_set, configuration_space). Steps 6-7 remain open (#142). ### Dependencies Phases 1-3 (structural soundness must be established before reasoning about dynamics). 
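Reachability by trajectory sampling, as contrasted with existence proofs above, can be sketched as forward iteration over a discretized state space. This is a toy illustration only; the `reachable_set` name and signature here are stand-ins, not the `gds-analysis` API:

```python
from collections import deque

def reachable_set(x0, step, inputs, horizon):
    """Breadth-first forward reachability: every state attainable from
    x0 within `horizon` steps under the admissible inputs."""
    seen = {x0}
    frontier = deque([(x0, 0)])
    while frontier:
        x, t = frontier.popleft()
        if t == horizon:
            continue
        for u in inputs:
            x_next = step(x, u)
            if x_next not in seen:
                seen.add(x_next)
                frontier.append((x_next, t + 1))
    return seen

# Integer thermostat toy: heat (+1), cool (-1), or idle (0).
states = reachable_set(20, lambda x, u: x + u, inputs=(-1, 0, 1), horizon=2)
assert states == set(range(18, 23))
```

Existence results (Theorem 3.6) need compactness and convexity arguments on the constraint correspondence, which sampling like this cannot establish — hence the "additional analytical machinery" noted above.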
______________________________________________________________________ ## Prover Selection Summary | Component | Tool | Rationale | | ------------------------------- | ----------------------- | -------------------------------------------- | | Interchange law (quick check) | Hypothesis (Python) | Fast iteration, catches bugs early | | Interchange + coherence (proof) | Coq + ViCAR | String diagram tactics, ZX-calculus heritage | | Traced monoidal structure | Coq + Interaction Trees | Coinductive bisimulation for feedback | | R3 undecidability | LaTeX (pen-and-paper) | Standard reduction, no mechanization needed | | Round-trip fidelity | Hypothesis (Python) | Property-based testing with shrinking | | Dynamical invariants | TBD (gds-analysis) | Requires runtime execution engine | ______________________________________________________________________ ## Execution Priority ``` Phase 1a (interchange PBT) -- immediate, high value, low cost Phase 3a-b (round-trip PBT) -- immediate, extends existing test_roundtrip.py Phase 1a-c (Coq proofs) -- medium-term, requires Coq expertise Phase 2a (R3 reduction) -- medium-term, pen-and-paper Phase 2b-c (R1/R2 bounds) -- medium-term, constructive from existing code Phase 4 (dynamics) -- long-term, blocked on gds-analysis ``` The two immediate actions are: 1. `test_algebra_properties.py` -- Hypothesis tests for interchange law 1. `test_roundtrip_pbt.py` -- Hypothesis strategies for random GDSSpec generation Both produce concrete test artifacts that increase confidence while the formal proofs are developed in parallel. ______________________________________________________________________ ## References - Joyal, Street, Verity. "Traced monoidal categories." (1996) - Hasegawa. "Recursion from cyclic sharing." (1997) - Mac Lane. "Categories for the Working Mathematician." (1971) - Zargham, Shorish. "Generalized Dynamical Systems Part I: Foundations." 
(2022)
- ViCAR: https://github.com/inQWIRE/ViCAR
- Interaction Trees: https://github.com/DeepSpec/InteractionTrees
- [formal-representability.md](https://blockscience.github.io/gds-core/research/formal-representability/index.md) -- Def 1.1, Properties 4.2/4.4, Check 6.1
- [deep_research.md](https://blockscience.github.io/gds-core/research/deep_research/index.md) -- Full literature review
- [paper-implementation-gap.md](https://blockscience.github.io/gds-core/research/paper-implementation-gap/index.md) -- Bridge proposal Steps 1-7

# R3 Non-Representability: Formal Proofs

**Claim.** No finite OWL/SHACL/SPARQL expression can capture the semantic properties of GDS concepts classified as R3. The gap has two distinct sources: *undecidability* (Rice's theorem, halting problem) and *computational class separation* (decidable operations that exceed the expressiveness of all three formalisms).

______________________________________________________________________

## Preliminaries

**Definition (R3, per formal-representability.md Def 2.2).** A GDS concept $c$ is R3 if no finite OWL/SHACL/SPARQL expression can capture it. The gap follows from one or more of:

- **Rice's theorem**: any non-trivial semantic property of programs is undecidable
- **The halting problem**: arbitrary `Callable` may not terminate
- **Computational class separation**: string parsing and temporal execution exceed the expressiveness of all three formalisms

R3 concepts subdivide into:

- **R3-undecidable**: concepts whose semantic properties are undecidable (Rice's theorem / halting problem)
- **R3-separation**: concepts that are decidable but exceed SPARQL's computational model (no mutable state, no unbounded string computation, no multi-pass processing)

**Theorem (Rice, 1953).** For any non-trivial semantic property $P$ of partial recursive functions, the set $\\{ i \\mid \\varphi_i \\in P \\}$ is undecidable. A property $P$ is *non-trivial* if it is satisfied by some but not all partial recursive functions.
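The halting-problem mechanism behind R3-undecidability is the standard construction: from any program, build a predicate that is satisfiable iff the program halts. A toy version using step-bounded simulation — the names are illustrative, and Python generators stand in for arbitrary machines:

```python
def make_constraint(program):
    """Given a program (a generator function), return a predicate
    c(n) = 'program halts within n steps'. c is satisfiable iff the
    program halts, so deciding satisfiability would decide halting."""
    def c(n: int) -> bool:
        it = program()
        for _ in range(n):
            try:
                next(it)
            except StopIteration:
                return True   # halted within n steps
        return False          # still running after n steps
    return c

def halts_after_three():
    yield
    yield
    yield

def runs_forever():
    while True:
        yield
```

Each `next()` call simulates one step; `c` always terminates, yet no decidable formalism can tell, for arbitrary `program`, whether some `n` makes `c(n)` true.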
______________________________________________________________________

## R3-Undecidable Concepts

### 1. Transition Functions ($f\_{\\text{behav}}$)

**Definition.** In the canonical decomposition $h = f \\circ g$, the state update decomposes as $f = \\langle f\_{\\text{struct}}, f\_{\\text{read}}, f\_{\\text{behav}} \\rangle$ where $f\_{\\text{behav}} : (x, d) \\mapsto x'$ is an arbitrary Python callable implementing the mechanism's state transition.

**Proposition 1.** Any non-trivial semantic property of $f\_{\\text{behav}}$ is undecidable.

*Proof.* Let $P$ be a non-trivial semantic property of transition functions (e.g., "reaches equilibrium", "preserves non-negativity", "is monotone"). Each $f\_{\\text{behav}}$ is implemented as a Python callable, which is Turing-complete. The set of Python callables is recursively enumerable and can encode any partial recursive function. By Rice's theorem, $\\{ f \\mid f \\in P \\}$ is undecidable.

Therefore no static analysis — including any OWL reasoner, SHACL validator, or SPARQL query operating over a finite RDF graph — can determine whether an arbitrary $f\_{\\text{behav}}$ satisfies $P$. OWL DL reasoning is in 2NExpTime (decidable) precisely because $\\mathcal{SROIQ}(\\mathcal{D})$ restricts to the decidable fragment of first-order logic. It cannot express the fixed-point semantics required to evaluate $f\_{\\text{behav}}(x, d)$ for arbitrary inputs. $\\square$

**Classification:** R3-undecidable.

### 2. Constraint Predicates ($\\text{TypeDef.constraint}$)

**Definition.** `TypeDef.constraint: Optional[Callable[[Any], bool]]` is an arbitrary predicate over values of a given type.

**Proposition 2.** The equivalence and satisfiability of constraint predicates are undecidable.

*Proof.* Consider two constraints $c_1, c_2 : \\texttt{Callable\[[Any], bool\]}$.

(a) *Equivalence*: "Do $c_1$ and $c_2$ accept the same values?" is the question $\\forall x., c_1(x) = c_2(x)$.
This is equivalence of arbitrary programs, which is undecidable by Rice's theorem (the property "computes the same function as $c_2$" is non-trivial and semantic). (b) *Satisfiability*: "Does $c$ accept any value?" is the question $\\exists x., c(x) = \\text{True}$. This reduces to the halting problem: given a program $M$, define $c(x) = [\\text{run } M \\text{ for } x \\text{ steps and check if it halts}]$. Then $c$ is satisfiable iff $M$ halts. SHACL-core can express specific constraint patterns (e.g., `sh:minInclusive`, `sh:maxInclusive`) for the common cases used in practice (Probability, NonNegativeFloat, PositiveInt). These are R2. But the *general* `Callable[[Any], bool]` is R3. $\\square$ **Classification:** R3-undecidable. ### 3. Admissibility Predicates ($U\_{x,\\text{behav}}$) **Definition.** `AdmissibleInputConstraint.constraint: Optional[Callable[[dict, dict], bool]]` maps `(state, input) -> bool`, determining whether an input is admissible given the current state. **Proposition 3.** Any non-trivial semantic property of admissibility predicates is undecidable. *Proof.* The structural skeleton $U\_{x,\\text{struct}}$ (which boundary block, which entity-variable dependencies) is R1 — it is a finite relation exported as RDF triples. But the behavioral component $U\_{x,\\text{behav}}$ is a `Callable` with the same Turing-completeness as $f\_{\\text{behav}}$. By Rice's theorem, any non-trivial semantic property of this callable is undecidable (same argument as Proposition 1). Specifically: "Given state $x$, is the set of admissible inputs non-empty?" requires evaluating $\\exists u., \\text{constraint}(x, u) = \\text{True}$, which reduces to the halting problem by the same construction as Proposition 2(b). $\\square$ **Classification:** R3-undecidable. ______________________________________________________________________ ## R3-Separation Concepts These concepts are *decidable* — they terminate in polynomial time. 
But they exceed SPARQL 1.1's computational model, which lacks mutable state, multi-pass string processing, and ordered set operations over dynamically generated elements. ### 4. Auto-Wiring Process **Definition.** The `>>` operator discovers connections between blocks by computing token overlap: `tokenize(port_name) -> frozenset[str]`, then checking `frozenset` intersection. **Proposition 4.** The auto-wiring process exceeds SPARQL's expressiveness, despite being decidable in $O(n)$ time. *Proof.* The `tokenize()` function (`tokens.py`) performs: 1. Unicode NFC normalization 1. Split on `+` (space-plus-space delimiter) 1. Split each part on `,` (comma-space delimiter) 1. Strip whitespace and lowercase each token 1. Discard empty strings This is $O(n)$ string processing that always terminates. However, SPARQL 1.1 cannot replicate it because: - **NFC normalization** requires stateful multi-pass character rewriting, which SPARQL's `REGEX()` and string functions cannot express - **Multi-delimiter splitting** with ordered priority (`+` before `,`) requires sequential processing absent from SPARQL's declarative model - **Dynamic set construction** (tokenizing two port names, then computing set intersection) requires mutable intermediate state The *output* of auto-wiring (the discovered `WiringIR` edges) is exported as explicit RDF triples and is fully R1. Only the *computation that discovers them* is R3. $\\square$ **Classification:** R3-separation (decidable, O(n), but beyond SPARQL). ### 5. Construction Validation **Definition.** The `@model_validator` decorators on composition operators and block roles enforce invariants at Python object construction time. **Proposition 5.** The GDS construction validators are decidable but exceed SPARQL's expressiveness. The framework's extensibility makes the *general* case Turing-complete. 
*Proof.* The *current* GDS validators are decidable and efficient: - `StackComposition`: calls `tokenize()` (O(n), Prop 4) + set intersection (O(min(|A|,|B|))) - `BoundaryAction`: checks `forward_in == ()` (constant time) - `Mechanism`: checks `backward_in == ()` and `backward_out == ()` (constant time) - `TemporalLoop`: checks `direction == COVARIANT` for each wiring (O(k)) These are all polynomial. However, they involve `tokenize()` (proven beyond SPARQL in Prop 4), so they exceed SPARQL's expressiveness. Additionally, the framework is extensible: domain DSLs subclass `AtomicBlock` and may add arbitrary `@model_validator` logic. Since Python `@model_validator` can contain arbitrary code, the *system-level* guarantee is that construction validation is not bounded to any fixed complexity class. Any specific DSL may introduce Turing-complete validators. Therefore: the current validators are R3-separation; the framework's open extension points make the general case R3-undecidable. $\\square$ **Classification:** R3-separation (current validators), R3-undecidable (general extensible case). ### 6. Scheduling Semantics **Definition.** The temporal execution order of blocks within and across timesteps is not stored in `GDSSpec` — it is an external concern determined by the simulation engine (`gds-sim`). **Proposition 6.** Scheduling semantics are R3 because they are not represented in the data model. *Proof.* Scheduling is not a field on any GDS data structure. It exists only at runtime, external to the specification. A concept that does not appear in the data model cannot be serialized to RDF, and therefore cannot be captured by any OWL/SHACL/SPARQL expression operating over the RDF export. $\\square$ **Classification:** R3 (trivially — absent from the data model). ______________________________________________________________________ ## Summary | R3 Concept | Sub-classification | Source | Decidable? 
| | ----------------------- | --------------------------- | --------------------- | ------------------------- | | $f\_{\\text{behav}}$ | R3-undecidable | Rice's theorem | No | | TypeDef.constraint | R3-undecidable | Rice's + halting | No | | $U\_{x,\\text{behav}}$ | R3-undecidable | Rice's theorem | No | | Auto-wiring process | R3-separation | Computational class | Yes, O(n) | | Construction validation | R3-separation / undecidable | Class + extensibility | Current: yes; general: no | | Scheduling semantics | R3 (absent) | Not in data model | N/A | The R3 boundary has two distinct mechanisms: 1. **Undecidability** (Props 1-3): Arbitrary `Callable` can encode any partial recursive function. No decidable logic can determine non-trivial semantic properties of these functions. 1. **Computational separation** (Props 4-5): The specific operations (tokenization, set intersection) are decidable but exceed what SPARQL 1.1 can express. SPARQL lacks mutable state, multi-pass string processing, and dynamic set construction. Both mechanisms produce the same practical outcome: the concept cannot be represented in OWL/SHACL/SPARQL. The distinction matters for formal precision — R3-undecidable concepts *cannot* be captured by *any* finite formalism, while R3-separation concepts could theoretically be captured by a more expressive but still decidable formalism (e.g., a language with built-in Unicode normalization and set operations). # R1/R2 Decidability Bounds and Partition Independence **Claim.** Every GDS concept classified as R1 is expressible in $\\mathcal{SROIQ}(\\mathcal{D})$ + SHACL-core. Every R2 concept is expressible in SPARQL 1.1. The alignment between $G\_{\\text{struct}} / G\_{\\text{behav}}$ and R1+R2 / R3 is a structural consequence of the canonical decomposition, not a tautology. 
______________________________________________________________________ ## Part A: R1 Decidability (Constructive Proof) **Method.** For each R1 concept, we exhibit a constructive witness — the `gds-owl` export function that serializes it to OWL/RDF, and the SHACL-core shape that validates it. The existence of a correct serialization + validation pair constitutes a proof that the concept is expressible in the formalism. ### R1 Concepts and Their Witnesses | Concept | Export witness | SHACL-core witness | OWL construct | | --------------------------------------------- | ------------------------------------- | ------------------------------ | ------------------------------------------------ | | Composition tree | `_block_to_rdf()` | BlockIRShape | `rdf:type` dispatch to role subclasses | | Block interfaces | `_block_to_rdf()` | BoundaryActionShape | Port as blank node, `typeToken` as `xsd:string` | | Role partition | `_block_to_rdf()` | class dispatch | `owl:disjointUnionOf` on role classes | | Wiring topology | `_wiring_to_rdf()` | WiringIRShape | Wire blank nodes with `wireSource`, `wireTarget` | | Update targets ($f\_{\\text{struct}}$) | `_block_to_rdf()` | MechanismShape | UpdateMapEntry blank nodes | | Parameter schema | `_parameter_to_rdf()` | TypeDefShape | ParameterDef class with bounds | | Space/entity structure | `_space_to_rdf()`, `_entity_to_rdf()` | SpaceShape, EntityShape | SpaceField/StateVariable blank nodes | | Admissibility graph ($U\_{x,\\text{struct}}$) | `spec_to_graph()` | AdmissibleInputConstraintShape | AdmissibilityDep blank nodes | | Transition read deps ($f\_{\\text{read}}$) | `spec_to_graph()` | TransitionSignatureShape | TransitionReadEntry blank nodes | **Proof sketch for each concept:** All R1 concepts share the same structure: a *finite set of named entities* with *typed attributes* and *binary relations* to other named entities. 
This maps directly to OWL DL: $$ \\text{GDS concept} \\xrightarrow{\\rho} \\text{RDF individual} + \\text{OWL class membership} + \\text{datatype/object properties} $$ The SHACL-core shapes enforce: - **Cardinality**: `sh:minCount 1`, `sh:maxCount 1` for required fields (e.g., every block has exactly one name) - **Datatype**: `sh:datatype xsd:string`, `xsd:boolean`, `xsd:float` - **Class membership**: `sh:class` constraints (e.g., mechanism updates reference entities) These are all within the decidable fragment of $\\mathcal{SROIQ}(\\mathcal{D})$. Specifically: - Cardinality restrictions $\\to$ qualified number restrictions (QNR) in $\\mathcal{SROIQ}$ - Datatype constraints $\\to$ concrete domain $\\mathcal{D}$ with XSD datatypes - Class dispatch $\\to$ concept subsumption ($C \\sqsubseteq D$) - Disjoint roles $\\to$ role disjointness axioms Reasoning over these axioms is 2NExpTime-complete but *decidable*, which is the requirement for R1. $\\square$ ______________________________________________________________________ ## Part B: R2 Decidability (Constructive Proof) **Method.** For each R2 concept, we exhibit the SPARQL 1.1 feature required to validate it, and show that the query terminates. We demonstrate that SHACL-core (without `sh:sparql` embedding) is insufficient. ### R2 Concepts and Their Witnesses | Concept | Verification check | SPARQL feature required | Why SHACL-core is insufficient | | ------------------------ | ----------------------------------------------- | ----------------------------------------- | ------------------------------------------------------------------ | | Acyclicity (G-006) | Covariant wiring has no cycles | Property paths (`p+` transitive closure) | SHACL-core has no transitive closure operator | | Completeness (SC-001) | Every mechanism input is wired | `FILTER NOT EXISTS` (negation-as-failure) | SHACL-core cannot express "for all X, there exists Y such that..." 
| | Determinism (SC-002) | No two mechanisms update the same $(E, V)$ pair | `GROUP BY` + `HAVING (COUNT > 1)` | SHACL-core cannot aggregate across nodes | | Dangling wirings (G-004) | Wiring source/target reference existing blocks | `FILTER NOT EXISTS` | SHACL-core `sh:class` checks class membership, not name existence | **Note on SHACL-core vs SHACL-SPARQL:** SHACL-SPARQL (`sh:sparql`) embeds SPARQL queries inside SHACL shapes and *can* express everything SPARQL can. The insufficiency argument above applies only to SHACL-core (node shapes, property shapes with cardinality/datatype/class constraints, without `sh:sparql`). This is the distinction that separates R1 from R2. **Proof sketch.** SPARQL 1.1 is a query language over finite RDF graphs. Key properties: 1. **Termination**: every SPARQL query over a finite graph terminates, because: 1. Pattern matching is bounded by the graph size 1. Property paths (`p+`) are bounded by the number of nodes 1. Aggregation operates over finite result sets 1. There is no recursion, no mutable state, no unbounded iteration 1. **Expressiveness**: SPARQL 1.1 is equivalent in expressive power to relational algebra extended with transitive closure and aggregation. This suffices for the four R2 properties: 1. *Acyclicity*: Express as `ASK { ?x gds:wiresTo+ ?x }` — if the query returns true, a cycle exists. The `+` operator computes transitive closure over finitely many nodes. 1. *Completeness*: Express as negation-as-failure: `SELECT ?port WHERE { ?block gds:hasPort ?port . FILTER NOT EXISTS { ?wiring gds:wireTarget ?port } }` — unmatched ports indicate incompleteness. 1. *Determinism*: Express as aggregation: `SELECT ?entity ?var WHERE { ?mech gds:updatesEntity ?entity ; gds:updatesVariable ?var } GROUP BY ?entity ?var HAVING (COUNT(?mech) > 1)` — results indicate non-deterministic updates. 1. *Dangling references*: Express as negation-as-failure: `SELECT ?wiring WHERE { ?wiring gds:wireSource ?name . 
FILTER NOT EXISTS { ?block gds:hasName ?name } }` — unresolved references.

1. **SHACL-core insufficiency**: SHACL-core validates the *local neighborhood* of a node. It cannot express cross-node constraints that require:
   1. Transitive closure (reachability across arbitrarily many edges)
   1. Negation-as-failure over the entire graph (not just a node's properties)
   1. Aggregation across independent nodes

Therefore R2 concepts require SPARQL but remain decidable because SPARQL queries over finite graphs always terminate. $\\square$

______________________________________________________________________

## Part C: Partition Independence

**Question.** Is the alignment between $G\_{\\text{struct}} / G\_{\\text{behav}}$ and R1+R2 / R3 a tautology (we defined them to match) or a structural consequence (they match because of deeper reasons)?

### Independent Definitions

**Definition C.1 (Structural/Behavioral partition — from canonical decomposition).** Given $h = f \\circ g$ where $g$ is the policy map and $f = \\langle f\_{\\text{struct}}, f\_{\\text{read}}, f\_{\\text{behav}} \\rangle$:

$$
G\_{\\text{struct}} = \\{c \\in \\text{GDS} \\mid c \\text{ is determined by } g, f\_{\\text{struct}}, \\text{ or } f\_{\\text{read}}\\}
$$

$$
G\_{\\text{behav}} = \\{c \\in \\text{GDS} \\mid c \\text{ depends on evaluating } f\_{\\text{behav}}, \\text{ a } \\texttt{Callable}, \\text{ or a computation requiring mutable intermediate state or ordered multi-pass processing}\\}
$$

This definition depends on the canonical decomposition and the computational character of the concept. It does not reference OWL, SHACL, or SPARQL — only the intrinsic computational requirements of the concept.

**Remark.** Definition C.1 places auto-wiring and construction validation in $G\_{\\text{behav}}$ because they involve computation (tokenization, set intersection) that goes beyond what the static data model encodes.
The *results* of auto-wiring (WiringIR edges) are $G\_{\\text{struct}}$; the *process* is $G\_{\\text{behav}}$.

**Definition C.2 (Representability tiers — from formal language expressiveness, per formal-representability.md Def 2.2).**

$$
\\text{R1} = \\{c \\mid c \\text{ is expressible in } \\mathcal{SROIQ}(\\mathcal{D}) + \\text{SHACL-core}\\}
$$

$$
\\text{R2} = \\{c \\mid c \\text{ requires SPARQL 1.1 but is decidable over finite graphs}\\}
$$

$$
\\text{R3} = \\{c \\mid \\text{no finite OWL/SHACL/SPARQL expression can capture } c\\}
$$

R3 has two sub-sources (see [r3-undecidability.md](https://blockscience.github.io/gds-core/research/verification/r3-undecidability/index.md)):

- **R3-undecidable**: semantic properties of arbitrary `Callable` are undecidable by Rice's theorem
- **R3-separation**: decidable computations that exceed SPARQL's model (no mutable state, no multi-pass string processing)

This definition depends only on computational expressiveness. It does not reference the canonical decomposition.

### The Coincidence Theorem

**Theorem C.3.** $G\_{\\text{struct}} = \\text{R1} \\cup \\text{R2}$ and $G\_{\\text{behav}} = \\text{R3}$.

*Proof.* Four containments:

**($G\_{\\text{struct}} \\subseteq \\text{R1} \\cup \\text{R2}$):** Every structural concept is a finite set of named entities with typed attributes and binary relations (Part A shows these are R1), or a graph-global property checkable by a terminating SPARQL query (Part B shows these are R2). Proved constructively above.

**($G\_{\\text{behav}} \\subseteq \\text{R3}$):** Every behavioral concept either involves evaluating an arbitrary `Callable` or performing computation beyond SPARQL's expressiveness.

- For R3-undecidable concepts ($f\_{\\text{behav}}$, `TypeDef.constraint`, $U\_{x,\\text{behav}}$): these accept arbitrary `Callable`, which can encode any partial recursive function. By Rice's theorem, any non-trivial semantic property of such callables is undecidable.
Therefore no finite OWL/SHACL/SPARQL expression can capture them. - For R3-separation concepts (auto-wiring, construction validation): these involve multi-pass string processing (Unicode NFC + ordered delimiter splitting) and dynamic set construction, which exceed SPARQL 1.1's computational model (no mutable state, no ordered multi-pass processing). Proved in [r3-undecidability.md](https://blockscience.github.io/gds-core/research/verification/r3-undecidability/index.md), Propositions 4-5. **($\\text{R1} \\cup \\text{R2} \\subseteq G\_{\\text{struct}}$):** Suppose $c \\in \\text{R1} \\cup \\text{R2}$ but $c \\in G\_{\\text{behav}}$. Then $c$ either depends on evaluating an arbitrary `Callable` or involves computation beyond SPARQL. - If $c$ depends on an arbitrary `Callable`: by Rice's theorem, any non-trivial semantic property of that callable is undecidable. But R1 $\\cup$ R2 concepts are decidable (R1 by 2NExpTime OWL reasoning, R2 by SPARQL termination over finite graphs). Contradiction. - If $c$ involves computation beyond SPARQL but no `Callable`: then $c$ is R3-separation, which by definition is not in R1 $\\cup$ R2. Contradiction. **($\\text{R3} \\subseteq G\_{\\text{behav}}$):** Suppose $c \\in \\text{R3}$ but $c \\in G\_{\\text{struct}}$. Then $c$ is determined by $g$, $f\_{\\text{struct}}$, or $f\_{\\text{read}}$, all of which are finite relations over named entities. Properties of finite relations over finite graphs are decidable — expressible in OWL (R1) for local constraints, or SPARQL (R2) for global constraints. Therefore $c \\in \\text{R1} \\cup \\text{R2}$, contradicting $c \\in \\text{R3}$. $\\square$ ### Why This Is Not Tautological The formal-representability document (Check 6.1) states: "we defined $G\_{\\text{struct}}$ and $G\_{\\text{behav}}$ to capture what is and isn't representable." This is true *operationally* — the definitions were designed with representability in mind. But the coincidence is not *logically* tautological because: 1. 
**The definitions are independently testable.** Given a new GDS concept, you can classify it as structural/behavioral by examining the canonical decomposition, and independently classify it by representability tier by attempting to express it in OWL/SPARQL. If the partition were tautological, these would be the same test. They are not — one asks "does this depend on a `Callable` or computation beyond SPARQL?" and the other asks "can $\\mathcal{SROIQ}$ or SPARQL express this?" 1. **The coincidence could fail in a modified framework.** If GDS stored transition functions as symbolic expressions (e.g., polynomial arithmetic over state variables), then $f\_{\\text{behav}}$ would move from R3-undecidable to R2 (polynomial evaluation is decidable and expressible in SPARQL via arithmetic). The structural/behavioral partition would remain the same (it still updates state), but the representability boundary would shift. Similarly, if SPARQL were extended with Unicode normalization primitives, auto-wiring would move from R3-separation to R2. 1. **The key structural insight**: the canonical decomposition $h = f \\circ g$ separates topology from computation, and the OWL+SHACL-core+SPARQL stack can express exactly the topological content. This is not a definition — it is a claim about the expressiveness of description logic and SPARQL relative to the structure of GDS specifications. The claim holds because GDS made the design choice to allow arbitrary `Callable` for behavioral components and to use string-based tokenization for wiring — different design choices would yield a different boundary. ### Status Theorem C.3 is **proved** under the current GDS design. The proof depends on two GDS design choices: 1. Behavioral components accept arbitrary `Callable` (making them R3-undecidable via Rice's theorem) 1. Auto-wiring uses Python string processing (making it R3-separation via computational class gap) If either design choice changed, the theorem would need revision. 
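Design choice (2) — auto-wiring via Python string processing — is small enough to sketch in full, which makes the R3-separation concrete: every step is linear-time, yet the NFC pass and the ordered two-delimiter split have no SPARQL 1.1 counterpart. A minimal reimplementation of the Proposition 4 pipeline (illustrative only; the actual `tokens.py` may differ in detail, and `overlaps` is a hypothetical helper naming the `>>` compatibility check):

```python
import unicodedata


def tokenize(port_name: str) -> frozenset[str]:
    # Step 1: Unicode NFC normalization — stateful, multi-pass character rewriting
    text = unicodedata.normalize("NFC", port_name)
    # Steps 2-3: ordered delimiter split — " + " first, then ", " within each part
    parts = [p for chunk in text.split(" + ") for p in chunk.split(", ")]
    # Steps 4-5: strip + lowercase each token, discard empties; a frozenset
    # makes the subsequent overlap check a plain set intersection
    return frozenset(t.strip().lower() for t in parts if t.strip())


def overlaps(a: str, b: str) -> bool:
    # Auto-wiring connects two ports iff their token sets intersect
    return bool(tokenize(a) & tokenize(b))
```

Everything R1 about auto-wiring (the discovered edges) survives this sketch as data; the sequential splitting and the mutable intermediate lists are exactly what SPARQL's declarative model lacks.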
______________________________________________________________________ ## References - [r3-undecidability.md](https://blockscience.github.io/gds-core/research/verification/r3-undecidability/index.md) — R3 non-representability proofs (Props 1-6) - [formal-representability.md](https://blockscience.github.io/gds-core/research/formal-representability/index.md) — Properties 3.1-4.6, Def 2.2, Check 6.1, Property 6.2 - [deep_research.md](https://blockscience.github.io/gds-core/research/deep_research/index.md) — SROIQ expressiveness analysis - Horrocks, I. et al. "The Even More Irresistible SROIQ." (2006) — 2NExpTime decidability of $\\mathcal{SROIQ}$ - Rice, H.G. "Classes of Recursively Enumerable Sets and Their Decision Problems." (1953) # GDS Formal Verification — Research Journal Structured log of verification research for the Generalized Dynamical Systems ecosystem. Each entry records motivation, method, outcome, and next steps. Verification plan: [verification-plan.md](https://blockscience.github.io/gds-core/research/verification-plan/index.md) Issues: [#134](https://github.com/BlockScience/gds-core/issues/134), [#135](https://github.com/BlockScience/gds-core/issues/135), [#136](https://github.com/BlockScience/gds-core/issues/136), [#137](https://github.com/BlockScience/gds-core/issues/137), [#138](https://github.com/BlockScience/gds-core/issues/138) ______________________________________________________________________ ## Entry 001 — 2026-03-28 **Subject:** Project setup — research directory, verification plan, literature review ### Motivation Three formal claims in the GDS ecosystem remain unverified: 1. The composition algebra `(Block, >>, |, fb, loop)` satisfies categorical axioms (interchange law, coherence, traced monoidal structure) — stated in `formal-representability.md` Def 1.1 but not proved. 1. The R1/R2/R3 representability taxonomy is a designed classification, not an independently derived result — acknowledged in Check 6.1. 1. 
Round-trip bijectivity `rho^{-1}(rho(c)) =_struct c` is tested with a single thermostat fixture, not with randomized inputs. ### Actions 1. **Consolidated research artifacts** into `docs/research/` to separate from published package code (`5a3f1fa`). Moved 5 documents: 1. `research-boundaries.md` (from `docs/guides/`) 1. `paper-implementation-gap.md` (from `docs/guides/`) 1. `formal-representability.md` (from `docs/owl/guide/`) 1. `representation-gap.md` (from `packages/gds-owl/docs/`) 1. `deep_research.md` (from `packages/gds-owl/docs/`) Updated all cross-references in mkdocs.yml and 5 markdown files. Removed duplicate `packages/gds-owl/docs/formal-representability.md`. Verified `mkdocs build --strict` passes. 1. **Literature review** via NotebookLM using three sources: 1. Zargham & Shorish (2022) — GDS foundations paper 1. `formal-representability.md` — representability analysis 1. `deep_research.md` — categorical semantics survey Key findings: - **Prover selection:** Coq + ViCAR (string diagram tactics) + Interaction Trees (traced monoidal categories) is the best fit for GDS's bidirectional block algebra with two feedback operators. - **PROPs** (Products and Permutations categories) are the natural formalization target — objects are natural numbers, tensor is addition. - **Hasegawa's correspondence** (Conway fixed-point \<-> categorical trace) validates that `fb` computes within-timestep fixed points. - **Phase ordering matters:** categorical axioms must hold before the representability boundary can be cleanly stated, and that boundary must be established before round-trip equality checks are meaningful. 1. **Wrote verification plan** (`docs/research/verification-plan.md`) with 4 phases: 1. Phase 1: Composition algebra (Hypothesis PBT + Coq/ViCAR) 1. Phase 2: Representability boundaries (formal reduction) 1. Phase 3: Round-trip fidelity (Hypothesis PBT) 1. Phase 4: Dynamical invariants (future gds-analysis) 1. 
**Created 5 GitHub issues** with `verification` label:

1. #134: Phase 1a — interchange law PBT
1. #135: Phase 1b-c — Coq mechanized proofs
1. #136: Phase 2 — R1/R2/R3 as theorem
1. #137: Phase 3 — round-trip PBT
1. #138: Phase 4 — dynamical invariants

### References

- Joyal, Street, Verity. "Traced monoidal categories." (1996)
- Hasegawa. "Recursion from cyclic sharing." (1997)
- Mac Lane. "Categories for the Working Mathematician." (1971)
- ViCAR: https://github.com/inQWIRE/ViCAR
- Interaction Trees: https://github.com/DeepSpec/InteractionTrees

______________________________________________________________________

## Entry 002 — 2026-03-28

**Subject:** Phase 1a + Phase 3 — property-based testing implementation

### Motivation

The two immediate, low-cost verification actions identified in the plan:

1. Hypothesis tests for the interchange law (Phase 1a, #134)
1. Hypothesis strategies for random GDSSpec round-trip testing (Phase 3, #137)

### Method

#### Phase 1a: Composition Algebra Properties

**File:** `packages/gds-framework/tests/test_algebra_properties.py`

Wrote Hypothesis strategies for:

- `port_tuples` — generate tuples of `Port` with distinct lowercase names
- `interfaces` — random `Interface` with ports on all four slots
- `named_block` — random `AtomicBlock` with name/interface
- `stackable_pair` — two blocks sharing a token for `>>` compatibility
- `interchange_quadruple` — four blocks (f, g, h, j) with two distinct shared tokens enabling both sides of the interchange law

Tested properties (200 random examples each):

- **Interchange law:** `(g >> f) | (j >> h)` has same interface as `(g | j) >> (f | h)` — port name sets match, flattened block sets match
- **Sequential associativity:** `(a >> b) >> c` = `a >> (b >> c)`
- **Parallel associativity:** `(a | b) | c` = `a | (b | c)`
- **Parallel commutativity:** `a | b` has same port name sets as `b | a`
- **Identity:** empty-interface block is identity for both `>>` and `|`
- **Structural concatenation:**
composed interface is tuple concatenation of constituent interfaces

**Result:** All 11 tests pass. No interchange law violations found across 2,200+ random compositions.

#### Phase 3: OWL Round-Trip PBT

**File:** `packages/gds-owl/tests/strategies.py`

Wrote `gds_specs()` strategy generating random valid `GDSSpec` instances:

- 1-3 TypeDefs (int/float/str/bool, optional units)
- 1-2 Spaces with 1-2 fields each
- 1 Entity with 1-2 state variables
- 2-5 blocks in a sequential chain (BoundaryAction -> Policy\* -> Mechanism)
- 1 SpecWiring connecting the chain
- All R3 fields (constraints) set to None

Generated names are filtered to avoid colliding with the keyword arguments (`name`, `description`, `symbol`) of the `gds.space()` / `gds.entity()` factory functions.

**File:** `packages/gds-owl/tests/test_roundtrip_pbt.py`

14 property-based tests (100 random specs each):

- Name, type names, python_types, units survive
- Constraints are correctly lossy (always None after round-trip)
- Space names and field names survive (set-based comparison)
- Entity names and variable names survive
- Block names and roles (kind) survive
- Mechanism.updates survive (set-based comparison)
- Wiring names and wire counts survive

**Result:** All 14 tests pass. No round-trip fidelity violations found across 1,400 random specifications.

### Outcome

| Suite                          | Tests  | Random examples | Failures |
| ------------------------------ | ------ | --------------- | -------- |
| Composition algebra (Phase 1a) | 11     | ~2,200          | 0        |
| OWL round-trip (Phase 3)       | 14     | ~1,400          | 0        |
| **Total new coverage**         | **25** | **~3,600**      | **0**    |

Added `hypothesis` as dev dependency for both `gds-framework` and `gds-owl`. Commit: `835ac83`

### Observations

1. The interchange law test compares **port name sets**, not port tuple order.
This is correct for the monoidal structure (commutativity of `|`) but a stronger test would verify that the flattened block evaluation order is consistent — which it is, as confirmed by `test_flatten_same_blocks`. 1. The round-trip strategy generates linear block chains only (no parallel composition or feedback in the spec). Future work should extend to branching topologies and multiple wirings. 1. The `gds_specs` strategy currently generates 2-5 blocks. Scaling to larger specs (10-20 blocks) would stress-test the RDF serialization for blank node management and property ordering. ### Next Steps - [ ] Extend `gds_specs` to generate specs with parallel block groups - [ ] Add SystemIR, CanonicalGDS, and VerificationReport PBT round-trips - [ ] Add SHACL/SPARQL validation gate before reimport (Phase 3c) - [ ] Investigate Coq/ViCAR for mechanized interchange law proof (Phase 1b) - [ ] Write R3 undecidability reduction (Phase 2a) ______________________________________________________________________ ## Entry 003 — 2026-03-28 **Subject:** Phase 1a + Phase 3 — audit fixes and CI reproducibility ### Motivation Code review of PBT tests identified 7 issues (2 critical, 4 important, 1 minor). Hypothesis tests also needed deterministic seeds for CI reproducibility. 
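The deterministic-seed requirement is conventionally met with Hypothesis settings profiles; a `conftest.py` sketch of the pattern (the profile names and environment variable here are assumptions):

```python
# conftest.py — sketch of deterministic Hypothesis profiles (names assumed)
import os

from hypothesis import settings

# CI: same inputs every run, no .hypothesis/ example database needed
settings.register_profile("ci", derandomize=True, database=None)
# Local dev: Hypothesis defaults (randomized exploration)
settings.register_profile("dev")
settings.load_profile(os.getenv("HYPOTHESIS_PROFILE", "dev"))
```

The CI workflow then only needs to export `HYPOTHESIS_PROFILE=ci`; local runs fall back to randomized exploration.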
### Fixes Applied | # | Severity | Issue | Fix | | --- | --------- | ------------------------------------------------ | ------------------------------------------------------------------- | | 1 | Critical | `stackable_pair` duplicate port names | `port_tuples` now accepts `exclude` param; shared token excluded | | 2 | Critical | Missing backward port assertions for `>>` | Added `backward_in` + `backward_out` assertions | | 3 | Important | Identity tests only covered right-identity | Split into 4 tests: left/right for both `>>` and `\|` | | 4 | Important | `gds_specs` generated 0 Policies 25% of the time | Changed `min_blocks=2` to `min_blocks=3` | | 5 | Important | Vacuous lossiness test | Removed (fixture test in `test_roundtrip.py` covers real lossiness) | | 6 | Important | Wire round-trip only checked count | Now checks `(source, target, space)` set equality | | 7 | Minor | Inline import of `Mechanism` | Moved to module level | ### Reproducibility Added Hypothesis profiles for deterministic CI: - `ci` profile: `derandomize=True`, `database=None` — same inputs every run, no `.hypothesis/` database needed - `dev` profile: randomized (default) — broader exploration locally - CI workflow sets `HYPOTHESIS_PROFILE=ci` env var - Both profiles auto-loaded via `settings.load_profile(os.getenv(...))` Commit: `65cd552` ### Updated Test Counts | Suite | Tests | Random examples | | ------------------------------ | -------------------------------- | --------------- | | Composition algebra (Phase 1a) | 13 (+2 identity tests) | ~2,600 | | OWL round-trip (Phase 3) | 13 (-1 vacuous, +1 wire content) | ~1,300 | ______________________________________________________________________ ## Entry 004 — 2026-03-28 **Subject:** Phase 2 — formal proofs for R1/R2/R3 representability ### Motivation The R1/R2/R3 taxonomy in `formal-representability.md` is acknowledged as a designed classification (Check 6.1). Phase 2 elevates it to a theorem with formal proofs. 
### Method Produced two documents in `docs/research/verification/`: **`r3-undecidability.md`** — R3 non-representability proofs. Key structural improvement: R3 has two distinct sub-classifications: - **R3-undecidable**: concepts whose semantic properties are undecidable (Rice's theorem / halting problem). Applies to `f_behav`, `TypeDef.constraint`, `U_x_behav`. - **R3-separation**: decidable computations that exceed SPARQL's model. Applies to auto-wiring (O(n) but needs NFC + multi-pass splitting) and construction validation (polynomial but needs tokenize()). Six propositions proved: 1. `f_behav` — Rice's theorem (R3-undecidable) 1. `TypeDef.constraint` — Rice's + halting problem (R3-undecidable) 1. `U_x_behav` — Rice's theorem (R3-undecidable) 1. Auto-wiring — SPARQL expressiveness gap (R3-separation) 1. Construction validation — current: R3-separation; extensible: R3-undecidable 1. Scheduling — not in data model (trivially R3) **`representability-proof.md`** — R1/R2 decidability + partition independence. 
- Part A: 9 R1 concepts with constructive witnesses (export functions + 13 SHACL-core shapes) - Part B: 4 R2 concepts with SPARQL witnesses (transitive closure, negation-as-failure, aggregation) - Part C: Theorem C.3 proving $G\_{\\text{struct}} = \\text{R1} \\cup \\text{R2}$ and $G\_{\\text{behav}} = \\text{R3}$ Commit: `384e32c` ### Audit and Revisions Two rounds of review identified 9 issues total: **Round 1 (8 findings):** | # | Severity | Issue | Fix | | --- | --------- | ------------------------------------------------ | ----------------------------------------------------------- | | 1 | Critical | R3 definition inconsistency between documents | Aligned to Def 2.2; introduced R3-undecidable/R3-separation | | 2 | Critical | Auto-wiring called "undecidable" but is O(n) | Reclassified as R3-separation | | 3 | Critical | Construction validation proved general case only | Split current (decidable) vs extensible (undecidable) | | 4 | Important | "OWL (R1)" should be "R1 ∪ R2" | Fixed in Theorem C.3 reverse containment | | 5 | Important | Missing Rice's theorem in forward containment | Added explicit invocation | | 6 | Important | SHACL-core qualifier dropped in Part B | Consistent throughout; added SHACL-SPARQL note | | 7 | Minor | Fragile line-number references | Replaced with function names | | 8 | Minor | Redundant argument in Prop 6 | Simplified to "not in data model" | Commit: `3072f5c` **Round 2 (1 finding):** Definition C.1 mentioned "computation beyond SPARQL" while claiming not to reference SPARQL — a self-contradiction that weakened the non-tautology argument. Fixed by replacing with intrinsic computational characterization: "computation requiring mutable intermediate state or ordered multi-pass processing." This makes the counterfactual genuine: extending SPARQL with NFC primitives would move auto-wiring from R3 to R2, but it stays in $G\_{\\text{behav}}$ (still requires multi-pass processing), so the partitions diverge — proving non-tautology. 
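The counterfactual behind this fix can be made concrete with a toy example (not GDS API): the same transition can be stored as an opaque callable or as a symbolic expression tree. Only the latter is a finite data structure that a decidable query language could in principle inspect — which is why the representability boundary would shift under that design change while the structural/behavioral partition would not.

```python
# Toy illustration (not GDS API): two encodings of the transition x' = 2*x + 1.

# Opaque callable — non-trivial semantic properties are undecidable (Rice's theorem).
f_opaque = lambda x: 2 * x + 1

# Symbolic expression tree — finite, serializable, statically analyzable.
f_symbolic = {"op": "add", "args": [{"op": "mul", "args": [2, "x"]}, 1]}


def evaluate(expr, env):
    """Evaluate a symbolic expression tree against a variable environment."""
    if isinstance(expr, str):        # variable reference
        return env[expr]
    if isinstance(expr, dict):       # operator node
        vals = [evaluate(a, env) for a in expr["args"]]
        return sum(vals) if expr["op"] == "add" else vals[0] * vals[1]
    return expr                      # numeric literal


assert evaluate(f_symbolic, {"x": 3}) == f_opaque(3) == 7
```

Both encodings compute the same function, but only `f_symbolic` can be walked, exported to RDF, and checked by terminating queries without ever being executed.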
### Observations 1. The R3-undecidable/R3-separation distinction is a genuine structural improvement. It clarifies that auto-wiring is a *design choice* (use Python string processing) not a *mathematical necessity* (undecidable problem). A framework with built-in SPARQL tokenization primitives would have a different boundary. 1. Theorem C.3 now depends on two GDS design choices: (a) arbitrary `Callable` for behavioral components, (b) Python string processing for auto-wiring. Changing either would require revising the theorem. 1. The constructive witnesses (export functions + SHACL shapes + SPARQL templates) serve double duty: they are both the *proof* of representability and the *implementation* of it. ### Next Steps - [x] ~~Write R3 undecidability reduction (Phase 2a)~~ - [x] ~~Write R1/R2 decidability bounds (Phase 2b)~~ - [x] ~~Write partition independence argument (Phase 2c)~~ - [ ] Extend `gds_specs` to generate specs with parallel block groups - [ ] Investigate Coq/ViCAR for mechanized interchange law proof (Phase 1b) ______________________________________________________________________ ## Entry 005 — 2026-03-28 **Subject:** Phase 3c-d — SHACL/SPARQL validation gate + derived property preservation tests ### Motivation Phase 3a-b (GDSSpec round-trip PBT) was complete but lacked: - SHACL validation of exported RDF before reimport (3c) - Round-trip coverage for SystemIR, CanonicalGDS, VerificationReport (3d) ### Method #### Phase 3c: Validation Gates Added two separate test classes to avoid coupling SPARQL tests to pyshacl: - **TestSHACLConformance** (pyshacl-gated, 30 examples): validates exported GDSSpec and SystemIR RDF against structural SHACL shapes - **TestSPARQLConformance** (no gate, 30 examples): verifies `blocks_by_role` query returns expected block names #### Phase 3d: Derived Property Preservation New strategies in `strategies.py`: - `system_irs()`: compose blocks with `>>`, `compile_system()` - `specs_with_canonical()`: `project_canonical()` on 
random spec - `specs_with_report()`: `verify()` on compiled random spec - `_compose_sequential()`: shared helper for block composition **TestDerivedPropertyPreservation** (8 tests, 50 examples each): - SystemIR: name, block names, wiring content (source/target pairs), composition type - CanonicalGDS: boundary/policy/mechanism block sets, state variables - VerificationReport: system name, finding count + check_id distribution (Counter-based, catches collapsed duplicates) **TestStrategyInvariants** (2 tests, 50 examples each): - G-002 failures only on BoundaryAction + Mechanism (expected by design) - Same invariant holds after report round-trip Commits: `a90cad8`, `e51d53f` ### Audit and Fixes 6 findings from code review: | # | Severity | Issue | Fix | | --- | --------- | ---------------------------------------- | ------------------------------------------------------ | | 1 | Important | SPARQL test gated behind HAS_PYSHACL | Moved to own TestSPARQLConformance class | | 2 | Important | G-002 test didn't round-trip, misplaced | Split into TestStrategyInvariants + round-trip version | | 3 | Important | Finding set comparison lost multiplicity | Counter + len() assertion | | 4 | Important | SystemIR wiring content not checked | Added source/target pair comparison | | 5 | Minor | Dead else branch (min_blocks=3) | Removed | | 6 | Minor | Duplicated composition logic | Extracted `_compose_sequential()` | ### Outcome | Class | Tests | Examples | Purpose | | ------------------------------- | ------ | ---------- | ------------------------------- | | TestSpecRoundTripPBT | 13 | 100 each | GDSSpec structural fidelity | | TestSHACLConformance | 2 | 30 each | SHACL shape validation | | TestSPARQLConformance | 1 | 30 each | SPARQL query correctness | | TestDerivedPropertyPreservation | 8 | 50 each | IR/Canonical/Report round-trips | | TestStrategyInvariants | 2 | 50 each | G-002 invariant + survival | | **Total** | **26** | **~2,060** | | Combined with Phase 1a (13 algebra tests, 
~2,600 examples), total PBT coverage is **39 tests generating ~4,660 random examples**. ### Issues Closed - #137 (Phase 3: Property-based round-trip testing for OWL export/import) ### Remaining Open Issues | Issue | Phase | Status | | ----- | --------------------------- | ------------------------------------- | | #135 | 1b-c: Coq mechanized proofs | Open (long-term, needs Coq expertise) | | #138 | 4: Dynamical invariants | Closed (superseded by #140-#142) | ______________________________________________________________________ ## Entry 006 — 2026-03-28 **Subject:** StateMetric (bridge Step 3) + gds-analysis package (Steps 4-5) ### Motivation The bridge proposal (paper-implementation-gap.md) maps paper definitions to code in 7 incremental steps. Steps 1-2 were done prior. Steps 3-5 were identified as the next actionable work — Step 3 is structural (same pattern as Steps 1-2), and Steps 4-5 require runtime but are now unblocked by gds-sim's existence. ### Actions #### Step 3: StateMetric (Paper Assumption 3.2) Added `StateMetric` to gds-framework following the exact AdmissibleInputConstraint / TransitionSignature pattern: - `constraints.py`: frozen Pydantic model with `name`, `variables` (entity-variable pairs), `metric_type` (annotation), `distance` (R3 lossy callable), `description` - `spec.py`: `GDSSpec.register_state_metric()` + `_validate_state_metrics()` (checks entity/variable references exist, rejects empty variables) - `__init__.py`: exported as public API - `export.py`: RDF export as `StateMetric` class + `MetricVariableEntry` blank nodes - `import_.py`: round-trip import with `distance=None` (R3 lossy) - `shacl.py`: `StateMetricShape` (name required, xsd:string) - 9 new framework tests + 1 OWL round-trip test Commit: `f9168ee` #### gds-analysis Package (#140) New package bridging gds-framework structural annotations to gds-sim runtime.
Dependency graph: ``` gds-framework <-- gds-sim <-- gds-analysis ^ | +----------------------------------+ ``` Three modules: - **`adapter.py`**: `spec_to_model(spec, policies, sufs, ...)` maps GDSSpec blocks to `gds_sim.Model`. BoundaryAction + Policy → policies, Mechanism.updates → SUFs keyed by state variable. Auto-generates initial state from entities. Optionally wraps BoundaryAction policies with constraint guards. - **`constraints.py`**: `guarded_policy(fn, constraints)` wraps a policy with AdmissibleInputConstraint enforcement. Three violation modes: warn (log + continue), raise (ConstraintViolation), zero (empty signal). - **`metrics.py`**: `trajectory_distances(spec, trajectory)` computes StateMetric distances between successive states. Extracts relevant variables by `EntityName.VariableName` key, applies distance callable. 21 tests, 93% coverage, including end-to-end thermostat integration (spec → model → simulate → measure distances). Commit: `447fc62` #### Reachable Set R(x) and Configuration Space X_C (#141) Added `reachability.py` to gds-analysis: - **`reachable_set(spec, model, state, input_samples)`**: Paper Def 4.1. For each input sample, runs one timestep with overridden policy outputs, collects distinct reached states. Deduplicates by state fingerprint. - **`reachable_graph(spec, model, initial_states, input_samples, max_depth)`**: BFS expansion from initial states, applying `reachable_set()` at each node. Returns adjacency dict of state fingerprints. - **`configuration_space(graph)`**: Paper Def 4.2. Tarjan's algorithm for strongly connected components. Returns SCCs sorted by size — the largest is X_C. 11 new tests covering single/multiple/duplicate inputs, empty inputs, BFS depth expansion, SCC cases (self-loop, cycle, DAG, disconnected), and end-to-end thermostat integration. 
Commit: `081cb9c` ### Bridge Status | Step | Paper | Annotation / Function | Status | | ---- | -------------- | ------------------------- | --------------------- | | 1 | Def 2.5 | AdmissibleInputConstraint | Done (prior) | | 2 | Def 2.7 | TransitionSignature | Done (prior) | | 3 | Assumption 3.2 | StateMetric | **Done** | | 4 | Def 4.1 | `reachable_set()` | **Done** | | 5 | Def 4.2 | `configuration_space()` | **Done** | | 6 | Def 3.3 | Contingent derivative D'F | Open (#142, research) | | 7 | Theorem 4.4 | Local controllability | Open (#142, research) | ### Issue Tracker | Issue | Status | | -------------------------- | ------------------------ | | #134 Phase 1a | Closed | | #135 Phase 1b-c (Coq) | Open | | #136 Phase 2 | Closed | | #137 Phase 3 | Closed | | #138 Phase 4 (original) | Closed (superseded) | | #140 gds-analysis | **Closed** | | #141 R(x) + X_C | **Closed** | | #142 D'F + controllability | Open (research frontier) | ### Observations 1. gds-sim has zero dependency on gds-framework. This is correct architecture — gds-sim is a generic trajectory executor, gds-analysis is the GDS-specific bridge. The adapter pattern keeps both packages clean. 1. The `_step_once()` implementation creates a temporary Model per input sample, which is simple but not performant for large input spaces. A future optimization would batch inputs or use gds-sim's parameter sweep directly. 1. `reachable_set()` is trajectory-based (Monte Carlo), not symbolic. It cannot prove that a state is *unreachable* — only that it wasn't reached in the sampled inputs. For formal reachability guarantees, symbolic tools (Z3, JuliaReach) would be needed. 1. Steps 6-7 (contingent derivative, controllability) are genuinely research-level. They require convergence analysis and Lipschitz conditions that go beyond trajectory sampling. 
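The `configuration_space()` step above can be sketched as an iterative Tarjan pass over the sampled adjacency dict. This is a minimal self-contained approximation under the assumption that the graph maps every state fingerprint to its successor fingerprints — the actual gds-analysis implementation may differ in detail:

```python
# Sketch of configuration_space(): iterative Tarjan SCC over the
# sampled reachability graph, largest SCC first (the candidate X_C).
# Assumes every node appears as a key in `graph`. Illustrative only.
from typing import Dict, Iterable, List


def configuration_space(graph: Dict[str, Iterable[str]]) -> List[List[str]]:
    """Return strongly connected components, sorted by size (desc)."""
    index: Dict[str, int] = {}
    lowlink: Dict[str, int] = {}
    on_stack = set()
    stack: List[str] = []
    sccs: List[List[str]] = []
    counter = 0

    for root in graph:
        if root in index:
            continue
        # Explicit work stack instead of recursion (no recursion limit).
        work = [(root, iter(graph.get(root, ())))]
        index[root] = lowlink[root] = counter
        counter += 1
        stack.append(root)
        on_stack.add(root)
        while work:
            node, succs = work[-1]
            advanced = False
            for succ in succs:
                if succ not in index:
                    # Tree edge: descend into an unvisited successor.
                    index[succ] = lowlink[succ] = counter
                    counter += 1
                    stack.append(succ)
                    on_stack.add(succ)
                    work.append((succ, iter(graph.get(succ, ()))))
                    advanced = True
                    break
                elif succ in on_stack:
                    # Back edge into the current DFS stack.
                    lowlink[node] = min(lowlink[node], index[succ])
            if advanced:
                continue
            work.pop()
            if lowlink[node] == index[node]:
                # node is the root of an SCC: pop its members.
                scc = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    scc.append(w)
                    if w == node:
                        break
                sccs.append(scc)
            if work:
                parent = work[-1][0]
                lowlink[parent] = min(lowlink[parent], lowlink[node])

    return sorted(sccs, key=len, reverse=True)
```

On a two-state cycle with a terminal tail (`a ↔ b → c`), the largest SCC `{a, b}` comes first, matching the "largest is X_C" convention described above.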
______________________________________________________________________ ## Entry 007 — 2026-03-28 **Subject:** gds-analysis audits and Phase 1-2 fixes ### Motivation Two independent audits (software architecture + data science methodology) identified 7+8 findings across gds-analysis. Phase 1 and Phase 2 items were addressed in this session. ### Software Architecture Audit (7 findings) | # | Finding | Resolution | | --- | ---------------------------------------------------- | ------------------------- | | 1 | Adapter collapses topology (single StateUpdateBlock) | Documented | | 2 | State key convention undocumented | **Fixed** | | 3 | Phantom `spec` parameter in reachability | **Fixed** | | 4 | No batch execution for reachability | Documented (future) | | 5 | guarded_policy wrapping invisible | Minor (has `__wrapped__`) | | 6 | No bridge/analysis module separation | Documented (future) | | 7 | PBT strategies linear-only | Documented (future) | ### Data Science Audit (8 findings) | # | Finding | Resolution | | --- | -------------------------------------------------- | ----------------------------------------------- | | 1 | No coverage guarantee for Monte Carlo reachability | **Fixed** (ReachabilityResult) | | 2 | Euclidean distance meaningless on discrete states | Documented | | 3 | SIR discrete-time lacks stability analysis | Low risk (1e-6 tolerance has 5 orders headroom) | | 4 | SCC only valid for sampled graph | **Fixed** (documented caveat) | | 5 | Unreachability by enumeration not proof | **Fixed** (exhaustive flag + comments) | | 6 | PBT low coverage (same as SW #7) | Documented | | 7 | No baseline for trajectory distances | Future | | 8 | 1e-6 conservation tolerance unjustified | Low risk | ### Fixes Applied **Phase 1 (commit `a304d86`):** - Removed phantom `spec` parameter from `reachable_set()` and `reachable_graph()` (15 call sites updated) - Documented state key convention ("Entity.Variable") in adapter docstring - Added multi-update Mechanism warning - Documented
exhaustive/sampled distinction in all reachability docstrings **Phase 2 (commits `250308b`, `4c5967e`):** - `ReachabilityResult` dataclass: `states`, `n_samples`, `n_distinct`, `is_exhaustive` metadata for coverage tracking - `float_tolerance` parameter: rounds float values to a given number of decimal places before fingerprinting, absorbing rounding noise - `exhaustive=True` flag on crosswalk tests (3 inputs are provably exhaustive for the discrete space) - Tightened float tolerance test assertion to `assertEqual(n_distinct, 1)` **Prior audit fixes (commit `85e2f4a`):** - `_step_once` strips metadata keys from state dicts - ControlAction blocks handled by adapter - `depends_on` projected at runtime in constraint enforcement - Iterative Tarjan SCC (no recursion limit) - `assert` replaced with `ValueError` in metrics - Docstring corrections ### Outcome gds-analysis now has 52 tests at 94% coverage. The API is cleaner (no phantom parameters), the reachability results are interpretable (coverage metadata), and the float fingerprinting is robust.
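The float-tolerant fingerprinting described in this entry can be illustrated with a minimal helper. `fingerprint` here is a hypothetical stand-in for the gds-analysis internals, which differ in detail:

```python
# Sketch of float-tolerant state fingerprinting: floats are rounded to
# `float_tolerance` decimal places before hashing, so integrator
# rounding noise does not create spuriously distinct states.
# `fingerprint` is an illustrative name, not the gds-analysis API.
def fingerprint(state: dict, float_tolerance: int = 9) -> tuple:
    items = []
    for key in sorted(state):  # sort keys for a canonical ordering
        value = state[key]
        if isinstance(value, float):
            value = round(value, float_tolerance)
        items.append((key, value))
    return tuple(items)


# Two states separated only by float noise collapse to one fingerprint:
a = {"Room.temperature": 20.0}
b = {"Room.temperature": 20.0 + 1e-12}
assert fingerprint(a) == fingerprint(b)
```

Genuinely distinct values still produce distinct fingerprints, which is what lets `n_distinct` in `ReachabilityResult` be meaningful.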
______________________________________________________________________ ## Entry 008 — 2026-03-28 **Subject:** Session scorecard — full verification + analysis arc ### What Landed (Single Session) **Verification Framework (Phases 1a, 2, 3):** - 13 algebra PBT tests (interchange law, associativity, commutativity, identity) with Hypothesis CI reproducibility profiles - R3 undecidability proofs (6 propositions, R3-undecidable/R3-separation sub-classification) + R1/R2 decidability bounds + Theorem C.3 partition independence - 26 OWL round-trip PBT tests (SHACL gate, SPARQL conformance, derived property preservation, strategy invariants) **Bridge Proposal Steps 1-5:** - Step 1: AdmissibleInputConstraint (prior) - Step 2: TransitionSignature (prior) - Step 3: StateMetric (this session) - Step 4: `reachable_set()` with ReachabilityResult metadata - Step 5: `configuration_space()` via iterative Tarjan SCC **gds-analysis Package (new):** - `spec_to_model()` adapter: GDSSpec → gds_sim.Model - `guarded_policy()`: runtime constraint enforcement with `depends_on` projection - `trajectory_distances()`: StateMetric computation on trajectories - `reachable_set()` / `reachable_graph()` / `configuration_space()`: reachability analysis with float tolerance and exhaustive/sampled distinction - 52 tests, 94% coverage **Ecosystem Extensions (closed today):** - #77: Nashpy equilibrium computation (11 tests) - #122: gds-continuous ODE engine (49 tests) - #125: SymPy symbolic math (29 tests) - #126: Phase portrait visualization (10 tests) **Integration Examples:** - SIR epidemic: spec → simulate → conservation check → distances → reachable set - Crosswalk: spec → simulate → Markov transition verification → reachability graph → SCC (configuration space) - Heating-cooling example (9 tests) **Quality:** - 3 rounds of code review on PBT tests - 2 independent audits (software architecture + data science methodology) with all Phase 1-2 fixes applied - CI fix: gds-viz phase portrait tests now skip 
when gds-continuous unavailable (was failing CI on main) **Research Documentation:** - Verification plan (4 phases) - Formal proofs (R3 undecidability + representability bounds) - Research journal (8 entries) - 5 GitHub issues created and closed, 3 new issues for future work ### Issue Scorecard | Issue | Title | Status | | ----- | --------------------------------- | ------------------- | | #134 | Phase 1a: Interchange law PBT | Closed | | #135 | Phase 1b-c: Coq mechanized proofs | Open (long-term) | | #136 | Phase 2: R1/R2/R3 as theorem | Closed | | #137 | Phase 3: Round-trip PBT | Closed | | #138 | Phase 4: Dynamical invariants | Closed (superseded) | | #139 | Verification + StateMetric PR | Merged to main | | #140 | gds-analysis package | Closed | | #141 | Reachable set R(x) + X_C | Closed | | #142 | Contingent derivative (Steps 6-7) | Open (research) | | #146 | gds-analysis + audits PR | Merged to main | ### Remaining Open | Issue | Title | Blocker | | ----- | --------------------------------------- | --------------------- | | #76 | Lean 4 export | Toolchain | | #123 | Continuous-time differential games | Research | | #124 | Optimal control / Hamiltonian | Research | | #127 | Backward reachable set (gds-control) | gds-analysis | | #135 | Coq mechanized proofs | Toolchain | | #142 | Contingent derivative + controllability | Research | | #143 | Package consolidation | Architecture decision | ### Key Observations 1. The bridge proposal (Steps 1-5) is now structurally complete. The gap between the paper's mathematical definitions and the code is closed for the non-research items. Steps 6-7 remain genuinely research-level (contingent derivative, controllability). 1. gds-analysis connects gds-framework to gds-sim without either package knowing about the other. The adapter pattern preserves the clean dependency graph. 1. The verification framework provides three layers of confidence: 1. PBT (Hypothesis): empirical confidence across random inputs 1. 
Formal proofs (markdown + LaTeX): mathematical arguments 1. SHACL/SPARQL validation: ontological consistency 1. The R3-undecidable/R3-separation distinction is a genuine contribution — it clarifies which GDS design choices create the representability boundary and which could be moved by extending the formalism. ______________________________________________________________________ ## Entry 009 — 2026-03-28 **Subject:** Final audit fixes + Hamiltonian mechanics (#124) ### Audit Fixes Addressed final audit findings from the dev-vs-main diff review: - **F841 lint**: removed unused `r1` variable in float tolerance test - **.hypothesis/ committed**: removed from tracking, added to `.gitignore` (47 constants + example files were machine-specific cache) - **ogs/equilibrium.py** (2 medium findings): - `extract_payoff_matrices()` now raises `ValueError` on unrecognized actions instead of silently skipping (→ zero payoff) - Validates payoff matrix completeness — raises on missing action profiles instead of silent zeros - Numpy import guarded with helpful `ImportError` - 3 new tests: incomplete TC, typo action, missing player ### Hamiltonian Mechanics (#124) Added `hamiltonian.py` module to gds-symbolic, implementing Pontryagin's Maximum Principle via symbolic differentiation: - **`HamiltonianSpec`**: Lagrangian `L(x, u, t)`, terminal cost, control bounds, free-final-time flag - **`derive_hamiltonian()`**: builds H = L + p^T f symbolically, computes costate dynamics dp/dt = -dH/dx via `sympy.diff`, lambdifies the augmented (x, p) system to a plain Python ODE callable - **`derive_from_model()`**: convenience wrapper for `SymbolicControlModel` - **`verify_conservation()`**: checks H = const along a trajectory (for optimality verification) 10 new tests: 1D LQR, 2D harmonic oscillator, parameterized dynamics, missing state equations, SymbolicControlModel integration, conservation checks, and end-to-end ODE integration (derive → integrate → verify). 
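The construction behind `derive_hamiltonian()` amounts to ordinary symbolic calculus. A minimal sketch in plain SymPy for a scalar LQR-style problem — illustrative only, not the gds-symbolic signature; the dynamics and cost here are example choices:

```python
# Sketch of the Pontryagin construction: build H = L + p^T f, derive
# costate dynamics dp/dt = -dH/dx, eliminate u via stationarity
# dH/du = 0, then lambdify the augmented (x, p) system.
import sympy as sp

x, p, u = sp.symbols("x p u")

f = -x + u       # example dynamics  x' = f(x, u)
L = x**2 + u**2  # example running cost L(x, u)

H = L + p * f                   # Hamiltonian H = L + p^T f
dp_dt = -sp.diff(H, x)          # costate dynamics  p' = -dH/dx
u_star = sp.solve(sp.diff(H, u), u)[0]  # unconstrained optimal control

# Lambdify the augmented (x, p) system with u eliminated:
rhs = sp.lambdify((x, p), [f.subs(u, u_star), dp_dt.subs(u, u_star)])
```

For this cost, stationarity gives `u* = -p/2`, and `rhs` is a plain Python callable suitable for handing to an ODE integrator, mirroring the derive → integrate → verify flow above.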
### Issue Status | Issue | Status | | ---------------------------- | ------------------- | | #124 Hamiltonian mechanics | **Closed** | | #127 Backward reachable sets | Open (next) | | #123 Continuous-time games | Open (needs design) | | #143 Package consolidation | Open (architecture) | | #135 Coq proofs | Open (tooling) | | #142 Controllability | Open (research) | | #76 Lean 4 export | Open (tooling) | ### Session Totals (Full Day) | Metric | Count | | ------------------ | -------------------------------------------------------------- | | Issues closed | 10 (#77, #122, #124, #125, #126, #134, #136, #137, #140, #141) | | Issues superseded | 1 (#138) | | New packages | 2 (gds-analysis, gds-continuous) | | New modules | 3 (hamiltonian.py, reachability.py, equilibrium.py) | | New tests written | ~160 | | Formal proofs | 2 documents (R3 undecidability + representability bounds) | | Audits completed | 5 (3 code reviews + 2 independent audits) | | Journal entries | 9 | | PRs merged to main | 4 (#133, #139, #146, pending) | ______________________________________________________________________ # Ecosystem # GDS Ecosystem The GDS ecosystem is a family of composable packages for specifying, visualizing, and analyzing complex systems. ## Packages | Package | Import | Description | | ----------------- | ----------------------- | ----------------------------------------------------------------- | | **gds-framework** | `gds` | Core engine — blocks, composition algebra, compiler, verification | | **gds-viz** | `gds_viz` | Mermaid diagram renderers for GDS specifications | | **gds-domains** | `gds_domains.stockflow` | Declarative stock-flow DSL over GDS semantics | | | `gds_domains.control` | State-space control DSL over GDS semantics | | | `gds_domains.games` | Typed DSL for compositional game theory (Open Games) | | | `gds_domains.software` | Software architecture DSL (DFD, state machine, C4, ERD, etc.) 
| | | `gds_domains.business` | Business dynamics DSL (CLD, supply chain, value stream map) | | **gds-sim** | `gds_sim` | Simulation engine (standalone, Pydantic-only) | | **gds-examples** | — | Tutorial models demonstrating framework features | ## Dependency Graph ``` graph TD F[gds-framework] --> V[gds-viz] F --> G[gds-domains.games] F --> SF[gds-domains.stockflow] F --> C[gds-domains.control] F --> SW[gds-domains.software] F --> B[gds-domains.business] F --> E[gds-examples] V --> E G --> E SF --> E C --> E SW --> E B --> E SIM[gds-sim] ``` ## Architecture ``` gds-framework ← core engine (no GDS dependencies) ↑ gds-viz ← visualization (depends on gds-framework) gds-domains.games ← game theory DSL (depends on gds-framework) gds-domains.stockflow ← stock-flow DSL (depends on gds-framework) gds-domains.control ← control systems DSL (depends on gds-framework) gds-domains.software ← software architecture DSL (depends on gds-framework) gds-domains.business ← business dynamics DSL (depends on gds-framework) ↑ gds-examples ← tutorials (depends on gds-framework + gds-viz + all DSLs) gds-sim ← simulation engine (standalone — no gds-framework dep, only pydantic) ``` ## Links - [GitHub Organization](https://github.com/BlockScience) - [GDS Theory Paper](https://doi.org/10.57938/e8d456ea-d975-4111-ac41-052ce73cb0cc) (Zargham & Shorish, 2022) - [cadCAD Ecosystem](https://github.com/cadCAD-org/cadCAD) - [BlockScience](https://block.science/) # Changelog # Changelog All notable changes to the GDS ecosystem are documented here. Each release lists affected packages, breaking changes, and new capabilities. ______________________________________________________________________ ## 2026-04-03 — Tier 0 + Tier 1 Complete Driven by external reviews from Zargham and Jamsheed (Shorish) against the GDS paper (Zargham & Shorish 2022). 
The reviews identified foundational gaps in notation alignment, formal property statements, ControlAction semantics, temporal agnosticism documentation, and execution contract formalization. This release closes all Tier 0 and Tier 1 roadmap items. ### gds-framework v0.3.0 **Breaking:** `CanonicalGDS.input_ports` now contains only controlled inputs (U_c); disturbance-tagged BoundaryAction ports are in the new `disturbance_ports` field. New capabilities: - **ExecutionContract** — DSL-layer time model declaration (`discrete`, `continuous`, `event`, `atemporal`) with Moore/Mealy ordering. Attached as optional field on GDSSpec. A spec without a contract remains valid for structural verification. - **ControlAction canonical projection** — `CanonicalGDS` now extracts output ports (Y) and renders the output map `y = C(x, d)` in the formula. - **Disturbance input partition** — `disturbance_ports` on `CanonicalGDS` separates policy-bypassing exogenous inputs (W) from controlled inputs (U_c) via `tags={"role": "disturbance"}` on BoundaryAction. - **TypeDef.constraint_kind** — optional named constraint pattern (`non_negative`, `positive`, `probability`, `bounded`, `enum`) enabling round-trip export via SHACL. 
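The contract shape can be sketched in plain Pydantic. Everything beyond `time_domain` and the four time-model values — the `ordering` field name and the `GDSSpecSketch` wrapper — is an illustrative assumption, not the shipped gds-framework schema:

```python
# Sketch of an ExecutionContract-style declaration. Field names other
# than `time_domain` are assumptions for illustration.
from typing import Literal, Optional

from pydantic import BaseModel


class ExecutionContract(BaseModel):
    """DSL-layer time model declaration (sketch)."""
    time_domain: Literal["discrete", "continuous", "event", "atemporal"]
    # Moore/Mealy ordering: outputs from state only (Moore) or from
    # state and current input (Mealy). Field name is assumed.
    ordering: Literal["moore", "mealy"] = "moore"


class GDSSpecSketch(BaseModel):
    """Stand-in for GDSSpec: the contract is optional, so a spec
    without one remains valid for structural verification."""
    name: str
    execution_contract: Optional[ExecutionContract] = None


spec = GDSSpecSketch(
    name="thermostat",
    execution_contract=ExecutionContract(time_domain="discrete"),
)
```

DSL compilers then attach the appropriate contract at compile time — e.g. `discrete` for stock-flow and control models, `atemporal` for game patterns — while a bare spec simply leaves the field unset.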
New verification checks: - **SC-010** — ControlAction outputs must not feed the g pathway (Policy/BoundaryAction) - **SC-011** — ExecutionContract well-formedness - **DST-001** — Disturbance-tagged BoundaryAction must not wire to Policy Documentation: - Formal property statements for all 15 core checks (G-001..G-006, SC-001..SC-009) - Requirement traceability markers on all verification tests - Tests for SC-005..SC-009 (previously untested) - Controller-plant duality design document - Temporal agnosticism invariant with three-layer stack diagram - Audit of 30+ docs replacing "timestep" with temporally neutral vocabulary - Assurance claims document with verification passport template - Execution semantics design document with DSL contract mapping - Disturbance formalization design document - Notation harmonized with Zargham & Shorish (2022) ### gds-owl v0.2.0 - **SHACL constraint promotion** — `TypeDef.constraint_kind` metadata exports as SHACL property shapes (`sh:minInclusive`, `sh:maxInclusive`, `sh:minExclusive`, `sh:in`). Constraints with a `constraint_kind` are now R2 round-trippable; those without remain R3 lossy. Import reconstructs Python callable constraints from SHACL metadata. - New ontology properties: `constraintKind`, `constraintLow`, `constraintHigh`, `constraintValue` - New `build_constraint_shapes()` public API ### gds-psuu v0.2.1 - **Parameter schema validation** — `ParameterSpace.from_parameter_schema()` creates sweep dimensions from declared ParameterDef entries. `validate_against_schema()` catches missing params, type mismatches, and out-of-bounds sweeps. 
- **PSUU-001 check** following the GDS Finding pattern - Sweep runner validates against schema when `parameter_schema` is provided ### gds-stockflow v0.1.1 - Emit `ExecutionContract(time_domain="discrete")` from `compile_model()` ### gds-control v0.1.1 - Emit `ExecutionContract(time_domain="discrete")` from `compile_model()` ### gds-games v0.3.2 - Emit `ExecutionContract(time_domain="atemporal")` from `compile_pattern_to_spec()` ### gds-software v0.1.1 - Emit `ExecutionContract` from all 6 compilers: `atemporal` for DFD, C4, component, ERD, dependency; `discrete` for state machines ### gds-business v0.1.1 - Emit `ExecutionContract` from all 3 compilers: `discrete` for CLD and supply chain, `atemporal` for value stream maps ### gds-analysis v0.1.1 - Guard `gds-continuous` import in `backward_reachability.py` (previously caused ImportError when gds-continuous wasn't installed) ______________________________________________________________________ ## Earlier Releases ### gds-framework v0.2.3 - Add `StructuralWiring` to public API (compiler intermediate for domain DSL wiring emitters) - Switch to dynamic versioning — `__version__` in `__init__.py` is the single source of truth - Tighten `gds-framework>=0.2.3` lower bound across all dependent packages ### gds-games v0.2.0 - Add canonical bridge (`compile_pattern_to_spec`) — projects OGS patterns to `GDSSpec` - Add view stratification architecture - Require `gds-framework>=0.2.3` ### gds-stockflow v0.1.0 - Initial release — declarative stock-flow DSL over GDS semantics ### gds-control v0.1.0 - Initial release — state-space control DSL over GDS semantics ### gds-viz v0.1.2 - Mermaid renderers for structural, canonical, architecture, parameter, and traceability views