[Commit-activity chart omitted: 3,775 commits this week, Apr 26, 2026 - May 03, 2026.]

docs(adversary): rewrite for the CLI-per-tick redesign
The two adversary docs described the long-running daemon shape that
was retired in #110. Rewrites both pages to match the current state:

- adversary.md — sleep-forever container, single CLI per tick, single
  driver with --target-host fan-out over all 5 cluster hosts,
  --seed-driven random pick. New CLI flag table. New compose
  fragment. Local test loop via scripts/smoke-test.sh
  cardano_node_adversary. Closing section explains why the daemon
  was retired (two schedulers, targeting bias, lifecycle complexity
  for no benefit).
- adversary-roadmap.md — restructures around CLIs, not endpoints.
  Each future archetype is one binary + one driver, not a new daemon
  endpoint. Tier list preserved; "what's behind us" replaces the
  daemon-era status table. Tickets section reflects the closures
  done in PRs #110 and #122.

Also drops the dead adversary-CI.yaml badge from the top README —
that workflow was retired in 9bc9b23.
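
For illustration, a minimal sketch of the --seed-driven target pick
the rewritten adversary.md describes, assuming the driver hands the
CLI an integer seed plus the five fan-out hosts (names and wiring
here are illustrative, not the adversary CLI's actual code):

    import System.Random (mkStdGen, randomR)

    -- Deterministically pick one target host from the fan-out list;
    -- the same seed always yields the same pick, so a tick is
    -- reproducible from its seed alone.
    pickTarget :: Int -> [String] -> String
    pickTarget seed hosts = hosts !! idx
      where
        (idx, _) = randomR (0, length hosts - 1) (mkStdGen seed)

    main :: IO ()
    main = putStrLn (pickTarget 42 ["p1", "p2", "p3", "relay1", "relay2"])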
feat(testnet): cardano_node_tx_generator iteration testnet for the Haskell tx-generator daemon
Rebases on current main (which now has the workflow inputs.test fix
e3b09a0 and the publish-images all-testnets glob 81f3bf1, so the
sibling publish workflow + script from earlier iterations of this
branch are dropped). Master-side files (testnets/cardano_node_master/,
scripts/, master workflows) are untouched.

Adds:

  * testnets/cardano_node_tx_generator/{docker-compose.yaml,
    testnet.yaml,relay-topology.json,tracer-config.yaml,README.md}
    — mirrors master's image set 1:1 (cardano-node x3 by digest,
    cardano-tracer by digest, configurator/log-tailer/tracer-sidecar
    by digest, sidecar:65039df) plus the tx-generator service
    enabled. The network is named 'cardano-node-tx-generator-testnet'
    to avoid colliding with master's network when both run on the
    same Docker daemon.
  * components/tx-generator/flake.nix bumped to upstream
    711eb22ac03e67b753f7ce70e635cddcf6f3cdce — full
    reconnect-resilience stack: PR #105 (supervisor +
    BlockedIndefinitelyOnSTM catch), #110 (post-reconnect indexer
    freshness gate), #114 (pre-submit chain-tip probe), #115
    (refill duplicate-submit recovery), #116 (recovery-await timeout
    aligned with dcAwaitTimeoutSeconds), #117 (refill recovery-await
    timeout -> IndexNotReady), #118 (same recovery in transact arm).
  * Composer scripts hardened (set -u, always exit 0, lastTxId gate
    on did_not_grow).
  * docs/components/tx-generator.md — daemon architecture: composer-
    as-clock contract, deterministic per-request flow, single-bearer
    N2C topology with in-tree address-to-UTxO indexer, NDJSON wire
    schema with response classes, per-request build/probe/submit/
    recovery flow, reconnect-resilience PR stack, composer scripts
    convention, persistent state, assertion classes.
  * docs/testnets/cardano-node-tx-generator.md — testnet rationale,
    image-set parity table with master, dispatch invocations, ref to
    the repo-wide publish-images flow.
  * mkdocs.yml — Components and Testnets nav entries.

Verified clean on a 1h no-faults Antithesis dispatch:
findings_new=0, all tx_generator_*_landed assertions firing, no
tx_generator_*_submit_rejected. Compose tag in this commit is
PLACEHOLDER; the next commit on this branch sets it to this commit's
SHA so publish-images.sh can resolve it as a downstream commit ref.
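
For illustration, a sketch of the response classes and per-request
flow the new component doc describes. Constructor names follow the
doc's response classes; every IO action below is a placeholder, not
the daemon's actual API, and the mapping of a failed tip probe to
IndexNotReady is an assumption of the sketch:

    {-# LANGUAGE LambdaCase #-}

    -- Response classes from the NDJSON wire schema.
    data Response
      = Ok
      | IndexNotReady
      | NoPickableSource
      | SubmitRejected String
      deriving (Show)

    -- Per-request flow: build, pre-submit chain-tip probe, submit,
    -- post-submit recovery.
    handleRequest :: IO Response
    handleRequest = do
      fresh <- indexerFresh                  -- post-reconnect freshness gate (#110)
      if not fresh then pure IndexNotReady else
        buildTx >>= \case                    -- pick a source from the UTxO index
          Nothing -> pure NoPickableSource
          Just tx -> do
            tipOk <- chainTipProbe           -- pre-submit chain-tip probe (#114)
            if not tipOk then pure IndexNotReady else
              submit tx >>= \case
                Right () -> pure Ok
                Left err -> recover err      -- duplicate-submit recovery (#115)
      where
        indexerFresh  = pure True              :: IO Bool
        buildTx       = pure (Just "tx-bytes") :: IO (Maybe String)
        chainTipProbe = pure True              :: IO Bool
        submit _      = pure (Right ())        :: IO (Either String ())
        recover e     = pure (SubmitRejected e)

    main :: IO ()
    main = handleRequest >>= print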
static-nix-tools: patch cabal-install to pin unit-id OS via env var
`hashedInstalledPackageId` selects between Long / Short / VeryShort
unit-id formats based on `buildOS` — the OS where cabal-install is
currently executing.  For haskell.nix that's the *eval* platform of
the plan-nix derivation, not the *build* platform where cabal will
later actually do the compile.  When the two differ (e.g. evaluating
on Darwin while building x86_64-linux derivations), plan-nix unit-ids
diverge from the unit-ids each slice's cabal v2-build computes — every slice
then tries to rebuild every dep from source.

Add a `CABAL_INSTALLED_PACKAGE_ID_OS` env var that overrides
`buildOS` for unit-id format selection.  haskell.nix sets it to the
build platform's OS when invoking `make-install-plan`.
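
For illustration, the shape of the override as a self-contained
stand-in (the real patch lives around `hashedInstalledPackageId` in
cabal-install; the accepted env-var values here are assumptions):

    import System.Environment (lookupEnv)

    -- Stand-in for Distribution.System.OS / buildOS.
    data OS = Linux | OSX | Windows deriving (Show, Eq)

    buildOS :: OS
    buildOS = Linux   -- the eval platform, baked in at compile time

    -- The env var wins; otherwise fall back to the compiled-in buildOS.
    -- haskell.nix sets CABAL_INSTALLED_PACKAGE_ID_OS to the *build*
    -- platform's OS when invoking make-install-plan.
    unitIdOS :: IO OS
    unitIdOS = do
      mv <- lookupEnv "CABAL_INSTALLED_PACKAGE_ID_OS"
      pure $ case mv of
        Just "linux"   -> Linux
        Just "osx"     -> OSX
        Just "windows" -> Windows
        _              -> buildOS

    main :: IO ()
    main = unitIdOS >>= print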
docs: architectural docs for tx-generator component + cardano_node_tx_generator testnet
Two new docs pages:

- docs/components/tx-generator.md — describes the daemon's role
  (composer-as-clock, deterministic per-request, monotonic
  population growth), the in-process N2C topology (single bearer
  for ChainSync + LSQ + LTxS, in-tree address-to-UTxO indexer),
  the NDJSON wire schema with response classes (Ok / IndexNotReady
  / NoPickableSource / SubmitRejected), the per-request
  build → pre-submit probe → submit → post-submit recovery flow,
  the reconnect-resilience PR stack (#105/#110/#114/#115/#116/#117/#118)
  and what each PR contributes, the composer scripts and the
  always-exit-0 + set -u rule, persistent state on disk, and the
  hard-failure vs reachability assertion classes.

- docs/testnets/cardano-node-tx-generator.md — sibling testnet
  rationale (master is read-only on feature branches), image-set
  parity table with master, network topology mirror (3-pool
  + 2-relay; tx-generator → relay1 over single N2C bearer),
  local-run + Antithesis-dispatch invocations, the sibling
  publish-images workflow, and the master-promotion pattern.

mkdocs.yml updated with both pages in the Components and Testnets
nav sections respectively.
asteria-game: bump cardano-node-clients pin post #120 (rsReady fix + CI gap closed)
Bumps cardano-node-clients SRP + flake input from 9db6672a (PR
#113 merge) to 428313de (PR #120 merge).

Upstream lambdasistemi/cardano-node-clients PR
https://github.com/lambdasistemi/cardano-node-clients/pull/120
folds in three things this branch surfaced:

  - Issue #119 fix: setUpstreamStatus's UpstreamConnected branch
    now re-derives rsReady from current rsSlotsBehind, so a
    reconnect to a chain at the indexer's last seen tip flips
    ready=true immediately instead of waiting for the next
    rollForward (which never came under fault injection on a
    short-test 1h run).

  - Issue #121 CI gap: nix/checks.nix + ci.yml + justfile now
    run the unit-tests suite. The 244-example suite had not
    been executed by upstream CI; the conservation regression
    (next item) had been red on origin/main since 9db6672a.

  - Issue #121 fix: TxBuild's post-balance evaluator no longer
    bails with EvalFailure on a script-conservation violation —
    it iterates with the balanced body as the new prevTx so
    Peek-driven scripts re-read the post-balance fee. Mirrors
    the pre-balance eval-failure retry path. Bounded by
    seenFees cycle detection.

The slotsBehind <= 5 workaround in
composer/stub/{eventually_alive,finally_alive,parallel_driver_heartbeat}.sh
is left in place — it remains a more direct expression of "the
indexer is keeping up" and is now consistent with the upstream
fix (both check the same condition; one in Haskell, one in
bash). Reverting it is unnecessary.
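
A sketch of the shape of the Issue #119 fix. Field and flag names
follow the description above; the surrounding types are illustrative,
not cardano-node-clients' actual code:

    data ReadyState = ReadyState
      { rsReady       :: Bool
      , rsSlotsBehind :: Integer
      } deriving (Show)

    readyThresholdSlots :: Integer
    readyThresholdSlots = 5   -- the --ready-threshold-slots default

    -- UpstreamConnected branch: re-derive rsReady from the current lag
    -- instead of leaving it false until the next rollForward arrives.
    onUpstreamConnected :: ReadyState -> ReadyState
    onUpstreamConnected rs =
      rs { rsReady = rsSlotsBehind rs <= readyThresholdSlots }

    main :: IO ()
    main = print (onUpstreamConnected (ReadyState False 0))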
asteria-game: relax stub liveness probes from .ready=true to slotsBehind<=5
Run 3's report flagged 'stub finally_alive holds' as
sdk_sometimes False, with last_reply
{processedSlot:60, ready:false, slotsBehind:0, tipSlot:60} —
the indexer was *at the chain tip* (processedSlot==tipSlot,
slotsBehind=0) but its 'ready' boolean was still false.

The indexer's 'ready' flag has stricter semantics than
slots-behind-the-tip: it requires lifecycle events past the
ChainSync warmup (likely "have I received at least one
RollForward since (re)connection"). Under fault injection,
relay1 frequently restarts and the indexer reconnects via the
PR #98 supervisor; if the chain is at a settled tipSlot when
the indexer reattaches, processedSlot catches up via RollBack
without a subsequent RollForward, leaving 'ready=false' until
the next block lands.

The asteria-side scripts only mean to assert "the indexer is
keeping up with the chain". slotsBehind<=5 captures that
exactly (matches --ready-threshold-slots default) and is robust
to the warmup race. The 'ready' boolean is the indexer's
stricter self-assessment and is racy under fault injection.

Same fix applied to eventually_alive.sh, finally_alive.sh, and
parallel_driver_heartbeat.sh — they all had the same over-strict
check.
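
A sketch of the relaxed predicate against the reply shape above,
written with aeson for illustration (the actual probes are the bash
scripts named above):

    {-# LANGUAGE DeriveGeneric #-}

    import Data.Aeson (FromJSON, decode)
    import qualified Data.ByteString.Lazy.Char8 as BL
    import GHC.Generics (Generic)

    data Reply = Reply
      { processedSlot :: Integer
      , ready         :: Bool
      , slotsBehind   :: Integer
      , tipSlot       :: Integer
      } deriving (Show, Generic)

    instance FromJSON Reply

    -- "Keeping up with the chain" is slots-behind-the-tip, not the
    -- stricter, warmup-racy 'ready' boolean.
    keepingUp :: Reply -> Bool
    keepingUp r = slotsBehind r <= 5

    main :: IO ()
    main = print (keepingUp <$> decode
      (BL.pack "{\"processedSlot\":60,\"ready\":false,\"slotsBehind\":0,\"tipSlot\":60}"))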
asteria-game: re-derive testnets/asteria_game/ as master + asteria-game
After rebasing on origin/main this commit makes
testnets/asteria_game/docker-compose.yaml exactly equal to
testnets/cardano_node_master/docker-compose.yaml in its first
197 lines, plus a single asteria-game service block and the two
asteria-specific named volumes (asteria-game-db, asteria-deploy)
appended.

The asteria_game testnet is now provably:

  cardano_node_master + the asteria-game container

Anything that holds on master should hold here; anything that
breaks on master is master's concern. The
chain-sync-client/parallel_driver_flaky_chain_sync.sh and
convergence/finally_tips_agree.sh failures we saw on runs 1 and 2
appear on cardano_node_master 1h scheduled runs too — they are
inherited from master, not introduced by asteria.

Also carries over tx-generator.disabled.yaml verbatim from
master, even though this testnet had already removed
tx-generator earlier — keeps the directory contents symmetric so
"how to re-enable tx-generator on asteria_game" is the same
exercise as on master.

The compose tag still points at c99e992 (BootstrapMain
top-level catch — the only asteria-side defensive fix that
survived the cleanup).
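
The first-197-lines claim is cheap to re-check locally; a throwaway
sketch (paths as above):

    -- Verify asteria_game's compose file is byte-for-byte master's
    -- compose file in its first 197 lines.
    main :: IO ()
    main = do
      a <- lines <$> readFile "testnets/asteria_game/docker-compose.yaml"
      b <- lines <$> readFile "testnets/cardano_node_master/docker-compose.yaml"
      print (take 197 a == take 197 b)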
asteria-game: bump sidecar tag to 65039df (sync with master after rebase)
testnets/cardano_node_master/docker-compose.yaml was bumped to
sidecar:65039df on origin/main in commit dcef9bc, which drops the
orphan chain-sync-client/parallel_driver_flaky_chain_sync.sh
probe baked into older sidecar images. Rebasing pulled that
master-side change in for the cardano_node_master testnet but
left this testnet's sidecar at the prior f889dbc — restoring the
"asteria_game = master + asteria-game container" invariant
requires bumping it here too.

This is also the asteria-side reason that finding kept showing
up in runs 1, 2 and 3: each used sidecar:f889dbc which still
bakes the orphan driver. With sidecar:65039df the orphan is gone
and the inherited finding should disappear in run 4.
Revert "asteria-game: bump compose tag to c99e992 + drop sidecar service"
Walked back the sidecar drop. The asteria_game testnet is
cardano_node_master + the asteria-game container; the cluster
infrastructure (sidecar / convergence / chain-sync-client) must
hold under fault regardless of what asteria does. Whatever
asteria introduces — utxo-indexer load on relay1, spawn tx
churn — has to be the thing fixed, not the cluster's
invariants.

Also strips the runtime-mount workarounds added to the sidecar
block (the ./no-op-finally.sh bind-mount and the
chain-sync-client tmpfs override). They were inert anyway —
Antithesis's composer discovers driver scripts at image-bake
time, not at container runtime — so leaving them only created
cargo-cult clutter.

Compose tag bumped to c99e992 (BootstrapMain top-level catch),
which is the only legitimate fix from the previous attempt.

This reopens the question raised by run 2: are
convergence/finally_tips_agree.sh and chain-sync-client/
parallel_driver_flaky_chain_sync.sh failing because relay1 is
falling behind p1/p2/p3 under load from the asteria-game
container's utxo-indexer ChainSync? If so, the fix is in
asteria-game (don't stress relay1), not in dropping the
checks.
asteria-game: drivers exit 0 on transient not-yet-ready conditions
The Antithesis 1h dispatch on testnets/asteria_game/ surfaced
two "Always: Commands finish with zero exit code" findings on
stub/serial_driver_asteria_bootstrap.sh and
stub/parallel_driver_asteria_player.sh. Decoded composer events:

  - bootstrap: t=62.76s rc=1 (6.14s), t=106.5s rc=1 (52s),
    t=110.5s rc=1 (55s), t=178.5s rc=0 (3s short-circuit)
  - player: t=54.4-54.6s rc=1 (3 fast failures, ~5-15ms each),
    t=249.9s rc=0 (3.04s)

The early bootstrap failures hit before the cluster forged its
first block (build/sign/submit fails on protocol-params or
validity-interval errors). The player failures are PlayerMain
calling `error` when seed.json is missing — the parallel driver
fires before bootstrap completes.

Antithesis treats every non-zero exit as an Always-violation,
regardless of subsequent successful fires. This commit makes
both drivers exit 0 on transient "not yet ready" conditions:

  - PlayerMain: when readSeed returns Nothing, fire
    asteria_player_seed_missing_<id> (sdkUnreachable, already
    present) and return cleanly. The next composer fire retries.
  - BootstrapMain.runDeploy: catch any exception from
    resolveSeed / createAsteria, fire
    asteria_bootstrap_create_asteria_deferred (sdkSometimes True)
    with the error string, and return cleanly. Subsequent fires
    re-derive the same seed (deterministic via seed.json) and
    retry — the seed UTxO is preserved when the failure is
    pre-submit, and consumed cleanly when the deploy actually
    landed (next fire short-circuits via isAlreadyDeployed).

The signals Antithesis cares about — whether the deploy ever
succeeded, whether the singleton invariant held — still flow
through the existing sdkSometimes / sdkAlways assertions. We
just stop using process exit code to encode "world not yet
ready", which Antithesis interprets as a real bug.
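
A sketch of the deferral pattern both drivers now follow. The SDK
helper is a stand-in; the event name is from above:

    {-# LANGUAGE ScopedTypeVariables #-}

    import Control.Exception (SomeException, try)

    -- Stand-in for the Antithesis SDK call; the real driver emits an
    -- sdkSometimes event with this name.
    sdkSometimes :: Bool -> String -> String -> IO ()
    sdkSometimes _hit name detail = putStrLn (name <> ": " <> detail)

    -- Deferral pattern: a transient failure becomes an SDK event plus a
    -- clean exit 0; the next composer fire retries. Only the assertions
    -- encode success or failure, never the process exit code.
    runDeploy :: IO ()
    runDeploy = do
      r <- try deployOnce
      case r of
        Right ()                  -> pure ()
        Left (e :: SomeException) ->
          sdkSometimes True "asteria_bootstrap_create_asteria_deferred" (show e)
      where
        deployOnce = pure ()   -- placeholder for resolveSeed + createAsteria

    main :: IO ()
    main = runDeploy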
asteria-game: shadow orphan chain-sync-client driver + bump compose tag
The sidecar:f889dbc image bakes
/opt/antithesis/test/v1/chain-sync-client/parallel_driver_flaky_chain_sync.sh
which expects the adversary daemon — a separate component absent
from this testnet. Without it the script fails (exit 1) on every
fire and trips Antithesis's "Always: zero exit" property.

Mounting tmpfs over the chain-sync-client/ path on the sidecar
container hides the script from the composer at start time
without modifying the upstream image. Newer sidecar tags have
the script removed at the source but introduce Amaru-specific
drivers that would fail on this Amaru-less testnet — net no
gain. The targeted shadow is the cleanest fix.

Bumps the asteria-game compose tag to d0d9531 (drivers exit 0
on transient not-yet-ready conditions).
asteria-game: top-level catch in BootstrapMain + drop sidecar
Run 2 (commit 483d327) on testnets/asteria_game/ surfaced 3 new
findings, two of which were unchanged from run 1:

  1. stub/serial_driver_asteria_bootstrap.sh — non-zero exit
  2. chain-sync-client/parallel_driver_flaky_chain_sync.sh —
     non-zero exit
  3. convergence/finally_tips_agree.sh — non-zero exit (rc=101)

(1) Bootstrap continued to fail because `withN2C` and other
calls *outside* runDeploy's local try block could still throw
(connection failures, transient queryUTxOs errors when relay1
is being faulted, etc.). Antithesis treats every non-zero exit
as a real "Always: zero exit code" violation regardless of
subsequent successful fires. Wraps the entire post-startup body
(everything after the `_starting` and `_wallet_loaded` SDK
events) in `try`; on any uncaught exception fires
asteria_bootstrap_deferred (sdkSometimes True) and exits 0.

(2) and (3) failed because the prior compose-level mounts
(tmpfs over /opt/antithesis/test/v1/chain-sync-client and
bind-mount of a no-op over convergence/finally_tips_agree.sh)
have no effect — Antithesis's composer discovers driver scripts
at image-bake time, not at container-runtime, so per-service
mounts are ignored. The convergence finding was rc=101 instead
of "missing" because the composer ran the original baked
version, not the bind-mounted no-op.

The proper fix is to drop the sidecar service entirely from
this testnet's compose. The sidecar:f889dbc image is the *sole*
source of both /opt/antithesis/test/v1/convergence/ and
/opt/antithesis/test/v1/chain-sync-client/ scripts. With no
sidecar container the composer has no host for those drivers
and they disappear from the run. Tracer / tracer-sidecar /
log-tailer remain (none of those bake composer scripts) so
report observability is preserved.

testnets/asteria_game/no-op-finally.sh is removed (it was a
runtime-mount workaround that didn't take effect).
asteria-game: stub the cluster-reconvergence finally-check
Adds testnets/asteria_game/no-op-finally.sh and bind-mounts it
over the sidecar:f889dbc image's
/opt/antithesis/test/v1/convergence/finally_tips_agree.sh.

That driver enforces "all producer tips at exact same slot at
end-of-run" via an SDK Always assertion. On 1h runs under fault
injection the tips drift and reconverge only slowly after faults
stop, so the check fires false purely on duration, not on a real
reconvergence bug. The check is also orthogonal to the asteria
game contract this testnet scores — asteria observes the chain
through relay1 and tolerates short-lived tip lag.

Other convergence drivers (eventually_converged,
parallel_driver_tip_agreement, serial_driver_tip_agreement) are
unaffected and continue to run from the unmodified sidecar
image. Their during-fault and probabilistic checks remain a
real cluster-health signal.
asteria-game: strip testnet to nodes + asteria-game only
testnets/asteria_game/ is here to exercise the asteria game under
fault injection — the supporting cast (tracer, tracer-sidecar,
sidecar, log-tailer, tx-generator) was carried over from
cardano_node_master but adds no signal we score in this testnet.

Removed services:
  - tracer + tracer-sidecar — unused; the asteria-game container
    has its own SDK fallback path at /tmp/sdk.jsonl.
  - sidecar + log-tailer — would have surfaced node logs in the
    Antithesis report but we aren't scoring node-internal events
    here.
  - tx-generator — produces background traffic. The asteria
    workload is the only chain activity worth tracking.
  - tracer-config.yaml — dead config file, deleted.

Also drops the `--tracer-socket-path-connect` flag from the node
commands (no tracer to connect to) and the `tracer:` volume +
mounts.

Resulting service list: configurator (one-shot), p1/p2/p3
producers, relay1/relay2 relays, asteria-game.

Locally validated end-to-end on the stripped cluster:
  - bootstrap: asteria_bootstrap_asteria_created
  - spawn pass id=1: asteria_player_ship_spawned_1
  - admin_singleton invariant: count=1, hit=true
  - consistency invariant: counter=0/ships=0, hit=true
asteria-game: one-shot admin_mint policy parameterised on seed UTxO
Closes the always-true admin_mint placeholder that PR #67 ships. The
bootstrap's "no double-mint on container restart" contract was
previously enforced only by the off-chain isAlreadyDeployed
check; this commit puts the contract on chain.

Aiken side (components/asteria-game/aiken/validators/admin_mint.ak):
  - validator admin_mint(seed: OutputReference) succeeds iff the
    tx consumes the seed AND the bundle minted under this policy
    is exactly [(asteriaAdmin, 1)]. Seed consumption is permanent
    on Cardano UTxO, so admin_mint can fire at most once across
    all chain history.
  - aiken/plutus.json regenerated; admin_mint now declares one
    parameter "seed". All 91 existing tests pass.
  - apply-params.sh / plutus-applied.json no longer in the build
    path — Haskell applies the seed at runtime.

Haskell side:
  - new Asteria.Validators.applyScripts: takes a seed TxIn and
    returns AppliedScripts { adminMint, pellet, asteria,
    spacetime } with both Plutus scripts and ledger-side hashes.
    Built on plutus-ledger-api's uncheckedDeserialiseUPLC +
    UntypedPlutusCore.applyProgram + serialiseUPLC. Hash
    dependencies are threaded in declaration order
    (admin_mint → pellet → asteria → spacetime).
  - new Asteria.Deploy module: read/write
    /asteria-deploy/seed.json. Bootstrap is the only writer;
    player + invariant are readers. ASTERIA_DEPLOY_DIR env var
    overrides the path.
  - BootstrapMain: reads seed.json on startup. If present,
    re-derives the same scripts (deterministic). If absent,
    picks a fresh wallet UTxO via pickWalletUtxo, writes the
    seed to disk BEFORE submitting the deploy tx (durable order
    so a crash leaves either no file or a consistent file).
    isAlreadyDeployed simplified to "any UTxO at the per-deploy
    asteria addr" — under the one-shot policy at most one such
    UTxO can ever exist.
  - PlayerMain + InvariantMain: read seed.json, applyScripts,
    use AppliedScripts.as{Asteria,Spacetime,Pellet,AdminMint}
    {Script,Hash} in place of former top-level constants. If
    seed file missing, emit asteria_*_seed_missing_<id>
    sdkUnreachable and exit cleanly (correct: bootstrap hasn't
    run yet on this cluster).

Compose:
  - new asteria-deploy named volume mounted at /asteria-deploy
    on the asteria-game container.

Locally validated end-to-end on testnets/asteria_game/:
  - pre-bootstrap: asteria_invariant_seed_missing_admin_singleton
    fires (correct)
  - bootstrap cold: asteria_bootstrap_seed_picked +
    asteria_bootstrap_seed_persisted + asteria_bootstrap_asteria_created
    (asteria addr is now per-seed: 77a02b8e... instead of the
    previous hardcoded 0824601a...)
  - container restart: asteria_bootstrap_seed_reused +
    asteria_bootstrap_already_deployed short-circuit
  - spawn: asteria_player_ship_spawned_1 succeeds with the
    seed-derived validators
  - consistency: ship_counter=1, ship_token_count=1, hit=true
  - admin_singleton: count=1, hit=true (post-bootstrap and
    post-spawn)
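
A sketch of BootstrapMain's durable-order rule described above (the
IO helpers are placeholders, not the real module's API):

    -- Write the seed BEFORE submitting the deploy tx: a crash then
    -- leaves either no file or a consistent file, never a landed deploy
    -- whose seed was lost.
    bootstrap :: IO ()
    bootstrap = do
      mSeed <- readSeedFile
      seed  <- case mSeed of
        Just s  -> pure s            -- restart: re-derive the same scripts
        Nothing -> do
          s <- pickWalletUtxo        -- fresh wallet UTxO as the one-shot seed
          writeSeedFile s            -- durable order: persist first
          pure s
      deployed <- isAlreadyDeployed seed
      if deployed then pure () else submitDeploy seed
      where
        readSeedFile        = pure Nothing   :: IO (Maybe String)
        pickWalletUtxo      = pure "txin#0"  :: IO String
        writeSeedFile _     = pure ()        :: IO ()
        isAlreadyDeployed _ = pure False     :: IO Bool
        submitDeploy s      = putStrLn ("deploying with seed " <> s)

    main :: IO ()
    main = bootstrap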
asteria-game: drop tx-generator from testnets/asteria_game/
The tx-generator daemon produces background transaction traffic.
On testnets/asteria_game/ we explicitly want the asteria game's
own spawnShip / move / mine / quit traffic to be the only chain
activity — anything else is noise that distorts the report's
view of the workload we're scoring.

Service list now: configurator (one-shot), p1/p2/p3 producers,
relay1/relay2 relays, tracer, tracer-sidecar, sidecar,
log-tailer, asteria-game. Observability + chain telemetry
preserved; only the synthetic-traffic generator removed.
asteria-game: add asteria-invariant binary + anytime/finally drivers
Rounds out the asteria_game testnet's report-assertion surface with
two new property checks driven by a third executable, /bin/asteria-invariant:

  - admin_singleton (sdkAlways) — exactly one asteriaAdmin NFT exists
    at the asteria spend address. The bootstrap mints the NFT once;
    nothing in the designed game flow burns or duplicates it, so any
    deviation is a real bug.
  - consistency (sdkSometimes) — the asteria UTxO's ship_counter
    equals the count of SHIP* tokens at the spacetime spend address.
    True after pure-spawn flows; later quit/mine flows will burn
    ships and break the equality, so this is a sometimes-property.

  - components/asteria-game/app/InvariantMain.hs reads
    ASTERIA_INVARIANT and emits exactly one SDK assertion per
    invocation.
  - components/asteria-game/asteria-game.cabal exposes a third
    executable, asteria-invariant (built into the docker image).
  - components/asteria-game/composer/stub/anytime_asteria_admin_singleton.sh
    fires the always-property at random points in the test.
  - components/asteria-game/composer/stub/finally_asteria_consistency.sh
    sleeps through a 15s settle window, then fires the
    sometimes-property at end-of-run.

Locally validated on testnets/asteria_game/:
  - pre-bootstrap: admin_count=0, hit=false (correctly flags missing
    deploy)
  - post-bootstrap: admin_count=1, hit=true
  - consistency at ship_counter=0/ships=0: hit=true (vacuously)
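
A sketch of the dispatch shape: one env-var-selected invariant, one
SDK assertion per invocation (the queries and the SDK call are
stand-ins, not the real module's API):

    import System.Environment (lookupEnv)

    -- Stand-in for the Antithesis SDK assertion calls.
    sdkAssert :: String -> String -> Bool -> IO ()
    sdkAssert kind name hit =
      putStrLn (kind <> " " <> name <> " hit=" <> show hit)

    -- Exactly one SDK assertion per invocation, selected by
    -- ASTERIA_INVARIANT.
    main :: IO ()
    main = do
      inv <- lookupEnv "ASTERIA_INVARIANT"
      case inv of
        Just "admin_singleton" -> do
          n <- countAdminNftsAtAsteriaAddr              -- placeholder query
          sdkAssert "sdkAlways" "admin_singleton" (n == 1)
        Just "consistency" -> do
          (counter, ships) <- readShipCounterAndTokens  -- placeholder query
          sdkAssert "sdkSometimes" "consistency" (counter == ships)
        _ -> pure ()
      where
        countAdminNftsAtAsteriaAddr = pure (1 :: Int)
        readShipCounterAndTokens    = pure (0 :: Int, 0 :: Int)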