[Hourly commit-activity chart, Apr 24 - May 01: counts per hour range from 0 in the overnight troughs to a peak of 120 (Apr 30, 11-12 PM).]
3,956 commits this week (Apr 24, 2026 - May 01, 2026)
Disable protocol version check in the header for testnets until Dijkstra
The protocol version check that was introduced for the block header
proved to be problematic for testnets. This makes sense, since it was
designed with mainnet in mind and had to be introduced urgently
because it was blocking the 10.6.2 release. See #5595 for more context

In order to allow testnets to keep producing blocks for older
protocol versions with the latest version of the node, we lift this
restriction, but only for networks with the `Testnet` network id. This
is implemented as a temporary measure and will be properly fixed in
the Dijkstra era. See #5763 for more context
Fix an inconsistency in the `GOV` rule:
* Verification of `PrevGovActionId` in `proposalsAddAction` happens on
  the accumulated `proposals`,
* while the `preceedingHardFork` check happens on the original `st`.

In a typical application, where identifiers are assigned up front, this
inconsistency would have been a problem. However, a transaction's
`GovActionId` is derived from the hash of the transaction itself, so it is
impossible to reference a previous governance action from within the same
transaction: one would need to know the hash of the very transaction that
contains the proposal being referenced.

Therefore, this commit fixes not an actual issue, but a mere inconsistency.
This is done to spare developers from even having to consider this edge case.
asteria-game: wire player parallel driver (single-pass loop)
Adds the player workload to the asteria_game testnet's composer
harness. PlayerMain.hs is reshaped from a forever-loop to a
single-pass binary so the Antithesis composer can re-fire it on
its own schedule (a forever loop blocks exclusive scheduling for
serial drivers).

components/asteria-game/composer/stub/parallel_driver_asteria_player.sh:
  - Picks ASTERIA_PLAYER_ID in {1,2,3} based on the wallclock so
    different timelines exercise different players.
  - Player 1 attempts the spawn (PlayerMain gates on id == "1");
    players 2 and 3 observe the asteria UTxO without acting,
    exercising the read path.
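The wallclock-based pick can be sketched roughly as follows (a minimal sketch, not the actual stub script; the rotation interval and echo output are illustrative — only ASTERIA_PLAYER_ID and the {1,2,3} range come from the description above):

```shell
#!/usr/bin/env sh
# Derive a player id in {1,2,3} from the wallclock so that runs fired
# at different times exercise different players.
# NOTE: illustrative sketch; the real parallel_driver script may differ.
now=$(date +%s)                              # seconds since epoch
ASTERIA_PLAYER_ID=$(( (now / 60) % 3 + 1 ))  # rotate every minute
export ASTERIA_PLAYER_ID

if [ "$ASTERIA_PLAYER_ID" = "1" ]; then
  echo "player 1: attempting spawn"
else
  echo "player $ASTERIA_PLAYER_ID: observe-only pass"
fi
```

Because the id is a pure function of the clock, re-fires of the driver at different wallclock times cover all three players without any persisted state.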

components/asteria-game/app/PlayerMain.hs:
  - main: drop the forever loop and the in-process IORef Bool
    "have I spawned yet" guard. Spawn idempotence comes from chain
    state — once the asteria UTxO has been consumed-and-replaced
    with a higher ship_counter, the next attempt's tx is rejected
    by the validator as expected, and reported via the existing
    asteria_player_ship_spawn_failed_<id> sdkUnreachable assertion.
  - Adds asteria_player_pass_errored_<id> and
    asteria_player_pass_completed_<id> SDK assertions so the report
    can score one pass per fire even when the inner observation
    bombs out.

Locally validated on testnets/asteria_game/ compose (3-pass run,
one per player_id):
  - bootstrap idempotence still holds (cold deploy + 2 short-circuit)
  - player 1 attempts spawn end-to-end (observe → build → sign →
    submit → reject); players 2, 3 observe-only
  - sdk.jsonl shows ship_counter and move_planned per pass
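Checking the per-pass values in sdk.jsonl can be done with a one-liner like the following (a sketch only — the exact JSON field name "ship_counter" and its numeric shape are assumptions based on the description above):

```shell
# Print the ship_counter value from each sdk.jsonl line that carries one.
# NOTE: field name/shape are assumptions; adjust to the real log schema.
sed -n 's/.*"ship_counter": *\([0-9][0-9]*\).*/\1/p' sdk.jsonl
```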

KNOWN ISSUE — submission is currently rejected by the spacetime
validator with "PlutusV3 script failed: overspending the budget"
(CekError, ~600k cpu over). This is a pre-existing issue in the
lifted PR #67 Aiken validators — the off-chain wiring works
correctly, but the on-chain validator over-spends its execution
units. Tracked as a follow-up; does not block landing this driver
since the wiring + report assertions are independent of validator
correctness.
fix(smoke-test): gate tx-generator block on compose having the service
After PR #111 dropped tx-generator from cardano_node_master/, the
'Compose smoke test' job that runs scripts/smoke-test.sh against
master fails because the script unconditionally probes the
tx-generator control socket — which is only present on the new
cardano_node_tx_generator/ testnet now.

Skip the tx-generator-specific block when the compose under test
doesn't declare a 'tx-generator:' service. Same script keeps
working for both master (no tx-generator) and the new feature
testnet (still drives the daemon end-to-end).
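The gate can be sketched as a grep for the service key in the compose file (a sketch under assumptions — the COMPOSE_FILE variable and echo messages are illustrative, not necessarily what smoke-test.sh does):

```shell
# Skip tx-generator-specific probes unless the compose under test
# declares the service. An indented 'tx-generator:' line matches a
# service entry under the top-level 'services:' mapping.
# NOTE: illustrative sketch; smoke-test.sh may implement this differently.
COMPOSE_FILE="${COMPOSE_FILE:-docker-compose.yaml}"
if grep -qE '^[[:space:]]+tx-generator:' "$COMPOSE_FILE"; then
  echo "compose declares tx-generator; probing control socket"
  # ... tx-generator-specific checks go here ...
else
  echo "no tx-generator service; skipping tx-generator checks"
fi
```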
chore(testnet): split tx-generator into its own feature testnet
Mirrors 43bf739 (chore(testnet): remove adversary service from
cardano_node_master) — same shape, different component.

The master testnet is the canonical reference cluster: cron-fired
Antithesis runs against it produce the baseline-finding signal that
everyone reads. With tx-generator in active iteration (reconnect
supervisor #105, freshness gate #110, composer-assertion shape work
in #98), keeping it on master pollutes that signal with experiments
in flight.

This commit:

* Creates testnets/cardano_node_tx_generator/ — same topology as
  master (3 producers + 2 relays + tracer + sidecars + log-tailer)
  plus the tx-generator service. Asteria-stub is deliberately
  absent (different feature, lives on master).

* Removes tx-generator from testnets/cardano_node_master/ along
  with its 'utxo-keys' volume mount on configurator and the volume
  declaration. Nothing else on master uses utxo-keys.

* README in the new testnet documents why and how to dispatch.

Workflow dispatch from this point:
  gh workflow run cardano-node.yaml \
    --ref <feature-branch> \
    -f test=cardano_node_tx_generator \
    -f duration=1

publish-images currently only scrapes cardano_node_master's compose;
the new testnet's tx-generator image is pushed manually until the
publish-images script is generalised to all testnets (follow-up).
asteria-game: split into a dedicated testnet (testnets/asteria_game/)
Adds the asteria-game workload as an isolated testnet so iteration
on the real asteria game can land on `main` without disturbing the
canonical scheduled run on `cardano_node_master`.

testnets/asteria_game/ (copied from cardano_node_master, then edited):
  - same producer/relay/tracer/sidecar/log-tailer/tx-generator
    topology.
  - asteria-game service replaces asteria-stub: same indexer-driven
    composer harness plus /utxo-keys mount so bootstrap can read
    the genesis wallet skey, and CARDANO_NODE_SOCKET_PATH +
    NETWORK_MAGIC env vars so Asteria.Provider.settingsFromEnv
    resolves to relay1's N2C socket.
  - asteria-game image tag pinned to 3042c0a (the prior commit on
    this branch — last commit to touch components/asteria-game/).
  - testnets/cardano_node_master/ untouched — its scheduled
    Antithesis run is unaffected.

Pipeline (additive, no edits to cardano_node_master jobs):
  - scripts/push-asteria_game_images.sh — sibling of
    push-cardano_node_master_images.sh, scans testnets/asteria_game
    for image tags and resolves each via the same nix build path.
  - .github/workflows/publish-images.yaml — new
    smoke-test-asteria-game job runs scripts/smoke-test.sh against
    testnets/asteria_game; the existing cardano_node_master jobs
    are unchanged.

Locally validated: 3-run idempotence (1 cold deploy, 2 short-circuit
via Asteria.Bootstrap.isAlreadyDeployed) on the asteria_game compose.

Antithesis dispatch wiring for this testnet is a follow-up PR.
asteria-game: lift PR #67 source + idempotent bootstrap
Renames components/asteria-player/ → components/asteria-game/ and
upgrades the lifted PR #67 sources so the bootstrap is safe to
re-run on container restart.

  - cabal package + executable renamed asteria-player → asteria-game.
  - cabal.project SRP pinned to cardano-node-clients PR #98 head
    5707836b (utxo-indexer supervisor + N2C reconnect).
  - flake.nix pulls cardano-node-clients utxo-indexer as a flake
    input so dockerTools bundles the prebuilt binary.
  - nix/docker-image.nix bundles utxo-indexer + asteria-bootstrap +
    asteria-game (player) execs + composer/stub scripts; entrypoint
    is the indexer, the bootstrap runs as a serial driver.
  - composer/stub/ shape replaces composer/asteria/: green-baseline
    heartbeat / eventually_alive / finally_alive plus a new
    serial_driver_asteria_bootstrap that execs /bin/asteria-bootstrap.
  - app/BootstrapMain.hs gains Asteria.Bootstrap.isAlreadyDeployed:
    queries Provider for UTxOs at the asteria spend address and
    short-circuits if any UTxO carries the @"asteriaAdmin"@ token.
    Antithesis can restart the asteria-game container at will and
    bootstrap exits 0 quickly on subsequent invocations.

The asteria_game testnet that wires this image into Antithesis is
added in the next commit.

KNOWN GAP — admin_mint validator is the always-true placeholder
PR #67 ships. The Haskell-side detection plus Antithesis's
@serial_driver_@ scheduling are the contract until a follow-up PR
replaces admin_mint with a one-shot policy parameterised on a seed
@OutputReference@.

Tracks: #67 (asteria-spawn-v2), #98 (utxo-indexer supervisor).
Closes companion: #108 (idempotent bootstrap, content folded here).
Allow LSQ clients to connect during initial LedgerDB replay (EarlyN2C)
Allow the local node-to-client socket to bind within seconds of node
startup rather than after the multi-hour LedgerDB replay completes. N2C
ChainSync clients (cardano-db-sync, ogmios, wallet) can begin streaming
blocks during replay; LSQ / TxSubmit / TxMonitor handlers naturally block
on the LedgerDB until it is ready, relying on the property that N2C has no
client-side timeouts.

Phase 1 (ChainDB):
- New 'LedgerDBStatus' type + 'awaitLedgerDB' STM helper
- 'openDBInternal' returns once Immutable+Volatile are open; LedgerDB
  replay and initial chain selection run on a registry-linked background
  thread (new 'runInitLedgerDB')
- LedgerDB-touching queries wrapped in 'awaitLedgerDB'
- New 'OpenedDBImmutableReady' trace event; existing 'OpenedDB' preserved
- 'closeDB' cancels the background initialiser and only closes the
  LedgerDB if 'cdbLedgerDBStatus' has reached 'LdbReady'

Phase 2 (NodeKernel + runWith):
- 'mkPendingMempool', 'mkPendingBlockchainTime', 'mkPendingDurationUntilTooOld'
  forwarding wrappers that block on a TMVar until populated
- 'initInternalState' split into 'initInternalStateEarly' +
  'completeInternalState'
- 'initNodeKernel' split into 'initNodeKernelEarly' returning
  '(NodeKernel, m ())'; the deferred action opens the real mempool,
  computes the real GsmState, and spawns the four ledger-touching
  background threads (GSM, GDD watcher, blockForging, blockFetchLogic,
  decisionLogicThreads)
- 'runWith' allocates the BlockchainTime / WrapDurationUntilTooOld TMVars,
  builds the kernel synchronously with pending wrappers, calls
  'llrnRunDataDiffusion' immediately, and forks a 'Node.lateInit' thread
  that runs 'hardForkBlockchainTime' and 'realDurationUntilTooOld' once
  the LedgerDB is ready, populates the TMVars, and runs the completer

Verified: 'cabal build all' clean and 'cabal test storage-test' passes
69/69 on the 3.0.1.0 release tag.
feat(testnet): add cardano_node_adversary as iteration testnet
New testnet that mirrors cardano_node_master and adds the
long-running cardano-adversary daemon. Created as a separate
iteration loop for the adversary roadmap so:

- The scheduled cron continues to target cardano_node_master only.
  Findings on the adversary testnet cannot regress the master
  baseline because nothing automatic consumes this testnet.
- New adversarial endpoints (Tier 1.2 chain_sync_thrash,
  1.3 slow_loris, Tier 2 / 3 / 4) iterate here first, dispatched
  manually via:
    gh workflow run "Antithesis on cardano-node testnet" \
      --ref <branch> -f test=cardano_node_adversary -f duration=1
- Promotion of any change to cardano_node_master happens only
  after multiple branch-dispatched runs against this testnet
  produce findings_new ≤ master baseline (currently 9), and
  with explicit user approval.

Layout:

- testnets/cardano_node_adversary/ — full clone of master plus
  one new `adversary` service. Same producers (p1/p2/p3),
  relays (relay1/relay2), tracer, tracer-sidecar, sidecar,
  tx-generator, asteria-bootstrap, asteria-player, log-tailer.
- adversary service mounts relay1-state for the control socket
  and tracer:ro for chainPoints.log; depends_on relay1 +
  tracer-sidecar + configurator.
- Image pinned to ghcr.io/cardano-foundation/cardano-node-antithesis/adversary:cc628d5
  (current built tag from PR #104). Will be bumped per follow-up
  branches as adversary code evolves.

Docs:

- docs/testnets/cardano-node-adversary.md — overview,
  scheduling discipline, promotion criteria.
- docs/testnets/index.md — entry pointing at the new page.
- docs/testnets/cardano-node-master.md — sidecar row updated
  to drop the now-relocated adversary driver mention.
- mkdocs.yml — nav entry for the new testnet doc.

components/adversary/ is unchanged (the wrapper image keeps
building from the same tree), and components/sidecar/ is
unchanged.

The helper_sdk_lib.sh on main is byte-identical to tx-generator's, and
tx-generator's SDK assertions are ingested by Antithesis successfully,
so we keep it as-is and use the first branch-dispatched run on this new
testnet as the diagnostic for whether the adversary container's
assertions reach the SDK ingest channel.
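The byte-identity claim above is checkable mechanically with cmp (the paths and the same_bytes helper below are illustrative placeholders, not names from the repository):

```shell
# Check the two copies of helper_sdk_lib.sh for byte-identity.
# NOTE: paths are illustrative placeholders; adjust to the repo layout.
same_bytes() {
  cmp -s "$1" "$2"   # -s: silent; exit 0 iff files are byte-identical
}

if same_bytes "${A:-components/adversary/helper_sdk_lib.sh}" \
              "${B:-components/tx-generator/helper_sdk_lib.sh}"; then
  echo "byte-identical"
else
  echo "files differ (or missing)"
fi
```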