[Hourly commit-activity chart, Apr 24 - May 01: midday peaks (up to 120 commits/hour on Apr 30), near-zero overnight]
3,958 commits this week (Apr 24, 2026 - May 01, 2026)
Allow LSQ clients to connect during initial LedgerDB replay (EarlyN2C)
Allow the local node-to-client socket to bind within seconds of node
startup rather than after the multi-hour LedgerDB replay completes. N2C
ChainSync clients (cardano-db-sync, ogmios, wallet) can begin streaming
blocks during replay; LSQ / TxSubmit / TxMonitor handlers simply block
on the LedgerDB until it is ready, relying on the property that N2C has
no client-side timeouts.

Phase 1 (ChainDB):
- New 'LedgerDBStatus' type + 'awaitLedgerDB' STM helper
- 'openDBInternal' returns once Immutable+Volatile are open; LedgerDB
  replay and initial chain selection run on a registry-linked background
  thread (new 'runInitLedgerDB')
- LedgerDB-touching queries wrapped in 'awaitLedgerDB'
- New 'OpenedDBImmutableReady' trace event; existing 'OpenedDB' preserved
- 'closeDB' cancels the background initialiser and only closes the
  LedgerDB if 'cdbLedgerDBStatus' has reached 'LdbReady'

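The gating pattern in Phase 1 can be sketched with a small Python
analogue (all names here are illustrative models of the Haskell
described above, not the real API): LedgerDB-touching queries block
on a status cell until the background replay flips it to ready,
while the rest of the ChainDB is usable immediately.

```python
import threading

class LedgerDBStatus:
    """Toy analogue of the 'LedgerDBStatus' cell: starts out
    initialising, flips to ready once background replay finishes."""
    def __init__(self):
        self._ready = threading.Event()

    def mark_ready(self):
        self._ready.set()

    def await_ledger_db(self):
        # Analogue of the 'awaitLedgerDB' STM helper: block (STM
        # would 'retry') until the status has reached ready.
        self._ready.wait()

def replay_then_mark(status, replay):
    # Background initialiser: run the (long) replay, then publish
    # readiness; 'openDBInternal' returns without waiting for this.
    replay()
    status.mark_ready()

status = LedgerDBStatus()
threading.Thread(target=replay_then_mark,
                 args=(status, lambda: None), daemon=True).start()

status.await_ledger_db()   # a LedgerDB-touching query waits here
print("ledger query may proceed")
```
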
Phase 2 (NodeKernel + runWith):
- 'mkPendingMempool', 'mkPendingBlockchainTime', 'mkPendingDurationUntilTooOld'
  forwarding wrappers that block on a TMVar until populated
- 'initInternalState' split into 'initInternalStateEarly' +
  'completeInternalState'
- 'initNodeKernel' split into 'initNodeKernelEarly' returning
  '(NodeKernel, m ())'; the deferred action opens the real mempool,
  computes the real GsmState, and spawns the ledger-touching
  background threads (GSM, GDD watcher, blockForging, blockFetchLogic,
  decisionLogicThreads)
- 'runWith' allocates the BlockchainTime / WrapDurationUntilTooOld TMVars,
  builds the kernel synchronously with pending wrappers, calls
  'llrnRunDataDiffusion' immediately, and forks a 'Node.lateInit' thread
  that runs 'hardForkBlockchainTime' and 'realDurationUntilTooOld' once
  the LedgerDB is ready, populates the TMVars, and runs the completer

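The Phase 2 pending wrappers all follow one shape: hand clients a
forwarding object immediately, have every operation block on an
initially empty cell, and populate the cell from the late-init
thread once the LedgerDB is ready. A minimal Python sketch of that
shape (names illustrative; the real code blocks on a TMVar):

```python
import threading

class Pending:
    """Forwarding wrapper for a value that does not exist yet,
    modelling mkPendingMempool et al.: every call blocks until
    late-init has populated the cell."""
    def __init__(self):
        self._real = None
        self._populated = threading.Event()

    def populate(self, real):
        self._real = real
        self._populated.set()

    def call(self, *args):
        self._populated.wait()   # block until the real value exists
        return self._real(*args)

mempool = Pending()              # handed out synchronously

def late_init():
    # Runs once the LedgerDB is ready: install the real
    # implementation behind the wrapper already in use.
    mempool.populate(lambda tx: f"added {tx}")

threading.Thread(target=late_init, daemon=True).start()
print(mempool.call("tx1"))       # blocks briefly, then: added tx1
```
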
Verified: 'cabal build all' clean and 'cabal test storage-test' passes
69/69 on the 3.0.1.0 release tag.
feat(testnet): add cardano_node_adversary as an iteration testnet
New testnet that mirrors cardano_node_master and adds the
long-running cardano-adversary daemon. Created as a separate
iteration loop for the adversary roadmap so:

- The scheduled cron continues to target cardano_node_master only.
  Findings on the adversary testnet cannot regress the master
  baseline because nothing automatic consumes this testnet.
- New adversarial endpoints (Tier 1.2 chain_sync_thrash,
  1.3 slow_loris, Tier 2 / 3 / 4) iterate here first, dispatched
  manually via:
    gh workflow run "Antithesis on cardano-node testnet" \
      --ref <branch> -f test=cardano_node_adversary -f duration=1
- Promotion of any change to cardano_node_master happens only
  after multiple branch-dispatched runs against this testnet
  produce findings_new ≤ master baseline (currently 9), and
  with explicit user approval.

Layout:

- testnets/cardano_node_adversary/ — full clone of master plus
  one new `adversary` service. Same producers (p1/p2/p3),
  relays (relay1/relay2), tracer, tracer-sidecar, sidecar,
  tx-generator, asteria-bootstrap, asteria-player, log-tailer.
- adversary service mounts relay1-state for the control socket
  and tracer:ro for chainPoints.log; depends_on relay1 +
  tracer-sidecar + configurator.
- Image pinned to ghcr.io/cardano-foundation/cardano-node-antithesis/adversary:cc628d5
  (current built tag from PR #104). Will be bumped per follow-up
  branches as adversary code evolves.

Docs:

- docs/testnets/cardano-node-adversary.md — overview,
  scheduling discipline, promotion criteria.
- docs/testnets/index.md — entry pointing at the new page.
- docs/testnets/cardano-node-master.md — sidecar row updated
  to drop the now-relocated adversary driver mention.
- mkdocs.yml — nav entry for the new testnet doc.

components/adversary/ is unchanged (the wrapper image keeps
building from the same tree), and components/sidecar/ is
unchanged.

The helper_sdk_lib.sh on main is byte-identical to
tx-generator's; tx-generator's SDK assertions are ingested by
Antithesis successfully, so we keep it as-is and use the first
branch-dispatched run on this new testnet as the diagnostic
for whether the adversary container's assertions reach the
SDK ingest channel.
chore(testnet): remove adversary service from cardano_node_master
Roll back the adversary service from the master testnet's compose
until we can verify findings_new ≤ baseline on a feature branch
with workflow_dispatch --ref before re-merging.

PR #99 added the daemon (findings_new went 9 → 8 — strictly improved
vs baseline). PR #104 added must_hit:true SDK reachable assertions
that Antithesis didn't observe firing in the run, creating 5 new
findings (13 total, +4 vs baseline).

The right pre-merge process — dispatch on the PR branch via
'gh workflow run "Antithesis on cardano-node testnet" --ref <branch>
-f duration=1' and compare findings_new before merging — was
available the whole time and was not used. Both PRs landed on main
on the strength of the Compose smoke test alone, which only proves
"containers come up", not Antithesis behaviour.

Removing the service is the cheapest restore-baseline step. The
adversary image and the daemon code in cardano-node-clients stay
intact; we re-add the compose service in a follow-up PR after a
branch-dispatched Antithesis run shows findings_new ≤ baseline.

components/adversary/ remains so the wrapper image keeps building
and publishing; only the consumer-side service entry is removed.

Tracks: https://github.com/cardano-foundation/cardano-node-antithesis/issues/89 (epic).
fix(composer): always exit 0 + use Reachability for not-applicable
Three composer-side fixes for findings carried over from
the previous Antithesis run (329a599; see issues #105,
#106, #107):

1. (B / #105) The not-applicable case in
   parallel_driver_transact.sh and parallel_driver_refill.sh
   was emitting `sdk_sometimes false ...`. Antithesis grades
   a Sometimes assertion as PASSING when at least one sample
   has condition=true; we only ever emitted condition=false, so
   the assertion could never pass and always read as failing.
   Switch to `sdk_reachable`, which accumulates samples without
   a pass/fail grade: the right type for a "documented
   not-applicable response".

2. (C / #106) The composer scripts intentionally exited 1 on
   not-applicable to mark the tick as "skipped". Antithesis's
   built-in 'Always: Commands finish with zero exit code'
   property has no opt-out and grades every non-zero exit as
   a failure. Switch to the asteria-stub convention: always
   exit 0, encode tick state purely via SDK assertions
   (parallel_driver_heartbeat.sh and eventually_alive.sh in
   components/asteria-stub/composer/stub/ do this). Same in
   eventually_population_grew.sh: emit the unreachable
   assertion on did-not-grow and exit 0.

3. (D-adjacent / #107) Add 'index-not-ready' to the refill
   driver's not-applicable case set. After
   cardano-node-clients#110 the daemon's freshness gate
   returns IndexNotReady for refills during the post-
   reconnect stale-UTxO window; the composer should treat
   this as a documented not-applicable, not an unknown
   failure.

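The distinction in (1) can be made concrete with a toy model of the
grading rule described above (this sketches the semantics only, not
the Antithesis SDK itself): a Sometimes assertion passes iff at
least one emitted sample has condition true, so a path that only
ever emits false is structurally unable to pass, while a
Reachability assertion merely accumulates samples and carries no
pass/fail grade.

```python
def grade_sometimes(samples):
    # A Sometimes assertion PASSES iff any sample had condition=True.
    return any(samples)

# The old not-applicable path: every tick emitted condition=False,
# so the assertion could never pass.
old_not_applicable = [False, False, False]
print(grade_sometimes(old_not_applicable))   # False: permanently failing

# sdk_reachable has no grade; it only records how often the
# not-applicable path was actually hit.
reachable_samples = [True, True]
print(f"not-applicable seen {len(reachable_samples)} times")
```
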
The submit-rejected paths keep their `sdk_unreachable`
(strict) framing — the daemon-side freshness gate (#110)
is the actual fix, and we want any leftover
submit-rejecteds to surface as findings, not be silenced.

Verification gate: a fresh 1h Antithesis run on this
branch should show 0 failures from these three findings,
plus the supervisor still triggering its full reconnect
load.
chore(tx-generator): bump pin to upstream main d0928a6 (post-#110)
Picks up the full reconnect-resilience stack on upstream
main:
  * #105 — N2C reconnect supervisor + BlockedIndefinitely catch
  * #110 — post-reconnect indexer freshness gate (rsIndexFresh)

The freshness gate closes the stale-UTxO window between
bearer reconnect and chain-sync re-sync that produced
the residual tx_generator_*_submit_rejected and
tx_generator_population_did_not_grow Always-assertion
failures on the previous Antithesis run (329a599).