3,961 commits this week (Apr 24, 2026 - May 01, 2026)
feat(testnet): add cardano_node_adversary as iteration testnet
New testnet that mirrors cardano_node_master and adds the
long-running cardano-adversary daemon. Created as a separate
iteration loop for the adversary roadmap so:

- The scheduled cron continues to target cardano_node_master only.
  Findings on the adversary testnet cannot regress the master
  baseline because nothing automatic consumes this testnet.
- New adversarial endpoints (Tier 1.2 chain_sync_thrash,
  1.3 slow_loris, Tier 2 / 3 / 4) iterate here first, dispatched
  manually via:
    gh workflow run "Antithesis on cardano-node testnet" \
      --ref <branch> -f test=cardano_node_adversary -f duration=1
- Promotion of any change to cardano_node_master happens only
  after multiple branch-dispatched runs against this testnet
  produce findings_new ≤ master baseline (currently 9), and
  with explicit user approval.

Layout:

- testnets/cardano_node_adversary/ — full clone of master plus
  one new `adversary` service. Same producers (p1/p2/p3),
  relays (relay1/relay2), tracer, tracer-sidecar, sidecar,
  tx-generator, asteria-bootstrap, asteria-player, log-tailer.
- adversary service mounts relay1-state for the control socket
  and tracer:ro for chainPoints.log; depends_on relay1 +
  tracer-sidecar + configurator.
- Image pinned to ghcr.io/cardano-foundation/cardano-node-antithesis/adversary:cc628d5
  (current built tag from PR #104). Will be bumped per follow-up
  branches as adversary code evolves.

Docs:

- docs/testnets/cardano-node-adversary.md — overview,
  scheduling discipline, promotion criteria.
- docs/testnets/index.md — entry pointing at the new page.
- docs/testnets/cardano-node-master.md — sidecar row updated
  to drop the now-relocated adversary driver mention.
- mkdocs.yml — nav entry for the new testnet doc.

components/adversary/ is unchanged (the wrapper image keeps
building from the same tree), and components/sidecar/ is
unchanged.

The helper_sdk_lib.sh on main is byte-identical to
tx-generator's, and tx-generator's SDK assertions are ingested
by Antithesis successfully, so we keep it as-is and use the
first branch-dispatched run on this new testnet as the
diagnostic for whether the adversary container's assertions
reach the SDK ingest channel.
chore(testnet): remove adversary service from cardano_node_master
Roll back the adversary service from the master testnet's compose
until we can verify findings_new ≤ baseline on a feature branch
with workflow_dispatch --ref before re-merging.

PR #99 added the daemon (findings_new went 9 → 8 — strictly improved
vs baseline). PR #104 added must_hit:true SDK reachable assertions
that Antithesis didn't observe firing in the run, creating 5 new
findings (13 total, +4 vs baseline).

The right pre-merge process — dispatch on the PR branch via
'gh workflow run "Antithesis on cardano-node testnet" --ref <branch>
-f duration=1' and compare findings_new before merging — was
available the whole time and was not used. Both PRs landed on main
on the strength of the Compose smoke test alone, which only proves
"containers come up", not Antithesis behaviour.

Removing the service is the cheapest restore-baseline step. The
adversary image and the daemon code in cardano-node-clients stay
intact; we re-add the compose service in a follow-up PR after a
branch-dispatched Antithesis run shows findings_new ≤ baseline.

components/adversary/ remains so the wrapper image keeps building
and publishing; only the consumer-side service entry is removed.

Tracks: https://github.com/cardano-foundation/cardano-node-antithesis/issues/89 (epic).
Allow LSQ clients to connect during initial LedgerDB replay (EarlyN2C)
Allow the local node-to-client socket to bind within seconds of node
startup rather than after the multi-hour LedgerDB replay completes. n2c
ChainSync clients (cardano-db-sync, ogmios, wallet) can begin streaming
blocks during replay; LSQ / TxSubmit / TxMonitor handlers naturally block
on the LedgerDB until ready, relying on the property that NtC has no
client-side timeouts.

Phase 1 (ChainDB):
- New 'LedgerDBStatus' type + 'awaitLedgerDB' STM helper
- 'openDBInternal' returns once Immutable+Volatile are open; LedgerDB
  replay and initial chain selection run on a registry-linked background
  thread (new 'runInitLedgerDB')
- LedgerDB-touching queries wrapped in 'awaitLedgerDB'
- New 'OpenedDBImmutableReady' trace event; existing 'OpenedDB' preserved
- 'closeDB' cancels the background initialiser and only closes the
  LedgerDB if 'cdbLedgerDBStatus' has reached 'LdbReady'

Phase 2 (NodeKernel + runWith):
- 'mkPendingMempool', 'mkPendingBlockchainTime', 'mkPendingDurationUntilTooOld'
  forwarding wrappers that block on a TMVar until populated
- 'initInternalState' split into 'initInternalStateEarly' +
  'completeInternalState'
- 'initNodeKernel' split into 'initNodeKernelEarly' returning
  '(NodeKernel, m ())'; the deferred action opens the real mempool,
  computes the real GsmState, and spawns the four ledger-touching
  background threads (GSM, GDD watcher, blockForging, blockFetchLogic,
  decisionLogicThreads)
- 'runWith' allocates the BlockchainTime / WrapDurationUntilTooOld TMVars,
  builds the kernel synchronously with pending wrappers, calls
  'llrnRunDataDiffusion' immediately, and forks a 'Node.lateInit' thread
  that runs 'hardForkBlockchainTime' and 'realDurationUntilTooOld' once
  the LedgerDB is ready, populates the TMVars, and runs the completer
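
A minimal sketch of the two gating mechanisms above, with simplified
signatures (the real types live in ouroboros-consensus and differ;
every name here follows the commit description, not the actual code):

```haskell
import Control.Concurrent.STM

-- Phase 1: LedgerDB-touching queries block on this status flag
-- until the background initialiser finishes replay and initial
-- chain selection. Blocking is safe because NtC has no
-- client-side timeouts.
data LedgerDBStatus = LdbInitialising | LdbReady

awaitLedgerDB :: TVar LedgerDBStatus -> STM ()
awaitLedgerDB statusVar = do
  status <- readTVar statusVar
  case status of
    LdbReady        -> pure ()
    LdbInitialising -> retry

-- Run on a registry-linked background thread by 'openDBInternal';
-- flips the flag so blocked queries proceed.
runInitLedgerDB :: TVar LedgerDBStatus -> IO () -> IO ()
runInitLedgerDB statusVar replayAndChainSel = do
  replayAndChainSel
  atomically (writeTVar statusVar LdbReady)

-- Phase 2: the pending-wrapper pattern. Each 'mkPending*' wrapper
-- closes over an initially-empty TMVar; every call blocks in
-- 'readTMVar' until Node.lateInit populates the real value.
pendingCall :: TMVar impl -> (impl -> IO a) -> IO a
pendingCall var call = atomically (readTMVar var) >>= call
```
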

Verified: 'cabal build all' clean and 'cabal test storage-test' passes
69/69 on the 3.0.1.0 release tag.
fix(composer): always exit 0 + use Reachability for not-applicable
Three composer-side fixes for findings carried over from
the previous Antithesis run (329a599; see issues #105,
#106, #107):

1. (B / #105) The not-applicable case in
   parallel_driver_transact.sh and parallel_driver_refill.sh
   was emitting `sdk_sometimes false ...`. Antithesis grades
   a Sometimes assertion as PASSING when at least one sample
   has condition=true; we always emitted condition=false, so
   the assertion could never pass and was always graded as
   failed. Switch to `sdk_reachable`, which accumulates samples
   without a pass/fail grade: the right assertion type for a
   documented not-applicable response.

2. (C / #106) The composer scripts intentionally exited 1 on
   not-applicable to mark the tick as "skipped". Antithesis's
   built-in 'Always: Commands finish with zero exit code'
   property has no opt-out and grades every non-zero exit as
   a failure. Switch to the asteria-stub convention: always
   exit 0, encode tick state purely via SDK assertions
   (parallel_driver_heartbeat.sh and eventually_alive.sh in
   components/asteria-stub/composer/stub/ do this). Same in
   eventually_population_grew.sh — fire the unreachability
   on did-not-grow and exit 0.

3. (D-adjacent / #107) Add 'index-not-ready' to the refill
   driver's not-applicable case set. After
   cardano-node-clients#110 the daemon's freshness gate
   returns IndexNotReady for refills during the post-
   reconnect stale-UTxO window; the composer should treat
   this as a documented not-applicable, not an unknown
   failure.

The submit-rejected paths keep their `sdk_unreachable`
(strict) framing — the daemon-side freshness gate (#110)
is the actual fix, and we want any leftover
submit-rejecteds to surface as findings, not be silenced.
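
The convention from points 1 and 2 can be sketched as follows. The
`sdk_*` helpers here are simplified stand-ins for the real
helper_sdk_lib.sh emitters, and the assertion names are illustrative,
not the production ones:

```shell
# Stand-ins for the real SDK emitters in helper_sdk_lib.sh
# (assumed interface; the real helpers emit Antithesis SDK assertions).
sdk_reachable()   { echo "reachable: $1"; }
sdk_unreachable() { echo "unreachable: $1"; }

# One composer tick: classify the outcome, record it via SDK
# assertions, and always return 0 so the built-in
# "Always: Commands finish with zero exit code" property passes.
handle_tick() {
  case "$1" in
    index-not-ready)
      # Documented not-applicable: an ungraded sample, not a failure.
      sdk_reachable "refill_not_applicable"
      ;;
    submit-rejected)
      # Keep the strict framing: this should surface as a finding.
      sdk_unreachable "submit_rejected"
      ;;
    *)
      : # normal tick, nothing to record
      ;;
  esac
  return 0
}

handle_tick index-not-ready   # prints: reachable: refill_not_applicable
```

The design point is that the process exit code carries no tick state
at all; the SDK assertions do, so the built-in zero-exit property can
never be tripped by a documented skip.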

Verification gate: a fresh 1h Antithesis run on this
branch should show 0 failures from these three findings,
plus the supervisor still triggering its full reconnect
load.
chore(tx-generator): bump pin to upstream main d0928a6 (post-#110)
Picks up the full reconnect-resilience stack on upstream
main:
  * #105 — N2C reconnect supervisor + BlockedIndefinitely catch
  * #110 — post-reconnect indexer freshness gate (rsIndexFresh)

The freshness gate closes the stale-UTxO window between
bearer reconnect and chain-sync re-sync that produced
the residual tx_generator_*_submit_rejected and
tx_generator_population_did_not_grow Always-assertion
failures on the previous Antithesis run (329a599).
asteria-bootstrap: idempotent deploy via Provider UTxO query
PR 2: make rerunning bootstrap safe. Antithesis can restart the
asteria-game container at any time; serial_driver_asteria_bootstrap
will re-fire on each restart. The Haskell binary now skips the
mint+lock if the asteria spend address already carries a UTxO with
the asteriaAdmin token.

Defence layers in place after this PR:
  1. Haskell-side detection (this PR): Provider.queryUTxOs at the
     asteria spend address; presence of (admin_mint_hash,
     "asteriaAdmin") = already deployed → exit 0 with
     sdk_sometimes("asteria_bootstrap_already_deployed").
  2. Antithesis serial_driver scheduling: exclusive access while
     bootstrap runs → no concurrent invocations on the same timeline.
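
The layer-1 check can be sketched as below. The Provider/UTxO shapes
and every name in this fragment are illustrative assumptions, not the
real module's signatures:

```haskell
-- Simplified sketch of the idempotency gate in BootstrapMain.
-- All types and names here are illustrative stand-ins.
type Address   = String
type PolicyId  = String
type AssetName = String

newtype UTxO = UTxO { assets :: [(PolicyId, AssetName)] }

adminMintHash :: PolicyId
adminMintHash = "<admin_mint policy hash>"  -- stand-in value

-- First argument stands in for Provider.queryUTxOs, partially
-- applied to the provider handle.
alreadyDeployed :: (Address -> IO [UTxO]) -> Address -> IO Bool
alreadyDeployed queryUTxOs spendAddr = do
  utxos <- queryUTxOs spendAddr
  pure (any carriesAdminToken utxos)
  where
    carriesAdminToken utxo =
      (adminMintHash, "asteriaAdmin") `elem` assets utxo

-- In main: skip the mint+lock when the marker token is present,
-- fire sdk_sometimes("asteria_bootstrap_already_deployed"),
-- and exit 0.
```
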

KNOWN GAP — defence layer 3 (Plutus one-shot) is still on the
todo list. PR #67's admin_mint validator is the always-true
placeholder, so there is no chain-level uniqueness guarantee yet.
A future PR replaces admin_mint with a parameterised one-shot
that consumes a seed OutputReference; until then the indexer
race window between detection and submission, while small under
serial_driver scheduling, is non-zero. See spec FR-010 / User
Story 4.

New: composer/stub/serial_driver_asteria_bootstrap.sh — exec
/bin/asteria-bootstrap. Antithesis discovers it under
/opt/antithesis/test/v1/stub/ (already mounted via the same
docker-image build).
asteria-game: rename component + lift PR #67 source for bootstrap+player
PR 1 of 2 toward the real asteria workload. Scope-limited to the
rename + Haskell source lift + nix scaffolding so the binaries
build alongside the existing utxo-indexer. Bootstrap is iteration
5b — *not safe for repeat invocation* — that gap is explicitly the
subject of PR 2.

Rename:
  components/asteria-stub/ → components/asteria-game/
  service asteria-stub      → asteria-game
  volume  asteria-stub-db   → asteria-game-db

Lift from cardano-foundation/cardano-node-antithesis#67 (asteria-spawn-v2):
  components/asteria-game/aiken/         (validators + apply-params)
  components/asteria-game/src/Asteria/   (game state, datums, validators, wallet, providers, RNG, SDK)
  components/asteria-game/app/{BootstrapMain.hs, PlayerMain.hs}
  components/asteria-game/asteria-game.cabal      (renamed package)
  components/asteria-game/cabal.project           (cardano-node-clients SRP bumped to PR #98 head 5707836b)

Build infra:
  flake.nix — extends prior asteria-stub flake with haskell.nix /
    iohk-nix overlays so local Haskell packages compile, while
    keeping the upstream cardano-node-clients flake input that
    supplies the prebuilt utxo-indexer (PR #98 supervisor).
  nix/project.nix — lifted from PR #67, package renamed.
  nix/docker-image.nix — bundles utxo-indexer + asteria-bootstrap
    + asteria-game (player) execs + composer scripts + bash/jq/
    socat; entrypoint stays utxo-indexer.

KNOWN GAP — admin_mint validator is the always-true placeholder
PR #67 ships. Without an on-chain one-shot guarantee, every
container restart could mint another admin NFT. Bootstrap is *not*
wired into compose in this PR for that reason. PR 2 patches the
Aiken admin_mint to take an OutputReference parameter, runs
apply-params at bootstrap-time, and adds defence-in-depth detection
in Bootstrap.hs.

Composer scripts (composer/stub/{parallel_driver_heartbeat,
eventually_alive, finally_alive, helper_sdk}.sh) unchanged from
the green stub baseline so this PR stays bisect-safe and in
property terms identical to commit 3b6fb0e (PR #74's merged head).
Disable protocol version check in the header for testnets until Dijkstra
The check on the protocol version in the block header proved to be
problematic for testnets. That makes sense, since the check was designed
with mainnet in mind and had to be introduced urgently because it was
blocking the 10.6.2 release. See #5595 for more context.

To allow testnets to keep producing blocks for older protocol versions
with the latest version of the node, we lift this restriction, but only
for networks with the `Testnet` network id. This is a temporary measure
and will be fixed properly in the Dijkstra era. See #5763 for more
context.
Fix an inconsistency in the `GOV` rule:
* Verification of `PrevGovActionId` in `proposalsAddAction` happens on
  the accumulated `proposals`,
* while the `preceedingHardFork` check happens on the original `st`.
In a typical application, where identifiers are assigned up front, this
inconsistency would have been a problem. Here, however, a `GovActionId`
is derived from the hash of the transaction itself, so it is impossible
to reference a previous governance action within the same transaction:
one would need to know the hash of the very transaction that contains
the proposal being referenced.

Therefore, this commit fixes not a bug but a mere inconsistency. It is
done so that developers do not even have to consider this edge case.
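
The impossibility argument can be made concrete with stand-in types
(these are illustrative only, not the real ledger types; the hash is a
toy stand-in for hashing the serialised transaction body):

```haskell
import Data.Char (ord)

newtype TxId = TxId Int deriving (Eq, Show)

data TxBody = TxBody
  { prevGovActionId :: Maybe (TxId, Int)  -- reference to a prior proposal
  , payload         :: String
  } deriving Show

-- Stand-in for hashing the serialised transaction body.
txId :: TxBody -> TxId
txId = TxId . sum . map ord . show

-- A proposal referencing a sibling in the same transaction would
-- require:  prevGovActionId body == Just (txId body, ix)
-- i.e. the body must contain a hash computed over itself, a fixed
-- point of the hash function, which is infeasible to construct.
```
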
feat: enhance proxy lifecycle management to support hierarchical wallets
- Updated proxy lifecycle scenarios to include support for hierarchical wallets alongside legacy and SDK wallets.
- Modified README documentation to reflect changes in wallet type coverage for proxy full lifecycle scenarios.
- Improved error handling in proxy setup finalization to ensure valid transaction hashes.
- Added unit tests to validate new hierarchical wallet functionality in proxy management.
- Adjusted existing tests to ensure comprehensive coverage of all wallet types in proxy lifecycle processes.