Deployed b7b7423 with MkDocs version: 1.6.1
[Hourly commit-activity chart: per-hour commit counts for the week]
3,688 commits this week (Apr 18, 2026 - Apr 25, 2026)
feat(asteria-player): pin cardano-node-clients + N2C provider (#56)
Iteration 2 of the asteria phase-1 gatherer.
- cabal.project: pin cardano-node-clients@f6a31ca via source-repository-package, along with all of its transitive SRPs (chain-follower, rocksdb-kv-transactions, rocksdb-haskell, cardano-ledger-read, typed-protocols, quickcheck-state-machine, cuddle) and matching ledger / ouroboros constraints.
- flake.nix / nix/project.nix: pin haskell.nix, hackage.nix, iohkNix, and CHaP to the same revisions as cardano-node-clients; bump the compiler to ghc9122; add the fix-libs module for libsodium-vrf, secp256k1, libblst, lmdb, and liburing.
- src/Asteria/Provider.hs: thin wrapper around cardano-node-clients's runNodeClient + mkN2CProvider / mkN2CSubmitter; reads CARDANO_NODE_SOCKET_PATH and NETWORK_MAGIC from the environment.
- app/PlayerMain.hs: connect via withN2C, query protocol parameters every 5 seconds in a loop, emit asteria_player_pp_query_<id> "sometimes" events.
Verified against the local docker-compose cluster: each player container connects to its assigned relay, queries protocol parameters every 5 s, and the SDK fallback file shows a steady stream of "sometimes" events.
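For readers unfamiliar with the pinning mechanism mentioned above, a `source-repository-package` stanza in cabal.project looks roughly like this; the tag is the one named in the commit, but the repository URL is an assumption (placeholder), not taken from the PR:

```cabal
-- Hypothetical cabal.project fragment; <org> is a placeholder,
-- the tag f6a31ca comes from the commit message above.
source-repository-package
  type: git
  location: https://github.com/<org>/cardano-node-clients
  tag: f6a31ca
```

Each transitive SRP listed in the commit (chain-follower, rocksdb-kv-transactions, etc.) gets its own such stanza, which is why the pin has to be replicated wholesale.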
feat(asteria-player): iteration 1 wiring proof for #56
Add `components/asteria-player/`, a new container that runs the
asteria game inside the cardano-node-antithesis cluster.
Iteration 1 only proves the wiring end-to-end:
- Nix-built docker image (`flake.nix` + `nix/{project,docker-image}.nix`)
mirroring the sidecar pattern.
- `asteria-bootstrap` (one-shot) and `asteria-player` (long-running)
Haskell binaries, currently just emitting `sdkReachable` to the
JSONL fallback file before exiting / sleeping forever.
- `Asteria.Sdk` Haskell module covering reachable / unreachable /
sometimes / always; mirrors the existing shell helpers in
`sidecar/composer/convergence/helper_sdk_lib.sh`.
- Composer scripts at `composer/asteria/`:
`parallel_driver_asteria_bootstrap.sh`,
`parallel_driver_asteria_player.sh`,
`eventually_asteria_alive.sh`, plus a copy of `helper_sdk_lib.sh`.
Baked into the image at `/opt/antithesis/test/v1/asteria/` and
exposed as wrapper bins (`/bin/parallel_driver_asteria_*`).
- `testnets/cardano_node_master/docker-compose.yaml` extended with
`asteria-bootstrap` (depends on relay1) plus `asteria-player-1` /
`asteria-player-2` (depend on bootstrap completing successfully),
sharing an `asteria-sdk` volume for the JSONL fallback file.
Verified locally with `docker compose up`:
- asteria-bootstrap exits 0 and writes its sdk_reachable events.
- asteria-player-1/2 start once bootstrap is green and write
their own sdk_reachable events.
- All six assertions (3 shell + 3 Haskell) appear in
`/sdk/sdk.jsonl` on the shared volume.
Subsequent iterations layer on the cardano-node-clients TxBuild DSL,
the parameter-applied Aiken validators, Antithesis-controlled
randomness, and the actual move_ship / gather_fuel / mine_asteria
game loop.
Initialize speckit + constitution + phase1 asteria gatherer spec
Bootstraps the spec-driven workflow for cardano-node-antithesis:
- .specify/ scaffolding installed via nix run /code/spec-kit.
- .specify/memory/constitution.md authored from scratch, capturing the composer-first, SDK-instrumented, duration-robust principles the repo has been implicitly following since PR #53 and the shared/skills/antithesis-tests skill.
- .claude/commands/ speckit command definitions.
- specs/phase1-asteria-gatherer/spec.md — first feature spec: txpipe/asteria deployed as the eager-agent workload, one gatherer parallel_driver per ship, an admin-bootstrap serial driver, and anytime + finally invariants. Explicit out-of-scope list for phase 2+ (mine, quit, spawn contention, prize tokens).
Implements issue #56 (parent #55). No runtime changes yet; this commit is spec-only.
docs(asteria-player): wire into mkdocs nav + cluster service table
Add the new component to the docs site:
- components/asteria-player.md describes the iteration plan,
build instructions, and local run flow.
- mkdocs.yml registers the page under Components.
- cardano-node-master.md service table mentions
asteria-bootstrap and asteria-player-1/-2.
feat: add remove-validation-status CLI command
Also teach dump-chain-db a new trick: show the ancestors of a given point, with validity.
Signed-off-by: Roland Kuhn <[email protected]>
run e2e sync test until epoch 185 (had previously been reduced from 182 to 176 due to a regression)
Signed-off-by: Roland Kuhn <[email protected]>
fix: only check best header height when needed
Signed-off-by: Roland Kuhn <[email protected]>
tweak logging to see why sync is slow again
Signed-off-by: Roland Kuhn <[email protected]>
add Haskell Benchmark (customSmallerIsBetter) benchmark result for 9ae77d611ad86ae58add04b6042ab730272f2327
CIP-???? | Governance Proposal Feedback and Addenda
Adds a draft CIP-100 extension defining three new JSON-LD document types — DraftProposal, ProposalFeedback, and ProposalAddendum — for signed pre-submission drafts, signed public commentary, and binding author clarifications against governance proposals. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Merge pull request #5747 from tweag/joaosreis/canonical-gov-proposals-roots
Add gov/proposals/roots/v0 namespace
feat(cardano): persist shard count alongside ashard_progress
Guards against a config change to `account_shards` corrupting an
in-flight boundary. Previously, if dolos crashed mid-boundary and the
operator changed `account_shards` between crash and restart, the resume
would re-partition the account key space with the new count, mismatching
the cursor's already-committed shards.
Fix: snapshot the boundary's shard count into state at the first
`EpochEndAccumulate` apply. The persisted total is authoritative for the
duration of the in-flight boundary; the new config value only takes
effect on the next boundary.
Changes:
- New `AShardProgress { committed, total }` struct stored at
`EpochState.ashard_progress: Option<AShardProgress>` (was
`Option<u32>`).
- `EpochEndAccumulate` carries `total_shards`. Its apply validates the
delta's `total_shards` matches any previously persisted total and
surfaces an error if they diverge (would only happen if a work unit
was constructed with a stale config view).
- `EpochWrapUp` and `EpochTransition` undo fields adapted to the new
type.
- `AShardWorkUnit::load` / `commit_state` read the persisted total when
present and fall back to `config.account_shards()` for fresh
boundaries.
- `CardanoLogic` caches `effective_account_shards` (= persisted total
if a boundary is in flight, else config). Refreshed at every
`pop_work` call so `receive_block` (which has no state access) can
use the up-to-date value when constructing
`WorkBuffer::AShardingBoundary`.
- Crash-recovery wording updated to surface a clear warning when the
persisted total disagrees with current config.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
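The guard described above fits in a few lines; here is a minimal model, not the actual dolos code. `AShardProgress { committed, total }`, `EpochState.ashard_progress`, and `effective_account_shards` are named in the commit; `ApplyError` and the function shapes are hypothetical simplifications (the real apply works on deltas with undo support).

```rust
// Minimal model of the shard-count persistence guard (hypothetical shapes).

#[derive(Debug, Clone, Copy, PartialEq)]
pub struct AShardProgress {
    pub committed: u32, // shards already committed for the in-flight boundary
    pub total: u32,     // snapshotted at the first EpochEndAccumulate apply
}

#[derive(Debug, Default)]
pub struct EpochState {
    pub ashard_progress: Option<AShardProgress>, // was Option<u32>
}

#[derive(Debug, PartialEq)]
pub enum ApplyError {
    ShardTotalMismatch { persisted: u32, delta: u32 },
}

/// Apply one EpochEndAccumulate delta carrying `total_shards`. The first
/// apply snapshots the boundary's shard count; later applies must match it,
/// so a config change between crash and restart cannot re-partition an
/// in-flight boundary.
pub fn apply_epoch_end_accumulate(
    state: &mut EpochState,
    total_shards: u32,
) -> Result<(), ApplyError> {
    match state.ashard_progress {
        None => {
            state.ashard_progress = Some(AShardProgress { committed: 1, total: total_shards });
            Ok(())
        }
        Some(p) if p.total == total_shards => {
            state.ashard_progress = Some(AShardProgress { committed: p.committed + 1, ..p });
            Ok(())
        }
        Some(p) => Err(ApplyError::ShardTotalMismatch { persisted: p.total, delta: total_shards }),
    }
}

/// Shard count to use when constructing new work: the persisted total while
/// a boundary is in flight, else the current config value.
pub fn effective_account_shards(state: &EpochState, config_account_shards: u32) -> u32 {
    state
        .ashard_progress
        .map(|p| p.total)
        .unwrap_or(config_account_shards)
}
```

The key design point mirrored here: the persisted total is authoritative until the boundary completes, and a mismatching delta surfaces an error instead of silently re-partitioning.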
*testing* ddz newsfeed
(cherry picked from commit a874da18c15e97766e0006c97f4025a6119da2b1)
refactor(cardano): rename `ashard_total` → `account_shards`
Per review feedback: the user-facing config name should be self-explanatory in `dolos.toml`. Renames everywhere for consistency:
- `CardanoConfig::ashard_total` field → `account_shards`
- `CardanoConfig::ashard_total()` accessor → `account_shards()`
- `CardanoConfig::DEFAULT_ASHARD_TOTAL` → `DEFAULT_ACCOUNT_SHARDS`
- WorkBuffer parameters and error messages updated to match.
BREAKING CONFIG CHANGE: existing `dolos.toml` files that explicitly set this option (under any prior name from this PR) need to use `account_shards`. Users relying on the default are unaffected.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
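For operators, the change amounts to a key rename in `dolos.toml`. A hypothetical before/after fragment (the value 8 is illustrative only, not the PR's default):

```toml
# dolos.toml fragment -- hypothetical example
# before this PR (prior name):
#   ashard_total = 8
# after this PR:
account_shards = 8   # omit to keep the built-in default
```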
refactor(cardano): rename AccountShard → AShard for structs and variants
Aligns the type and variant names with the module path convention:
- struct `AccountShardWorkUnit` → `AShardWorkUnit`
- enum variant `CardanoWorkUnit::AccountShard` → `AShard`
- enum variant `InternalWorkUnit::AccountShard` → `AShard`
- WorkBuffer state `AccountShardingBoundary` → `AShardingBoundary`
- module re-export and all callers updated to match
- prose / docstrings / log messages also use `AShard` consistently
The module path is `crate::ashard`, so the type now reads as `ashard::AShardWorkUnit`. No behavior change.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
refactor(cardano): decouple shard naming from `ewrap`
The shard-related identifiers and comments were named after the legacy EWRAP pipeline, which bundled the global epoch-boundary work and the per-account shards together. With AccountShard now a distinct work unit in its own module, those names are misleading. Rename to use the `ashard` prefix consistently with the module path:
- `CardanoConfig::ewrap_total_shards` → `ashard_total`
- `CardanoConfig::DEFAULT_EWRAP_TOTAL_SHARDS` → `DEFAULT_ASHARD_TOTAL`
- `EpochState::ewrap_progress` → `ashard_progress`
- `prev_ewrap_progress` → `prev_ashard_progress` on `EpochEndAccumulate`, `EpochWrapUp`, and `EpochTransition`
- `WorkBuffer::receive_block` / `on_ewrap_boundary` / `pop_work` parameter `ewrap_total_shards` → `ashard_total`
- Error messages in `ashard/shard.rs` updated to match.
Also fixes comment / doc misattributions where "EWRAP" was used for work that is now in `AccountShard`:
- `PendingRewardState` / `DequeueReward` are consumed by `AccountShard`, not Ewrap.
- `PendingMirState` / `DequeueMir` are consumed by Ewrap (clarified).
- `AppliedReward` and the `applied_rewards` field are populated during AccountShard, not Ewrap.
- RUPD's docstring now says rewards are consumed by `AccountShard`.
- Crash-recovery wording in `lib.rs` says "mid-boundary" instead of "mid-EWRAP", since the cursor specifically tracks AccountShard progress.
BREAKING CONFIG CHANGE: existing `dolos.toml` files that explicitly set `ewrap_total_shards` need to rename the key to `ashard_total`. Users relying on the default (omitted) are unaffected.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
chore: move hard-wired hex cbor bytes to files, and reference them.
Signed-off-by: KtorZ <[email protected]>
test: write golden test for culprit transaction causing phase_two validations.
Signed-off-by: KtorZ <[email protected]>
fix: use latest amaru-uplc crates to fix big negative number encoding.
Signed-off-by: KtorZ <[email protected]>
docs(cardano): fix stale references in EWRAP/AccountShard refactor comments
Sweeps the docstrings/comments touched in this PR for references to phases, work units, and deltas that no longer exist after the rename / reorder / merge / split sequence:
- Restore the in-place explanation for the "rewards before drops" HACK in `ashard/loading.rs` (the dangling "see comment on the pre-shard path" pointed to a comment that was deleted when the prepare phase was removed).
- Drop "prepare phase" / "finalize phase" wording from `BoundaryWork` field docstrings, `commit_ewrap` comments, and `loading.rs` section dividers — neither phase exists; there's only Ewrap (global + close) and AccountShard (per-account).
- Update the ESTART `EpochTransition` description in `work_units.md` so it reflects the post-merge data flow: AccountShards populate the accumulators directly, then Ewrap reads them back and emits `EpochWrapUp` with the final `EndStats` (no `EpochEndInit` patch step anymore).
- Rename `compute_prepare_deltas` → `compute_ewrap_deltas`. The "prepare" name was a leftover from the `EwrapPrepare` work unit; the method is now the only Ewrap-phase compute helper.
- Tighten the `load_pending_rewards_range` docstring; flag that the `None` branch is currently unused.
No behavior change.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Sweeps the docstrings/comments touched in this PR for references to phases, work units, and deltas that no longer exist after the rename / reorder / merge / split sequence: - Restore the in-place explanation for the "rewards before drops" HACK in `ashard/loading.rs` (the dangling "see comment on the pre-shard path" pointed to a comment that was deleted when the prepare phase was removed). - Drop "prepare phase" / "finalize phase" wording from `BoundaryWork` field docstrings, `commit_ewrap` comments, and `loading.rs` section dividers — neither phase exists; there's only Ewrap (global + close) and AccountShard (per-account). - Update the ESTART `EpochTransition` description in `work_units.md` so it reflects the post-merge data flow: AccountShards populate the accumulators directly, then Ewrap reads them back and emits `EpochWrapUp` with the final `EndStats` (no `EpochEndInit` patch step anymore). - Rename `compute_prepare_deltas` → `compute_ewrap_deltas`. The "prepare" name was a leftover from the `EwrapPrepare` work unit; the method is now the only Ewrap-phase compute helper. - Tighten `load_pending_rewards_range` docstring; flag that the `None` branch is currently unused. No behavior change. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>