[Hourly commit activity chart, Apr 18 - Apr 25]
3,673 commits this week (Apr 18, 2026 - Apr 25, 2026)
feat(cardano): persist shard count alongside ashard_progress
Guards against a config change to `account_shards` corrupting an
in-flight boundary. Previously, if dolos crashed mid-boundary and the
operator changed `account_shards` between crash and restart, the resume
would re-partition the account key space with the new count, mismatching
the cursor's already-committed shards.

Fix: snapshot the boundary's shard count into state at the first
`EpochEndAccumulate` apply. The persisted total is authoritative for the
duration of the in-flight boundary; the new config value only takes
effect on the next boundary.

Changes:
- New `AShardProgress { committed, total }` struct stored at
  `EpochState.ashard_progress: Option<AShardProgress>` (was
  `Option<u32>`); see the sketch after this list.
- `EpochEndAccumulate` carries `total_shards`. Its apply validates that
  the delta's `total_shards` matches any previously persisted total and
  surfaces an error if they diverge (which would only happen if a work
  unit was constructed with a stale config view).
- `EpochWrapUp` and `EpochTransition` undo fields adapted to the new
  type.
- `AShardWorkUnit::load` / `commit_state` read the persisted total when
  present and fall back to `config.account_shards()` for fresh
  boundaries.
- `CardanoLogic` caches `effective_account_shards` (= persisted total
  if a boundary is in flight, else config). Refreshed at every
  `pop_work` call so `receive_block` (which has no state access) can
  use the up-to-date value when constructing
  `WorkBuffer::AShardingBoundary`.
- Crash-recovery wording updated to surface a clear warning when the
  persisted total disagrees with current config.
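
A minimal sketch of the progress record and the mismatch check, assuming a
simplified delta/apply shape (the real trait plumbing and error types in
dolos are not shown here):

```rust
// Sketch only: the apply signature and error type are assumptions, not dolos' API.
pub struct AShardProgress {
    pub committed: u32, // shards already applied for the in-flight boundary
    pub total: u32,     // shard count snapshotted at the first apply
}

pub struct EpochEndAccumulate {
    pub total_shards: u32,
    // other delta fields elided
}

impl EpochEndAccumulate {
    /// Snapshot the total on first use, then insist that every later delta
    /// for this boundary was built with the same shard count.
    pub fn apply(&self, progress: &mut Option<AShardProgress>) -> Result<(), String> {
        let p = progress.get_or_insert(AShardProgress {
            committed: 0,
            total: self.total_shards,
        });
        if p.total != self.total_shards {
            return Err(format!(
                "shard count mismatch: boundary persisted total {}, work unit built with {}",
                p.total, self.total_shards
            ));
        }
        p.committed += 1;
        Ok(())
    }
}
```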

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

refactor(cardano): rename AccountShard → AShard for structs and variants
Aligns the type and variant names with the module path convention:

- struct `AccountShardWorkUnit` → `AShardWorkUnit`
- enum variant `CardanoWorkUnit::AccountShard` → `AShard`
- enum variant `InternalWorkUnit::AccountShard` → `AShard`
- WorkBuffer state `AccountShardingBoundary` → `AShardingBoundary`
- module re-export and all callers updated to match
- prose / docstrings / log messages also use `AShard` consistently

The module path is `crate::ashard`, so the type now reads as
`ashard::AShardWorkUnit`. No behavior change.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

refactor(cardano): decouple shard naming from `ewrap`
The shard-related identifiers and comments were named after the legacy
EWRAP pipeline that bundled the global epoch-boundary work and the
per-account shards together. With AccountShard now a distinct work unit
in its own module, those names are misleading. Rename them to use the
`ashard` prefix, consistent with the module path:

- `CardanoConfig::ewrap_total_shards` → `ashard_total`
- `CardanoConfig::DEFAULT_EWRAP_TOTAL_SHARDS` → `DEFAULT_ASHARD_TOTAL`
- `EpochState::ewrap_progress` → `ashard_progress`
- `prev_ewrap_progress` → `prev_ashard_progress` on `EpochEndAccumulate`,
  `EpochWrapUp`, and `EpochTransition`
- `WorkBuffer::receive_block` / `on_ewrap_boundary` / `pop_work`
  parameter `ewrap_total_shards` → `ashard_total`
- Error messages in `ashard/shard.rs` updated to match.

Also fixes comment / doc misattributions where "EWRAP" was used for
work that's now in `AccountShard`:
- `PendingRewardState` / `DequeueReward` are consumed by `AccountShard`,
  not Ewrap.
- `PendingMirState` / `DequeueMir` are consumed by Ewrap (clarified).
- `AppliedReward` and the `applied_rewards` field are populated during
  AccountShard, not Ewrap.
- RUPD's docstring now says rewards are consumed by `AccountShard`.
- Crash-recovery wording in `lib.rs` says "mid-boundary" instead of
  "mid-EWRAP" since the cursor specifically tracks AccountShard
  progress.

BREAKING CONFIG CHANGE: existing `dolos.toml` files that explicitly set
`ewrap_total_shards` need to rename the key to `ashard_total`. Users
relying on the default (omitted) are unaffected.
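
A sketch of what the renamed knob might look like on `CardanoConfig`, assuming
serde-based config loading; the derive wiring and the default value below are
illustrative, not taken from dolos:

```rust
// Illustrative only: serde wiring and the default value are assumptions.
// In dolos.toml the key now reads `ashard_total = <n>` where it previously
// read `ewrap_total_shards = <n>`; there is no alias for the old key.
use serde::Deserialize;

#[derive(Deserialize)]
pub struct CardanoConfig {
    /// Per-account shard count for the epoch boundary (was `ewrap_total_shards`).
    #[serde(default = "CardanoConfig::default_ashard_total")]
    pub ashard_total: u32,
    // other fields elided
}

impl CardanoConfig {
    /// Stand-in for DEFAULT_ASHARD_TOTAL; the real default is not shown in this commit.
    const DEFAULT_ASHARD_TOTAL: u32 = 8;

    fn default_ashard_total() -> u32 {
        Self::DEFAULT_ASHARD_TOTAL
    }
}
```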

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

docs(cardano): fix stale references in EWRAP/AccountShard refactor comments
Sweeps the docstrings/comments touched in this PR for references to
phases, work units, and deltas that no longer exist after the rename /
reorder / merge / split sequence:

- Restore the in-place explanation for the "rewards before drops"
  HACK in `ashard/loading.rs` (the dangling "see comment on the
  pre-shard path" pointed to a comment that was deleted when the
  prepare phase was removed).
- Drop "prepare phase" / "finalize phase" wording from `BoundaryWork`
  field docstrings, `commit_ewrap` comments, and `loading.rs` section
  dividers — neither phase exists; there's only Ewrap (global + close)
  and AccountShard (per-account).
- Update the ESTART `EpochTransition` description in `work_units.md`
  so it reflects the post-merge data flow: AccountShards populate the
  accumulators directly, then Ewrap reads them back and emits
  `EpochWrapUp` with the final `EndStats` (no `EpochEndInit` patch step
  anymore).
- Rename `compute_prepare_deltas` → `compute_ewrap_deltas`. The
  "prepare" name was a leftover from the `EwrapPrepare` work unit; the
  method is now the only Ewrap-phase compute helper.
- Tighten `load_pending_rewards_range` docstring; flag that the `None`
  branch is currently unused.

No behavior change.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

refactor(cardano): split AccountShard into its own `ashard` module
Pulls the AccountShard work unit out of `ewrap/` into a peer module
`ashard/` (matching the layout of `estart/`, `rupd/`, `roll/`,
`genesis/`). The shared `BoundaryWork` / `BoundaryVisitor` infrastructure
and the drops visitor (used by both phases) stay in `ewrap/`; `ashard/`
imports them.

Moves: `rewards.rs`, `shard.rs`, `AccountShardWorkUnit` (from
`work_unit.rs`), and the `load_*` / `commit_*` impl blocks. Visibility
on shared `BoundaryWork` helpers (`new_empty`, `load_pool_data`,
`load_drep_data`, `stream_and_apply_namespace`) widened from private to
`pub(crate)`. The `ending_state` field also widened to `pub(crate)` so
peer modules can mutate it (e.g. `wrapup.flush` already does this).

Method/identifier renames to match the new module path:
- `BoundaryWork::load_account_shard` → `load_ashard`
- `BoundaryWork::commit_account_shard` → `commit_ashard`
- `WorkUnit::name()` returns `"ashard"`

Type name `AccountShardWorkUnit` is preserved.
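
Roughly, the split looks like this (a layout sketch only: item bodies are
stubbed and signatures condensed, so only the module shape and the visibility
changes from this commit are illustrated):

```rust
// Layout sketch; not dolos' real item signatures.
mod ewrap {
    pub(crate) struct EndingState; // stand-in for the real ending-state type

    pub(crate) struct BoundaryWork {
        // Widened to pub(crate) so peer modules (e.g. wrapup.flush) can mutate it.
        pub(crate) ending_state: EndingState,
    }

    impl BoundaryWork {
        // Shared helpers widened from private to pub(crate) so `ashard` can call them.
        pub(crate) fn new_empty() -> Self {
            BoundaryWork { ending_state: EndingState }
        }
        pub(crate) fn load_pool_data(&mut self) {}
        pub(crate) fn load_drep_data(&mut self) {}
        pub(crate) fn stream_and_apply_namespace(&mut self) {}
    }
}

mod ashard {
    use super::ewrap::BoundaryWork;

    pub struct AccountShardWorkUnit; // type name preserved by this commit

    // The load_* / commit_* impl blocks moved here, with the methods renamed
    // load_account_shard -> load_ashard and commit_account_shard -> commit_ashard.
    impl BoundaryWork {
        pub(crate) fn load_ashard(&mut self) {
            self.load_pool_data();
            self.load_drep_data();
        }
        pub(crate) fn commit_ashard(&mut self) {}
    }
}
```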

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

fix(cardano): seed `EpochState.end` in Genesis
After the boundary-pipeline reorder (AccountShard runs before Ewrap),
the first epoch's AccountShard hits `EpochEndAccumulate::apply` with
`entity.end == None` because Genesis bootstrapped the `EpochState` before
ESTART's `EpochTransition` had a chance to seed the `end` field. Seed
`end = Some(EndStats::default())` directly in Genesis to match the
invariant ESTART maintains for every subsequent epoch.
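
A minimal sketch of the fix, assuming a simplified view of the Genesis
bootstrap (the `bootstrap_genesis_epoch_state` helper is hypothetical;
`EpochState.end` and `EndStats` are the names from this commit):

```rust
// Sketch: the surrounding Genesis bootstrap code is assumed.
#[derive(Default)]
struct EndStats;

struct EpochState {
    end: Option<EndStats>,
    // other fields elided
}

fn bootstrap_genesis_epoch_state() -> EpochState {
    EpochState {
        // Match the invariant ESTART's EpochTransition maintains for later
        // epochs, so the first boundary's EpochEndAccumulate never sees
        // end == None.
        end: Some(EndStats::default()),
    }
}
```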

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>

Add deterministic CIP experiment scripts and sim-rs vote config fields
Scripts for running CIP experiments with deterministic turbo mode:
- run-deterministic.sh: per-experiment runner with voting mode, seed,
  and engine selection (turbo default, actor/sequential optional)
- run-all-NA.sh: runs all CIP throughputs (0.150-0.350) for a given mode
- run-all-voting-modes.sh: runs all throughputs x all voting modes
- combine-results-multi-vote.sh: collects results for a given voting mode
  into the format expected by analysis.ipynb

Add sim-rs persistent/non-persistent vote config fields to
experiments/config.yaml alongside existing Haskell sim fields. Both
halves are set to the same original CIP values so the weighted average
is unchanged. Without these, sim-rs silently uses defaults from
config.default.yaml (total probability 500 instead of 600).

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>

refactor(cardano): merge EwrapFinalize into Ewrap; drop EpochEndInit delta
The boundary close is now a single Ewrap work unit: it runs the global
visitors AND emits EpochWrapUp carrying the assembled final EndStats
(prepare-time fields combined with the AccountShard-populated accumulator
fields). The wrap-up visitor now constructs the final stats locally
instead of routing them through a separate EpochEndInit delta.

Side benefits: one fewer state-machine state, one fewer delta type, and one
fewer commit cycle. Atomicity also improves: the boundary close is now a
single state-writer commit, so there is no longer a window in which a crash
can land between Ewrap and EwrapFinalize.
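
As a rough sketch of the merged close (`compute_ewrap_deltas`, `EndStats`, and
`EpochWrapUp` are names from this PR; the surrounding types and the shape of
the commit path are assumptions):

```rust
// Rough sketch; delta and work types are stand-ins, not dolos' real API.
#[derive(Clone, Default)]
struct EndStats; // prepare-time fields plus AccountShard-populated accumulators

enum Delta {
    EpochWrapUp(EndStats),
    // other delta variants elided
}

struct BoundaryWork {
    ending_stats: EndStats,
}

impl BoundaryWork {
    fn compute_ewrap_deltas(&mut self) -> Vec<Delta> {
        Vec::new() // the global visitors would populate this
    }

    // Previously split across Ewrap + EwrapFinalize (two commits with a crash
    // window in between); now a single work unit emits everything at once.
    fn close_boundary(&mut self) -> Vec<Delta> {
        let mut deltas = self.compute_ewrap_deltas();
        // The wrap-up visitor assembles the final stats locally instead of
        // routing them through a separate EpochEndInit delta.
        deltas.push(Delta::EpochWrapUp(self.ending_stats.clone()));
        deltas
    }
}
```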

The test fixture in `tests/epoch_pots/main.rs` is restructured to match the
post-reorder pipeline: the accumulator reset now gates on AccountShard
`shard_index == 0`, and the rewards CSV is dumped on the Ewrap arm.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>