119 commits this week, May 08, 2026 – May 15, 2026
sim-rs: con-rs emits CIP-0164 per-vote messages, not bundles
The sim's `VoteBundle` aggregation is a pre-CIP-0164 simplification —
in the real protocol every PV / NPV vote is one BLS signature on the
wire, diffused independently, aggregated only at the certifier.  The
con-rs adapter now mirrors that shape:

  model.rs:
    Vote, VoteId<Node>, VoteKind  (no `weight` field — weight is
    re-derived at verification time from the persistent-committee
    registry / NPV-VRF check, matching CIP wire encoding)

  linear_wire.rs:
    Message::{AnnounceVote, RequestVote, Vote(Arc<Vote>)}
    CpuTask::{VoteGenerated, VoteValidated}
    Vote class costs the matching `persistent_vote_bytes` or
    `non_persistent_vote_bytes` from VotingConfig.

  con_rs.rs:
    `votes: BTreeMap<VoteId, VoteState>` (was `vote_bundles`)
    `emit_vote` produces ONE Vote per (voter, EB) honouring Part A's
    PV-xor-NPV partition.  Receivers call
    `Elections::weight_for(voter_id, tag, sig)` to re-derive weight —
    same code path net-rs uses.

  events.rs / sim-cli:
    Event::{VoteGenerated, VoteSent, VoteReceived} alongside the
    existing bundle variants.  `weight` is carried on
    `VoteGenerated` for telemetry aggregation (NOT on the wire).
    sim-cli stats aggregator handles the new events.

`linear_leios.rs` is untouched in behaviour — bundle Message /
CpuTask variants stay live for it; the per-vote variants are
`unreachable!` from its dispatch.  Conversely the bundle variants
are unreachable in con-rs.  Strict adapter ownership keeps the
union-typed `Message` enum honest.

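The `model.rs` shape above (no `weight` on the wire, weight re-derived at the certifier) can be sketched as follows; `Elections::weight_for`, the PV/NPV split, and the seat registry come from the commit text, but the field types and method bodies here are illustrative assumptions:

```rust
use std::collections::BTreeMap;

// Which lottery a vote came from: persistent (PV) or non-persistent (NPV).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum VoteKind { Persistent, NonPersistent }

// One wire vote: a single signature per (voter, EB), with no weight field
// (field types are illustrative stand-ins).
#[allow(dead_code)]
pub struct Vote { pub voter: u64, pub eb: u64, pub kind: VoteKind }

// Stand-in for the persistent-committee registry / NPV-VRF check.
pub struct Elections { pub persistent_seats: BTreeMap<u64, u64> }

impl Elections {
    // Re-derive a vote's weight at verification time: PV voters carry
    // their registered seat count, NPV voters count as one seat.
    pub fn weight_for(&self, voter: u64, kind: VoteKind) -> u64 {
        match kind {
            VoteKind::Persistent => *self.persistent_seats.get(&voter).unwrap_or(&0),
            VoteKind::NonPersistent => 1,
        }
    }
}
```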
Empirical results at NA,0.350 / wfa-ls / 750n / -s 200:
  1085 PV-only signatures + 57 NPV-only signatures = 1142 votes
  0 dual emissions (Part A partition holding)
  0 bundles emitted by con-rs (`"There were 0 bundle(s) of votes"`)
  weighted vote total 2825 matches PV-multi-seat + NPV-unit-seat
  aggregation.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
sim-rs: extract shared Linear-Leios wire types into linear_wire
The `Message`, `CpuTask`, and `TimedEvent` enums sat inside
`linear_leios.rs` but were consumed by both adapters that implement
Linear Leios in the sim — `linear_leios.rs` itself and the newer
`con_rs.rs`.  Splitting them into `sim/linear_wire.rs` makes the
ownership match reality: the wire shape is the union both adapters
exchange, neither adapter owns it.

Pure refactor: enum variants unchanged, `SimMessage`/`SimCpuTask`
impls move with their enums, all 55 sim-core tests pass.  The
adapter-specific dispatch (linear's bundle path; con-rs to follow
with per-vote variants in the next commit) stays in each adapter's
`handle_message` / `handle_cpu_task`.

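In sketch form, the split gives both adapters one shared wire enum while each adapter's dispatch treats the other's variants as unreachable; the variant payloads and handler shown here are placeholders, not the real `linear_wire` definitions:

```rust
// linear_wire.rs (sketch): the union wire shape both adapters exchange.
pub enum Message {
    AnnounceVoteBundle(u64), // bundle path, dispatched only by linear_leios
    Vote(u64),               // per-vote path, dispatched only by con_rs
}

// Adapter-side dispatch (sketch of con_rs's handle_message): variants
// owned by the other adapter are unreachable from this code path.
pub fn con_rs_handle(msg: &Message) -> &'static str {
    match msg {
        Message::Vote(_) => "validated",
        Message::AnnounceVoteBundle(_) => unreachable!("bundle path is linear-only"),
    }
}
```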
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
sim-rs: con-rs adapter — fan out AnnounceRB after body validation
The adapter only sent AnnounceRBHeader (in `finish_validating_rb_header`)
and relied on the header response's `has_body` flag to propagate body
fetches.  When a relay node first announced a header to its consumers,
its own state was Pending / Requested (body still in flight), so the
consumers saw `has_body=false` and didn't request the body.  When the
relay later transitioned to Received, it never told the consumers, so
bodies only propagated one hop — out to the producer's direct
consumers — and stalled there.

Mirror linear_leios's `publish_rb` fan-out: send AnnounceRB to every
consumer when `finish_validating_rb` completes.  Consumers in Pending
state pick it up via `receive_announce_rb` and request the body.

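A minimal sketch of the two sides of that fan-out; the names (`RbBodyState`, `finish_validating_rb`, `receive_announce_rb`) follow the commit text, but the bodies are illustrative:

```rust
// Body-fetch state a consumer can be in for a given RB (illustrative).
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum RbBodyState { Pending, Requested, Received }

// Consumer side: on AnnounceRB, a node still in Pending requests the
// body; a node with a request in flight or the body held does nothing.
pub fn receive_announce_rb(state: RbBodyState) -> bool {
    state == RbBodyState::Pending
}

// Relay side: mirror linear_leios's publish_rb fan-out. Once the body
// finishes validating, announce the RB to every consumer, not only once
// at header time; returns the consumers an AnnounceRB goes to.
pub fn finish_validating_rb(consumers: &[u64]) -> Vec<u64> {
    consumers.to_vec()
}
```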
NA,0.200 / -s 1500 / seed 0 now produces 60 EBs / 10 uncertified / 26
endorsements / 28 755 votes — matching linear's 61 / 10 / 26 / 29 569.
Peak RSS stays ~5 GB (was ~38 GB), since the mempool drain on cert now
fires at every node instead of just direct consumers of the producer.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
sim-rs: con-rs adapter — drain endorsed-EB txs from mempool
Mirror linear_leios's two-path drain so the local mempool doesn't
keep re-including txs that are already on chain via an endorsement:

- Production path (`try_produce_rb`): build the endorsement first,
  then drain its EB's txs from `mempool.txs` and the adapter's
  tx-tracking maps before `BodyPath::decide` walks the mempool to
  shape the new EB body.
- Validation path (`finish_validating_rb`): on receipt of an RB that
  carries an endorsement, drain the endorsed EB's txs locally if its
  body has already validated; otherwise stash the EB id in
  `incomplete_onchain_ebs` and drain on the next
  `finish_validating_eb`.

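The validation-path drain can be sketched like this; the struct fields and tx-id types are assumptions, while the validated-vs-stash branching follows the second bullet above:

```rust
use std::collections::{BTreeSet, HashSet};

// Illustrative per-node state for the endorsed-EB mempool drain.
pub struct Node {
    pub mempool: BTreeSet<u64>,               // tx ids awaiting inclusion
    pub validated_eb_bodies: HashSet<u64>,    // EBs whose body has validated
    pub incomplete_onchain_ebs: HashSet<u64>, // endorsed EBs not yet drainable
}

impl Node {
    // Validation path: an RB arrived carrying an endorsement of `eb`.
    // Drain the endorsed EB's txs from the local mempool if its body has
    // already validated; otherwise stash the EB id so the drain can run
    // from the next finish_validating_eb.
    pub fn on_endorsement(&mut self, eb: u64, eb_txs: &[u64]) {
        if self.validated_eb_bodies.contains(&eb) {
            for tx in eb_txs {
                self.mempool.remove(tx);
            }
        } else {
            self.incomplete_onchain_ebs.insert(eb);
        }
    }
}
```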
At NA,0.200 / -s 1500 / seed 0 this shrinks producer EB tx counts
mid-run (slot 175: 13312 → 2399 txs; slot 185: 16648 → 3070) so the
network isn't propagating ever-growing EB bodies.  Voting cert rate
barely moves on its own — the remaining gap (87 EBs / 72 uncertified
vs linear's 61 / 10) needs separate work on RB-body propagation and
the WrongEB predicate input.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Update CLAUDE.mds for 1500-node sweep, --memory-limit-file, done markers
The 2026w18 doc now reflects the harness as it actually behaves: it
covers both 750n and 1500n topologies, documents the --memory-limit-file
flag, the `done` marker semantics, and the continue-on-failure logic in
the sweep wrappers. Adds a per-topology run-time table, an honest
"Memory and disk requirements" section explaining why memory-limit
caps don't help at 1500n high throughput (the per-node txs cache is
diffusion-limited, not throughput-limited), and the rationale for the
256 GB virtual ulimit. Voting-mode thresholds now show that `everyone`
includes zero-stake relays in the simulator. Removes the "original CIP
results at experiment root" lines — those files were never actually
checked in there.

The sim-rs doc gets a brief note on 1500n RSS scaling and the one-line
flush_window tweak that would shrink the end-of-sim EventMonitor spike
without altering correctness.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Add topology level to experiment paths; --topology flag throughout
run-deterministic.sh accepts --topology NAME and writes outputs to
<MODE>/<TOPOLOGY>/seed-<N>/, so 750-node and 1500-node sweeps can
coexist instead of overwriting each other. combine-results follows the
same path pattern, with a matching --topology flag and output under
results/<MODE>/<TOPOLOGY>/. analysis.ipynb gains a TOPOLOGY variable
in the configuration cell so each fork picks its own dataset.

Existing 750-node data moved into the new layout in place.
Replace per-block ledger state cache with single mutable state
ledger_states was a BTreeMap caching a cumulative HashSet<u64> of spent
inputs for every ranking block. At slot 1380 with 72 blocks this reached
3.4M entries per snapshot × 72 copies = 55 MB/node × 750 = 41 GB (~70%
of total memory).

Since resolve_ledger_state is only called for the current chain tip, and
rollbacks aren't handled correctly anyway (slot battle replacement blocks
are skipped by seen_blocks), replace the BTreeMap with a single
Option<(BlockId, LedgerState)> that is mutated in place. Removes Clone
derive from LedgerState.

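In sketch form (types simplified, and with the commit's caveat that rollbacks are not handled), the replacement looks like:

```rust
use std::collections::HashSet;

pub type BlockId = u64;
pub struct LedgerState { pub spent_inputs: HashSet<u64> }

pub struct Ledger { pub tip: Option<(BlockId, LedgerState)> }

impl Ledger {
    // resolve_ledger_state is only ever called for the current chain tip,
    // so extend the single cached state in place instead of keeping a
    // cumulative snapshot per ranking block.
    pub fn resolve_ledger_state(&mut self, block: BlockId, newly_spent: &[u64]) -> &LedgerState {
        let (id, state) = self.tip.get_or_insert_with(|| {
            (block, LedgerState { spent_inputs: HashSet::new() })
        });
        *id = block;
        state.spent_inputs.extend(newly_spent.iter().copied());
        state
    }
}
```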
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add deterministic CIP experiment scripts and sim-rs vote config fields
Scripts for running CIP experiments with deterministic turbo mode:
- run-deterministic.sh: per-experiment runner with voting mode, seed,
  and engine selection (turbo default, actor/sequential optional)
- run-all-NA.sh: runs all CIP throughputs (0.150-0.350) for a given mode
- run-all-voting-modes.sh: runs all throughputs x all voting modes
- combine-results-multi-vote.sh: collects results for a given voting mode
  into the format expected by analysis.ipynb

Add sim-rs persistent/non-persistent vote config fields to
experiments/config.yaml alongside existing Haskell sim fields. Both
halves are set to the same original CIP values so the weighted average
is unchanged. Without these, sim-rs silently uses defaults from
config.default.yaml (total probability 500 instead of 600).

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Make multi-shard sequential engine deterministic
Cross-shard message delivery order in the sequential engine previously
depended on OS thread scheduling of peer shards, so runs with
shard_count > 1 produced different event sequences across runs. Fixing
this required five coordinated changes:

1. **Deterministic cross-shard merge**: tag every CrossShardMsg with
   `source_shard` and a per-sender monotonic `seq`. Receiving shards
   buffer incoming messages into a `BinaryHeap` keyed on
   `(send_time, source_shard, seq)` and only deliver those whose
   send_time is strictly less than the minimum of every peer's
   advertised `shared_time`. Under that rule, no future message can
   arrive with an earlier send_time, so delivery order is a pure
   function of sent messages (the messages themselves are produced
   deterministically per-shard).

2. **Strict CMB ceiling**: the block condition changes from
   `timestamp > ceiling` to `timestamp >= ceiling`. At the boundary
   `timestamp == ceiling`, a peer might still be about to send a
   message whose `delivery_time == timestamp`; using strict less-than
   ensures every message with `delivery_time <= timestamp` is already
   on the mpsc by the time we process `timestamp`.

3. **Content-derived sort at pop**: BinaryHeap pop order for
   equal-timestamp events is a function of push history, which under
   multi-shard can vary across runs (cross-shard pushes from drain
   interleave with intra-shard pushes from apply_batch_output). Collect
   all events at the current timestamp into a Vec and sort by
   `GlobalEvent::sort_key()` before processing, so the order is a pure
   function of event content.

4. **Ceiling-aware termination**: replace the
   primary-shard-cancels-on-SlotBoundary scheme with an independent
   per-shard termination check that only breaks when the local queue
   has no events with `ts < end_time` AND the CMB ceiling is also
   `>= end_time`. Every shard stops at the same simulation time,
   independent of token-cancellation propagation races.

5. **Second drain before popping**: run drain_cross_shard_safe a second
   time after the ceiling check passes. The top-of-loop drain may run
   before the peer has advanced enough for send_time=`timestamp - eps`
   messages to be deliverable; the post-ceiling-check drain catches
   them, preventing a cross-shard delivery from landing in a later
   iteration and splitting a timestamp's events across batches.

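Change (1) can be sketched with standard-library pieces; the message fields match the commit, while the horizon computation is a simplified stand-in for the advertised `shared_time` bookkeeping:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Derived Ord is lexicographic over the fields, giving exactly the
// (send_time, source_shard, seq) key the merge needs.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct CrossShardMsg { pub send_time: u64, pub source_shard: u32, pub seq: u64 }

pub struct Merger {
    // Min-heap of buffered cross-shard messages.
    pub buffer: BinaryHeap<Reverse<CrossShardMsg>>,
}

impl Merger {
    // Deliver only messages whose send_time is strictly below the minimum
    // shared_time advertised by every peer shard: under that rule no
    // future message can arrive with an earlier send_time, so delivery
    // order is a pure function of the sent messages.
    pub fn deliverable(&mut self, peer_shared_times: &[u64]) -> Vec<CrossShardMsg> {
        let horizon = peer_shared_times.iter().copied().min().unwrap_or(0);
        let mut out = Vec::new();
        while let Some(&Reverse(m)) = self.buffer.peek() {
            if m.send_time >= horizon { break; }
            self.buffer.pop();
            out.push(m);
        }
        out
    }
}
```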
New test `test_sequential_multi_shard_deterministic` compares per-node
event trajectories across two runs under shard_count=2. Passes 500/500
in release mode (was failing in ~100% of runs before the fix, ~25%
with only the sort fix, 2% with the termination fix, 0% with the
second drain).

All 55 sim-core tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Fix TX generation over-rate from f64 truncation
`TxGeneratorCore::generate` computed inter-tx delay as
`config.frequency_ms.sample() as u64 * shard_count as u64` and passed
it to `Duration::from_millis`. The `as u64` cast truncated each
sample: a configured 7.5 ms became 7 ms, producing TXs ~7% faster
than requested. For the 0.200/wfa-ls single-shard run this meant
128,572 TXs over 900s (~214 KB/s) instead of the intended ~120,000
TXs (~200 KB/s).

Only affects configurations with sub-ms precision and no batching.
Turbo is largely unaffected (1 ms resolution, 10 ms tx-batch-window
collapses the fractional delay anyway).

Switch to `Duration::from_secs_f64`, preserving sub-millisecond
precision via nanosecond-resolution Duration. Clamp to `.max(0.0)` so
distributions that can sample negative (e.g., Normal) keep the old
"treat negative as zero delay" behaviour rather than panicking in
`from_secs_f64`.

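A sketch of the corrected delay computation; the sampling and shard plumbing around it are simplified:

```rust
use std::time::Duration;

// Compute the inter-tx delay from a sampled value in milliseconds
// (illustrative signature; the real code samples from a distribution).
pub fn inter_tx_delay(sampled_ms: f64, shard_count: u64) -> Duration {
    // Clamp so distributions that can sample negative (e.g. Normal) keep
    // the old "treat negative as zero delay" behaviour rather than
    // panicking inside from_secs_f64.
    let ms = (sampled_ms * shard_count as f64).max(0.0);
    // from_secs_f64 keeps sub-millisecond precision via nanosecond
    // resolution, instead of truncating 7.5 ms to 7 ms with `as u64`.
    Duration::from_secs_f64(ms / 1000.0)
}
```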
Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Document determinism guarantees and benchmark scripts in CLAUDE.md
Add Determinism section covering all sources of non-determinism that
were found and fixed (HashMap iteration, shard assignment, TX ID
counters, rayon collect order, event stream sorting), what was tested
and found unnecessary (barrier synchronization), and what does not
affect determinism (CpuTaskQueue HashMap, config HashSets).

Add Benchmark Scripts section documenting cip-voting-options.sh,
poll-sim.sh, and the determinism verification methodology.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add process RSS to memory stats and simplify praos.blocks instrumentation
Read VmRSS from /proc/self/status and log it alongside estimated totals
so we can directly compare instrumented vs actual memory usage.

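Reading VmRSS this way is only a few lines; this sketch splits out the parsing so it can be shown on a synthetic status snippet (the real instrumentation's structure may differ):

```rust
use std::fs;

// Parse the "VmRSS:   5124 kB" line from a /proc/<pid>/status snapshot.
pub fn parse_vm_rss_kb(status: &str) -> Option<u64> {
    status
        .lines()
        .find(|l| l.starts_with("VmRSS:"))?
        .split_whitespace()
        .nth(1)?
        .parse()
        .ok()
}

// Linux-only: read this process's resident set size in kB.
pub fn process_rss_kb() -> Option<u64> {
    parse_vm_rss_kb(&fs::read_to_string("/proc/self/status").ok()?)
}
```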
Simplify praos.blocks stats back to basic entry count and tx_refs — the
detailed unique/endorse breakdown showed praos.blocks is not a
significant memory consumer.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add -P/--extra-params and scripts/poll-sim.sh
cip-voting-options.sh gains a repeatable -P/--extra-params flag that
layers additional YAML parameter files on top of the existing config
chain (applied last so they override everything). Useful for quick
experiments — e.g., `-P /tmp/coarse-timestamp.yaml` to bump
timestamp-resolution-ms without touching the committed parameter set.

poll-sim.sh prints a concise one-line status of a running sim-cli plus
the log tail, intended for use from /loop or cron to watch a
long-running benchmark without blocking Claude's thread on sleep.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
check-progress: also report trace-processor and pigz post-processing
Previously the script only watched sim-cli, so cron output read "NO SIM
RUNNING" while the experiment was actually busy in the trace processor
or the final pigz of CSV files — making a healthy run look stuck. Now
it reports any of sim-cli / leios-trace-processor / pigz -p 3 -9f and
prefixes each line with the binary name so it's obvious which phase is
active.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Add configurable committee selection algorithms for linear leios
Add committee-selection-algorithm config with three modes:
- wfa-ls (default): existing VRF lottery matching CIP-0164 wFA+LS
- everyone: every node votes unconditionally (1 vote each)
- top-stake-fraction: nodes covering top N% of cumulative stake vote

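The three modes can be sketched as an enum; `wfa-ls`'s VRF lottery needs per-node randomness and is elided, and the cumulative-stake walk shown for `top-stake-fraction` is an illustrative reading of the mode's one-line description:

```rust
// Config-selected committee mode (names from the commit; payloads assumed).
pub enum CommitteeSelection {
    WfaLs,                 // CIP-0164 wFA+LS VRF lottery (default)
    Everyone,              // every node votes unconditionally, 1 vote each
    TopStakeFraction(f64), // nodes covering top N% of cumulative stake vote
}

// Which nodes vote, given stakes sorted in descending order; returns the
// indices of the voters.
pub fn voters(sel: &CommitteeSelection, stakes_desc: &[u64]) -> Vec<usize> {
    match sel {
        CommitteeSelection::Everyone => (0..stakes_desc.len()).collect(),
        CommitteeSelection::TopStakeFraction(frac) => {
            let target = frac * stakes_desc.iter().sum::<u64>() as f64;
            let mut covered = 0.0;
            let mut out = Vec::new();
            for (i, stake) in stakes_desc.iter().enumerate() {
                if covered >= target { break; }
                out.push(i);
                covered += *stake as f64;
            }
            out
        }
        // The VRF lottery needs per-node randomness and stake-weighted
        // sortition; out of scope for this sketch.
        CommitteeSelection::WfaLs => unimplemented!("VRF lottery not sketched"),
    }
}
```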
This enables traffic analysis comparing the CIP's VRF-based scheme
against simpler alternatives. Vote bundle sizes, CPU times, diffusion,
and threshold checking are unchanged — only the selection mechanism
differs.

Includes benchmark script (scripts/cip-voting-options.sh) that runs
CIP topology under turbo mode across all three committee modes.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Fix EB state pruning causing protocol collapse
The prune_old_leios_state function could prune an EB from node state
before an endorsing RB arrived, causing the node to add the EB to
incomplete_onchain_ebs with no way to validate it (body already gone).
This permanently set produce_empty_block=true, shutting down all block
production on affected nodes and cascading across the network.

Fix: don't add an EB to incomplete_onchain_ebs if it's in pruned_ebs
(meaning it was already validated before being pruned — no conflict
risk). Also add defensive guards in the pruning loop to skip EBs that
are in incomplete_onchain_ebs, and preserve ebs_by_rb mappings for
incomplete EBs.

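A sketch of the guard; the state-field names follow the commit, the update logic is illustrative:

```rust
use std::collections::HashSet;

// Illustrative slice of per-node Leios state relevant to the fix.
pub struct LeiosState {
    pub pruned_ebs: HashSet<u64>,
    pub incomplete_onchain_ebs: HashSet<u64>,
    pub produce_empty_block: bool,
}

impl LeiosState {
    // An endorsing RB arrived but the EB's body is not held locally. If
    // the EB is in pruned_ebs it was already validated before pruning, so
    // there is no conflict risk; recording it as incomplete would wedge
    // block production waiting on a body that will never arrive.
    pub fn on_endorsed_eb_without_body(&mut self, eb: u64) {
        if self.pruned_ebs.contains(&eb) {
            return;
        }
        self.incomplete_onchain_ebs.insert(eb);
        self.produce_empty_block = true;
    }
}
```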
Bisected: pre-pruning commit a1649012d produces 100% TX finalization
at 0.200 throughput; post-pruning ce028db2d collapses to 46%.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Append voting benchmark results from cip-voting-options.sh runs
The voting_results.csv accumulates rows from each cip-voting-options.sh
invocation; this commit captures the runs done while developing the
2026w18 sweep harness (label tags include `caps-retain`, `no-caps`,
and seed/throughput sweep variants). Adds 127 rows for future
reference / analysis.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>