ouroboros-leios-sim
143 commits this week (May 01, 2026 – May 08, 2026)
2026w18 750n re-run: wfa-ls and top-stake-fraction with unbounded mempool
Re-ran the full 11-experiment sweep (5 NA throughputs + 6 Plutus levels) for
both wfa-ls and top-stake-fraction modes with leios-mempool-size-bytes: null
(now the default in experiments/config.yaml). Burst-driven RSS bloat is gone:
peak RSS scales linearly with throughput at ~5–13 GB across the sweep, vs
20–35 GB on the bounded runs, with the same 100% TX finalization.

Per-run config.yaml is now real YAML with all overlays folded in (engine,
voting, seed, memory-limit), as a self-contained record of each run.

Includes case.csv, config.yaml, summary.txt, and time.txt for both modes.
The `everyone` mode was stopped midway through its Plutus runs; it will be
rerun separately.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: align test cluster config with sim-rs reference values
Two config changes to make the smoke cluster behave like the sim-rs
Linear-Leios reference scenarios rather than a stress test.

1. `[validation]` in mainnet.toml: switch RB-body validation cost
   from a flat `1000.0 ms` constant to sim-rs's
   `0.3539 ms + 2.151e-05 ms/byte` formula, and surface
   `tx_validation_ms = 0.6201` (matching sim-rs's amortised-Plutus
   `tx-validation-cpu-time-ms`).  At the 90 KiB max RB body the
   formula gives ~2.4 ms (see the sketch after this list), so 3-hop
   propagation now finishes well under the 3-slot pre-voting buffer,
   whereas the old 1 s/hop pushed RB adoption past the voting window
   and forced every non-producer voter into `WrongEB`.

2. `rb_generation_probability` in sample-cluster.toml: revert from
   `0.2` (cluster-wide ≈ 1 RB / 5 s) to the base-config `0.05`
   (≈ 1 RB / 20 s, mainnet-like).  The high rate packed multiple
   RBs into each EB's voting window and meant the chain tip
   typically moved past the EB-referencing RB before quorum could
   gather, so no EB was ever certified.
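For reference, a minimal sketch of the new cost model.  The function
name and the standalone program are illustrative; the production code
reads these constants from the `[validation]` table rather than
hard-coding them:

```rust
/// Affine RB-body validation cost, mirroring the sim-rs reference values.
fn rb_body_validation_ms(body_bytes: u64) -> f64 {
    const BASE_MS: f64 = 0.3539; // sim-rs constant term
    const PER_BYTE_MS: f64 = 2.151e-5; // sim-rs per-byte term
    BASE_MS + PER_BYTE_MS * body_bytes as f64
}

fn main() {
    // Prints ~2.34 ms at the 90 KiB max RB body, versus the old flat 1000 ms.
    println!("{:.2} ms", rb_body_validation_ms(90 * 1024));
}
```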

Together these match sim-rs's empirical 40%-certification working
point and let the cluster reach `RbCertifiedEb` end-to-end.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: retry voting on transient predicate failures + telemetry
Two changes to the Leios voting flow.

1. `decide_vote` previously called `mark_voted` on *any* predicate
   failure, so a voter that hit `WrongEB`, `LateRBHeader`, or
   `MissingTX` on the first slot of the Voting phase had no chance
   to retry in the next slot when its chain tip caught up or the
   missing TX arrived.  In the 25-node cluster this gave ~1 vote per
   EB (the producer's), nowhere near quorum.

   Only `LateEB` is genuinely permanent — `eb_seen_slot` is fixed at
   receipt, so once-late is always-late.  The other three are
   transient.  Skip `mark_voted` on the transient reasons so
   `EligibleToVote` re-fires every slot of the Voting window (see the
   sketch after this list).  `EmitVote` still calls `mark_voted`, so a
   successful vote happens at most once per EB.

   Effect on the smoke test: best-EB vote count went from 1 to 18
   out of 25, which crosses the quorum threshold and produces the
   first end-to-end `RbCertifiedEb` events.

2. `emit_vote` only logged the bundle at `info!` level; the
   `VTBundleGenerated` telemetry variant defined in `telemetry.rs`
   was never constructed.  Push it onto `pending_telemetry` parallel
   to `LeiosNoVote` so the UI can show producer flashes.
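A sketch of the permanent-vs-transient split from (1); the enum and
function are illustrative stand-ins, not the exact con-rs definitions:

```rust
/// Illustrative no-vote reasons; the real enum lives in con-rs.
enum NoVoteReason {
    LateEB,       // eb_seen_slot is fixed at receipt: once late, always late
    WrongEB,      // the chain tip may still catch up next slot
    LateRBHeader, // the RB header may still arrive
    MissingTX,    // the missing TX may still arrive
}

/// Only a permanent failure consumes the voter's single shot at this EB;
/// transient failures leave `EligibleToVote` free to re-fire next slot.
fn should_mark_voted(reason: &NoVoteReason) -> bool {
    matches!(reason, NoVoteReason::LateEB)
}
```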

Updated the two `con-rs` unit tests that asserted the
"`mark_voted` after WrongEB" behaviour, and added a positive test
for the new "transient reasons re-fire each slot" semantics.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-core: scale mux ingress for Leios bulk fetches
Three related fixes for protocol 19 (LeiosFetch): two sizing changes
and a clearer mux error.

1. Per-protocol ingress channel capacity was a single
   `egress_queue_size: 16` value used for every protocol.  At 25-node-
   cluster scale, an EB-tx response can fragment into ~256 segments
   and overshoot a 16-deep mpsc instantly, killing the mux.  Split
   into three named tiers in `peer_task.rs` (see the sketch after
   this list):
     - `HIGH_VOLUME_QUEUE_SIZE   = 256` (chainsync, blockfetch,
       txsubmission, leios_notify) — kilobyte-sized messages
     - `BULK_FETCH_QUEUE_SIZE    = 4096` (leios_fetch only) — sized
       to hold a worst-case 12 KiB-segmented multi-MB delivery plus
       headroom for the next request
     - `LOW_VOLUME_QUEUE_SIZE    = 4` (keepalive, peersharing) — slow
       fixed-size traffic

2. `MuxError::IngressOverflow` was reused for both byte-budget overflow
   and channel-full: the resulting message read
   "{queued bytes} exceeds limit {byte budget}" even when the byte
   budget was *not* exceeded, because the channel filled first.  Add
   a sibling `IngressChannelFull { queued, capacity }` variant and
   use it on the `try_send` Full path so the log accurately points
   at the channel as the bottleneck.

3. `LeiosFetch` `INGRESS_LIMIT` and `SIZE_LIMIT_LARGE` raised from
   16 MiB to 24 MiB.  Without 50% headroom over `MAX_BLOCK_SIZE`,
   the demuxer's per-state buffer cap was hit by the very last SDU
   of a max-size delivery (codec hadn't drained earlier segments
   yet), tearing the connection down at the end of every full-size
   EB-tx response.  Per-message safety is still enforced inside
   the CBOR codec via the unchanged `MAX_BLOCK_SIZE`,
   `MAX_TRANSACTIONS`, `MAX_VOTES`, `MAX_TRANSACTION_SIZE` caps.
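A condensed sketch of the tiering in (1) and the error split in (2).
The constants match the values above; the protocol enum and dispatch
function are illustrative stand-ins for the real peer_task.rs code:

```rust
const HIGH_VOLUME_QUEUE_SIZE: usize = 256; // kilobyte-sized messages
const BULK_FETCH_QUEUE_SIZE: usize = 4096; // 12 KiB-segmented multi-MB bodies
const LOW_VOLUME_QUEUE_SIZE: usize = 4;    // slow fixed-size traffic

enum Protocol {
    ChainSync,
    BlockFetch,
    TxSubmission,
    LeiosNotify,
    LeiosFetch,
    KeepAlive,
    PeerSharing,
}

/// Per-protocol ingress channel depth instead of one shared value.
fn ingress_queue_size(protocol: &Protocol) -> usize {
    match protocol {
        Protocol::ChainSync
        | Protocol::BlockFetch
        | Protocol::TxSubmission
        | Protocol::LeiosNotify => HIGH_VOLUME_QUEUE_SIZE,
        Protocol::LeiosFetch => BULK_FETCH_QUEUE_SIZE,
        Protocol::KeepAlive | Protocol::PeerSharing => LOW_VOLUME_QUEUE_SIZE,
    }
}

/// Channel-full is now reported as its own variant rather than being
/// dressed up as a byte-budget overflow (sketch of the shape only).
enum MuxError {
    IngressOverflow { queued: usize, limit: usize },
    IngressChannelFull { queued: usize, capacity: usize },
}
```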

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: cooldown peer on block-fetch failure, route via fetch policy
When a `BlockFetch` request fails, `on_block_fetch_failed` previously
removed the in-flight marker and immediately re-ran chain selection.
The same peer's announced fragment still pointed at the missing point,
so `select_chain_once` reached the same `WaitingForBlocks { peer_id }`
decision and re-issued the fetch from the same peer in microseconds.
Under any persistent fetch failure (e.g. a peer that answers every
request with NoBlocks) this busy-loops at hundreds of thousands of
iterations per second — disk I/O from the resulting log spam can
starve slot ticks and cascade into cluster-wide fork divergence.

Add `BLOCK_FETCH_COOLDOWN` (2 s) and a `block_fetch_cooldown` map on
`PraosState`.  `on_block_fetch_failed` now takes the responsible
`PeerId`, inserts a cooldown entry, and `evaluate_and_fetch_internal`
merges those peers into its `skip` set so the fetch policy picks a
different candidate.
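A sketch of the cooldown bookkeeping under the names above.  The real
PraosState is sans-IO and carries much more, so the Instant-based clock
and the skip-set query here are simplifications:

```rust
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

const BLOCK_FETCH_COOLDOWN: Duration = Duration::from_secs(2);

type PeerId = u64; // stand-in for the real PeerId

#[derive(Default)]
struct PraosState {
    block_fetch_cooldown: BTreeMap<PeerId, Instant>,
}

impl PraosState {
    /// Record that `peer` failed a fetch; it is skipped until the cooldown ends.
    fn on_block_fetch_failed(&mut self, peer: PeerId, now: Instant) {
        self.block_fetch_cooldown.insert(peer, now + BLOCK_FETCH_COOLDOWN);
    }

    /// Peers the fetch policy merges into its `skip` set right now.
    fn cooled_down_peers(&mut self, now: Instant) -> Vec<PeerId> {
        self.block_fetch_cooldown.retain(|_, until| *until > now);
        self.block_fetch_cooldown.keys().copied().collect()
    }
}
```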

`NetworkEvent::BlockFetchFailed` now carries `peer_id: Option<PeerId>`
so the wrapper can pass it through.  `Some(p)` is the normal "this
peer failed" case; `None` means the coordinator never reached any
peer for the requested fragment, so there is no one to penalise and
the wrapper skips the cooldown call.

Demoted `select_chain: fetching missing blocks`,
`fetching missing chain blocks`, and the failure log to `debug!` —
they fired at info level on every fetch decision and were the bulk
of the disk-I/O storm during the 25-node smoke.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-core: drop stale PeerEvent::Connected after peer was removed
When `remove_peer` aborts a peer task whose `PeerEvent::Connected`
was already buffered in the fan-in channel, the coordinator would
process the Connected message after the peer was gone from
`self.peers`, emitting a spurious `NetworkEvent::PeerConnected`
with an empty address (the `unwrap_or_default` fallback), ordered
right after the corresponding `PeerDisconnected`.

Found while diagnosing the topology UI's edge-flash colour
ordering: an edge would show red→green where green→red was
expected, because each peer task's death produced a leftover
Connected at the tail of the event sequence.

Fix: in `handle_peer_event`'s `Connected` arm, return early when
`self.peers.get(&peer_id)` is None.  The early return both skips
the misleading event emission and avoids the empty-address
fallback path entirely.
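A sketch of the guard; the coordinator types here are stand-ins for
the real ones:

```rust
use std::collections::BTreeMap;

type PeerId = u64; // stand-in for the real net-core PeerId

struct PeerEntry {
    address: String,
}

enum NetworkEvent {
    PeerConnected { peer: PeerId, address: String },
}

struct Coordinator {
    peers: BTreeMap<PeerId, PeerEntry>,
    events: Vec<NetworkEvent>, // stand-in for the real event sink
}

impl Coordinator {
    /// Connected arm of handle_peer_event, simplified: bail out when the peer
    /// was removed before its buffered Connected event was drained.
    fn on_peer_connected(&mut self, peer_id: PeerId) {
        let Some(entry) = self.peers.get(&peer_id) else {
            return; // stale event from an already-removed peer: drop it
        };
        self.events.push(NetworkEvent::PeerConnected {
            peer: peer_id,
            address: entry.address.clone(),
        });
    }
}
```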

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-ui: visualize edge connect/disconnect churn in topology graph
Adds two layers of feedback so cluster network instability is
visible at a glance:

- Edge flash queue: each PeerConnected → green and PeerDisconnected
  → red event flashes the corresponding edge for 450ms.  Stored as
  a per-edge FIFO so a rapid sequence (e.g. connect → die) plays
  green-then-red rather than the latter overwriting the former.
  A per-edge timer shifts the queue forward.

- Edge steady-state status: tracks the most recent observed event
  per edge.  Edges whose last event was a disconnect render pink
  (connection believed down); edges whose last event was a connect
  render light green (idle, alive); edges with no events stay gray
  (never observed).

Edge resolution: PeerConnected events carry a peer's connect
address, which (for outbound connections in a local cluster)
matches the target node's listen_address.  We map address →
topology node index and key the edge as
`min(local, peer)-max(local, peer)`.  PeerDisconnected events
carry no address, so we cache `(node, peer_id) → address` from
the prior PeerConnected to resolve them.  Inbound events from
ephemeral source ports don't resolve; the outbound side covers
the same edge.
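A sketch of the edge bookkeeping, assuming the address → node-index
resolution above has already happened (types and the 450 ms timer
wiring are simplified):

```rust
use std::collections::{HashMap, VecDeque};

/// Undirected edge key: the same pair no matter which side saw the event.
fn edge_key(local: usize, peer: usize) -> (usize, usize) {
    (local.min(peer), local.max(peer))
}

#[derive(Clone, Copy)]
enum Flash {
    Green, // PeerConnected
    Red,   // PeerDisconnected
}

/// Per-edge FIFO of pending flashes, so a rapid connect → die sequence plays
/// green-then-red instead of the later event overwriting the earlier one.
#[derive(Default)]
struct FlashQueues(HashMap<(usize, usize), VecDeque<Flash>>);

impl FlashQueues {
    fn push(&mut self, edge: (usize, usize), flash: Flash) {
        self.0.entry(edge).or_default().push_back(flash);
    }

    /// Driven by the per-edge 450 ms timer to shift the queue forward.
    fn pop(&mut self, edge: (usize, usize)) -> Option<Flash> {
        self.0.get_mut(&edge)?.pop_front()
    }
}
```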

Reveals the connection-recovery problem in the 25-node smoke:
most edges sit pink most of the time with brief green→red
flickers when a reconnect succeeds and dies within milliseconds
under the chain-catchup flood.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: clear EB-tx pending guard on response, respect it on retry
A two-line bookkeeping bug in LeiosState's CandidateTracker
integration caused a runaway retry loop on partial EB-tx fetch
responses:

1. on_eb_txs_received cleared the per-slot in_flight gate but never
   called candidates.finish_eb_txs_fetch — the per-Point pending
   guard stayed permanently set after the first fetch.

2. retry_eb_tx_fetch called candidates.start_eb_txs_fetch but
   ignored the returned bool, firing a retry whether the guard was
   already set or not.

Combined with main.rs's ordering — match_eb_tx_response runs BEFORE
handle_event in the event loop — every partial response from a
single peer triggered another fetch to a different peer, and so on
through the candidate pool, with each retry sitting in the
per-protocol mpsc channel until the consumer caught up.  In a
25-node cluster smoke this manifested as ~360 IngressOverflow
events per node on protocol 19 (LeiosFetch) over 60 seconds,
tearing down peer mux connections and starving the rest of the
Leios pipeline.

Fixes:

- match_eb_tx_response now calls candidates.finish_eb_txs_fetch.
  By definition the in-flight fetch is done the moment its response
  is matched; clearing here covers both the wrapper-direct flow
  (test fixtures call match_eb_tx_response without going through
  on_eb_txs_received) and the production main.rs flow (where
  match_eb_tx_response runs before handle_event).
- on_eb_txs_received also calls candidates.finish_eb_txs_fetch as
  a defence-in-depth idempotent clear — covers wrappers that drive
  the receive path without separately calling match_eb_tx_response.
- retry_eb_tx_fetch respects start_eb_txs_fetch's return value,
  skipping the fetch when a previous one is still in flight.  The
  next response will trigger the next retry attempt naturally.
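A sketch of the guard discipline after these fixes; the tracker is
reduced to a per-Point flag here, and the real CandidateTracker holds
more state:

```rust
use std::collections::BTreeSet;

type Point = u64; // stand-in for the real chain Point

/// Minimal stand-in for CandidateTracker's per-Point pending guard.
#[derive(Default)]
struct CandidateTracker {
    eb_txs_in_flight: BTreeSet<Point>,
}

impl CandidateTracker {
    /// Returns false when a fetch for this Point is already in flight.
    fn start_eb_txs_fetch(&mut self, point: Point) -> bool {
        self.eb_txs_in_flight.insert(point)
    }

    /// Called the moment a response for this Point is matched.
    fn finish_eb_txs_fetch(&mut self, point: Point) {
        self.eb_txs_in_flight.remove(&point);
    }
}

/// retry_eb_tx_fetch now honours the guard instead of ignoring the bool:
/// returns whether a new fetch should actually be issued.
fn should_issue_retry(tracker: &mut CandidateTracker, point: Point) -> bool {
    tracker.start_eb_txs_fetch(point)
}
```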

Smoke verification in the same 25-node cluster: IngressOverflow
events on protocol 19 drop from ~360 to 9 on node-0, and partial-retry
warnings drop from hundreds per EB to ~3.

182 con-rs + 478 net-rs tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-core: dedup Leios offer events per (peer, resource) in coordinator
Work Item 2 deleted the coordinator-side LeiosTracker that had been
deduping incoming offers on (slot, hash) before forwarding to the
application.  Consensus's CandidateTracker now needs to see all peers
that have offered each resource (so BlockFetchPolicy / BroadcastN /
LowestRttFirst can rank candidates), so the old "one event per
(slot, hash) regardless of peer" model couldn't be kept verbatim —
each peer's first offer for each resource has to flow through.

But each peer's notify-loop replays still-relevant EBs / EB-txs /
votes every iteration, so without any dedup the same peer's
re-announces flood network_events.  In a 5-node smoke run the
EBReceived count reached 819 events for ~22 unique EBs (~37x per EB),
saturating the network_events channel headroom, blocking the
coordinator's peer_events drain, and cascading to per-protocol mux
ingress overflow on protocol 19 (LeiosFetch).  Symptom: peers
disconnect immediately after the first EB offer they manage to
deliver, with "leios_notify: mux error: mux shut down".

Add an OfferDedup struct in the coordinator keyed on
(peer_id, resource).  Each peer's first offer of a resource still
fires; subsequent re-announces by that peer are silently absorbed.
Slot-pruned via the existing leios_dedup_window so the dedup state
stays bounded.
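A sketch of the dedup shape; the key types are stand-ins, and the
resource key in the real coordinator distinguishes EBs, EB-txs, and
votes:

```rust
use std::collections::BTreeSet;

type PeerId = u64;    // stand-ins for the real net-core types
type Slot = u64;
type Hash = [u8; 32];

/// First offer per (peer, resource) flows through; re-announces from the
/// same peer are absorbed.  Entries are slot-pruned so state stays bounded.
#[derive(Default)]
struct OfferDedup {
    seen: BTreeSet<(PeerId, Slot, Hash)>,
}

impl OfferDedup {
    /// True exactly once per (peer, resource).
    fn first_offer(&mut self, peer: PeerId, slot: Slot, hash: Hash) -> bool {
        self.seen.insert((peer, slot, hash))
    }

    /// Drop entries older than the window (driven by leios_dedup_window).
    fn prune(&mut self, keep_from: Slot) {
        self.seen.retain(|(_, slot, _)| *slot >= keep_from);
    }
}
```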

Smoke verification: same 5-node config, EBReceived dropped from 819
to 125 (~6.5x reduction) with the same tx_rate.  Voting does fire
(2 EmitVote, 60 LeiosNoVote with the expected
WrongEB/LateRBHeader/MissingTX breakdown).  Mux back-pressure on
LeiosFetch body responses persists at lower frequency — that's a
separate per-protocol-channel-size concern that pre-dates this work.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: live RTT oracle for fetch policies, plumbed through net-rs
Replaces the UniformRtt(0) default that PraosState / LeiosState fall
back to with a concurrent shared map populated by the coordinator's
KeepAlive measurements.  Without this, LowestRttFirst was effectively
picking the lowest-PeerId candidate rather than the lowest-RTT one — a
quiet regression vs the pre-policy in-coordinator routing.

Layering:

  con-rs::fetch::PeerRttCache
      Arc<RwLock<BTreeMap<PeerId, Duration>>>; impl PeerRtt.
      Cheap Clone (shares Arc), so the same cache backs both the
      writer and the readers.

  net-core CoordinatorConfig.peer_rtt_observer
      Option<Arc<dyn Fn(PeerId, Option<Duration>) + Send + Sync>>.
      Opaque callback — no con-rs types crossing the API.  Coordinator
      invokes it from LatencyMeasured (Some) and from peer-removal
      (None).  Keeps net-core's con-rs surface minimal: still just
      Point, Tip, PeerId re-exports.

  net-node main.rs
      Constructs PeerRttCache, wraps it as the observer closure for
      the coordinator, and hands the cache to Consensus::new which
      threads it through new set_rtt setters on PraosConsensus and
      LeiosConsensus.  Existing test paths use the default
      UniformRtt(0) and don't change shape.
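A condensed sketch of the two ends of that plumbing; signatures are
simplified and the PeerRtt trait impl is elided:

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};
use std::time::Duration;

type PeerId = u64; // stand-in for the shared PeerId re-export

/// con-rs side: cheaply cloneable shared map; fetch policies read from it.
#[derive(Clone, Default)]
struct PeerRttCache(Arc<RwLock<BTreeMap<PeerId, Duration>>>);

impl PeerRttCache {
    fn rtt(&self, peer: PeerId) -> Option<Duration> {
        self.0.read().unwrap().get(&peer).copied()
    }

    /// Writer path: Some(rtt) from LatencyMeasured, None on peer removal.
    fn observe(&self, peer: PeerId, rtt: Option<Duration>) {
        let mut map = self.0.write().unwrap();
        match rtt {
            Some(d) => { map.insert(peer, d); }
            None => { map.remove(&peer); }
        }
    }
}

/// net-node side: the opaque callback handed to the coordinator, so no
/// con-rs types cross the net-core API.
fn make_observer(
    cache: PeerRttCache,
) -> Arc<dyn Fn(PeerId, Option<Duration>) + Send + Sync> {
    Arc::new(move |peer, rtt| cache.observe(peer, rtt))
}
```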

Three CoordinatorConfig literal-construction sites updated with the
new field set to None (net-cli's serve / multi_follow, net-node's
network.rs).

Smoke verification: 5-node net-cluster (30s) and 2-node manual
(slot_duration=200ms, ~25s) both run cleanly: blocks produced and
validated, TipAdvanced flowing, fork resolution working, no panics
or errors.  Leios-side activity (EBs, votes, quorum) requires
non-empty mempool overflow and longer runs, so it is deferred to a
fuller exercise.

182 con-rs + 478 net-rs tests pass; clippy clean both crates.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: lift mempool into a sans-IO state machine with effect/on_xx surface
Pulls net-node/src/mempool.rs's queue, capacity-bounded eviction,
per-peer advertised set, and tx-id ↔ body lookup into
con-rs/src/mempool.rs as MempoolState — a peer of PraosState and
LeiosState.  Validation crosses the boundary in the same shape as
Praos block apply and Leios EB / vote validation:

  on_tx_received(tx_id, body)    →  emit ValidateTx
                                    (wrapper validates)
  on_tx_validated(tx_id, size)   →  admit + caller pulls advertise list
  on_tx_validation_failed(.., r) →  emit TxRejected(ValidationFailed)

No MempoolValidator trait — the consistent effect/on_xx pattern is
what makes the eventual Acropolis port a wrapper swap rather than a
trait migration.  admit_validated bypasses the dance for locally-
generated txs the wrapper has already validated.

MempoolEffect::TxRejected covers all three drop reasons (QueueFull on
oldest-evict, ValidationFailed, AlreadyKnown on duplicate arrival)
so consumers get parity telemetry.  No AdvertiseTx effect: the pull-
based peek_unannounced_for_peer query suffices for both net-rs's
TxSubmission server and the eventual sim-rs announce loop.  All
internal state is BTreeMap / BTreeSet for deterministic iteration.
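A sketch of the effect surface this describes; the types are
illustrative stand-ins for the real con-rs definitions:

```rust
type TxId = [u8; 32]; // placeholders for the real con-rs types
type TxBody = Vec<u8>;

/// The three drop reasons named above.
enum TxRejectReason {
    QueueFull,        // oldest entry evicted to make room
    ValidationFailed, // wrapper reported a validation error
    AlreadyKnown,     // duplicate arrival
}

/// Effects the wrapper executes on behalf of the sans-IO state machine.
enum MempoolEffect {
    ValidateTx { tx_id: TxId, body: TxBody },
    TxRejected { tx_id: TxId, reason: TxRejectReason },
}

// Typical dance as driven by a wrapper (method names as described above,
// signatures illustrative):
//
//   let effects = mempool.on_tx_received(tx_id, body);  // -> [ValidateTx]
//   // ... wrapper validates the body ...
//   mempool.on_tx_validated(tx_id, size);               // admit
//   let txs = mempool.peek_unannounced_for_peer(peer);  // pull-based announce
```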

Net-rs side:

  - net-node/src/mempool.rs reduced from ~650 to ~330 lines.  Mempool
    is now a one-field newtype over MempoolState; public methods
    translate net_core::peer::PeerId ↔ con_rs::peer::PeerId and
    net_core::PendingTx ↔ con_rs::mempool::PendingTx at the boundary.
  - spawn_tx_generator and spawn_tx_validator stay (they're I/O-side
    actors).  Validator currently uses admit_validated to bypass the
    effect path; the dance is exercised via direct con-rs unit tests
    until a real validator service lands.
  - Algorithmic mempool tests removed (lived in net-node's old test
    block, now covered by con-rs's 23 tests on MempoolState directly).
    Wrapper-specific tests (translation, fake-tx hashing, exp_sample,
    validator integration) kept.

Verification: 178 con-rs tests (+23 new on MempoolState) and 478
net-rs tests pass; both crates clippy-clean under -D warnings.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: clear pre-existing clippy lints
Four lints were firing under cargo clippy --all-targets -- -D warnings:

- net-core/src/store/leios_store.rs: manual_is_multiple_of
  → version.is_multiple_of(stats_log_interval)

- net-node/src/consensus/leios/mod.rs: cloned_ref_to_slice_refs
  → std::slice::from_ref(&body1)

- net-node/src/consensus/leios/mod.rs: dead_code on
  election_phase / election_count test helpers, orphaned by the
  duplicate-test cull in 4d6588ee7

- net-node/src/mempool.rs: dead_code on Mempool::peek_up_to
  (only ever called from its own self-tests) and the unused
  make_tx test helper

The first two are mechanical clippy fixes; the latter two are
genuine dead code, deleted along with their self-tests.  Whole-
workspace cargo clippy --all-targets -- -D warnings is now clean,
so future regressions surface immediately instead of being
shrugged off as "pre-existing."

493 net-rs tests pass (down 2 from removed peek_up_to self-tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: cull duplicate consensus tests covered by con-rs
After the con-rs extraction, 17 of the ~31 leios consensus tests in
net-node were exercising pure state-machine behaviour through a thin
async wrapper — every shape change in con-rs (e.g., adding peer_id to
LeiosBlockOffered, adding peers: Vec<PeerId> to FetchLeiosBlock)
required a parallel edit here for the same assertion.

Deleted (pure pipeline / election / quorum / EB-tx-match logic, all
covered in con-rs's own #[cfg(test)] blocks):

  eb_creates_election
  election_advances_to_voting
  election_advances_through_all_phases
  duplicate_eb_deduped
  old_election_pruned
  multiple_ebs_concurrent
  eb_arriving_late_starts_in_correct_phase
  expired_eb_not_tracked
  no_vote_during_equivocation_check
  duplicate_block_offer_dedup
  duplicate_voter_not_counted
  quorum_reached_after_enough_voters
  pruned_election_drops_eb_manifests
  match_eb_tx_response_keeps_only_manifest_hashes_in_order
  match_eb_tx_response_with_unknown_manifest_passes_through
  match_eb_tx_response_pending_bitmap_cleared_after_match
  retry_eb_tx_fetch_with_empty_bitmap_is_noop

Kept: tests that exercise wrapper translation specifically — effect →
NetworkCommand dispatch, vote-body construction, validator
submissions, EmitTelemetry → NodeEvent mapping, mempool-aware bitmap
computation, and the cross-state-machine plumbing (validated-eb →
election → vote-validated → quorum).  These have no con-rs equivalent.

net-rs goes from 512 to 495 tests; con-rs's 155 tests still own the
deleted ground.  cargo test is green; clippy output is unchanged from
the pre-existing manual_is_multiple_of lint in leios_store.rs.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>