[Hourly commit-activity chart omitted]
4,264 commits this week (Apr 30, 2026 - May 07, 2026)
net-rs: align test cluster config with sim-rs reference values
Two config changes to make the smoke cluster behave like the sim-rs
Linear-Leios reference scenarios rather than a stress test.

1. `[validation]` in mainnet.toml: switch RB-body validation cost
   from a flat `1000.0 ms` constant to sim-rs's
   `0.3539 ms + 2.151e-05 ms/byte` formula, and surface
   `tx_validation_ms = 0.6201` (matches sim-rs's amortised-Plutus
   `tx-validation-cpu-time-ms`).  At 90 KiB max RB body the new
   constant is ~2.4 ms, so 3-hop propagation now finishes well
   under the 3-slot pre-voting buffer, whereas the old 1 s/hop
   pushed RB adoption past the voting window and forced every
   non-producer voter into `WrongEB`.

2. `rb_generation_probability` in sample-cluster.toml: revert from
   `0.2` (cluster-wide ≈ 1 RB / 5 s) to the base-config `0.05`
   (≈ 1 RB / 20 s, mainnet-like).  The high rate packed multiple
   RBs into each EB's voting window, so the chain tip typically
   moved past the EB-referencing RB before quorum could gather and
   no EB was ever certified.  (Both knobs are sketched below.)
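
The sketch, for reference.  The constant and function names are
illustrative, not actual net-rs identifiers; only the numeric values
come from the configs, and 1 s/slot is assumed:

    // Hypothetical mirror of the new [validation] values.
    const RB_VALIDATION_CONST_MS: f64 = 0.3539;
    const RB_VALIDATION_MS_PER_BYTE: f64 = 2.151e-05;

    /// RB-body validation cost under the sim-rs formula.
    fn rb_body_validation_ms(body_bytes: u64) -> f64 {
        RB_VALIDATION_CONST_MS + RB_VALIDATION_MS_PER_BYTE * body_bytes as f64
    }

    fn main() {
        // 90 KiB max RB body: 0.3539 + 2.151e-05 * 92_160 ≈ 2.34 ms,
        // so three hops cost ~7 ms against the 3-slot (3 s) buffer.
        println!("{:.2} ms/hop", rb_body_validation_ms(90 * 1024));
        // rb_generation_probability is per slot, so the mean RB
        // interval is 1/p slots: 0.05 => 20 (mainnet-like), 0.2 => 5.
        println!("{} slots/RB", 1.0_f64 / 0.05);
    }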

Together these match sim-rs's empirical 40%-certification working
point and let the cluster reach `RbCertifiedEb` end-to-end.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: retry voting on transient predicate failures + telemetry
Two changes to the Leios voting flow.

1. `decide_vote` previously called `mark_voted` on *any* predicate
   failure, so a voter that hit `WrongEB`, `LateRBHeader`, or
   `MissingTX` on the first slot of the Voting phase had no chance
   to retry in the next slot once its chain tip caught up or the
   missing TX arrived.  In the 25-node cluster this gave ~1 vote
   per EB (the producer's), nowhere near quorum.

   Only `LateEB` is genuinely permanent — `eb_seen_slot` is fixed at
   receipt, once-late is always-late.  The other three are transient.
   Skip `mark_voted` on the transient reasons so `EligibleToVote`
   re-fires every slot of the Voting window, as sketched after this
   list.  `EmitVote` still calls `mark_voted` so a successful vote
   happens at most once per EB.

   Effect on the smoke test: best-EB vote count went from 1 to 18
   out of 25, which crosses the quorum threshold and produces the
   first end-to-end `RbCertifiedEb` events.

2. `emit_vote` only logged the bundle at `info!` level; the
   `VTBundleGenerated` telemetry variant defined in `telemetry.rs`
   was never constructed.  Push it onto `pending_telemetry`
   alongside `LeiosNoVote` so the UI can show producer flashes.
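
The sketch below uses illustrative stand-in types; only the reason
names, `mark_voted`, and the retry semantics come from the actual
change:

    use std::collections::HashSet;

    enum NoVoteReason {
        LateEB,       // permanent: eb_seen_slot is fixed at receipt
        WrongEB,      // transient: chain tip may catch up next slot
        LateRBHeader, // transient
        MissingTX,    // transient: the missing TX may still arrive
    }

    type EbId = u64;

    #[derive(Default)]
    struct VoterState {
        voted: HashSet<EbId>, // EBs for which mark_voted has run
    }

    impl VoterState {
        fn mark_voted(&mut self, eb: EbId) {
            self.voted.insert(eb);
        }

        /// One decide_vote pass; EligibleToVote re-invokes this every
        /// slot of the Voting window until mark_voted has run.
        fn decide_vote(&mut self, eb: EbId, outcome: Result<(), NoVoteReason>) {
            if self.voted.contains(&eb) {
                return;
            }
            match outcome {
                Ok(()) => {
                    // emit_vote fires here; EmitVote still calls
                    // mark_voted, so a vote happens once per EB.
                    self.mark_voted(eb);
                }
                Err(NoVoteReason::LateEB) => {
                    // Once-late is always-late: never retry.
                    self.mark_voted(eb);
                }
                Err(_transient) => {
                    // WrongEB / LateRBHeader / MissingTX: skip
                    // mark_voted so the predicate re-fires next slot.
                }
            }
        }
    }

    fn main() {
        let mut v = VoterState::default();
        v.decide_vote(1, Err(NoVoteReason::WrongEB)); // no mark_voted
        v.decide_vote(1, Ok(()));                     // retry succeeds
        assert!(v.voted.contains(&1));
    }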

Updated the two `con-rs` unit tests that asserted the
"`mark_voted` after WrongEB" behaviour, and added a positive test
for the new "transient reasons re-fire each slot" semantics.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-core: scale mux ingress for Leios bulk fetches
Three related fixes for protocol 19 (LeiosFetch).

1. Per-protocol ingress channel capacity was a single
   `egress_queue_size: 16` value used for every protocol.  At
   25-node-cluster scale, an EB-tx response can fragment into ~256
   segments and overshoot a 16-deep mpsc instantly, killing the
   mux.  Split into three named tiers in `peer_task.rs`:
     - `HIGH_VOLUME_QUEUE_SIZE   = 256` (chainsync, blockfetch,
       txsubmission, leios_notify) — kilobyte-sized messages
     - `BULK_FETCH_QUEUE_SIZE    = 4096` (leios_fetch only) — sized
       to hold a worst-case 12 KiB-segmented multi-MB delivery plus
       headroom for the next request
     - `LOW_VOLUME_QUEUE_SIZE    = 4` (keepalive, peersharing) — slow
       fixed-size traffic

2. `MuxError::IngressOverflow` was reused for both byte-budget overflow
   and channel-full: the resulting message read
   "{queued bytes} exceeds limit {byte budget}" even when the byte
   budget was *not* exceeded, because the channel filled first.  Add
   a sibling `IngressChannelFull { queued, capacity }` variant and
   use it on the `try_send` Full path so the log accurately points
   at the channel as the bottleneck.

3. `LeiosFetch` `INGRESS_LIMIT` and `SIZE_LIMIT_LARGE` raised from
   16 MiB to 24 MiB.  Without 50% headroom over `MAX_BLOCK_SIZE`,
   the demuxer's per-state buffer cap was hit by the very last SDU
   of a max-size delivery (the codec hadn't yet drained earlier
   segments), tearing the connection down at the end of every
   full-size EB-tx response.  Per-message safety is still enforced
   inside the CBOR codec via the unchanged `MAX_BLOCK_SIZE`,
   `MAX_TRANSACTIONS`, `MAX_VOTES`, `MAX_TRANSACTION_SIZE` caps.
   (The tiers and the new error path are sketched after this list.)
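
The sketch assumes a tokio mpsc (matching the "16-deep mpsc" above);
apart from the tier constants and the two error variants, the names
are illustrative:

    use tokio::sync::mpsc::{self, error::TrySendError};

    // Tier sizes from peer_task.rs.
    const HIGH_VOLUME_QUEUE_SIZE: usize = 256;
    const BULK_FETCH_QUEUE_SIZE: usize = 4096;
    const LOW_VOLUME_QUEUE_SIZE: usize = 4;

    #[derive(Debug)]
    enum MuxError {
        IngressOverflow { queued: usize, limit: usize },
        IngressChannelFull { queued: usize, capacity: usize },
    }

    fn forward_segment(
        tx: &mpsc::Sender<Vec<u8>>,
        segment: Vec<u8>,
        capacity: usize,
    ) -> Result<(), MuxError> {
        match tx.try_send(segment) {
            Ok(()) => Ok(()),
            // A full channel now names itself as the bottleneck
            // instead of borrowing the byte-budget message.
            Err(TrySendError::Full(_)) => {
                Err(MuxError::IngressChannelFull { queued: capacity, capacity })
            }
            Err(TrySendError::Closed(_)) => unreachable!("rx alive"),
        }
    }

    fn main() {
        // A low-volume channel overflows on the 5th segment and now
        // reports IngressChannelFull, not IngressOverflow.
        let (tx, _rx) = mpsc::channel::<Vec<u8>>(LOW_VOLUME_QUEUE_SIZE);
        for i in 0..=LOW_VOLUME_QUEUE_SIZE {
            if let Err(e) = forward_segment(&tx, vec![i as u8], LOW_VOLUME_QUEUE_SIZE) {
                println!("segment {i}: {e:?}");
            }
        }
    }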

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
con-rs: cooldown peer on block-fetch failure, route via fetch policy
When a `BlockFetch` request fails, `on_block_fetch_failed` previously
removed the in-flight marker and immediately re-ran chain selection.
The same peer's announced fragment still pointed at the missing point,
so `select_chain_once` reached the same `WaitingForBlocks { peer_id }`
decision and re-issued the fetch from the same peer in microseconds.
Under any persistent fetch failure (e.g. a peer that answers every
request with `NoBlocks`) this busy-loops at hundreds of thousands
of iterations per second, and disk I/O from the resulting log spam
can starve slot ticks and cascade into cluster-wide fork
divergence.

Add `BLOCK_FETCH_COOLDOWN` (2 s) and a `block_fetch_cooldown` map on
`PraosState`.  `on_block_fetch_failed` now takes the responsible
`PeerId`, inserts a cooldown entry, and `evaluate_and_fetch_internal`
merges those peers into its `skip` set so the fetch policy picks a
different candidate.

`NetworkEvent::BlockFetchFailed` now carries `peer_id: Option<PeerId>`
so the wrapper can pass it through.  `Some(p)` is the normal "this
peer failed" case; `None` means the coordinator never reached any
peer for the requested fragment, so there is no one to penalise and
the wrapper skips the cooldown call.
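
For reference, a sketch of the cooldown plumbing.  `PraosState`,
`block_fetch_cooldown`, `on_block_fetch_failed`, and
`BLOCK_FETCH_COOLDOWN` are the names above; the `PeerId` alias and
the skip-set helper are illustrative:

    use std::collections::{HashMap, HashSet};
    use std::time::{Duration, Instant};

    const BLOCK_FETCH_COOLDOWN: Duration = Duration::from_secs(2);

    type PeerId = u64;

    #[derive(Default)]
    struct PraosState {
        // peer -> instant at which its cooldown expires
        block_fetch_cooldown: HashMap<PeerId, Instant>,
    }

    impl PraosState {
        /// Called by the wrapper on BlockFetchFailed with Some(peer);
        /// on None there is no one to penalise and this is skipped.
        fn on_block_fetch_failed(&mut self, peer: PeerId) {
            self.block_fetch_cooldown
                .insert(peer, Instant::now() + BLOCK_FETCH_COOLDOWN);
        }

        /// Peers still cooling down; evaluate_and_fetch_internal
        /// merges these into the fetch policy's skip set.
        fn cooling_peers(&mut self, now: Instant) -> HashSet<PeerId> {
            self.block_fetch_cooldown.retain(|_, expiry| *expiry > now);
            self.block_fetch_cooldown.keys().copied().collect()
        }
    }

    fn main() {
        let mut st = PraosState::default();
        st.on_block_fetch_failed(7); // e.g. peer 7 answered NoBlocks
        assert!(st.cooling_peers(Instant::now()).contains(&7));
    }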

Demote `select_chain: fetching missing blocks`,
`fetching missing chain blocks`, and the failure log to `debug!`;
they fired at info level on every fetch decision and were the bulk
of the disk-I/O storm during the 25-node smoke test.

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Consolidate table kinds and re-index LedgerTables by blk
Reshape the UTxO-HD table abstraction so the @mk@ parameter is a
single-argument @TableKind@ (indexed by @blk@) rather than a
two-argument @MapKind@ over @(TxIn blk)@ and @(TxOut blk)@. The
concrete table types (`EmptyMK`, `KeysMK`, `ValuesMK`, `DiffMK`)
are renamed to `NoTables`, `Keys`, `Values`, `Diffs` and now take
@blk@ directly.
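
For reference, a sketch of the shape change (illustrative only; the
real definitions, class plumbing, and kind signatures live in the
new module):

    {-# LANGUAGE TypeFamilies #-}
    module BasicTypesSketch where

    import Data.Kind (Type)
    import Data.Map (Map)
    import Data.Set (Set)

    -- Stand-ins for the per-block key/value families.
    type family TxIn  blk :: Type
    type family TxOut blk :: Type

    -- Before: mk had kind Type -> Type -> Type (a MapKind) and was
    -- applied as @mk (TxIn blk) (TxOut blk)@, e.g.
    --   newtype ValuesMK k v = ValuesMK (Map k v)

    -- After: mk has the single-argument kind, indexed by blk directly.
    type TableKind = Type -> Type

    data    NoTables blk = NoTables
    newtype Keys     blk = Keys   (Set (TxIn blk))
    newtype Values   blk = Values (Map (TxIn blk) (TxOut blk))
    -- (Diffs blk wraps a diff of the same map; omitted here.)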

Collapse `Ouroboros.Consensus.Ledger.Tables{,.Basics,.Combinators,
.Kinds,.MapKind,.Utils}` into a single new
`Ouroboros.Consensus.Ledger.BasicTypes` module. The old modules are
left on disk but commented out of the cabal file.

Add `empty`, `map`, and `mapKeys` to
`Ouroboros.Consensus.Ledger.Tables.Diff`, and a constant bifunctor
`K2` in `Ouroboros.Consensus.Util`, both used by the new module.
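
Plausible shapes for these additions (the @Delta@ payload below is a
stand-in for the module's real insert/delete deltas, and @K2@'s
exact kind may be more general than shown):

    module DiffSketch where

    import Prelude hiding (map)
    import qualified Data.Map.Strict as Map

    -- Constant bifunctor: keeps an @a@, ignores the other arguments.
    newtype K2 a b c = K2 { unK2 :: a }

    newtype Delta v  = Delta v
    newtype Diff k v = Diff (Map.Map k (Delta v))

    empty :: Diff k v
    empty = Diff Map.empty

    map :: (v -> v') -> Diff k v -> Diff k v'
    map f (Diff m) = Diff (Map.map (\(Delta v) -> Delta (f v)) m)

    mapKeys :: Ord k' => (k -> k') -> Diff k v -> Diff k' v
    mapKeys f (Diff m) = Diff (Map.mapKeys f m)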

Update all 130+ call sites to the new names and shape.