Input Output / ouroboros-leios
[Hourly commit-activity chart, Apr 23-Apr 30: mostly idle, with evening bursts of 18 commits on Apr 24 (8-9 PM) and 25 commits each in Apr 25's 7-8 PM and 8-9 PM hours.]
129 commits this week (Apr 23, 2026 - Apr 30, 2026)
net-rs: switch net-node to jemalloc allocator
glibc's allocator holds freed pages on its freelist and returns them to the OS lazily, inflating RSS under heavy small-allocation churn. At 50 TPS the cluster grew to ~10 GB with glibc but ~7 GB with jemalloc — a ~30% reduction with no other change. tikv-jemallocator is the standard Rust binding and is already used by other Cardano-adjacent tools. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
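The commit body doesn't show the diff, but the standard tikv-jemallocator wiring is a one-line global-allocator swap (the version pin below is illustrative, not taken from the repo):

```rust
// Cargo.toml (version illustrative):
//   [dependencies]
//   tikv-jemallocator = "0.6"

// Declared once, typically at the top of main.rs; every heap
// allocation in the binary then goes through jemalloc.
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
```

No other code change is needed, which is why the RSS comparison above isolates the allocator's effect.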
net-rs: make LeiosStore stats logging configurable, default off
Add a stats_log_interval knob to LeiosStore: when non-zero, every Nth bump_version logs the current map sizes (blocks/block_txs/eb_tx_hashes/votes/notifications/max_slot/cutoff). Default 0 disables. Plumb through CoordinatorConfig.leios_store_stats_log_interval and a matching net-node config field so it can be enabled from TOML or --node-set transactions.leios_store_stats_log_interval=50. Useful for memory-leak diagnostics — confirmed slot-window retention is working as designed (entries evicted past max_slot - retention). Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
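A sketch of the every-Nth-bump gating described above. The struct is heavily simplified (a single size stands in for the real maps), and the bool return exists only so the behavior is observable; neither is claimed to match the actual implementation:

```rust
/// Simplified stand-in for LeiosStore's stats gating.
struct LeiosStore {
    version: u64,
    stats_log_interval: u64, // 0 disables stats logging entirely
    blocks: usize,           // stand-in for the real maps' sizes
}

impl LeiosStore {
    /// Bump the version; log map sizes on every Nth bump when enabled.
    /// Returns true when a stats line was emitted (for observability).
    fn bump_version(&mut self) -> bool {
        self.version += 1;
        if self.stats_log_interval != 0 && self.version % self.stats_log_interval == 0 {
            println!("leios-store stats: version={} blocks={}", self.version, self.blocks);
            return true;
        }
        false
    }
}
```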
net-rs: bound LeiosStore by slot-window retention
LeiosStore's votes / eb_tx_hashes / blocks / block_txs maps had no slot-based eviction — receivers accumulated every vote and every EB manifest forever. With actual EB and voting traffic, that's ~600 vote entries per EB committee × every EB seen = ~70 MB/s of unbounded growth on a 25-node cluster. Pre-codec-fix, broken tx flow meant few EBs / votes flowed and the leak was hidden. Add max_slot + retention_slots to LeiosStoreInner. Each inject_* updates max_slot; bump_version evicts entries with slot < max_slot - retention_slots. Default retention is 100 slots, sized for the 13-slot Linear Leios pipeline plus headroom — far smaller than the LeiosTracker dedup window (1000), since the tracker stores tiny offer IDs while this store holds full bodies. new_with_retention exposes the knob for explicit configuration. Notifications (small fixed-size enum entries used by LeiosNotify with absolute read indices) are intentionally left growing for now; pruning them requires reworking the read_index protocol. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
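The eviction rule above (entries with slot < max_slot - retention_slots dropped on bump_version) can be sketched in a few lines. Struct and field names here are illustrative stand-ins for LeiosStoreInner, with one map instead of four:

```rust
use std::collections::HashMap;

/// Simplified stand-in for LeiosStoreInner's slot-window retention.
struct SlotWindowStore {
    votes: HashMap<(u64, [u8; 32]), Vec<u8>>, // (slot, hash) -> payload
    max_slot: u64,
    retention_slots: u64,
}

impl SlotWindowStore {
    fn new(retention_slots: u64) -> Self {
        Self { votes: HashMap::new(), max_slot: 0, retention_slots }
    }

    /// Every inject_* advances the high-water slot.
    fn inject(&mut self, slot: u64, hash: [u8; 32], payload: Vec<u8>) {
        self.max_slot = self.max_slot.max(slot);
        self.votes.insert((slot, hash), payload);
    }

    /// Called from bump_version: drop everything older than the window.
    fn evict(&mut self) {
        let cutoff = self.max_slot.saturating_sub(self.retention_slots);
        self.votes.retain(|&(slot, _), _| slot >= cutoff);
    }
}
```

With the default retention of 100 slots, an entry injected at slot 1 is evicted as soon as max_slot passes 101.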
net-rs: track per-peer announced txs in mempool, stop re-announcing
The TxsRequested handler in main.rs called mempool.peek_up_to (non-consuming) and returned the same head-of-mempool txs every cycle. With the codec fix unblocking real tx flow, this hot-loop re-cloned and re-shipped the same txs to each peer hundreds of times per second — ~115 MB/s of body memcpy per node. Move per-peer state into the mempool: peek_unannounced_for_peer marks each tx as advertised to the given peer; subsequent calls skip those ids. Push/drain_up_to/drain_all/capacity-eviction prune the affected ids from every peer set, bounding total state by mempool size. forget_peer drops the entry on disconnect. The handler in main.rs becomes a thin call into the mempool. New unit tests cover the per-peer independence, lazy pruning on tx removal, and forget_peer. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
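A minimal sketch of the per-peer announcement tracking, with illustrative types (the real mempool keys by tx hash, enforces capacity, and prunes announced ids when txs are drained or evicted):

```rust
use std::collections::{HashMap, HashSet};

type TxId = u64; // illustrative; the real store keys by hash

/// Simplified mempool tracking which txs each peer has been offered.
struct Mempool {
    txs: Vec<(TxId, Vec<u8>)>,
    announced: HashMap<String, HashSet<TxId>>, // peer -> already-advertised ids
}

impl Mempool {
    /// Return up to `max` tx ids this peer hasn't seen, marking them seen.
    fn peek_unannounced_for_peer(&mut self, peer: &str, max: usize) -> Vec<TxId> {
        let seen = self.announced.entry(peer.to_string()).or_default();
        let picked: Vec<TxId> = self
            .txs
            .iter()
            .map(|(id, _)| *id)
            .filter(|id| !seen.contains(id))
            .take(max)
            .collect();
        seen.extend(picked.iter().copied());
        picked
    }

    /// On disconnect, drop the peer's state entirely.
    fn forget_peer(&mut self, peer: &str) {
        self.announced.remove(peer);
    }
}
```

Repeated calls for the same peer walk forward through the mempool instead of re-serving the head, which is what eliminates the re-announce hot loop.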
filip(fix): add bg colour to iframe container to account for some screen size inconsistencies
net-rs: make telemetry sinks async to propagate backpressure
HttpEventSink::emit and HttpStatsSink::emit each spawned a fresh tokio task per event, with no bound on in-flight tasks. Under heavy event load (e.g. once the TxSubmission codec fix made tx propagation actually work), spawn rate exceeded drain rate and tasks accumulated unboundedly, each pinning a JSON payload, a reqwest::Client clone, and the in-flight POST future. Switch the EventSink/StatsSink traits to async (#[async_trait]) and have HTTP sinks .await the POST inline. record() and emit_stats() are now async; record_network_event() too. A slow aggregator now backpressures the caller chain naturally instead of leaking spawned tasks. New regression test (http_event_sink_does_not_spawn_per_emit) sanity-checks that emit doesn't leave background tasks behind. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Add Mininet LeiosFetch logic test bed to targeted-investigations
Fix message detail on the edge stats
net-rs: fix TxSubmission codec to wrap raw tx_id and tx body bytes
The TxId/TxBody encoder was writing self.0 raw on the wire, and the decoder used d.skip() to recover them — both assumed self.0 was already CBOR-encoded. Tests passed because they pre-encoded with e.bytes(...). Production constructs TxId(blake2b256(body)) and TxBody(raw_body) with the actual hash/body bytes, so the wire format was garbage: receivers decoded random bytes, looked them up in pending_bodies (no match), and sent empty MsgReplyTxs. Eventually a CBOR-decode error tore the protocol down — symptom: zero TransactionReceived events, one TxsRequested per peer, "txsubmission: mux shut down" cascade. Codec now wraps with e.bytes() / d.bytes(); TxId and TxBody hold raw bytes throughout. Updated test helpers to match. Two new round-trip tests (raw_hash_tx_id, raw_tx_body) cover the production form. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
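To make the framing bug concrete, here is a hand-rolled sketch of CBOR byte-string wrapping (major type 2). The real codec uses minicbor's e.bytes() / d.bytes(); this toy version also omits lengths above 64 KiB:

```rust
/// Wrap raw bytes as a definite-length CBOR byte string (major type 2).
/// Hand-rolled for illustration; payloads > 64 KiB are not handled.
fn encode_cbor_bytes(raw: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    match raw.len() {
        n if n < 24 => out.push(0x40 | n as u8),           // length in the head
        n if n < 256 => { out.push(0x58); out.push(n as u8); } // 1-byte length
        n => { out.push(0x59); out.extend((n as u16).to_be_bytes()); } // 2-byte
    }
    out.extend_from_slice(raw);
    out
}

/// Inverse: read the byte-string header, return the payload.
fn decode_cbor_bytes(wire: &[u8]) -> Option<Vec<u8>> {
    let (len, off) = match *wire.first()? {
        b @ 0x40..=0x57 => ((b - 0x40) as usize, 1),
        0x58 => (*wire.get(1)? as usize, 2),
        0x59 => (u16::from_be_bytes([*wire.get(1)?, *wire.get(2)?]) as usize, 3),
        _ => return None, // not a byte string: the "garbage on the wire" case
    };
    wire.get(off..off + len).map(|b| b.to_vec())
}
```

The old encoder effectively skipped encode_cbor_bytes and wrote the raw hash directly; feeding those unframed bytes to the decoder fails exactly as the commit describes.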
net-rs: cluster smoke surfaced two bitmap pipeline bugs
1. make_fake_tx generated a random tx_id independent of the body, but tx_from_received_bytes (used on receivers) and the EB manifest convention both treat tx_id as blake2b256(body). Locally-generated txs landed in producers' manifests with random hashes, so receivers could never match the bodies they received back to manifest indices. make_fake_tx now derives tx_id via tx_from_received_bytes.
2. LeiosStore::record_eb_manifest didn't push a BlockTxsOffer notification, so receivers' LeiosNotify never advertised tx availability for EBs they had cached the manifest for. The "offer set" was producer-only; if the producer was unreachable or the request errored, the retry path had zero alternatives. record now queues the offer so epidemic flooding extends past the producer.

Regression test: make_fake_tx_id_equals_body_hash asserts the locally-generated tx_id equals what tx_from_received_bytes would compute, catching any future divergence. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
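The tx_id invariant behind bug 1 can be sketched as follows, with std's DefaultHasher standing in for blake2b-256 and deliberately simplified types:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for blake2b-256; the real code hashes the body with blake2b.
fn body_hash(body: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    body.hash(&mut h);
    h.finish()
}

struct Tx { id: u64, body: Vec<u8> }

/// Receiver-side constructor: the id is always derived from the body.
fn tx_from_received_bytes(body: Vec<u8>) -> Tx {
    Tx { id: body_hash(&body), body }
}

/// The fix: the fake-tx generator reuses the same derivation instead of
/// inventing an independent random id.
fn make_fake_tx(body: Vec<u8>) -> Tx {
    tx_from_received_bytes(body)
}
```

With both constructors sharing one derivation, any future divergence is caught by comparing the two, which is exactly what make_fake_tx_id_equals_body_hash asserts.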
net-rs: update leios-consensus.md for tx bitmap end-to-end
Marks the tx-bitmap initiative complete: helpers, server-side filter with producer/receiver paths, mempool-backed TxBodyResolver, manifest-cache + mempool-diff bitmap construction, hash-verified responses, and partial-response retry across peers via leios_tracker attempt exclusion. Test count updated to 558. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: retry partial EB tx responses on a different peer
Closes the partial-response loop. When a server returns fewer bodies than the bitmap requested, consensus identifies the still-missing manifest indices and re-issues FetchLeiosBlockTxs; the coordinator's leios_tracker excludes peers it has already tried, so the retry naturally lands on a different candidate. The cycle terminates when the request is fully satisfied or the offering peer set is exhausted.

leios_tracker: txs_attempts: HashMap<(slot, hash), HashSet<PeerId>> tracks peers asked for each EB's txs across the retry sequence. pick_txs_fetch_peer filters candidates by this set and records its pick. update_slot prunes alongside the other slot-keyed sets; remove_peer drops disconnected peers.

LeiosConsensus: EbTxMatchOutcome gains remaining_bitmap. After matching, the still-missing indices are stored back into pending_eb_tx_fetches so the next response is verified against the exact remaining set. New retry_eb_tx_fetch issues the follow-up command; LeiosBlockTxsReceived drops the in-flight gate so the retry path is unblocked.

Tests: tracker exclusion (3), partial-then-retry two-stage flow, empty-bitmap retry is a no-op. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
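The tracker's attempt exclusion reduces to a per-EB set of already-tried peers. A sketch with simplified types (the real tracker also prunes by slot via update_slot and on disconnect via remove_peer, per the message above):

```rust
use std::collections::{HashMap, HashSet};

type PeerId = u32;
type EbKey = (u64, [u8; 32]); // (slot, hash)

/// Simplified stand-in for leios_tracker's txs_attempts bookkeeping.
#[derive(Default)]
struct LeiosTracker {
    txs_attempts: HashMap<EbKey, HashSet<PeerId>>,
}

impl LeiosTracker {
    /// Pick an offering peer not yet tried for this EB's txs, and
    /// record the pick so the next retry lands elsewhere.
    fn pick_txs_fetch_peer(&mut self, eb: EbKey, offering: &[PeerId]) -> Option<PeerId> {
        let tried = self.txs_attempts.entry(eb).or_default();
        let pick = offering.iter().copied().find(|p| !tried.contains(p))?;
        tried.insert(pick);
        Some(pick)
    }
}
```

Returning None once the offering set is exhausted is what terminates the retry cycle.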
net-rs: hash-verify EB tx responses against the manifest
The wire format gives no per-body index in MsgLeiosBlockTxs, so a partial or out-of-order server response can't be interpreted by position alone. Each response body is now hashed with blake2b-256 and looked up in the cached manifest, restricted to the indices we actually requested. LeiosConsensus tracks the pending request bitmap per EB and on match_eb_tx_response returns the verified bodies in manifest order along with a requested count. main.rs forwards only verified bodies to the validator and warns on partial responses, setting up the next stage where the missing indices can be re-fetched from another peer. Three new tests: keep-manifest-hashes-only (with bogus mixed in), unknown-manifest passes through, pending bitmap consumed after match. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: receivers re-serve EB txs via mempool resolver
Adds a TxBodyResolver trait to net-core. LeiosStore now has two paths for MsgLeiosBlockTxsRequest: block_txs (full bodies, producer side) and eb_tx_hashes + resolver (manifest only, receiver side). The receiver path drops indices the resolver cannot supply, so partial responses are now first-class. net-node implements MempoolTxBodyResolver over SharedMempool and threads it through network::start into the coordinator's LeiosStore. After fetching and decoding an EB, LeiosConsensus emits RecordLeiosEbManifest so the coordinator's store can serve downstream peers' bitmap requests by resolving each tx_hash against the mempool. This closes the epidemic-flood gap: receivers can now satisfy MsgLeiosBlockTxsRequest for any EB whose manifest they have cached, without keeping a duplicate copy of the bodies. Tests cover: producer path takes precedence, resolver fallback, resolver partial response, server-handler wire integration, coordinator RecordLeiosEbManifest dispatch, and mempool get_body_by_id round-trip. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
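The receiver path can be sketched as a trait plus a manifest walk that silently drops unresolvable indices (signatures simplified; the real trait presumably resolves tx hashes against SharedMempool):

```rust
use std::collections::HashMap;

/// Simplified stand-in for net-core's TxBodyResolver.
trait TxBodyResolver {
    fn get_body_by_id(&self, tx_hash: &u64) -> Option<Vec<u8>>;
}

/// Mempool-backed resolver, as in net-node's MempoolTxBodyResolver.
struct MempoolResolver {
    bodies: HashMap<u64, Vec<u8>>,
}

impl TxBodyResolver for MempoolResolver {
    fn get_body_by_id(&self, tx_hash: &u64) -> Option<Vec<u8>> {
        self.bodies.get(tx_hash).cloned()
    }
}

/// Receiver path: serve from the cached manifest, dropping indices the
/// resolver cannot supply — partial responses are first-class.
fn serve_from_manifest(
    manifest: &[u64],
    indices: &[usize],
    resolver: &dyn TxBodyResolver,
) -> Vec<Vec<u8>> {
    indices
        .iter()
        .filter_map(|&i| resolver.get_body_by_id(manifest.get(i)?))
        .collect()
}
```

This is the piece that lets a receiver answer MsgLeiosBlockTxsRequest without holding a duplicate copy of the bodies.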
net-rs: route fetched EB tx bodies through the validator
LeiosBlockTxsReceived now feeds each transaction body into the same tx_valid_tx channel as TransactionReceived. Per-body validation delay, then mempool push with tx_id = blake2b256(body). Closes the loop: the producer drains tx bodies into an overflow EB, peers fetch only the missing indices via the bitmap, and the fetched bodies land in the receiver's mempool ready for the next RB or downstream EB. Test: spawn_tx_validator integration test confirms the resulting tx_id matches tx_from_received_bytes (covers both feeders, since they share the channel). Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: mempool-aware bitmap for LeiosBlockTxsRequest
Receiver decodes the EB manifest on LeiosBlockReceived, caches the ordered tx_hash list per EB, and on LeiosBlockTxsOffered builds a sparse bitmap of indices NOT present in its mempool. Consensus cuts wire bytes proportional to mempool overlap — local production plus prior tx dissemination already cover most txs in steady state. Fallback when the offer arrives before our EB fetch: select_all up to MAX_BITMAP_ENTRIES * 64 (the protocol's maximum). The server returns whatever subset it actually has. New helpers: production::decode_overflow_eb (pure decoder, paired with make_overflow_eb) and Mempool::current_tx_ids (HashSet for O(1) membership). LeiosConsensus now owns SharedMempool and an eb_tx_hashes cache, plumbed through Consensus::new from main.rs. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
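The bitmap-construction rule — request only the indices NOT present in the mempool — reduces to a set difference over the manifest. A sketch with simplified types:

```rust
use std::collections::HashSet;

/// Indices of manifest entries whose tx body is missing locally.
/// (Illustrative names; the real code builds the sparse bitmap from
/// these indices and consults Mempool::current_tx_ids.)
fn missing_indices(manifest: &[u64], mempool_ids: &HashSet<u64>) -> Vec<usize> {
    manifest
        .iter()
        .enumerate()
        .filter(|&(_, h)| !mempool_ids.contains(h))
        .map(|(i, _)| i)
        .collect()
}
```

In steady state most entries are already in the mempool, so the resulting bitmap (and the wire bytes) shrink proportionally, which is the bandwidth win the commit describes.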
net-rs: producer publishes EB tx bodies to LeiosStore
The overflow EB previously injected only the manifest ([slot, [tx_hash, ...]]); peers had no way to obtain the bodies. ProducedEb now carries the tx body blobs alongside the manifest, and main.rs follows InjectLeiosBlock with InjectLeiosBlockTxs so the coordinator can populate LeiosStore::block_txs on the producer side. A new NetworkCommand::InjectLeiosBlockTxs handler routes the bodies through the coordinator. Tests cover the producer carrying bodies in manifest order and the coordinator delivering them into the store. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: server-side bitmap filtering for LeiosBlockTxsRequest
LeiosStore::get_block_txs now takes a sparse bitmap and returns the
selected txs in ascending index order. None still means "EB unknown"
(triggers server disconnect per CIP); Some(empty) means "EB known,
bitmap selected nothing". Out-of-range bits are silently ignored.
serve_leios_fetch passes the request bitmap through. New integration
test injects a 100-tx EB and confirms indices {0,5,64,99} come back
in order over the wire.
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
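The filtering semantics spelled out above (None = "EB unknown", Some(empty) = "EB known, nothing selected", out-of-range bits ignored, ascending index order) can be sketched directly. The bitmap layout assumed here — key = index / 64, bit = index % 64 — is an assumption to check against CIP-0164:

```rust
use std::collections::BTreeMap;

/// Simplified stand-in for LeiosStore::get_block_txs.
fn get_block_txs(
    txs: Option<&Vec<Vec<u8>>>,     // None = EB unknown
    bitmap: &BTreeMap<u16, u64>,    // sparse: word offset -> 64-bit mask
) -> Option<Vec<Vec<u8>>> {
    let txs = txs?; // EB unknown: caller disconnects per the CIP
    let mut out = Vec::new();
    for (&word, &mask) in bitmap {   // BTreeMap iterates in ascending order
        for bit in 0..64 {
            if mask & (1u64 << bit) != 0 {
                let idx = word as usize * 64 + bit;
                if let Some(tx) = txs.get(idx) { // out-of-range bits ignored
                    out.push(tx.clone());
                }
            }
        }
    }
    Some(out) // may be empty: EB known, bitmap selected nothing
}
```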
net-rs: bitmap helpers for LeiosFetch tx selection
Pure helpers over the CIP-0164 sparse `BTreeMap<u16, u64>` bitmap used by `MsgLeiosBlockTxsRequest`: `from_indices`, `select_all`, `contains`, `iter_indices`. Empty bitmap selects no transactions, matching the wire-format semantics; `select_all(n)` produces the "every tx" bitmap for callers that want the previous behavior. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
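A sketch of what such helpers might look like over `BTreeMap<u16, u64>`. The word-offset/bit layout assumed here (key = index / 64, bit = index % 64) is illustrative and should be checked against CIP-0164's normative wire format:

```rust
use std::collections::BTreeMap;

type Bitmap = BTreeMap<u16, u64>;

/// Build the sparse bitmap selecting exactly the given indices.
fn from_indices(indices: &[usize]) -> Bitmap {
    let mut bm = Bitmap::new();
    for &i in indices {
        *bm.entry((i / 64) as u16).or_insert(0) |= 1u64 << (i % 64);
    }
    bm
}

/// The "every tx" bitmap for callers wanting the previous behavior.
fn select_all(n: usize) -> Bitmap {
    from_indices(&(0..n).collect::<Vec<_>>())
}

/// Is a given index selected?
fn contains(bm: &Bitmap, index: usize) -> bool {
    bm.get(&((index / 64) as u16))
        .map_or(false, |w| w & (1u64 << (index % 64)) != 0)
}

/// Selected indices in ascending order (BTreeMap iterates sorted).
fn iter_indices(bm: &Bitmap) -> impl Iterator<Item = usize> + '_ {
    bm.iter().flat_map(|(&word, &mask)| {
        (0..64)
            .filter(move |bit| mask & (1u64 << bit) != 0)
            .map(move |bit| word as usize * 64 + bit)
    })
}
```

An empty map selects nothing, matching the wire-format semantics the commit calls out.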
Change timeline to have skip to next/previous event
Add edge stats that show message in transit detail
run-all-voting-modes: continue across modes on partial failure
run-sweep.sh now exits 1 when any experiment fails (continue-on-failure logic), which under set -eo pipefail aborted the outer loop after the first mode with any OOM. Wrap the inner call so failures in one mode don't lose the remaining modes; report at the end. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
Bump virt ulimit to 256G; revert pigz -1 -> pigz -9
The 96G ulimit -v killed 1500-node sims at slot 313 with RSS only at 58G — Rust+tokio allocator reserves more virtual address space on larger topologies than physical commit alone implies. The cap was sized for 750 nodes; 1500 needs more headroom. 256G is the board's max physical RAM ceiling; actual commit is bounded by RAM + swap. Reverts pigz -1 to pigz -9 — the faster compressor did not solve the end-of-sim EventMonitor spike (still ~8 GB from 11M events flushed at once, regardless of compressor speed). The bottleneck is the unbounded mpsc channel, not compression. Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>