[Commit activity chart: commits per hour, Apr 23 - Apr 30; activity peaks Apr 24, 8-9 PM (18 commits) and Apr 25, 7-9 PM (25 commits/hour)]
144 commits this week (Apr 24, 2026 - May 01, 2026)
net-rs: prune Leios eb_tx_hashes and pending_eb_tx_fetches by slot age
LeiosConsensus.eb_tx_hashes (EB manifest cache) and
pending_eb_tx_fetches (per-EB requested-bitmap cache) were inserted on
every received EB but never pruned. With ~5 EBs/sec across the cluster,
these maps grew indefinitely. Tag each entry with the EB's announced
slot and drop it in on_slot once its age passes pipeline expiry. Doing
the cleanup independently of `elections.retain` covers the case where
an EB is received but the validator never produces a corresponding
election entry; without the separate pass, the manifest would still
leak.
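A minimal sketch of the slot-tagged eviction; LeiosCaches and
pipeline_expiry_slots are hypothetical names, and the real
LeiosConsensus layout may differ:

```rust
use std::collections::HashMap;

type EbHash = [u8; 32];

// Hypothetical container standing in for the relevant LeiosConsensus state.
struct LeiosCaches {
    // EB manifest cache: EB hash -> (announced slot, tx hashes).
    eb_tx_hashes: HashMap<EbHash, (u64, Vec<[u8; 32]>)>,
    // Per-EB requested bitmap: EB hash -> (announced slot, requested flags).
    pending_eb_tx_fetches: HashMap<EbHash, (u64, Vec<bool>)>,
    // Slots after which a pipeline entry is dead (assumed constant).
    pipeline_expiry_slots: u64,
}

impl LeiosCaches {
    // Runs every slot tick, independently of `elections.retain`, so an EB
    // that never gets an election entry is still evicted.
    fn on_slot(&mut self, current_slot: u64) {
        let expiry = self.pipeline_expiry_slots;
        self.eb_tx_hashes
            .retain(|_, (slot, _)| current_slot.saturating_sub(*slot) <= expiry);
        self.pending_eb_tx_fetches
            .retain(|_, (slot, _)| current_slot.saturating_sub(*slot) <= expiry);
    }
}
```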
net-rs: store per-peer announced tx ids as [u8; 32] inline
The mempool's per-peer advertised set was HashSet<Vec<u8>>, allocating
on the heap for every entry. Tx ids in this node are always Blake2b-256
(32 bytes; see tx_from_received_bytes and the local generator), so
the Vec indirection is pure overhead. Switch the set to
HashSet<[u8; 32]>, which inlines the key into the bucket and removes
one malloc per insert. With ~24 peers and 50 TPS, that's a few
thousand fewer allocations per second in steady state and roughly half
the per-entry footprint. Pre-emptively skip non-32-byte ids on
insert/prune so a future change that lets id lengths vary won't panic.
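A sketch of the inline-key set with the defensive length guard;
PeerAnnouncements, PeerId, and mark_advertised are illustrative names,
not the node's actual API:

```rust
use std::collections::{HashMap, HashSet};

type TxId = [u8; 32]; // Blake2b-256 digest, stored inline in the hash bucket
type PeerId = u64; // placeholder peer identifier

#[derive(Default)]
struct PeerAnnouncements {
    advertised: HashMap<PeerId, HashSet<TxId>>,
}

impl PeerAnnouncements {
    // Record that `id` was advertised to `peer`. Ids that aren't exactly
    // 32 bytes are skipped so a future variable-length id can't panic here.
    fn mark_advertised(&mut self, peer: PeerId, id: &[u8]) {
        if let Ok(id) = TxId::try_from(id) {
            self.advertised.entry(peer).or_default().insert(id);
        }
    }
}
```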
net-rs: prune LeiosTracker pending_*_fetches by slot window
pending_block_fetches, pending_txs_fetches and pending_vote_fetches were
only cleared on completion or peer disconnect. A silently dropped fetch
(no LeiosBlockFetched / LeiosBlockTxsFetched / LeiosVotesFetched event,
no peer disconnect) leaks the (slot, hash) entry forever. Add slot-based
pruning to update_slot, matching the existing seen_*/offers/txs_attempts
treatment. Once a slot ages out of the dedup window, the seen set has
already been cleared, so dropping a stale pending entry is safe.
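A sketch of the update_slot pruning under assumed map shapes (the real
pending_* value types differ):

```rust
use std::collections::HashMap;

type Hash = [u8; 32];
type PeerId = u64; // placeholder for the value stored per pending fetch

struct LeiosTracker {
    dedup_window_slots: u64,
    pending_block_fetches: HashMap<(u64, Hash), PeerId>,
    pending_txs_fetches: HashMap<(u64, Hash), PeerId>,
    pending_vote_fetches: HashMap<(u64, Hash), PeerId>,
}

impl LeiosTracker {
    fn update_slot(&mut self, current_slot: u64) {
        // Past the dedup window the seen_* sets were already cleared, so a
        // stale pending entry can no longer block a re-fetch: drop it.
        let cutoff = current_slot.saturating_sub(self.dedup_window_slots);
        for map in [
            &mut self.pending_block_fetches,
            &mut self.pending_txs_fetches,
            &mut self.pending_vote_fetches,
        ] {
            map.retain(|(slot, _), _| *slot >= cutoff);
        }
    }
}
```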
net-rs: switch net-node to jemalloc allocator
glibc's allocator keeps freed memory on its free lists and returns
pages to the OS lazily, inflating RSS under heavy small-allocation
churn. At 50 TPS the cluster grew to ~10 GB with glibc but only ~7 GB
with jemalloc: a ~30% reduction with no other change.

tikv-jemallocator is the standard Rust binding and is already used
by other Cardano-adjacent tools.
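The wiring is tikv-jemallocator's documented global-allocator setup;
the sketch below assumes it lands at the top of the net-node binary's
main.rs, with tikv-jemallocator added to Cargo.toml:

```rust
// Opt the whole binary into jemalloc; freed memory is returned to the OS
// far more eagerly than with glibc's default under small-allocation churn.
#[cfg(not(target_env = "msvc"))]
use tikv_jemallocator::Jemalloc;

#[cfg(not(target_env = "msvc"))]
#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;
```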

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: make LeiosStore stats logging configurable, default off
Add a stats_log_interval knob to LeiosStore: when non-zero, every Nth
bump_version logs the current map sizes (blocks/block_txs/eb_tx_hashes/
votes/notifications) plus max_slot and the eviction cutoff. Default 0
disables logging.

Plumb through CoordinatorConfig.leios_store_stats_log_interval and a
matching net-node config field so it can be enabled from TOML or
--node-set transactions.leios_store_stats_log_interval=50.

Useful for memory-leak diagnostics; it confirmed that slot-window
retention is working as designed (entries are evicted past
max_slot - retention).
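A sketch of the every-Nth-bump logging, with assumed names for the
fields the commit doesn't spell out:

```rust
struct LeiosStoreInner {
    stats_log_interval: u64, // 0 disables stats logging (the default)
    version: u64,
    max_slot: u64,
    // blocks / block_txs / eb_tx_hashes / votes / notifications elided
}

impl LeiosStoreInner {
    fn bump_version(&mut self) {
        self.version += 1;
        if self.stats_log_interval != 0
            && self.version % self.stats_log_interval == 0
        {
            // The real node would log each map's len() plus max_slot and
            // the eviction cutoff through its tracing/log facade.
            println!("leios-store v{}: max_slot={}", self.version, self.max_slot);
        }
    }
}
```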

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: bound LeiosStore by slot-window retention
LeiosStore's votes / eb_tx_hashes / blocks / block_txs maps had no
slot-based eviction; receivers accumulated every vote and every
EB manifest forever. With actual EB and voting traffic, that's
~600 vote entries per EB committee for every EB seen, or ~70 MB/s
of unbounded growth on a 25-node cluster. Before the codec fix,
broken tx flow meant few EBs or votes flowed, so the leak was
hidden.

Add max_slot + retention_slots to LeiosStoreInner. Each inject_*
updates max_slot; bump_version evicts entries with slot < max_slot
- retention_slots. Default retention is 100 slots, sized for the
13-slot Linear Leios pipeline plus headroom. That is far smaller
than the LeiosTracker dedup window (1000 slots), since the tracker
stores tiny offer IDs while this store holds full bodies.

new_with_retention exposes the knob for explicit configuration.
Notifications (small fixed-size enum entries used by LeiosNotify
with absolute read indices) are intentionally left growing for
now; pruning them requires reworking the read_index protocol.
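A sketch of the retention mechanics for one of the maps; the field
names follow the commit, but the value shapes and inject_block
signature are assumptions:

```rust
use std::collections::HashMap;

struct LeiosStoreInner {
    max_slot: u64,
    retention_slots: u64, // default 100: 13-slot pipeline plus headroom
    blocks: HashMap<[u8; 32], (u64, Vec<u8>)>, // hash -> (slot, body)
    // votes / eb_tx_hashes / block_txs get the same treatment
}

impl LeiosStoreInner {
    fn inject_block(&mut self, slot: u64, hash: [u8; 32], body: Vec<u8>) {
        self.max_slot = self.max_slot.max(slot);
        self.blocks.insert(hash, (slot, body));
        self.bump_version();
    }

    fn bump_version(&mut self) {
        // Evict everything older than the retention window.
        let cutoff = self.max_slot.saturating_sub(self.retention_slots);
        self.blocks.retain(|_, (slot, _)| *slot >= cutoff);
    }
}
```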

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: track per-peer announced txs in mempool, stop re-announcing
The TxsRequested handler in main.rs called mempool.peek_up_to (non-
consuming) and returned the same head-of-mempool txs every cycle.
With the codec fix unblocking real tx flow, this hot loop re-cloned
and re-shipped the same txs to each peer hundreds of times per
second, amounting to ~115 MB/s of body memcpy per node.

Move per-peer state into the mempool: peek_unannounced_for_peer marks
each tx as advertised to the given peer; subsequent calls skip those
ids. Push/drain_up_to/drain_all/capacity-eviction prune the affected
ids from every peer set, bounding total state by mempool size.
forget_peer drops the entry on disconnect.

The handler in main.rs becomes a thin call into the mempool. New
unit tests cover the per-peer independence, lazy pruning on tx
removal, and forget_peer.
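A sketch of the per-peer announcement tracking; peek_unannounced_for_peer
and forget_peer are named in the commit, while their signatures and the
surrounding types are assumptions:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

type TxId = [u8; 32];
type PeerId = u64; // placeholder

struct Mempool {
    txs: VecDeque<(TxId, Vec<u8>)>, // head-of-mempool order
    announced: HashMap<PeerId, HashSet<TxId>>,
}

impl Mempool {
    // Return up to `n` tx bodies not yet advertised to `peer`, marking
    // each one so the next call skips it instead of re-shipping it.
    fn peek_unannounced_for_peer(&mut self, peer: PeerId, n: usize) -> Vec<Vec<u8>> {
        let seen = self.announced.entry(peer).or_default();
        self.txs
            .iter()
            .filter(|(id, _)| seen.insert(*id)) // false => already advertised
            .take(n)
            .map(|(_, body)| body.clone())
            .collect()
    }

    // Drop the whole per-peer set on disconnect.
    fn forget_peer(&mut self, peer: PeerId) {
        self.announced.remove(&peer);
    }
}
```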

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
net-rs: make telemetry sinks async to propagate backpressure
HttpEventSink::emit and HttpStatsSink::emit each spawned a fresh tokio
task per event, with no bound on in-flight tasks. Under heavy event
load (e.g. once the TxSubmission codec fix made tx propagation actually
work), spawn rate exceeded drain rate and tasks accumulated unboundedly,
each pinning a JSON payload, a reqwest::Client clone, and the in-flight
POST future.

Switch the EventSink/StatsSink traits to async (#[async_trait]) and
have HTTP sinks .await the POST inline. record() and emit_stats()
are now async, as is record_network_event(). A slow aggregator now
backpressures
the caller chain naturally instead of leaking spawned tasks.

New regression test (http_event_sink_does_not_spawn_per_emit) sanity-checks
that emit doesn't leave background tasks behind.
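A sketch of the async sink shape, assuming the async-trait crate,
serde_json, and reqwest with its json feature; the trait and struct
names follow the commit, the method signatures are guesses:

```rust
use async_trait::async_trait;

#[async_trait]
trait EventSink: Send + Sync {
    async fn emit(&self, payload: serde_json::Value);
}

struct HttpEventSink {
    client: reqwest::Client,
    url: String,
}

#[async_trait]
impl EventSink for HttpEventSink {
    async fn emit(&self, payload: serde_json::Value) {
        // Await the POST inline: a slow aggregator now backpressures the
        // caller instead of piling up one spawned task per event.
        let _ = self.client.post(&self.url).json(&payload).send().await;
    }
}
```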

Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>