4,305 commits this week (Mar 18, 2026 - Mar 25, 2026)
Rework forker usage pattern to avoid Registries (#1910)
# Description

Forkers have been a source of headaches for a very long time, all due to a
limitation of the resource registry: registries may only be used from known
threads. This has led to hierarchies of registries existing just to satisfy
that rule, and we even had to extend the resource-registry API to allow
transferring resources between registries, which is unsafe.

The general pattern followed now is:
1. At the LedgerDB level we only keep track of the resources, running
   WithTempRegistry while opening the DB and then closing them when closing
   the LedgerDB.
2. Forkers that open new handles are opened in a `with*` combinator style,
   so the handles are deallocated when exiting the scope (or committed).
3. Forkers that only duplicate existing handles (read-only forkers) are used
   only in the Mempool and LocalStateQuery, and there we put them in a
   registry, specifically the top-level registry.
4. The handles are not tracked directly: each handle is tracked indirectly,
   because some forker (or the LedgerDB) owns it and is itself tracked.
   Forkers are tracked so that they can be released promptly, which avoids
   space leaks and lets the backends (LSM/LMDB) do their gradual
   bookkeeping/rearranging work.
5. In particular, when the node is shutting down, there is no need to close
   each handle/forker:
   1. in InMemory it doesn't matter, as it is pure;
   2. in LSM, closing the session will close any open handles, and closing
      the session happens when shutting down the node;
   3. in LMDB, each forker has a single read transaction open, which will be
      closed as described above. No further resources are allocated on each
      step. Also, closing the backing store is optional, as described in the
      LMDB docs (and mentioned in the LedgerDB API).
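
The `with*` scoping in point 2 can be sketched roughly as follows, in Rust
purely for illustration (the actual code is Haskell; `Forker` and
`with_forker` are hypothetical stand-ins, and a real bracket combinator also
releases on exceptions):

```rust
// Hypothetical sketch of with*-style scoping: the forker only exists
// inside the closure, and its handles are released when the scope ends.
struct Forker {
    open: bool,
}

fn with_forker<T>(body: impl FnOnce(&mut Forker) -> T) -> T {
    let mut forker = Forker { open: true }; // acquire: open new handles
    let result = body(&mut forker);
    forker.open = false; // release on scope exit (or after commit)
    result
}
```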

Sadly, the change is so fundamental that there is no meaningful way of
splitting the changes into reviewable commits that compile on their own.

Proposed review order (it is recommended to disable the whitespace diff in
the GitHub diff view and to hide already-viewed files):

1. Ouroboros.Consensus.Storage.ChainDB.API
2. Ouroboros.Consensus.MiniProtocol.LocalStateQuery.Server +
Ouroboros.Consensus.Network.NodeToClient
3. Ouroboros.Consensus.Mempool.{Impl.Common, Update, Init}
4. Ouroboros.Consensus.Storage.ChainDB.ChainSel
5. Ouroboros.Consensus.Storage.ChainDB.Impl.Types +
Ouroboros.Consensus.Storage.ChainDB.Impl.Query
6. Ouroboros.Consensus.Storage.ChainDB.Impl
7. Ouroboros.Consensus.Storage.ImmutableDB.{Impl, Impl.Validation}
8. Ouroboros.Consensus.Storage.VolatileDB.Impl
9. Ouroboros.Consensus.Storage.LedgerDB.API (both the API changes and
the initialization)
10. Ouroboros.Consensus.Storage.LedgerDB
11. Ouroboros.Consensus.Storage.LedgerDB.Forker
12. Ouroboros.Consensus.Node (comment changes only)
13. Ouroboros.Consensus.NodeKernel
14. Ouroboros.Consensus.Storage.LedgerDB.V2.LedgerSeq
15. Ouroboros.Consensus.Storage.LedgerDB.V2.Forker
16. Ouroboros.Consensus.Storage.LedgerDB.V2.Backend
17. Ouroboros.Consensus.Storage.LedgerDB.V2.InMemory
18. Ouroboros.Consensus.Storage.LedgerDB.V2.LSM
19. Ouroboros.Consensus.Storage.LedgerDB.V2
20. Ouroboros.Consensus.Storage.LedgerDB.V1.Forker
21. Ouroboros.Consensus.Storage.LedgerDB.V1.Snapshots
22. Ouroboros.Consensus.Storage.LedgerDB.V1

The rest of the changes just adapt the tests or adapt the cardano-tools to
use the new access pattern. I did not make any meaningful changes to the
tests by choice; I only followed GHC errors wherever they took me.
net-rs: add Shelley+ header and block body parsers (Phase 4c)
Convert WrappedHeader from opaque tuple struct to named-field struct
with parsed HeaderInfo. Parser navigates era-tagged #6.24-wrapped CBOR
to extract block_number, slot, prev_hash, issuer_vkey, body_size, and
block_body_hash, plus CIP-0164 optional Leios fields (announced_eb,
certified_eb). Array length alone disambiguates which optional fields
are present (10=none, 11=certified_eb, 12=announced_eb, 13=both).

Convert BlockBody similarly with parsed LeiosBlockInfo. Block parser
extracts the optional eb_certificate from the 5th element of the
Shelley+ block array. Byron headers/blocks return None gracefully.

Box InjectBlock header to fix large_enum_variant. Update all call
sites (~20 files). 238 total tests, 17 new parser tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
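
The array-length disambiguation above can be sketched as follows (a hedged
illustration: `LeiosFields` and the function name are hypothetical, only the
10-13 scheme comes from the commit):

```rust
// Map a Shelley+ header's CBOR array length to which optional Leios
// fields are present: 10=none, 11=certified_eb, 12=announced_eb, 13=both.
#[derive(Debug, PartialEq)]
struct LeiosFields {
    announced_eb: bool,
    certified_eb: bool,
}

fn leios_fields_from_len(len: usize) -> Option<LeiosFields> {
    match len {
        10 => Some(LeiosFields { announced_eb: false, certified_eb: false }),
        11 => Some(LeiosFields { announced_eb: false, certified_eb: true }),
        12 => Some(LeiosFields { announced_eb: true, certified_eb: false }),
        13 => Some(LeiosFields { announced_eb: true, certified_eb: true }),
        _ => None, // unexpected header shape
    }
}
```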
net-rs: add multi-peer coordinator (Phase 3 complete)
Thread-per-peer coordinator with peer-agnostic application interface.
Each peer runs independent tokio task tree (ChainSync, BlockFetch,
KeepAlive, PeerSharing sub-tasks). Coordinator aggregates tips with
deduplication, routes fetch requests to best peer by RTT, and
reconnects failed peers with exponential backoff (1s-30s).

New modules: net-core/src/peer/ (types, peer_task, coordinator,
connect). Connection helpers moved from net-cli to net-core.
New CLI: multi-follow --host <addr> [--host <addr>...].
ConnectionMode enum supports future ResponderOnly/Duplex modes.

153 tests (147 existing + 6 new). Live-tested against mainnet.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
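
The exponential reconnect backoff could look like this minimal sketch (the
function name and shift-based doubling are assumptions; only the 1s-30s
bounds come from the commit):

```rust
use std::time::Duration;

// Exponential backoff sketch: 1s, doubling per failed attempt, capped at 30s.
fn backoff_delay(attempt: u32) -> Duration {
    let secs = 1u64.checked_shl(attempt).unwrap_or(u64::MAX).min(30);
    Duration::from_secs(secs)
}
```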
net-rs: add per-module and per-protocol READMEs
Hierarchical documentation: top-level README links to net-core/ and
net-cli/, net-core links to bearer/mux/types/protocols/peer modules,
protocols/ links to each of the 8 protocol subdirectories. Each
protocol README includes Mermaid state machine diagram, agency table,
limits, and API entry points. All links point to directories (not
README.md) so GitHub shows files alongside docs.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
net-rs: remove server Request/Response boilerplate, use Message directly
The server-side Request/Response enums were 1:1 mappings of Message
variants, and receive_request()/send_response() were pure conversion
boilerplate. The Runner already enforces agency and state transitions,
so these types added no safety.

Server code now uses runner.recv()/runner.send() with Message directly,
matching the simplicity of the protocol framework design.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
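
The "use Message directly" shape might look like this (every type here is an
illustrative stand-in, not the crate's actual definitions; only the idea of
`recv`/`send` on a runner comes from the commit):

```rust
// Illustrative sketch: the server matches on wire-level Message variants
// straight from the runner, with no parallel Request/Response enums.
enum Message {
    RequestRange { from: u64, to: u64 },
    Done,
}

struct Runner {
    inbox: Vec<Message>, // stand-in for the mux channel
}

impl Runner {
    fn recv(&mut self) -> Option<Message> {
        self.inbox.pop()
    }
    fn send(&mut self, _msg: Message) {
        // stub: a real runner would frame the message onto the mux channel
    }
}

fn serve(runner: &mut Runner) -> u64 {
    let mut served = 0;
    while let Some(msg) = runner.recv() {
        match msg {
            Message::RequestRange { from, to } => {
                served += to - from;
                runner.send(Message::Done);
            }
            Message::Done => break,
        }
    }
    served
}
```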
net-rs: update CLAUDE.md and implementation plan for Phase 2 completion
Document all Phase 2 deliverables: shared types, ChainSync/BlockFetch/
KeepAlive protocols (client + server), persistent chain follower, fake
server with Poisson blocks/rollbacks, serve.rs in workspace structure,
server-uses-Message-directly design decision. Add Phase 3 detail for
TxSubmission and PeerSharing protocols.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
net-rs: split types.rs into types/ module (header.rs, block.rs)
Refactor the 1086-line types.rs into a directory module:
- types/mod.rs: Point, Tip, encode/decode_points, constants, re-exports
- types/header.rs: WrappedHeader, HeaderInfo, Shelley+ header parser
- types/block.rs: BlockBody, LeiosBlockInfo, block body parser

All import paths preserved via re-exports. 238 tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
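
The re-export layout described above, inlined here so the snippet is
self-contained (in the crate these are the files types/header.rs and
types/block.rs; struct fields are illustrative):

```rust
mod types {
    pub mod header {
        pub struct WrappedHeader {
            pub block_number: u64,
        }
    }
    pub mod block {
        pub struct BlockBody {
            pub bytes: Vec<u8>,
        }
    }
    // Re-exports keep the old flat paths (e.g. `crate::types::WrappedHeader`)
    // compiling unchanged after the split.
    pub use block::BlockBody;
    pub use header::WrappedHeader;
}

use types::WrappedHeader; // old import path still works
```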
net-rs: move codec.rs to mux/ and protocol.rs to protocols/
codec.rs wraps mux channels with CBOR framing — belongs in mux/.
protocol.rs defines the protocol state machine framework — belongs
in protocols/. Re-export from parent modules so imports become
crate::mux::{CodecSend,CodecRecv} and crate::protocols::{Protocol,
Runner,Agency,Role,ProtocolError}. Update all ~40 import sites.
238 tests pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
net-rs: add Leios per-peer task integration (Phase 4d)
Wire LeiosNotify (ID 18) and LeiosFetch (ID 19) into the per-peer task
architecture behind a `leios_enabled` config flag (default false).

- LeiosStore: content-addressed blob store for EBs/votes (separate from
  ChainStore since Leios data is keyed by (slot, hash), not a linear chain)
- Client tasks: spawn_leios_notify (continuous request_next loop),
  spawn_leios_fetch (command-driven, like BlockFetch)
- Server handlers: serve_leios_notify (from LeiosStore + subscribe),
  serve_leios_fetch (block/txs/vote lookups)
- Types: 6 PeerEvent, 2 PeerCommand, 5 NetworkEvent, 3 NetworkCommand variants
- Coordinator: stub-forwards Leios events, populates LeiosStore on fetch
- Wiring: peer_task, responder_task, duplex_task all conditionally register
  and spawn Leios protocol sub-tasks
- CLI: --leios flag on serve (synthetic EB/vote generation) and multi-follow
  (logs Leios notifications) for local end-to-end testing

247 total tests (9 new). Locally tested: serve --leios -> multi-follow --leios.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
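
The LeiosStore idea, content-addressed blobs keyed by (slot, hash) rather
than a linear chain, can be sketched as (hypothetical names, not the crate's
actual API):

```rust
use std::collections::HashMap;

// Blob store keyed by (slot, hash); EBs and votes are looked up by
// content address, never by chain position.
#[derive(Default)]
struct LeiosStore {
    blobs: HashMap<(u64, [u8; 32]), Vec<u8>>,
}

impl LeiosStore {
    fn insert(&mut self, slot: u64, hash: [u8; 32], body: Vec<u8>) {
        // First write wins; re-offers of the same blob are ignored.
        self.blobs.entry((slot, hash)).or_insert(body);
    }
    fn get(&self, slot: u64, hash: [u8; 32]) -> Option<&[u8]> {
        self.blobs.get(&(slot, hash)).map(Vec::as_slice)
    }
}
```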
net-rs: add duplex connection mode (Phase 3 complete)
Mux now supports both directions on one connection via composite
(ProtocolId, u16) channel keys. register_with_mode() lets callers
specify direction explicitly; register() remains backward compatible.
Demuxer routes by (protocol_id, mode) — no single-mode validation.
Scheduler stays keyed by ProtocolId (both directions share priority).

New: DuplexConnection, connect_duplex(), accept_duplex() for
bidirectional connection setup. DuplexTask combines client + server
protocol sub-tasks on one mux. Coordinator spawns duplex tasks when
config.duplex is set. multi-follow --duplex flag.

All three ConnectionMode variants now implemented:
- InitiatorOnly: outbound, client protocols
- ResponderOnly: inbound, server protocols
- Duplex: both on one connection

172 tests. Live-tested duplex against mainnet (version 15).

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
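
The composite-key demux routing can be sketched as follows (a hedged
illustration with stand-in types; only the (protocol_id, mode) keying comes
from the commit):

```rust
use std::collections::HashMap;

// Routing by (protocol_id, mode) lets initiator and responder traffic
// for the same protocol coexist on one connection without clashing.
#[derive(Clone, Copy, Hash, PartialEq, Eq)]
enum Mode {
    Initiator,
    Responder,
}

#[derive(Default)]
struct Demux {
    channels: HashMap<(u16, Mode), Vec<Vec<u8>>>, // key -> queued frames
}

impl Demux {
    fn register(&mut self, protocol_id: u16, mode: Mode) {
        self.channels.insert((protocol_id, mode), Vec::new());
    }
    fn route(&mut self, protocol_id: u16, mode: Mode, frame: Vec<u8>) -> bool {
        match self.channels.get_mut(&(protocol_id, mode)) {
            Some(q) => {
                q.push(frame);
                true
            }
            None => false, // unknown channel: a protocol error in a real mux
        }
    }
}
```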
net-rs: add Leios coordinator extensions (Phase 4e)
Replace stub-forwarded Leios events with smart coordinator logic:
- Slot-bounded dedup for EB, TX, and vote offers across peers
- Per-offer peer tracking for RTT-based smart fetch routing
- Pending fetch dedup and cleanup on peer failure
- Separate LeiosBlockTxsOffered/LeiosBlockTxsReceived events
- FetchLeiosBlockTxs (with bitmap) and FetchLeiosVotes commands
- leios_dedup_window config (default 1000 slots)
- 8 new coordinator tests (255 total)
- Live-tested: serve --leios → multi-follow --leios (two connections)

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
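
The slot-bounded dedup can be sketched as follows (names are illustrative;
only the windowing idea and the 1000-slot default come from the commit):

```rust
use std::collections::HashMap;

// Remember each offer's slot; drop entries older than the window so the
// seen-set stays bounded as the chain advances.
struct Dedup {
    window: u64,
    seen: HashMap<[u8; 32], u64>, // hash -> slot last offered
}

impl Dedup {
    fn new(window: u64) -> Self {
        Dedup { window, seen: HashMap::new() }
    }

    /// Returns true if this (slot, hash) offer is new within the window.
    fn offer(&mut self, slot: u64, hash: [u8; 32]) -> bool {
        let cutoff = slot.saturating_sub(self.window);
        self.seen.retain(|_, s| *s >= cutoff); // prune expired offers
        self.seen.insert(hash, slot).is_none()
    }
}
```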
net-rs: add PeerSharing protocol (Phase 3, step 2)
Implement PeerSharing mini-protocol (protocol ID 10) — simple
request/reply for peer discovery with IPv4/IPv6 address support.
CLI 'peer-share' command with peer_sharing=1 handshake negotiation
and graceful rejection when the node doesn't support it.
18 new tests (147 total), live-tested against mainnet relays.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>