3,160 commits this week (Apr 03 – Apr 10, 2026)
refactor: use upstream cardano-node-clients for tx helpers
Bump cardano-node-clients to a37cbd6 which provides:

- balanceFeeLoop (conservation-aware fee convergence)
- computeScriptIntegrity (parameterized by Language)
- spendingIndex
- placeholderExUnits

Remove local ConservationBalance.hs and inline definitions.
Make `sim` operations exception-safe by using `withMockFS`
docs: update WIP with resolved bugs and refactoring proposals
refactor: extract balanceFeeLoop for conservation-aware fee convergence
balanceFeeLoop finds the fee fixed point for transactions where output values depend on the fee (conservation equation). Unlike balanceTx, it does not add inputs or change outputs — the caller provides a function Coin -> outputs that recomputes outputs for each candidate fee.

Two-phase approach:

1. Evaluate scripts with a fee overestimate to get real ExUnits
2. Patch ExUnits, then run the fee loop with real redeemer sizes

Eliminates the hardcoded 600K fee overestimate. The fee converges exactly in 2-3 rounds, with excess going to treasury.

Also extracts buildConservationTx to deduplicate ~200 lines between buildModifyTx and buildRejectTx.
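The fixed-point search described above can be sketched generically. The real helper lives in Haskell ledger code; this is a minimal Rust sketch under assumptions: the linear fee model constants and the names `fee_for_size` and `balance_fee_loop` are illustrative, not the actual API.

```rust
// Toy linear fee model: fee = a + b * size. Constants are illustrative.
fn fee_for_size(size_bytes: u64) -> u64 {
    const FEE_CONSTANT: u64 = 155_381;
    const FEE_PER_BYTE: u64 = 44;
    FEE_CONSTANT + FEE_PER_BYTE * size_bytes
}

/// Find the fee fixed point. `tx_size` maps a candidate fee to the
/// serialized size of the transaction rebuilt with outputs recomputed
/// for that fee (the caller-provided recomputation from the commit).
fn balance_fee_loop(
    tx_size: impl Fn(u64) -> u64,
    mut fee: u64,
    max_rounds: u32,
) -> Option<u64> {
    for _ in 0..max_rounds {
        let next = fee_for_size(tx_size(fee));
        if next == fee {
            return Some(fee); // converged: the fee pays exactly for the tx carrying it
        }
        fee = next;
    }
    None // no fixed point within the round budget
}

fn main() {
    // Toy size function: base 300 bytes, plus a couple of bytes when a
    // large fee makes an output bigger (mimicking fee-dependent outputs).
    let size = |fee: u64| 300 + if fee > 200_000 { 2 } else { 0 };
    let fee = balance_fee_loop(size, 600_000, 10).unwrap();
    assert_eq!(fee, fee_for_size(size(fee)));
}
```

Starting from a deliberate overestimate (600K here, as in the commit), the loop settles in two to three rounds, matching the convergence behavior described.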
test: add multi-request reject to prove fee convergence scales
Add property tests for Babbage Plutus TxInfo translation
Extend `Test.Cardano.Ledger.Babbage.TxInfoSpec` with a property test for Babbage TxInfo translation across PlutusV1/V2/V3/V4:

* correctly translate txs with Babbage-era features
Add protocol version validation to `createInitialState`
Validation checks that the current protocol version is within the era's bounds
Update from 01182e5e88c0dcbb20d8b568e0dac6df2d0b37a6
cardano-api-10.26.0.0 revision 1: bump ouroboros-consensus to ^>=3.0 (#1344)
Split tx backlog into separate local and peer queues
The tx_backlog_max_size limit was incorrectly gating locally-generated transactions based on total backlog depth (including peer-received txs).

Split the single backlog queue into two separate VecDeques so the limit only applies to locally-generated transactions. Track and report max backlog watermarks independently for each queue.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
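The split-queue idea can be sketched as follows; a minimal sketch with illustrative names (`TxBacklog`, `push_local`, `push_peer`), not the simulator's actual types:

```rust
use std::collections::VecDeque;

// The size cap gates only locally generated transactions; peer-received
// ones always enqueue. Watermarks are tracked per queue.
struct TxBacklog {
    local: VecDeque<u64>,   // locally generated tx ids
    peer: VecDeque<u64>,    // txs received from peers
    local_max_size: usize,  // cap applied to `local` only
    local_watermark: usize, // independent high-water marks
    peer_watermark: usize,
}

impl TxBacklog {
    fn new(local_max_size: usize) -> Self {
        Self {
            local: VecDeque::new(),
            peer: VecDeque::new(),
            local_max_size,
            local_watermark: 0,
            peer_watermark: 0,
        }
    }

    /// Returns false (backpressure on the generator) when the local queue is full.
    fn push_local(&mut self, tx: u64) -> bool {
        if self.local.len() >= self.local_max_size {
            return false;
        }
        self.local.push_back(tx);
        self.local_watermark = self.local_watermark.max(self.local.len());
        true
    }

    /// Peer txs are never gated by the local cap.
    fn push_peer(&mut self, tx: u64) {
        self.peer.push_back(tx);
        self.peer_watermark = self.peer_watermark.max(self.peer.len());
    }
}

fn main() {
    let mut b = TxBacklog::new(2);
    assert!(b.push_local(1));
    assert!(b.push_local(2));
    assert!(!b.push_local(3)); // local cap hit, generator backs off
    b.push_peer(4);            // peer txs unaffected
    assert_eq!((b.local_watermark, b.peer_watermark), (2, 1));
}
```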
Add min-latency-clusters shard strategy with configurable balance
Agglomerative clustering (Kruskal-style): sorts edges by latency and merges lowest-latency pairs first, maximizing the CMB lookahead. Balance controlled by shard-max-size-pct config (default 200%).

Achieves 7-10ms min cross-shard latency vs 1ms for other strategies, but worse cluster shapes increase cross-shard traffic. For uniformly connected topologies, zero-latency-clusters (balanced, 1ms lookahead) still wins on wall clock. Min-latency-clusters would shine on topologies with natural geographic clusters.

Also adds CMB lookahead to shard startup log for diagnostics.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
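The Kruskal-style merge can be sketched with a size-capped union-find: sort edges by latency ascending and union endpoints while the merged cluster stays under a cap, so only high-latency edges remain cross-shard. A minimal sketch; names and the cap semantics are illustrative.

```rust
struct UnionFind {
    parent: Vec<usize>,
    size: Vec<usize>,
}

impl UnionFind {
    fn new(n: usize) -> Self {
        Self { parent: (0..n).collect(), size: vec![1; n] }
    }

    fn find(&mut self, mut x: usize) -> usize {
        while self.parent[x] != x {
            self.parent[x] = self.parent[self.parent[x]]; // path halving
            x = self.parent[x];
        }
        x
    }

    /// Merge unless already joined or the merged size would exceed `cap`.
    fn union_capped(&mut self, a: usize, b: usize, cap: usize) -> bool {
        let (ra, rb) = (self.find(a), self.find(b));
        if ra == rb || self.size[ra] + self.size[rb] > cap {
            return false;
        }
        self.parent[rb] = ra;
        self.size[ra] += self.size[rb];
        true
    }
}

/// Edges are (latency_ms, a, b); returns each node's cluster root.
fn cluster(n: usize, mut edges: Vec<(u32, usize, usize)>, cap: usize) -> Vec<usize> {
    edges.sort(); // lowest latency first, Kruskal-style
    let mut uf = UnionFind::new(n);
    for (_lat, a, b) in edges {
        uf.union_capped(a, b, cap);
    }
    (0..n).map(|i| uf.find(i)).collect()
}

fn main() {
    // Two tight 1ms pairs joined by a slow 50ms link; a cap of 2 keeps
    // the slow link cross-cluster, so min cross-shard latency is 50ms.
    let roots = cluster(4, vec![(1, 0, 1), (1, 2, 3), (50, 1, 2)], 2);
    assert_eq!(roots[0], roots[1]);
    assert_eq!(roots[2], roots[3]);
    assert_ne!(roots[0], roots[2]);
}
```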
Add geographic shard strategy using k-means on node coordinates
- Add Geographic variant to ShardStrategy, configurable via shard-strategy
- K-means++ clustering on node coordinates, keeping 0-latency components together; falls back to zero-latency-clusters if coordinates missing
- Add location field to NodeConfiguration (from topology coordinates)
- Extract union-find helpers to shared module
- Refactor zero_latency_clusters to expose reusable component-building and balanced-assignment functions

Benchmarks show geographic helps when topology has clear regional clusters with high inter-region latency. For uniformly-connected topologies, the balance penalty outweighs the marginal lookahead improvement.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
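The coordinate clustering can be sketched with plain Lloyd iterations. This omits the k-means++ seeding and the component-preserving constraints the commit describes; all names are illustrative.

```rust
// Assign each 2D point to its nearest center (squared distance).
fn assign(points: &[(f64, f64)], centers: &[(f64, f64)]) -> Vec<usize> {
    points
        .iter()
        .map(|&(x, y)| {
            let mut best = (0, f64::INFINITY);
            for (i, &(cx, cy)) in centers.iter().enumerate() {
                let d = (x - cx).powi(2) + (y - cy).powi(2);
                if d < best.1 {
                    best = (i, d);
                }
            }
            best.0
        })
        .collect()
}

// Recompute each center as the mean of its assigned points.
fn recenter(points: &[(f64, f64)], labels: &[usize], k: usize) -> Vec<(f64, f64)> {
    let mut sum = vec![(0.0, 0.0, 0usize); k];
    for (&(x, y), &c) in points.iter().zip(labels) {
        sum[c].0 += x;
        sum[c].1 += y;
        sum[c].2 += 1;
    }
    sum.into_iter()
        .map(|(x, y, n)| if n > 0 { (x / n as f64, y / n as f64) } else { (0.0, 0.0) })
        .collect()
}

fn main() {
    // Two obvious geographic clusters of nodes.
    let pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)];
    let mut centers = vec![(0.0, 0.0), (10.0, 10.0)];
    for _ in 0..5 {
        let labels = assign(&pts, &centers);
        centers = recenter(&pts, &labels, 2);
    }
    let labels = assign(&pts, &centers);
    assert_eq!(labels[0], labels[1]);
    assert_eq!(labels[2], labels[3]);
    assert_ne!(labels[0], labels[2]);
}
```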
Add min-cut shard strategy and cross-shard edge diagnostics
Min-cut uses recursive bisection with Kernighan-Lin refinement to minimize cross-shard edge count. Also logs cross-shard edge count and CMB lookahead at startup for diagnostics.

Shard strategy benchmark summary (6 shards, 100 slots, realistic.yaml):

| Strategy              | Wall  | User | Cross-shard | Lookahead | Sizes                     |
|-----------------------|-------|------|-------------|-----------|---------------------------|
| (none, 1 shard)       | 9m19s | 36m  | —           | —         | [3000]                    |
| zero-latency-clusters | 3m48s | 39m  | 82.0%       | 1ms       | [500x6]                   |
| min-latency-clusters  | 4m51s | 43m  | 55.9%       | 8ms       | [600,600,600,600,300,300] |
| geographic            | 4m18s | 43m  | 61.9%       | 1ms       | [796,743,703,758]         |
| min-cut               | 4m45s | 43m  | 80.5%       | 1ms       | [750,750,375x4]           |

For uniformly connected topologies, balance dominates — zero-latency-clusters wins despite 82% cross-shard edges. Strategies optimizing for fewer cross-shard edges or higher lookahead create imbalance that negates their gains. Would benefit from topologies with natural clusters.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
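The cross-shard edge diagnostic logged at startup amounts to a simple count over the topology; a minimal sketch with illustrative names:

```rust
/// Percentage of topology edges whose endpoints land in different shards.
/// `shard_of[n]` gives the shard id assigned to node `n`.
fn cross_shard_pct(edges: &[(usize, usize)], shard_of: &[usize]) -> f64 {
    let cross = edges
        .iter()
        .filter(|&&(a, b)| shard_of[a] != shard_of[b])
        .count();
    100.0 * cross as f64 / edges.len() as f64
}

fn main() {
    // A 4-cycle split into two shards cuts exactly two of four edges.
    let edges = [(0, 1), (1, 2), (2, 3), (3, 0)];
    let shard_of = [0, 0, 1, 1];
    assert_eq!(cross_shard_pct(&edges, &shard_of), 50.0);
}
```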
Refactor: extract actor engine to actor.rs, clean up sim.rs
- Extract actor-model simulation (NodeListWrapper, ActorSimulation, init_nodes, run logic) into sim/actor.rs
- sim.rs is now a thin dispatch layer: Simulation newtype wrapping SimulationInner enum (Actor vs Sequential)
- Unify sequential single-shard and multi-shard builders into a single build_typed() function
- Group cross-shard state into CrossShardState sub-struct
- build() takes event_sender directly, creates its own infrastructure
- Remove dead code (per_shard_node_configs, init_node_impls)
- Net -407 lines across sim.rs + sequential.rs

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Spawn each shard as independent tokio task using CMB conservative PDES
Each shard (ClockCoordinator + NetworkCoordinator + TxProducer) now runs as its own tokio::spawn'd task, enabling true parallel execution across cores.

Key changes:

- Replace select_all with tokio::spawn per shard (Simulation::run takes self)
- Cross-shard messages route directly NC-to-NC via delivery channels
- Target NC handles timing locally via its own Connection (no broker)
- CMB ceiling: min(peer.time + min_latency) with null message advancement
- Notified::enable() prevents missed notifications across concurrent tasks
- TX generation rate scaled by shard_count for consistent output
- Delete broker.rs (replaced by direct NC-to-NC routing)

4-shard runs ~2x faster than 1-shard with matching simulation results.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
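The CMB (Chandy-Misra-Bryant) ceiling above can be sketched directly: a shard may safely advance to the minimum over incoming peers of that peer's reported clock plus the minimum link latency, since no peer can produce a message arriving earlier; null messages exist to keep those reported clocks advancing when a peer has nothing to send. A minimal sketch, with illustrative names.

```rust
/// Conservative time ceiling for one shard.
/// `peers` holds (peer_reported_time, min_link_latency_to_us) per
/// incoming channel; with no peers the shard is unconstrained.
fn cmb_ceiling(peers: &[(u64, u64)]) -> u64 {
    peers
        .iter()
        .map(|&(t, lat)| t + lat)
        .min()
        .unwrap_or(u64::MAX)
}

fn main() {
    // Peer A at t=100 over a 1ms link; peer B at t=90 over an 8ms link.
    // Nothing from either peer can arrive before t=98, so local events
    // up to t=98 may be processed without risking causality violations.
    let ceiling = cmb_ceiling(&[(100, 1), (90, 8)]);
    assert_eq!(ceiling, 98);
}
```

This is also why the min-latency-clusters strategy targets a larger minimum cross-shard latency: a bigger `lat` term raises the ceiling and lets each shard run further ahead between synchronizations.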
Replace sequential-engine bool with engine enum, add turbo.yaml preset
Rename `sequential-engine: true/false` to `engine: sequential | actor` with actor as the default to avoid surprises. Add parameters/turbo.yaml convenience preset (sequential engine, 6 shards, zero-latency-clusters) for ~5x speedup. Update CLAUDE.md and README.md with engine and shard documentation. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Make parallel-threshold configurable, default 10, add to schema
Extract the hardcoded PARALLEL_THRESHOLD constant into a configurable parallel-threshold parameter (default 10, was 32). Add engine and parallel-threshold to config.schema.json. Disable rayon in the determinism test since event channel ordering is non-deterministic under parallel execution. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Refactor: extract shared CPU task and tx generation logic to common.rs/tx.rs
Deduplicate code between actor (driver.rs) and sequential engines:

- Extract NodeEvent, CpuTaskWrapper, schedule_cpu_task, complete_cpu_subtask into new common.rs shared module
- Extract TxGeneratorCore from both TransactionProducer (actor) and TxGenerator (sequential) into tx.rs; delete TxGenerator entirely
- TransactionProducer now wraps TxGeneratorCore as a thin async actor
- WeightedLookup made pub(super) for reuse

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add tx backlog instrumentation and configurable cap
Track the maximum tx backlog length (overflow queue behind the mempool) via events, and add a leios-tx-backlog-max-size config parameter that applies backpressure on the tx generator when the backlog is full. Peer transactions are unaffected by the cap. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add configurable peer tx backlog cap, rename tx_backlog to tx_generated_backlog
The peer backlog queue was unbounded and could cause memory explosion. Add leios-tx-peer-backlog-max-size config (null = unbounded) to cap it independently. Rename the existing backlog config from leios-tx-backlog-max-size to leios-tx-generated-backlog-max-size for clarity. Peer txs dropped due to a full backlog are tracked separately as PeerBacklogFull. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
Add sequential DES engine with rayon BSP parallelism
Replace the actor-based simulation with a synchronous event loop for single-shard runs. The sequential engine eliminates tokio coordination overhead (channels, oneshot allocs, task scheduling) by processing events directly from a global priority queue.

Events at the same timestamp are batched and processed in parallel across nodes using rayon, following a Bulk Synchronous Parallel model: pop batch → resolve deliveries → parallel node compute → apply results.

Enabled by default for single-shard (sequential-engine: true in config). Falls back to sequential processing for small batches (<32 events). The actor engine remains available via sequential-engine: false.

Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
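The batching step of the BSP loop can be sketched as draining every event that shares the earliest timestamp from a global min-heap; in the real engine the resulting batch would then be dispatched to nodes (in parallel via rayon above the size threshold) before the next pop. Event payloads here are just (time, node) pairs for illustration; names are not the simulator's.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Drain all events sharing the earliest timestamp as one BSP superstep.
/// `Reverse` turns the max-heap into a min-heap ordered by time.
fn pop_batch(queue: &mut BinaryHeap<Reverse<(u64, usize)>>) -> Vec<(u64, usize)> {
    let mut batch = Vec::new();
    if let Some(&Reverse((t, _))) = queue.peek() {
        while let Some(&Reverse((t2, _))) = queue.peek() {
            if t2 != t {
                break; // later timestamps wait for the next superstep
            }
            batch.push(queue.pop().unwrap().0);
        }
    }
    batch
}

fn main() {
    let mut q: BinaryHeap<Reverse<(u64, usize)>> =
        [(5, 0), (5, 2), (7, 1)].into_iter().map(Reverse).collect();
    let batch = pop_batch(&mut q);
    assert_eq!(batch.len(), 2); // both t=5 events processed together
    assert!(batch.iter().all(|&(t, _)| t == 5));
    assert_eq!(q.peek(), Some(&Reverse((7, 1)))); // t=7 left for next batch
}
```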