[Hourly commit-activity chart]
3,784 commits this week (Apr 28, 2026 - May 05, 2026)
asteria-game: absorb container-stop signals in driver wrappers
The d3e3a89 run still hit Always:zero-exit on
stub/parallel_driver_asteria_player.sh. In the failing example,
fault_injector applied node/stop to asteria-game (max_duration
6.92s) while two driver scripts were running inside, and they
inherited signal-induced exit codes:

  3680.770  FAULT stop node target=[asteria-game] dur=6.92
  3680.813  cleanup asteria-game
  3692.112  anytime_asteria_admin_singleton.sh rc=137  (SIGKILL)
  3692.139  parallel_driver_asteria_player.sh   rc=255  (aborted)

Same root mechanism as the eventually_alive cold-start bug: a
container-stop fault races a script running inside the container,
just on the front side instead of the back side.

Add sdk_run_signal_safe to helper_sdk.sh: it runs a binary, absorbs
exit codes 129/137/143/124/255 into an sdk_unreachable signal plus
exit 0, and propagates everything else unchanged. Apply it to the
four driver scripts that previously used `exec /bin/<binary>`:
parallel_driver_asteria_player, anytime_asteria_admin_singleton,
serial_driver_asteria_bootstrap, finally_asteria_consistency.
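
A minimal sketch of the wrapper's shape, assuming a bash
helper_sdk.sh (the emit helper is hypothetical; the real function
may differ in detail):

  sdk_run_signal_safe() {
    "$@"
    local rc=$?
    case "$rc" in
      129|137|143|124|255)
        # Signal-derived or timeout exit: the container was stopped
        # underneath the binary, not a real binary failure. Report
        # it and succeed so Always:zero-exit holds.
        sdk_emit_signal sdk_unreachable "rc=$rc"  # assumed helper
        return 0
        ;;
      *)
        return "$rc"  # real binary errors still fail the script
        ;;
    esac
  }

Driver scripts then call `sdk_run_signal_safe /bin/<binary> ...` in
place of `exec /bin/<binary>`.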

Real binary errors (any non-zero exit outside the signal set) still
fail the script as before.
fix(database): treat BlockByHash index miss as hard miss
The legacy iterator fallback in BlockByHashTxn ran a full block-blob
prefix scan on every hash-index miss, dominating fork-resolution CPU
during preview catch-up (11.2% cumulative, 370 MB/min churn; issue
#2105).

Hash-index entries are written for every block since #1915, so a miss
on a healthy DB means the hash is unknown to us. Returning
ErrBlockNotFound directly lets findPeerForkPath rotate to the next peer
without a worst-case scan. Real backend errors are still surfaced.

Add atomic hit/miss counters (BlockByHashStats) so operators can track
the false-fallback rate and decide whether a pre-#1915 index back-fill
is needed — exactly the metric the issue asks for.
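
A sketch of the resulting shape (imports: errors, sync/atomic;
ErrIndexMiss and lookupHashIndex are assumed names for illustration,
the rest follow the message):

  var blockByHashHits, blockByHashMisses atomic.Uint64

  func BlockByHashTxn(txn *Txn, hash []byte) (*Block, error) {
      blk, err := txn.lookupHashIndex(hash)  // assumed accessor
      switch {
      case errors.Is(err, ErrIndexMiss):
          // Every block is hash-indexed since #1915, so a miss on
          // a healthy DB is a hard miss: skip the prefix scan.
          blockByHashMisses.Add(1)
          return nil, ErrBlockNotFound
      case err != nil:
          return nil, err  // real backend errors still surface
      }
      blockByHashHits.Add(1)
      return blk, nil
  }

  // BlockByHashStats exposes the counters operators need to judge
  // whether a pre-#1915 index back-fill is worthwhile.
  func BlockByHashStats() (hits, misses uint64) {
      return blockByHashHits.Load(), blockByHashMisses.Load()
  }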

Signed-off-by: SAY-5 <[email protected]>
fix(event): right-size per-subscriber channel buffer
newChannelSubscriber allocated EventQueueSize (100k) slots per
subscriber, dominating the steady-state heap (~190 subscribers, ~192
MB / 45% of in-use heap on a serving preview node). The 100k buffer
was sized for the worst-case bulk-sync burst from #1556 / #1914 but
stayed allocated for the lifetime of every subscription, including
quiescent peergov, governance, and housekeeping callbacks.

Subscribe / SubscribeFunc now default to DefaultSubscriberBuffer
(1024) and new SubscribeWithBuffer / SubscribeFuncWithBuffer variants
let callers opt into a larger size. The bursty consumers in
ledger/state.go (chainsync, blockfetch, chain-update) and the
mempool's chain-update consumer opt into EventQueueSize so the
catch-up burst headroom that motivated #1914 is preserved exactly
where it is needed.
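
Sketch of the new surface (EventBus, Event, and the registration
helper are assumed for illustration):

  const DefaultSubscriberBuffer = 1024

  // Subscribe keeps its signature but now uses the small default.
  func (b *EventBus) Subscribe(topic string) <-chan Event {
      return b.SubscribeWithBuffer(topic, DefaultSubscriberBuffer)
  }

  // SubscribeWithBuffer lets bursty consumers opt into headroom,
  // e.g. SubscribeWithBuffer(topic, EventQueueSize) for chainsync.
  func (b *EventBus) SubscribeWithBuffer(topic string, size int) <-chan Event {
      ch := make(chan Event, size)  // previously always 100k slots
      b.register(topic, ch)  // assumed registration helper
      return ch
  }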

Fixes #2106.

Signed-off-by: SAY-5 <[email protected]>
fix(pollux): check nbf claim in JWT.verify
JWT.verify did not validate the nbf (not-before) claim, so JWTs with
an nbf in the future were incorrectly considered valid. This is a
security issue per RFC 7519 Section 4.1.5.

Added an explicit nbf check after JWT decode: if nbf is present and
the current time is before it, verify() returns false. JWTs without
an nbf claim keep the previous behavior (no nbf enforced).
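
The rule itself is small; an illustrative Go sketch (imports: time;
names are hypothetical, not pollux's actual code):

  // A token is invalid before its nbf instant (RFC 7519 Section
  // 4.1.5); a token without nbf is unconstrained, as before.
  func nbfValid(claims map[string]any, now time.Time) bool {
      raw, ok := claims["nbf"]
      if !ok {
          return true  // no nbf claim: nothing to enforce
      }
      nbf, ok := raw.(float64)  // JSON numbers decode as float64
      if !ok {
          return false  // malformed nbf: reject
      }
      return !now.Before(time.Unix(int64(nbf), 0))
  }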

This is a sister fix to #489/#550 (which addressed the exp claim).

Closes #551

Signed-off-by: Seydi Charyyev <[email protected]>