fix(consensus): Enforce Verifier L1 Confs In Derivation #2259
Conversation
The verifier_l1_confs config delayed the L1 head signal sent to the derivation actor, but the pipeline's AlloyChainProvider fetched L1 blocks directly with no upper bound. The safe head was unaffected because the signal was only used as a wake-up trigger — the pipeline could still derive from the latest L1 data. Add a ConfDepthProvider wrapper (matching op-node's ConfDepth pattern) that intercepts block_info_by_number calls and returns a temporary BlockNotFound error when the requested block is beyond l1_head minus conf_depth. The L1 watcher now updates a shared Arc<AtomicU64> with the real L1 head on every poll, and the pipeline reads it to enforce the cutoff.
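The wrapper described above can be sketched as follows. This is a minimal illustration, not the crate's actual API: the `ChainProvider` trait, `BlockInfo` struct, and `ProviderError` enum are simplified stand-ins for the real types, and `MockProvider` exists only for the usage example.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

/// Simplified stand-in for the pipeline's block metadata.
#[derive(Debug, Clone, PartialEq)]
struct BlockInfo {
    number: u64,
}

/// Hypothetical error type: `BlockNotFound` must be treated as temporary
/// by the pipeline so derivation yields and retries on a later poll.
#[derive(Debug, PartialEq)]
enum ProviderError {
    BlockNotFound(u64),
}

/// Simplified provider trait standing in for the real chain-provider API.
trait ChainProvider {
    fn block_info_by_number(&self, number: u64) -> Result<BlockInfo, ProviderError>;
}

/// Wrapper that refuses to serve blocks above `l1_head - conf_depth`,
/// mirroring op-node's `ConfDepth` pattern.
struct ConfDepthProvider<P> {
    inner: P,
    conf_depth: u64,
    /// Real L1 head, updated by the watcher on every poll.
    l1_head: Arc<AtomicU64>,
}

impl<P: ChainProvider> ChainProvider for ConfDepthProvider<P> {
    fn block_info_by_number(&self, number: u64) -> Result<BlockInfo, ProviderError> {
        let head = self.l1_head.load(Ordering::Relaxed);
        // saturating_sub avoids underflow while the chain is shorter than conf_depth.
        let max_allowed = head.saturating_sub(self.conf_depth);
        if number > max_allowed {
            // Temporary error: the block may exist, but is not buried deeply enough yet.
            return Err(ProviderError::BlockNotFound(number));
        }
        self.inner.block_info_by_number(number)
    }
}

/// Inner provider that always succeeds, for demonstration only.
struct MockProvider;
impl ChainProvider for MockProvider {
    fn block_info_by_number(&self, number: u64) -> Result<BlockInfo, ProviderError> {
        Ok(BlockInfo { number })
    }
}
```

With `l1_head = 100` and `conf_depth = 4`, block 96 is served while block 97 returns the temporary error until the watcher advances the shared head.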
🟡 Heimdall Review Status
…test The test was sampling safe/finalized block tags immediately after waiting for gossip-synced latest blocks. The safe head requires the batcher → L1 mining → derivation pipeline cycle, which takes longer. Both nodes showed safe=0 (genesis) because derivation hadn't confirmed any blocks yet when sampling started.
- Add an explicit 120s wait for the builder safe head to exceed 0 before sampling
- Extend sampling from 5 to 10 rounds (a 20s window) so the client's delayed derivation also has time to advance and show the lag
- Fix the BUG annotation to fire only when b > 0 && b == c (not when both are 0, which just means 'not ready yet')
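The "wait before sampling" step above can be sketched as a small polling helper. This is a hypothetical illustration of the pattern, not the test's actual code: the real test presumably queries the node's RPC, while here the safe-head source is abstracted as a closure.

```rust
use std::time::{Duration, Instant};

/// Poll `read_safe_head` until it reports a non-genesis block number or
/// `timeout` elapses. Mirrors the test's "wait for builder safe head > 0"
/// step; names and signature are illustrative.
fn wait_for_safe_head(
    mut read_safe_head: impl FnMut() -> u64,
    timeout: Duration,
    poll_interval: Duration,
) -> Result<u64, String> {
    let deadline = Instant::now() + timeout;
    loop {
        let safe = read_safe_head();
        if safe > 0 {
            // Derivation has confirmed at least one block; safe to start sampling.
            return Ok(safe);
        }
        if Instant::now() >= deadline {
            return Err("safe head still at genesis after timeout".into());
        }
        std::thread::sleep(poll_interval);
    }
}
```

Sampling only begins once this returns `Ok`, which avoids the false-positive "both heads at 0" comparison the commit fixes.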
…for CI Lower VERIFIER_L1_CONFS from 4 to 2 (4s lag instead of 8s) to reduce the time CI needs for the client to show a lagging safe head on slow L1. Increase the builder safe-head wait from 120s to 300s so the batcher→L1→derivation cycle has room to complete on resource-constrained CI runners.
…fier-l1-confs-safe-head
Review Summary
The fix correctly identifies and addresses the root cause: the derivation pipeline's `AlloyChainProvider` fetched L1 blocks with no upper bound, ignoring the configured confirmation depth.
Issues raised in inline comments (prior run, still applicable)
Architecture note
The L1 watcher's existing head-delay mechanism (fetching the block at `head - conf_depth` before signaling) now works alongside the provider-side cutoff. Test coverage is thorough — unit tests for the `ConfDepthProvider` cutoff behavior plus the devnet comparison script.
Summary
The `verifier_l1_confs` (aka `BASE_NODE_VERIFIER_L1_CONFS`) configuration was not actually constraining the derivation pipeline's view of L1. The L1 watcher delayed the head signal sent to the derivation actor, but the pipeline's `AlloyChainProvider` fetched L1 blocks directly from the RPC with no upper bound, so the safe head advanced as if no confirmation depth were set.

This adds a `ConfDepthProvider` wrapper (matching op-node's `ConfDepth` pattern) that intercepts `block_info_by_number` calls and returns a temporary error when the requested block exceeds `l1_head - conf_depth`, causing the pipeline to yield. The L1 watcher now updates a shared atomic with the real L1 head on every poll, and the pipeline reads it to enforce the cutoff. The fix also wires `BASE_NODE_VERIFIER_L1_CONFS` into the docker-compose devnet so it can be tested locally with `just devnet up` and the included `compare-heads.sh` script.
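The watcher-to-pipeline wiring can be sketched as below. This is an assumption-laden simplification: the real watcher is presumably an async task driven by an RPC poll interval, while here it is a plain thread with a fixed number of iterations and a closure standing in for the L1 RPC call.

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;
use std::time::Duration;

/// Illustrative watcher loop: on every poll it fetches the latest L1 block
/// number and publishes it to the shared atomic that the pipeline's
/// conf-depth cutoff reads. Names and signature are hypothetical.
fn spawn_watcher(
    l1_head: Arc<AtomicU64>,
    mut fetch_latest: impl FnMut() -> u64 + Send + 'static,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        // Bounded loop for the sketch; the real watcher polls indefinitely.
        for _ in 0..3 {
            let head = fetch_latest();
            // Publish the *real* head; delaying happens on the read side,
            // where the pipeline subtracts conf_depth from this value.
            l1_head.store(head, Ordering::Relaxed);
            thread::sleep(Duration::from_millis(1));
        }
    })
}
```

Keeping the atomic at the real head (and subtracting `conf_depth` at the read site) means a single shared value serves any number of readers with different depths, rather than baking the delay into the published number.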