Conversation

@Tristan-Wilson (Member)

This test validates the dynamic batch sizing behavior introduced in commit 05ac6df, which allows DA providers to signal that a message is too large without triggering a fallback to the next writer.

When a DA provider returns ErrMessageTooLarge, the batch poster should:

  1. Re-query GetMaxMessageSize() to learn the new size limit
  2. Rebuild a smaller batch that fits within the limit
  3. Post to the SAME DA provider (not fall back to calldata)

The test has two phases:

  • Phase 1: Posts ~10KB batches with initial max size of 10KB
  • Phase 2: Reduces max size to 5KB mid-stream, verifying that subsequent batches are rebuilt smaller rather than falling back

fixes NIT-4158
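
For context, a minimal sketch of the intended flow (illustrative only: DAWriter, postWithResize, and rebuild are made-up names for this sketch; only GetMaxMessageSize and ErrMessageTooLarge come from the description above, and the real batch poster code may be shaped differently):

package sketch

import (
    "context"
    "errors"
)

// Illustrative stand-ins; the real sentinel and interface live in the nitro
// daprovider / batch poster packages.
var ErrMessageTooLarge = errors.New("message too large for DA provider")

type DAWriter interface {
    Store(ctx context.Context, data []byte) ([]byte, error) // returns a DA certificate
    GetMaxMessageSize(ctx context.Context) (uint64, error)  // current size limit
}

// postWithResize retries against the SAME writer, shrinking the batch to the
// freshly queried limit; it never falls back to calldata or another writer.
func postWithResize(ctx context.Context, w DAWriter, batch []byte,
    rebuild func(batch []byte, maxSize uint64) ([]byte, error)) ([]byte, error) {
    for {
        cert, err := w.Store(ctx, batch)
        if err == nil {
            return cert, nil
        }
        if !errors.Is(err, ErrMessageTooLarge) {
            return nil, err // unrelated failure: surface it, no size-based retry
        }
        // 1. Re-query the (possibly reduced) size limit.
        maxSize, err := w.GetMaxMessageSize(ctx)
        if err != nil {
            return nil, err
        }
        // 2. Rebuild a smaller batch that fits, then 3. retry the same writer.
        if batch, err = rebuild(batch, maxSize); err != nil {
            return nil, err
        }
    }
}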

@codecov

codecov bot commented Dec 30, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 57.15%. Comparing base (416e512) to head (6a6c125).

Additional details and impacted files
@@             Coverage Diff             @@
##           master    #4183       +/-   ##
===========================================
+ Coverage   33.01%   57.15%   +24.14%     
===========================================
  Files         459      459               
  Lines       55830    55830               
===========================================
+ Hits        18430    31911    +13481     
+ Misses      34185    19108    -15077     
- Partials     3215     4811     +1596     

@github-actions

github-actions bot commented Dec 30, 2025

❌ 6 Tests Failed:

Tests completed | Failed | Passed | Skipped
4451            | 6      | 4445   | 0
View the top 3 failed tests by shortest run time
TestDataStreaming_PositiveScenario/Many_senders,_long_messages
Stack Traces | 0.160s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
        github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic.func1()
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:230 +0x14f
        created by github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic in goroutine 233
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:223 +0x85
        
    protocol_test.go:230: [] too much time has elapsed since request was signed
WARN [12-30|10:27:00.101] Served datastreaming_start               conn=127.0.0.1:46146 reqid=9 duration="151.842µs" err="too much time has elapsed since request was signed"
INFO [12-30|10:27:00.102] rpc response                             method=datastreaming_start logId=9  err="too much time has elapsed since request was signed" result={} attempt=0 args="[\"0x6953a8f3\", \"0x2e\", \"0xd9\", \"0x2695\", \"0xa\", \"0x230614dd668bc1c93bbd198305a8d9ad1e2e5b95417729e1beb7796bf70cfc5913bc88dd908b1304ad54a1089eb67ce3dedbe96d77eb51c6899e91bdd536ad9e00\"]" errorData=null
    protocol_test.go:230: goroutine 260 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.5/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x1568e70, 0xc00047e380}, {0x154f920, 0xc00111e120}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x55
        github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic.func1()
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:230 +0x14f
        created by github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic in goroutine 233
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:223 +0x85
        
    protocol_test.go:230: [] too much time has elapsed since request was signed
--- FAIL: TestDataStreaming_PositiveScenario/Many_senders,_long_messages (0.16s)
TestDataStreaming_PositiveScenario
Stack Traces | 0.190s run time
=== RUN   TestDataStreaming_PositiveScenario
--- FAIL: TestDataStreaming_PositiveScenario (0.19s)
TestVersion30
Stack Traces | 7.760s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
=== PAUSE TestVersion30
=== CONT  TestVersion30
    precompile_inclusion_test.go:94: goroutine 609805 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.5/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x4100530, 0xc0526bd500}, {0x40bd7a0, 0xc24ce93560}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x55
        github.com/offchainlabs/nitro/system_tests.Require(0xc0526bd500, {0x40bd7a0, 0xc24ce93560}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/system_tests/common_test.go:2034 +0x5d
        github.com/offchainlabs/nitro/system_tests.testPrecompiles(0xc0526bd500, 0x1e, {0xc0aca91db0, 0x6, 0x0?})
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:94 +0x371
        github.com/offchainlabs/nitro/system_tests.TestVersion30(0xc0526bd500?)
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:67 +0x798
        testing.tRunner(0xc0526bd500, 0x3d3ec30)
        	/opt/hostedtoolcache/go/1.25.5/x64/src/testing/testing.go:1934 +0xea
        created by testing.(*T).Run in goroutine 1
        	/opt/hostedtoolcache/go/1.25.5/x64/src/testing/testing.go:1997 +0x465
        
    precompile_inclusion_test.go:94: [] execution aborted (timeout = 5s)
--- FAIL: TestVersion30 (7.76s)


@pmikolajczyk41 (Member) left a comment

If I understand correctly: when the DA writer returns ErrMessageTooLarge, we shouldn't fall back, even if other DA systems (calldata, AnyTrust) are available; we should just rebuild batches for the lower limit. Therefore I think we should actually enable the batch poster's fallback to e.g. calldata in this test, and then assert that it didn't fall back.
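
A sketch of what that assertion could look like, assuming the test collects the raw sequencer messages it posted (postedBatches and assertNoCalldataFallback are hypothetical; daprovider.IsBrotliMessageHeaderByte is the helper the test already uses below, and the testing / daprovider imports are already in the test file):

// Hypothetical helper: with the calldata fallback left enabled in the node
// config, assert that none of the posted sequencer messages actually used it,
// i.e. no message starts with a brotli/calldata header byte.
func assertNoCalldataFallback(t *testing.T, postedBatches [][]byte) {
    t.Helper()
    for i, msg := range postedBatches {
        if len(msg) == 0 {
            continue
        }
        if daprovider.IsBrotliMessageHeaderByte(msg[0]) {
            t.Errorf("batch %d fell back to calldata (header byte 0x%02x)", i, msg[0])
        }
    }
}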

Comment on lines +1129 to +1131
// Verify follower synced
_, err = l2B.Client.BlockNumber(ctx)
Require(t, err)

Q1: how does this ensure that the second node actually synced?
Q2: why don't we have a similar check in the 2nd phase?
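
For Q1, a sketch of a stronger check: poll until the follower has actually caught up to the sequencer's head, rather than only checking that the follower RPC answers (builder.L2.Client as the sequencer-side client is an assumption about this harness; the timeout value is arbitrary):

// Sketch: wait for the follower (l2B) to reach the sequencer's head block.
target, err := builder.L2.Client.BlockNumber(ctx) // assumed sequencer-side client
Require(t, err)
for deadline := time.Now().Add(20 * time.Second); ; {
    synced, err := l2B.Client.BlockNumber(ctx)
    Require(t, err)
    if synced >= target {
        break
    }
    if time.Now().After(deadline) {
        t.Fatalf("follower stuck at block %d, want >= %d", synced, target)
    }
    time.Sleep(100 * time.Millisecond)
}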

Comment on lines +1119 to +1127
// Create L1 blocks to trigger batch posting
for i := 0; i < 30; i++ {
SendWaitTestTransactions(t, ctx, builder.L1.Client, []*types.Transaction{
builder.L1Info.PrepareTx("Faucet", "User", 30000, big.NewInt(1e12), nil),
})
}

// Wait for batch to post
time.Sleep(time.Second * 2)

Comment 1: we can use the AdvanceL1 utility instead of a loop here.
Comment 2 (also covering the follower-node syncing): could we reuse the checkBatchPosting method here? I think it would be beneficial to keep the batch-posting check as standardized as possible. If I'm not mistaken, checkBatchPosting currently relies on MaxDelay=0, but maybe we can force posting with builder.L2.ConsensusNode.BatchPoster.MaybePostSequencerBatch(ctx) and some other config flag?
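
Independent of whether checkBatchPosting itself can be reused, the fixed time.Sleep could become an explicit wait; a sketch, assuming the batch count is readable via the consensus node's inbox tracker (the GetBatchCount accessor path is a guess, not verified against this harness):

// Sketch: record the batch count before triggering posting, then poll until
// it grows, instead of sleeping for a fixed 2 seconds.
startCount, err := builder.L2.ConsensusNode.InboxTracker.GetBatchCount() // assumed accessor
Require(t, err)
// ... create the L1 blocks / force MaybePostSequencerBatch as suggested above ...
for deadline := time.Now().Add(30 * time.Second); ; {
    count, err := builder.L2.ConsensusNode.InboxTracker.GetBatchCount()
    Require(t, err)
    if count > startCount {
        break
    }
    if time.Now().After(deadline) {
        t.Fatal("timed out waiting for a new batch to be posted")
    }
    time.Sleep(250 * time.Millisecond)
}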

if payloadSize > 6_000 {
t.Errorf("Phase 2: CustomDA payload size %d exceeds expected max ~5KB", payloadSize)
}
} else if daprovider.IsBrotliMessageHeaderByte(headerByte) {

I think we should have just an else here and fail if there's a batch that is not AltDA; same in the first-phase loop.
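
Something like the following shape, in both phase loops (sketch only; isCustomDA stands for whatever CustomDA header-byte check the branch above already uses):

// Keep the CustomDA size assertion and make any other batch a hard failure,
// instead of only special-casing brotli.
if isCustomDA {
    if payloadSize > 6_000 {
        t.Errorf("Phase 2: CustomDA payload size %d exceeds expected max ~5KB", payloadSize)
    }
} else {
    t.Fatalf("Phase 2: unexpected non-CustomDA batch (header byte 0x%02x)", headerByte)
}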
