
Conversation

@pmikolajczyk41
Member

Make the execution.ExecutionRecorder interface use the promise API:

  • it aligns the interface with ExecutionClient
  • it makes the interface easier to expose as an RPC API (especially since the member methods no longer take a context argument)

part of NIT-4063
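
For reference, a minimal sketch of the change in shape (type names taken from the snippets in this thread; the real interface may contain more methods):

// Before: blocking method, every caller threads a context through the call.
type ExecutionRecorder interface {
	RecordBlockCreation(
		ctx context.Context,
		pos arbutil.MessageIndex,
		msg *arbostypes.MessageWithMetadata,
		wasmTargets []rawdb.WasmTarget,
	) (*RecordResult, error)
}

// After: promise-returning method; the context is supplied later via Await,
// which keeps member methods context-free and aligned with ExecutionClient.
type ExecutionRecorder interface {
	RecordBlockCreation(
		pos arbutil.MessageIndex,
		msg *arbostypes.MessageWithMetadata,
		wasmTargets []rawdb.WasmTarget,
	) containers.PromiseInterface[*RecordResult]
}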

@codecov

codecov bot commented Dec 30, 2025

Codecov Report

❌ Patch coverage is 68.47826% with 29 lines in your changes missing coverage. Please review.
✅ Project coverage is 33.16%. Comparing base (330bf29) to head (25edd6b).

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4186      +/-   ##
==========================================
+ Coverage   32.94%   33.16%   +0.22%     
==========================================
  Files         459      459              
  Lines       55830    55832       +2     
==========================================
+ Hits        18394    18519     +125     
+ Misses      34212    34075     -137     
- Partials     3224     3238      +14     

@github-actions

github-actions bot commented Dec 30, 2025

❌ 5 Tests Failed:

Tests completed | Failed | Passed | Skipped
           4448 |      5 |   4443 |       0
View the top 3 failed tests by shortest run time
TestDataStreaming_PositiveScenario/Many_senders,_long_messages
Stack Traces | 0.170s run time
=== RUN   TestDataStreaming_PositiveScenario/Many_senders,_long_messages
WARN [12-30|20:17:55.630] Served datastreaming_start               conn=127.0.0.1:33180 reqid=2 duration="113.282µs" err="we have already seen this request; aborting replayed protocol"
INFO [12-30|20:17:55.631] rpc response                             method=datastreaming_start logId=2 err="we have already seen this request; aborting replayed protocol" result={} attempt=0 args="[\"0x69543373\", \"0x2a\", \"0xd9\", \"0x22f8\", \"0xa\", \"0x3f7dd4c6d471adb88e336339262fbde2a9e5ceb43acb31fcf0d704087b9244d1639d107429d1785ff5c693d5cc1a600b4acf0e07d4fe277ee5f60abaf23e6e4e00\"]" errorData=null
    protocol_test.go:230: goroutine 245 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.5/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x161f150, 0xc0006a6540}, {0x16059a0, 0xc00080c090}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x9f
        github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic.func1()
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:230 +0x19b
        created by github.com/offchainlabs/nitro/daprovider/data_streaming.testBasic in goroutine 80
        	/home/runner/work/nitro/nitro/daprovider/data_streaming/protocol_test.go:223 +0x85
        
    protocol_test.go:230: [] we have already seen this request; aborting replayed protocol
--- FAIL: TestDataStreaming_PositiveScenario/Many_senders,_long_messages (0.17s)
TestDataStreaming_PositiveScenario
Stack Traces | 0.190s run time
=== RUN   TestDataStreaming_PositiveScenario
--- FAIL: TestDataStreaming_PositiveScenario (0.19s)
TestVersion40
Stack Traces | 6.040s run time
... [CONTENT TRUNCATED: Keeping last 20 lines]
ERROR[12-31|00:35:41.239] a large batch posting backlog exists     recentlyHitL1Bounds=false currentPosition=67  messageCount=155 messagesPerBatch=1 postedMessages=1 unpostedMessages=88  batchBacklogEstimate=88
    precompile_inclusion_test.go:94: goroutine 610076 [running]:
        runtime/debug.Stack()
        	/opt/hostedtoolcache/go/1.25.5/x64/src/runtime/debug/stack.go:26 +0x5e
        github.com/offchainlabs/nitro/util/testhelpers.RequireImpl({0x40ff7b0, 0xc07652a380}, {0x40bc860, 0xc11116e450}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/util/testhelpers/testhelpers.go:29 +0x55
        github.com/offchainlabs/nitro/system_tests.Require(0xc07652a380, {0x40bc860, 0xc11116e450}, {0x0, 0x0, 0x0})
        	/home/runner/work/nitro/nitro/system_tests/common_test.go:2034 +0x5d
        github.com/offchainlabs/nitro/system_tests.testPrecompiles(0xc07652a380, 0x28, {0xc0fe8addf8, 0x5, 0x39?})
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:94 +0x371
        github.com/offchainlabs/nitro/system_tests.TestVersion40(0xc07652a380?)
        	/home/runner/work/nitro/nitro/system_tests/precompile_inclusion_test.go:71 +0x64b
        testing.tRunner(0xc07652a380, 0x3d3e050)
        	/opt/hostedtoolcache/go/1.25.5/x64/src/testing/testing.go:1934 +0xea
        created by testing.(*T).Run in goroutine 1
        	/opt/hostedtoolcache/go/1.25.5/x64/src/testing/testing.go:1997 +0x465
        
    precompile_inclusion_test.go:94: [] execution aborted (timeout = 5s)
--- FAIL: TestVersion40 (6.04s)
log: len 7.04K vs 21.12K


@bragaigor
Contributor

Do we still need to implement ExecutionRecorder methods for ExecutionRPCServer? Something like:

func (c *ExecutionRPCServer) RecordBlockCreation(
	ctx context.Context,
	pos arbutil.MessageIndex,
	msg *arbostypes.MessageWithMetadata,
	wasmTargets []rawdb.WasmTarget,
) (*execution.RecordResult, error) {
	return c.executionRecorder.RecordBlockCreation(pos, msg, wasmTargets).Await(ctx)
}

and the same for PrepareForRecord? But then we would need to add executionRecorder to ExecutionRPCServer or modify ExecutionClient. Or would that be part of another PR? CC: @diegoximenes
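
For completeness, a sketch of what the PrepareForRecord wrapper could look like, assuming its promise form takes a start/end message-index range and resolves to an empty struct (unverified; names follow the snippet above):

func (c *ExecutionRPCServer) PrepareForRecord(
	ctx context.Context,
	start, end arbutil.MessageIndex,
) error {
	// Await resolves the recorder's promise using the RPC handler's context.
	_, err := c.executionRecorder.PrepareForRecord(start, end).Await(ctx)
	return err
}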

@pmikolajczyk41
Member Author

@bragaigor yes, definitely, but I didn't want to do it in the same PR, to keep the changes small and granular; this PR is just one part of NIT-4063 (see the description)

@bragaigor
Contributor

@pmikolajczyk41 got it, apologies, I misread the description. I also prefer smaller, incremental PRs.

@bragaigor
Contributor

LGTM

	prevHeader = r.execEngine.bc.GetHeaderByNumber(uint64(blockNum - 1))
	if prevHeader == nil {
		return nil, fmt.Errorf("pos %d prevHeader not found", pos)
...
) containers.PromiseInterface[*execution.RecordResult] {
Contributor

Instead of calling stopwaiter.LaunchPromiseThread here, you can call it in ExecutionNode.RecordBlockCreation.
That would follow the same pattern ExecutionNode/ExecutionClient already use today, e.g. in ExecutionNode.ShouldTriggerMaintenance.
It would also shrink the diff in this PR. See the sketch below.
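
Something along these lines (sketch only; the ExecEngine/Recorder field names and the unexported recordBlockCreationImpl helper are assumed for illustration):

func (n *ExecutionNode) RecordBlockCreation(
	pos arbutil.MessageIndex,
	msg *arbostypes.MessageWithMetadata,
	wasmTargets []rawdb.WasmTarget,
) containers.PromiseInterface[*execution.RecordResult] {
	// Launch on the engine's StopWaiter (a ThreadLauncher); the recorder
	// itself stays a plain blocking, ctx-taking call.
	return stopwaiter.LaunchPromiseThread(n.ExecEngine, func(ctx context.Context) (*execution.RecordResult, error) {
		return n.Recorder.recordBlockCreationImpl(ctx, pos, msg, wasmTargets)
	})
}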

	if prevHeader == nil {
		return nil, fmt.Errorf("pos %d prevHeader not found", pos)
...
) containers.PromiseInterface[*execution.RecordResult] {
	return stopwaiter.LaunchPromiseThread(r.execEngine, func(ctx context.Context) (*execution.RecordResult, error) {
@diegoximenes
Contributor
Dec 31, 2025

Nothing to do about this right now; this is more a question for @tsahee when he reviews this PR 🙂.
Since Promises/LaunchPromiseThread/etc. are not a native Go way of handling concurrency, it is not clear to me what the ideal approach here would be.

For example, it is not clear to me why stopwaiter.LaunchPromiseThread accepts a ThreadLauncher (which StopWaiter "implements"), while StopWaiter.LaunchThread is a plain method.

Here, ExecutionEngine's StopWaiter is being used to launch a thread in BlockRecorder, and the ctx tied to that thread is used to interact with RecordingDatabase.
I'm not sure whether there is an issue with this.
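
For reference, the two shapes being compared, paraphrased (the exact signatures live in util/stopwaiter):

// A plain method: needs no type parameters, so it can live on StopWaiter.
func (s *StopWaiter) LaunchThread(foo func(context.Context))

// Generic in T: Go methods cannot declare their own type parameters, so a
// promise launcher has to be a package-level function, and ThreadLauncher is
// the interface it takes so any StopWaiter can be passed in.
func LaunchPromiseThread[T any](
	s ThreadLauncher,
	foo func(context.Context) (T, error),
) containers.PromiseInterface[T]

One plausible explanation for the asymmetry, then, is simply that LaunchPromiseThread could not be a method even if we wanted it to be.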
