[FLINK-39533][s3] Use abort() instead of drain on close/seek when remaining bytes exceed threshold in NativeS3InputStream #28012
Conversation
cc: @gaborgsomogyi
```java
 */
private void releaseStream() {
    // Drop the wrapper without closing it; closing would trigger the drain path.
    bufferedStream = null;
```
What makes sure that system resources which are normally freed in close() will be handled properly?
In the revised approach, both bufferedStream.close() and currentStream.close() are still called. The abort() call placed before them terminates the underlying HTTP connection, so when BufferedInputStream.close() delegates to ResponseInputStream.close(), the connection is already dead, and no drain occurs. BufferedInputStream itself holds only a byte[] heap buffer with no native resources. The JVM GCs it upon dereferencing. The currentStream.close() call handles any remaining SDK resource cleanup (connection pool return, etc.) after the abort.
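The abort-before-close ordering described above can be sketched with a fake stand-in for the SDK's ResponseInputStream (the class and field names here are illustrative, not the PR's actual code):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

// Minimal sketch of the release order described above. FakeResponseInputStream
// is a stand-in for software.amazon.awssdk.core.ResponseInputStream.
public class ReleaseOrderSketch {
    static class FakeResponseInputStream extends ByteArrayInputStream {
        boolean aborted;

        FakeResponseInputStream(byte[] buf) {
            super(buf);
        }

        void abort() {
            aborted = true; // real SDK: terminates the underlying HTTP connection
        }

        @Override
        public void close() throws IOException {
            // Real HttpClient close() would drain remaining bytes here
            // unless the connection was already aborted.
            super.close();
        }
    }

    public static void main(String[] args) throws IOException {
        FakeResponseInputStream currentStream = new FakeResponseInputStream(new byte[1024]);
        BufferedInputStream bufferedStream = new BufferedInputStream(currentStream);
        bufferedStream.read(); // only a prefix of the object was consumed

        // Release order from the comment above: abort first, then close.
        currentStream.abort();
        bufferedStream.close(); // delegates to currentStream.close(); no drain occurs
        System.out.println(currentStream.aborted);
    }
}
```

The close() call still runs, so any non-connection resources are released normally; abort() only ensures the connection is dead before the drain path could kick in.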
Can we test that somehow? I mean, missing this could cause quite some leaks.
Fair point, tried adding a test for it.
TrackingInputStream now tracks three things: wasAborted, wasClosed, and wasAbortedBeforeClose (set inside close() based on whether abort already ran).
New test closeAbortsAndThenClosesUnderlyingStream asserts all three after the stream is closed, and I've applied the same three assertions on the seek() and skip() paths too, since those also swap out the stream mid-life.
The three together catch both failure modes:
- drop close() → wasClosed fails (connection pool leak)
- reverse the order → wasAbortedBeforeClose fails (drain regression)
- drop abort() → wasAborted fails
So any future refactor that reintroduces the leak will break exactly one of them.
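A sketch of the TrackingInputStream helper with the three flags described above (a plausible shape under the stated assumptions, not necessarily the test's exact code):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical sketch of the TrackingInputStream test helper: abort() stands
// in for ResponseInputStream.abort(); close() records whether abort ran first.
class TrackingInputStream extends FilterInputStream {
    boolean wasAborted;
    boolean wasClosed;
    boolean wasAbortedBeforeClose;

    TrackingInputStream(InputStream in) {
        super(in);
    }

    void abort() {
        wasAborted = true;
    }

    @Override
    public void close() throws IOException {
        // Record ordering at close time: abort must already have happened.
        wasAbortedBeforeClose = wasAborted;
        wasClosed = true;
        super.close();
    }

    public static void main(String[] args) throws IOException {
        TrackingInputStream s =
                new TrackingInputStream(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        // Correct release order: abort first, then close.
        s.abort();
        s.close();
        System.out.println(s.wasAborted && s.wasClosed && s.wasAbortedBeforeClose);
    }
}
```

Dropping close(), dropping abort(), or swapping the order each flips exactly one of the three flags, which is what makes the assertions pinpoint the failure mode.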
```java
byte[] tail = new byte[20];
assertThat(in.read(tail, 0, 20)).isEqualTo(6);
assertThat(in.getPos()).isEqualTo(256);
// read past EOF
```
AFAIK Hadoop throws in such a case, doesn't it?
Hadoop's S3AInputStream.seek() throws EOFException for negative positions with the message "Cannot seek to a negative offset". Here the implementation throws IOException with the message "Cannot seek to negative position: ", which matches the Hadoop contract since EOFException extends IOException. The test verifies isInstanceOf(IOException.class), so it covers both.
I mean more like: it throws EOFException when in.read() is called but there is no data.
I dug into S3AInputStream and the FSDataInputStream JavaDoc ("Can't seek past the end of the stream") and aligned the implementation accordingly. Three changes:
- seek() now throws EOFException, not a bare IOException
- Tightened the existing negative-seek assertion so it cannot regress to the more permissive IOException
- New tests: seekPastEofThrowsEofException and readAtEofReturnsMinusOne
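The resulting seek/read semantics can be sketched as follows (field names and messages are illustrative, not the PR's exact code):

```java
import java.io.EOFException;
import java.io.IOException;

// Hedged sketch of the bounds checks implied above: seek() rejects negative
// positions and positions past EOF with EOFException; read() at EOF returns -1.
public class SeekBoundsSketch {
    private long pos;
    private final long contentLength;

    SeekBoundsSketch(long contentLength) {
        this.contentLength = contentLength;
    }

    void seek(long desired) throws IOException {
        if (desired < 0) {
            throw new EOFException("Cannot seek to negative position: " + desired);
        }
        if (desired > contentLength) {
            // FSDataInputStream contract: "Can't seek past the end of the stream"
            throw new EOFException("Cannot seek past EOF: " + desired);
        }
        pos = desired;
    }

    int read() {
        // Reading at EOF returns -1 rather than throwing.
        return pos >= contentLength ? -1 : 0;
    }

    public static void main(String[] args) throws IOException {
        SeekBoundsSketch in = new SeekBoundsSketch(10);
        in.seek(10); // seeking exactly to EOF is allowed
        System.out.println(in.read());
        try {
            in.seek(-1);
        } catch (EOFException e) {
            System.out.println("EOF:" + e.getMessage());
        }
    }
}
```

Note the asymmetry the Hadoop contract requires: out-of-range seek() throws, while read() at EOF signals end-of-stream with -1.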
While we're at it, can we collapse this into one or more functions? The same pattern appears below with some tiny diffs.
Made the changes, PTAL.
```java
 * @see ResponseInputStream#abort()
 */
private void abortCurrentStream() {
    if (currentStream != null) {
```
If currentStream is guarded, then the function must guard it. Otherwise a simple lock move in an upper call will break things silently.
Good catch — @GuardedBy("lock") is only a static-analysis hint and doesn't actually enforce anything at runtime, so your "silent break on a lock move" scenario was real.
I've added a runtime precondition at the top of both @GuardedBy helpers (abortCurrentStream() and releaseStreams()).
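One way such a runtime precondition can look, assuming `lock` is a plain Object monitor (names are hypothetical; a plain IllegalStateException is used here to stay self-contained, where the real code might use Flink's Preconditions):

```java
// Sketch of a runtime guard for a @GuardedBy("lock") helper: the annotation is
// only a static-analysis hint, so the check makes the contract fail fast.
public class GuardedHelperSketch {
    private final Object lock = new Object();
    private AutoCloseable currentStream; // stand-in for the S3 response stream

    private void abortCurrentStream() {
        if (!Thread.holdsLock(lock)) {
            throw new IllegalStateException("abortCurrentStream() requires lock");
        }
        currentStream = null; // real code would abort and release the stream
    }

    public static void main(String[] args) {
        GuardedHelperSketch s = new GuardedHelperSketch();
        synchronized (s.lock) {
            s.abortCurrentStream(); // OK: monitor held
        }
        try {
            s.abortCurrentStream(); // contract violation: monitor not held
        } catch (IllegalStateException e) {
            System.out.println("caught");
        }
    }
}
```

Thread.holdsLock only works for intrinsic monitors; if the field were a ReentrantLock, the equivalent check would be lock.isHeldByCurrentThread().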
What is the purpose of the change
NativeS3InputStream calls ResponseInputStream.close() when releasing streams during seek(), skip(), and close() operations. Apache HttpClient's close() implementation drains all remaining bytes from the response body to enable HTTP connection reuse. For large S3 objects where only a small portion was read (e.g., checkpoint metadata from a multi-GB state file), this drains potentially gigabytes of data over the network, causing severe latency during checkpoint restore and seek-heavy read patterns.
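The threshold decision from the title (drain when little remains, abort when the remainder is large) can be sketched as follows; the threshold value and names are assumptions for illustration, not the PR's actual constants:

```java
// Illustrative sketch of the close-vs-abort decision. Draining a small tail
// lets HttpClient reuse the pooled connection; aborting a large tail avoids
// reading gigabytes off the network just to discard them.
public class DrainDecisionSketch {
    static final long DRAIN_THRESHOLD_BYTES = 64 * 1024; // hypothetical value

    // Returns true if the stream should be aborted instead of drained.
    static boolean shouldAbort(long contentLength, long pos) {
        long remaining = contentLength - pos;
        return remaining > DRAIN_THRESHOLD_BYTES;
    }

    public static void main(String[] args) {
        // 5 GB object, only 4 KB read: abort.
        System.out.println(shouldAbort(5L * 1024 * 1024 * 1024, 4096));
        // 10 KB object, 9 KB read: cheaper to drain and keep the connection.
        System.out.println(shouldAbort(10_000, 9_000));
    }
}
```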
The AWS SDK v2 ResponseInputStream JavaDoc explicitly recommends calling abort() when the remaining data is not needed. This PR replaces close() with abort() in the stream release path.

Brief change log
- Added releaseStream() method to NativeS3InputStream that calls abort() instead of close() on the underlying ResponseInputStream, and drops the BufferedInputStream wrapper without closing it (closing would delegate to the drain path)
- openStreamAtCurrentPosition() and close() now use releaseStream() for stream cleanup
- Added NativeS3InputStreamTest with 8 tests covering abort lifecycle, data correctness, position tracking, and error paths

Verifying this change
This change added tests and can be verified as follows:
Unit Test
Manually validated end-to-end on a local Flink 2.3-SNAPSHOT cluster: a stateful job wrote checkpoints (up to 199MB) to S3, a savepoint was triggered and restored from, and checkpoints completed successfully after restore with zero S3/stream errors.
Does this pull request potentially affect one of the following parts:
@Public(Evolving): no

Documentation
Was generative AI tooling used to co-author this PR?