AWS: handle premature connection close #15792

Open
kinolaev wants to merge 1 commit into apache:main from kinolaev:aws-handle-premature-connection-close

Conversation

@kinolaev

During vectorized Parquet reads, S3InputStream opens an unbounded HTTP range request (bytes=pos-) and reads one row group eagerly into memory. While Spark processes that in-memory row group (which can take several minutes for large batches), the client stops reading from S3. The TCP receive buffer fills up, and S3 eventually tears down the stalled connection.

When the next row group read begins, the connection is already dead and the Apache HTTP client throws ConnectionClosedException: Premature end of Content-Length delimited message body (expected: x; received: y). This only affects files with multiple row groups (typically >128 MB).

The existing retry policy handles SSLException, SocketTimeoutException, and SocketException, but not this case. This PR extends the retry predicate to reopen the stream at the saved position when this specific exception is encountered, while leaving all other ConnectionClosedException variants (e.g. from abort()) unaffected.
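
For illustration, here is a rough sketch of the recovery path this enables. It is not the actual S3InputStream code: `stream`, `pos`, `openStreamAt()`, and `shouldRetry()` are placeholder names standing in for the existing stream state and retry logic.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.SocketException;
import java.net.SocketTimeoutException;
import javax.net.ssl.SSLException;

// Placeholder class standing in for S3InputStream's read path. openStreamAt()
// represents "issue a new GET with Range: bytes=pos-"; it is not a real API.
abstract class RetryingRead {
  private InputStream stream;
  private long pos;

  int readWithRetry(byte[] b, int off, int len) throws IOException {
    try {
      int n = stream.read(b, off, len);
      pos += Math.max(n, 0);
      return n;
    } catch (IOException ex) {
      if (!shouldRetry(ex)) {
        throw ex; // e.g. a ConnectionClosedException coming from abort() is not retried
      }
      stream.close();             // drop the connection S3 already tore down
      stream = openStreamAt(pos); // reopen at the saved position
      int n = stream.read(b, off, len);
      pos += Math.max(n, 0);
      return n;
    }
  }

  static boolean shouldRetry(Throwable ex) {
    // Existing cases, plus the premature-close case matched by class simple name
    // and message prefix.
    return ex instanceof SSLException
        || ex instanceof SocketTimeoutException
        || ex instanceof SocketException
        || (ex.getClass().getSimpleName().equals("ConnectionClosedException")
            && ex.getMessage() != null
            && ex.getMessage()
                .startsWith("Premature end of Content-Length delimited message body"));
  }

  abstract InputStream openStreamAt(long position) throws IOException;
}
```

The only behavioral change is the last clause of the predicate; the existing exception types keep retrying exactly as before.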

Fixes #9674 and #9679.

@github-actions bot added the AWS label on Mar 27, 2026
@kinolaev force-pushed the aws-handle-premature-connection-close branch from 5620bbc to 6b8c61c on March 27, 2026 18:29
Signed-off-by: Sergei Nikolaev <kinolaev@gmail.com>
@kinolaev force-pushed the aws-handle-premature-connection-close branch from 6b8c61c to e4d88c9 on March 27, 2026 18:30
@singhpk234 (Contributor) left a comment


Thanks for reporting this @kinolaev!
I think this stems from the fact that row groups are read sequentially. I had a PR a while back to prefetch row groups, which could potentially have saved us in this scenario.
PR: https://github.com/apache/iceberg/pull/7279/changes

Resuming the stream from where the connection was closed seems like a reasonable way to continue processing.

Do you know whether reads in a case like yours ever succeed? If a fresh connection is established and we reprocess the task from the beginning, won't we end up in the same situation again?

Comment on lines +59 to +61
return ex.getClass().getSimpleName().equals("ConnectionClosedException")
&& ex.getMessage() != null
&& ex.getMessage().startsWith("Premature end of Content-Length delimited message body");
Contributor


Is there a better way to identify this than inspecting the exception message? It looks too fragile to me.

Author


Yes, it definitely looks fragile, and it most probably doesn't work for the URLConnection client, but I couldn't find a better way. ConnectionClosedException is already identified by its simple name here. There are many different cases of ConnectionClosedException in the Apache HTTP client that are distinguished only by their error messages, and I doubt we should add all of them to the retry policy.

@singhpk234 requested a review from danielcweeks on March 29, 2026 23:28
@kinolaev
Author

I think this stems from the fact that row groups are read sequentially. I had a PR a while back to prefetch row groups, which could potentially have saved us in this scenario.
PR: https://github.com/apache/iceberg/pull/7279/changes

If I understand it right, prefetching a row group wouldn't help. This problem is caused by an unbounded HTTP range request (bytes=pos-) combined with long row group processing time (the time between advance() calls). I think the reader should either continue reading the file while processing the first row group or make bounded range requests (with S3InputStream.readFully) for each row group, but unfortunately I didn't find a way to implement either.
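
To make the difference concrete, here is a rough sketch of a bounded per-row-group read issued directly against S3 with the AWS SDK v2 S3Client (bucket, key, start, and the buffer size are placeholders), as opposed to the single unbounded bytes=pos- request that stays open today:

```java
import java.io.IOException;
import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

class BoundedRangeRead {
  // Sketch: one bounded range GET per row group instead of a single
  // long-lived "bytes=pos-" request that idles while Spark processes data.
  static void readRange(S3Client s3, String bucket, String key, long start, byte[] buffer)
      throws IOException {
    GetObjectRequest request = GetObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .range("bytes=" + start + "-" + (start + buffer.length - 1)) // bounded at both ends
        .build();
    try (ResponseInputStream<GetObjectResponse> in = s3.getObject(request)) {
      int read = 0;
      while (read < buffer.length) {
        int n = in.read(buffer, read, buffer.length - read);
        if (n < 0) {
          throw new IOException("Unexpected end of S3 object stream");
        }
        read += n;
      }
    }
  }
}
```

With one such request per row group, no connection has to sit idle while the previous batch is being processed.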

Do you know whether reads in a case like yours ever succeed? If a fresh connection is established and we reprocess the task from the beginning, won't we end up in the same situation again?

I encountered this issue only while executing the rewrite_data_files procedure. This PR resolves it; the procedure no longer fails. We don't reprocess the task from the beginning, we just open a new connection in place of the already closed one to read the next row group.

PS: @danielcweeks, this time I've double-checked that the problem actually happens in my production environment before opening the PR :)

@kinolaev
Author

kinolaev commented Mar 30, 2026

I think the reader should either continue reading the file while processing the first row group or make bounded range requests (with S3InputStream.readFully) for each row group.

There is vectored IO in ParquetFileReader that might help to make several bounded range requests instead of one unbounded one. I guess we only need to implement RangeReadable for S3InputStream. I'll try it later this week.
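
For illustration, a minimal sketch of a positioned read through RangeReadable, assuming the interface exposes a readFully(position, buffer, offset, length) method as referenced above; the method name and buffer sizing here are placeholders:

```java
import java.io.IOException;
import org.apache.iceberg.io.RangeReadable;
import org.apache.iceberg.io.SeekableInputStream;

class PositionedRowGroupRead {
  // Sketch: a bounded, positioned read per row group, so no single connection
  // has to stay open across a long processing pause between advance() calls.
  static void readRowGroup(SeekableInputStream in, long rowGroupStart, byte[] buffer)
      throws IOException {
    if (in instanceof RangeReadable) {
      ((RangeReadable) in).readFully(rowGroupStart, buffer, 0, buffer.length);
    } else {
      throw new UnsupportedOperationException("stream does not support range reads");
    }
  }
}
```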

@kinolaev
Author

RangeReadable is already implemented for S3InputStream. The problem might already be solved in the main branch by PR #13997.


Successfully merging this pull request may close these issues.

Iceberg Rewrite DataFiles unmanageable behavior
