
Conversation

@arturobernalg
Member

Introduce RequestConfig.requestTimeout: an opt-in, end-to-end call deadline that aborts the entire exchange with InterruptedIOException("Request timeout").

@ok2c
Member

ok2c commented Nov 29, 2025

@arturobernalg The proposed solution may solve the problem, at the cost of using an extra thread per message exchange, but the objective of HTTPCLIENT-1074 was completely different. The idea is to keep track of execution time in the normal course of message exchange execution, without making use of additional system resources.

I am not in favor of merging this change-set in its present state.

@arturobernalg
Member Author

@arturobernalg The proposed solution may solve the problem, at the cost of using an extra thread per message exchange, but the objective of HTTPCLIENT-1074 was completely different. The idea is to keep track of execution time in the normal course of message exchange execution, without making use of additional system resources.

I am not in favor of merging this change-set in its present state.

@ok2c Fair enough. I'll take another approach.

arturobernalg force-pushed the HTTPCLIENT-1074 branch 4 times, most recently from 5928561 to c60396c on November 29, 2025 at 19:34
…sic client

Clamp pool lease, connect and response timeouts to the remaining budget
Add classic call-timeout unit/integration tests and example using modern timeout APIs
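The clamping described in the commit message can be sketched in plain Java. The `CallDeadline` class below is purely illustrative (it is not HttpClient's actual API): a call-wide budget is fixed up front, and each phase timeout (pool lease, connect, response) is capped at whatever budget remains.

```java
import java.util.concurrent.TimeUnit;

/**
 * Illustrative sketch (not HttpClient's actual API): a call-wide deadline
 * from which each per-phase timeout is clamped to the remaining budget.
 */
final class CallDeadline {

    private final long deadlineNanos;

    CallDeadline(final long budgetMillis) {
        this.deadlineNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(budgetMillis);
    }

    /** Remaining budget in milliseconds, never negative. */
    long remainingMillis() {
        return Math.max(0, TimeUnit.NANOSECONDS.toMillis(deadlineNanos - System.nanoTime()));
    }

    /**
     * Clamp a configured phase timeout (lease, connect, response) so a slow
     * earlier phase cannot let later phases exceed the overall budget.
     */
    long clamp(final long configuredMillis) {
        return Math.min(configuredMillis, remainingMillis());
    }
}
```

With a 5-second budget, a 3-second lease timeout stays at 3 seconds, while a 10-second connect timeout is cut down to whatever is left of the 5 seconds.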
@arturobernalg
Member Author


@ok2c Please take another pass.

@ok2c
Member

ok2c commented Nov 30, 2025

@arturobernalg This change-set attempts to enforce a deadline only until receipt of the response head. The idea was to enforce a deadline on the _total_ message exchange, including transmission of the request and response content entities. Even now the change-set adds a considerable amount of ugliness, and this is just a start. I am not sure it is worth it. What is your reason for doing all this?

@arturobernalg
Member Author

@arturobernalg This change-set attempts to enforce a deadline only until receipt of the response head. The idea was to enforce a deadline on the _total_ message exchange, including transmission of the request and response content entities. Even now the change-set adds a considerable amount of ugliness, and this is just a start. I am not sure it is worth it. What is your reason for doing all this?

@ok2c My intention was to cover the problematic cases (stuck lease / connect / first-byte wait) without extra threads or global schedulers, and to keep the plumbing contained.
You are right that the current change only enforces the deadline while HttpClient is in control of the exchange: leasing the connection, connecting, writing the request and waiting for the response head. Once the response has been handed off to user code and the entity is being streamed, we still rely on the existing socket timeout; we do not try to track total wall-clock time for the entire body transfer.
Enforcing the deadline for the full message exchange including entities would mean threading the deadline into entity producers/consumers and possibly wrapping the content stream, which I think becomes quite invasive for the benefit it adds.
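To make the trade-off concrete, the "wrapping the content stream" option could look roughly like the sketch below. This is a hypothetical illustration, not part of the change-set: every read of the response body checks the call-wide deadline, so the body transfer is bounded by wall-clock time rather than only by the per-read socket timeout.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InterruptedIOException;

/**
 * Hypothetical sketch (not part of this change-set): an InputStream
 * wrapper that enforces a call-wide deadline during body streaming.
 */
class DeadlineInputStream extends FilterInputStream {

    private final long deadlineNanos;

    DeadlineInputStream(final InputStream in, final long deadlineNanos) {
        super(in);
        this.deadlineNanos = deadlineNanos;
    }

    private void checkDeadline() throws InterruptedIOException {
        if (System.nanoTime() >= deadlineNanos) {
            // Same exception the opt-in deadline uses for the earlier phases.
            throw new InterruptedIOException("Request timeout");
        }
    }

    @Override
    public int read() throws IOException {
        checkDeadline();
        return super.read();
    }

    @Override
    public int read(final byte[] b, final int off, final int len) throws IOException {
        checkDeadline();
        return super.read(b, off, len);
    }
}
```

Even this sketch only detects an expired deadline between reads; a single read blocked on a stalled socket is still bounded by the socket timeout, which hints at why full end-to-end enforcement gets invasive quickly.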

I'm fine with dropping this approach and closing the ticket.

@ok2c
Member

ok2c commented Nov 30, 2025

My intention was to cover the problematic cases (stuck lease / connect / first-byte wait) without extra threads or global schedulers, and to keep the plumbing contained.

@arturobernalg In my opinion the connection lease / connect / response timeouts are good enough for most real-life scenarios. If we ever find ourselves in a situation where one needs to enforce a deadline over the whole process, we can revisit this issue.

What may be quite useful is timeout support on a per-H2-stream basis. You do not need to rush to implement it; it is just something to keep in mind. If done badly, this feature could cause more harm than good.
