Conversation

@jhominal commented Jan 16, 2024

Summary

As part of my job, we needed a variant of ASGITransport that supports streaming (as in #2186), and this is my PR to implement that.

Something I am particularly proud of is that this PR was written without having to spawn a new task, which means it avoids issues related to task groups and context variables.

Checklist

  • I understand that this PR may be closed in case there was no previous discussion. (This doesn't apply to typos!)
  • I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
  • I've updated the documentation accordingly.

Fixes #2186

@jhominal jhominal force-pushed the asgi-transport-streaming branch 3 times, most recently from e249563 to 672f459 Compare January 17, 2024 17:25
@jhominal (Author)

The PR was updated, with a new test_asgi_stream_allows_iterative_streaming test, and the added tests do not rely on the app argument in AsyncClient.

@jhominal (Author) commented Jan 18, 2024

After working with the version of ASGITransport in this PR, it has come to my attention that this version does not work when streaming a starlette.responses.StreamingResponse object. The reason is that starlette.responses.StreamingResponse issues its send calls from a spawned sub-task, and that sub-task does not go through the generator in _AwaitableRunner.__call__.

I have an idea for fixing that; here is how I think it could work:

  • Replace the response_started bool with an Event object;
  • Replace the body_chunks list with a trio memory channel / anyio memory object stream;
  • Replace the until Callable argument of _AwaitableRunner.__call__ with an Awaitable;
  • In _AwaitableRunner.__call__, instead of yielding the item from the generator directly, "race" the yield of self._next_item against the until Awaitable, using structured concurrency to run that race (see the sketch after this list);
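
For illustration, a rough sketch of that race (assuming anyio primitives; body_receive_stream and app_done are made-up names, and this is not the PR's code) could look like:

    import anyio

    async def receive_next_chunk(body_receive_stream, app_done: anyio.Event):
        """Return the next body chunk, or None once the app has signalled completion."""
        result = [None]

        async with anyio.create_task_group() as task_group:

            async def wait_for_chunk():
                # Receive the next chunk from the memory stream, then end the race.
                result[0] = await body_receive_stream.receive()
                task_group.cancel_scope.cancel()

            async def wait_for_completion():
                # Complete when the app is done, then end the race.
                await app_done.wait()
                task_group.cancel_scope.cancel()

            task_group.start_soon(wait_for_chunk)
            task_group.start_soon(wait_for_completion)

        # A real implementation would also need to drain chunks that were sent
        # before app_done was set, and to pin down the cancellation semantics.
        return result[0]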

I would like your opinion on whether the idea mentioned above should be added to the current PR, or be the subject of a new PR.

@lovelydinosaur (Contributor)

I would like your opinion on whether the idea mentioned above should be added to the current PR, or be the subject of a new PR.

Either would be okay with me.

@souliane commented Jun 6, 2024

Hi @jhominal,

[...] this version does not work when streaming a starlette.responses.StreamingResponse object. The reason is that starlette.responses.StreamingResponse issues its send calls from a spawned sub-task [...]

Could you please confirm that your current patch will also not work with sse-starlette, for the same reason, because of this:

            task_group.start_soon(wrap, partial(self.stream_response, send))

?

Do you know of any working alternatives? I have seen https://pypi.org/project/async-asgi-testclient/ and https://gist.github.com/richardhundt/17dfccb5c1e253f798999fc2b2417d7e, but I am not sure what to think of them.

Thanks.

@jhominal (Author) commented Jun 6, 2024

Hello @souliane

I have been working on and off on this issue for a while; in short:

  1. I had hoped that I would be able to avoid spawning a new task for running the ASGI app, because:
  • a new task means that context variables set in the app cannot be seen by the caller (see the sketch after this list);
  • under structured concurrency I have to think about when and how the task group is shut down (with the breakages that happen when the starting task is not the same as the ending task);
  2. I was able to avoid spawning a new task when running on asyncio (which is my production environment), but not when running on trio (anything I can imagine writing seems to have race conditions due to violating what I understand of trio invariants);
  3. I think I should just bite that bullet and spawn a task for running the ASGI app, with an option so that people can choose whether to spawn a task (and benefit from streaming responses) or not;
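
To illustrate the context-variable point in the first item above, here is a minimal, self-contained asyncio example (the names are made up for the illustration):

    import asyncio
    import contextvars

    seen_by_app = contextvars.ContextVar("seen_by_app", default=None)

    async def fake_app():
        seen_by_app.set("value set inside the app")

    async def main():
        # Same task: the caller sees the value set inside the app afterwards.
        await fake_app()
        assert seen_by_app.get() == "value set inside the app"

        # Spawned task: the task runs in a copy of the context, so values set
        # inside the app are not visible to the caller.
        seen_by_app.set(None)
        await asyncio.create_task(fake_app())
        assert seen_by_app.get() is None

    asyncio.run(main())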

@lovelydinosaur (Contributor)

anything I can imagine writing seems to have race conditions due to violating what I understand of trio invariants

Yep, that's feasible. I don't really understand if that's inherent to ASGI, or particular to interfacing a Request/Response API onto ASGI. Shrug.

I think I should just bite that bullet and spawn a task for running the ASGI app

Sure. Does it make sense to have a task group running over the lifespan of the transport? That's then well-bounded.
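
As a rough illustration (not httpx's actual code; the class, method, and attribute names are assumptions), a transport-lifespan task group could look like:

    import anyio

    class StreamingASGITransportSketch:
        def __init__(self, app):
            self.app = app
            self._task_group = None

        async def __aenter__(self):
            # The task group lives for as long as the transport (or the client
            # that owns it) is open, so any tasks spawned per request are
            # bounded by that lifespan.
            self._task_group = anyio.create_task_group()
            await self._task_group.__aenter__()
            return self

        async def __aexit__(self, *exc_info):
            # Caveat: anyio requires the task group to be exited from the same
            # task that entered it - the "starting task is not the same as the
            # ending task" breakage mentioned earlier in this thread.
            return await self._task_group.__aexit__(*exc_info)

        async def handle_async_request(self, request):
            # The ASGI app run for each request would be started via
            # self._task_group.start_soon(...), omitted here.
            ...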

@zuoc1993

Is this PR still ongoing? I encountered the same issue.

@lovelydinosaur (Contributor)

@zuoc1993 It is still ongoing yep.

There's possibly a discussion to be had around the Transport API, and the limitations of a request -> response API.

@jhominal jhominal force-pushed the asgi-transport-streaming branch 4 times, most recently from 2f771af to ddc87bd Compare January 26, 2025 02:23
@jhominal (Author) commented Jan 26, 2025

@encode/maintainers This PR is ready. I am sorry that it took so long.

In the end:

  • I have not been able to implement my original idea of interleaving app execution in the same task as ASGI response processing:
    • Basic execution interleaving (by exploiting yield pauses in the app) requires a good understanding of coroutines, and simply does not handle cases where send calls are made in a subtask of the app;
    • More advanced execution interleaving was quite complex to write in the first place; it mostly works on asyncio but does not work at all on trio;
  • The issue with the task group scope was handled by refactoring the code to expose an AsyncGenerator of ASGI response messages - that transformation allowed me to manage the task group with a simple async with (see the sketch after this list);
  • As promised above, a new streaming flag has been added to the ASGITransport class.
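
A simplified sketch of what "an AsyncGenerator of ASGI response messages" can look like, assuming anyio; the function and variable names are illustrative, not the PR's exact code:

    from typing import AsyncGenerator

    import anyio

    async def run_asgi(app, scope, receive) -> AsyncGenerator[dict, None]:
        send_stream, receive_stream = anyio.create_memory_object_stream()

        async def send(message: dict) -> None:
            await send_stream.send(message)

        async with anyio.create_task_group() as task_group:

            async def run_app() -> None:
                try:
                    await app(scope, receive, send)
                finally:
                    await send_stream.aclose()

            task_group.start_soon(run_app)
            # Each ASGI message ("http.response.start", "http.response.body", ...)
            # is yielded as the app sends it; closing the stream ends the iteration.
            async for message in receive_stream:
                yield message
            # Note: yielding from inside a task group is delicate (generator
            # cleanup can fire in a different context), which is part of what
            # the real implementation has to handle carefully.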

@jhominal jhominal force-pushed the asgi-transport-streaming branch from ddc87bd to 49b936d Compare January 26, 2025 03:13
@jhominal jhominal requested a review from a team February 10, 2025 09:36
@jhominal (Author)

@tomchristie @encode/maintainers Hello, I would really appreciate a review or feedback on this PR.

@jhominal jhominal force-pushed the asgi-transport-streaming branch from 49b936d to 04bd45b Compare March 19, 2025 10:08
@lovelydinosaur (Contributor)

Hey @jhominal thanks so much for your time on this.

I'm wondering if there's a way we could introduce & stress-test this without immediately directly merging?
For example, a blog post demo'ing a streaming ASGITransport with httpx?
(Or lighter touch... sharing initially as a standalone gist that we can link to from the docs?)

I appreciate that's a frustrating push-back, tho it'd be a route through to sharing your work without being blocked on review here?

I have been looking at the limitations of the Transport API and doing some design work there, perhaps there's a fuller conversation to be had around that in due course...

@jhominal (Author) commented Mar 20, 2025

Hey @jhominal thanks so much for your time on this.

I'm wondering if there's a way we could introduce & stress-test this without immediately directly merging? For example, a blog post demo'ing a streaming ASGITransport with httpx? (Or lighter touch... sharing initially as a standalone gist that we can link to from the docs?)

I appreciate that's a frustrating push-back, tho it'd be a route through to sharing your work without being blocked on review here?

@tomchristie Thank you for this quick feedback!

The frustrating thing for me was that I was not sure how to move this PR forward - in particular, when discussion about a topic that seems important to me has died off, it is never clear to me how to revive that discussion (potentially towards a resolution that does not make me happy, but that is at least a resolution). I can also see that there are still lots of other issues that need work, and that I am only sporadically contributing to moving them forward (and thus it would be rude of me to be too insistent in trying to get people to look at my darlings).

I think that a gist or blog post alone suffers from the drawback that we would be asking users of the streaming transport to own the streaming ASGITransport code.

If we are unwilling to merge this PR as-is, I would suggest the following plan:

  • leave the current ASGITransport class as is. After all, it does its job;
  • publish this PR's ASGITransport as a separate class (ASGIStreamingTransport), and add it either to httpx directly, or to a new PyPI micro-package (httpx-asgi-streaming-transport) published either under the encode organization or directly under my personal account;
  • publish a blog post / a documentation note in httpx to make people aware of that new transport class, and that they can use it to get working streaming responses (but that it has an additional limitation related to context variables);

Nota bene: after integrating the current PR version of ASGITransport into my work project yesterday, I uncovered two bugs in the current PR, related to the behavior when using AsyncClient.stream and attempting to close the request while connected to an ASGI app that sends new binary chunks indefinitely. I will add tests/fixes for that soon.

I have been looking at the limitations of the Transport API and doing some design work there, perhaps there's a fuller conversation to be had around that in due course...

More than a year ago I began looking at implementing the httpx transport API on top of pycurl (that work is currently interrupted, as I switched to something else, and nothing is finished), but I think that attempting to finish it could also yield interesting feedback about the httpx transport API.

@adriangb (Contributor)

I think the fundamental problem here - and in many issues in Starlette as well - is that the maintainers, including myself, are unwilling to take on the time and risk to push changes to something that works for 97% of people (made up number) when there is little to no incentive to do so. Ultimately I think we'd end up in a better place if we were willing to try to make things better for that 3%, deal with the regressions, introduce improved APIs and put old ones through deprecation cycles, etc. But that's a lot of work. So we end up with a lot of situations like this where a valuable contribution is left to rot, which is very frustrating for the contributor.

That said for this change in particular:

  1. It's mostly going to impact tests, which are less risky than production code. I don't know of any other use case for ASGITransport.
  2. It's a well-designed change that an author with a track record of high-quality contributions has spent significant time on.

If you are willing to help deal with any future issues that crop up from this change, @jhominal, I'm inclined to support moving forward with it.

If we need to de-risk, we could always do something like publish it as ExperimentalASGITransport and upstream it into Starlette's TestClient, which is much more widely used. If no issues crop up from that after 3 months, we replace the existing implementation and remove ExperimentalASGITransport. This is similar to @jhominal's 2nd bullet point. That said, I don't think this is necessary for this change unless you feel strongly about a need to de-risk, @tomchristie.

@provinzkraut

I think the fundamental problem here - and in many issues in Starlette as well - is that the maintainers, including myself, are unwilling to take on the time and risk to push changes to something that works for 97% of people (made up number) when there is little to no incentive to do so.

Just chiming in here to give some perspective on incentive, both from my point of view as a developer that uses httpx daily, and as the maintainer of a framework that relies on httpx to implement its own test client, similar to Starlette's.

I agree that, outside of testing, there are few use cases for this. The testing part, however, I think is quite relevant, as there are certain classes of things you currently cannot test with httpx-based test clients, which cover the vast majority of the ASGI framework landscape.

I'm wondering if there's a way we could introduce & stress-test this without immediately directly merging? For example, a blog post demo'ing a streaming ASGITransport with httpx? (Or lighter touch... sharing initially as a standalone gist that we can link to from the docs?)

Maybe @jhominal's suggestion to (at first?) merge the PR with the new transport not as a replacement, but in addition to the existing one, could be a way to achieve this. Over at Litestar we'd be more than happy to switch our httpx-based test client over to this new transport (where it would replace our own workaround), test-drive it, and of course report and collaborate on any issues that might arise.

I think a blog post or gist wouldn't have the same impact; of course I could just copy-paste the code, but then it would quickly become out of sync, it would be hard to report issues and work on them collaboratively, etc.

@ArtecOne

It's been 2 years...

@jhominal (Author)

The PR has finally been updated (sorry that it took so long)!

Basically, the ASGITransport class has been restored to its pre-PR state, and all the streaming behaviour is now isolated in its own class (ASGIStreamingTransport).

I hope that this can be merged and made available to users.
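
For context, usage would presumably look something like this (ASGIStreamingTransport is the class added by this PR; the app, URL, and endpoint below are stand-ins):

    import httpx
    from httpx import ASGIStreamingTransport  # as proposed by this PR; final import path may differ

    async def consume_stream(app):
        transport = ASGIStreamingTransport(app=app)
        async with httpx.AsyncClient(transport=transport, base_url="http://testserver") as client:
            async with client.stream("GET", "/events") as response:
                async for chunk in response.aiter_bytes():
                    ...  # chunks arrive as the app sends them, not only after the app returns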

@jhominal jhominal force-pushed the asgi-transport-streaming branch from 39cc572 to bbebfb0 Compare December 28, 2025 21:50
@jhominal jhominal force-pushed the asgi-transport-streaming branch 3 times, most recently from 3b145a1 to 8de4e39 Compare December 28, 2025 22:00
@jhominal jhominal force-pushed the asgi-transport-streaming branch from 8de4e39 to 6a240f4 Compare December 28, 2025 22:04