
Conversation

@brandtbucher
Member

This is a fun little optimization. According to some local microbenchmarks (basically entering and exiting context managers a million times), turning these next calls into for loops saves about 25% of the overhead this class introduces (~5% due to __enter__ and ~20% due to __exit__). This is because:

  • The wrapped iterator is usually a generator. We specialize for loops for generators, so instead of calling through the C code for next and re-entering the interpreter, we can "inline" the frame push instead. (This is also more JIT-friendly.)
  • We don't care about the actual StopIteration exceptions here, just whether or not they were raised. Raising exceptions is expensive, and the interpreter has an optimization to avoid actually raising StopIteration in normal for loops. (This is especially helpful in __exit__, where we advance the generator, expect a StopIteration to be raised, and then just throw it away!)

This change isn't worth it for the async variant of this function, since neither of the above optimizations applies to async for loops.
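A minimal sketch of the idea, with illustrative names (this is not the actual contextlib diff): the bare next() call becomes a for loop that runs at most one iteration, letting the interpreter's specialized generator handling swallow the StopIteration instead of raising it through C code.

```python
# Illustrative sketch only; the real change lives in
# contextlib._GeneratorContextManager. "gen" is the wrapped generator.

# Before: next() calls through C and, on an exhausted generator,
# raises a real StopIteration that we immediately catch and discard.
def enter_before(gen):
    try:
        return next(gen)
    except StopIteration:
        raise RuntimeError("generator didn't yield") from None

# After: iterating a generator with a for loop takes the specialized
# FOR_ITER path, "inlining" the frame push, and never materializes
# the StopIteration when the generator is exhausted.
def enter_after(gen):
    for value in gen:
        return value
    raise RuntimeError("generator didn't yield")

# __exit__ benefits most: we *expect* the generator to stop here and
# would just throw the StopIteration away, so the for loop's cheap
# exhaustion check replaces a raise/catch round trip.
def exit_after(gen):
    for _ in gen:
        # Yielding a second time is an error for @contextmanager.
        raise RuntimeError("generator didn't stop")
```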

@brandtbucher brandtbucher self-assigned this Nov 9, 2025
@brandtbucher brandtbucher added the performance, skip issue, skip news, and stdlib labels Nov 9, 2025
@picnixz
Member

picnixz commented Nov 9, 2025

For tracking purposes, could you create an issue for that, please? I also think it's worth a NEWS entry for people interested in this kind of optimization and in understanding it.

@picnixz picnixz removed the skip issue label Nov 9, 2025
@serhiy-storchaka
Member

> The wrapped iterator is usually a generator.

What if it is not? for calls __iter__, not only __next__.
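To illustrate the concern with a hypothetical class (not code from the PR): a for loop first calls iter() on its operand, so an object whose __iter__ does not return self behaves differently under the rewrite than under a bare next() call.

```python
# Hypothetical example of the __iter__ vs. __next__ distinction.
class Restartable:
    """Iterator-like object whose __iter__ returns a *fresh* iterator."""
    def __iter__(self):
        return iter([1, 2, 3])   # not self!
    def __next__(self):
        raise StopIteration      # exhausted when used as an iterator

r = Restartable()

try:
    next(r)                      # old code path: stops immediately
except StopIteration:
    print("next(r) raised StopIteration")

for v in r:                      # new code path calls iter(r) first...
    print(v)                     # ...and prints 1, 2, 3 instead
```

For a plain generator the two paths agree, since iter(gen) is gen; the question is about other iterables the class might end up wrapping.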


Labels: awaiting core review, performance, stdlib
