diff --git a/README.md b/README.md
index 9450c0bc51..fda89c2a5e 100644
--- a/README.md
+++ b/README.md
@@ -44,6 +44,35 @@ response = client.responses.create(
 print(response.output_text)
 ```
 
+### Conversation state
+
+For multi-turn conversations with the Responses API, use `previous_response_id`
+to have the API retain context between turns.
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+response = client.responses.create(
+    model="gpt-5.2",
+    input="Write a haiku about recursion in programming.",
+)
+print(response.output_text)
+
+response = client.responses.create(
+    model="gpt-5.2",
+    input="Now explain it in plain English.",
+    previous_response_id=response.id,
+)
+print(response.output_text)
+```
+
+If you manually manage conversation history instead, preserve all items from
+`response.output` in their original order. Reasoning models may return reasoning
+items together with assistant messages, and filtering those items down to only
+messages can break subsequent requests.
+
 The previous standard (supported indefinitely) for generating text is the
 [Chat Completions API](https://platform.openai.com/docs/api-reference/chat). You
 can use that API to generate text from the model with the code below.
diff --git a/examples/responses/conversation_state.py b/examples/responses/conversation_state.py
new file mode 100644
index 0000000000..d7e7a0cc35
--- /dev/null
+++ b/examples/responses/conversation_state.py
@@ -0,0 +1,22 @@
+from openai import OpenAI
+
+
+client = OpenAI()
+
+response = client.responses.create(
+    model="gpt-5.2",
+    input="Write a haiku about recursion in programming.",
+)
+print(response.output_text)
+
+response = client.responses.create(
+    model="gpt-5.2",
+    input="Now explain it in plain English.",
+    previous_response_id=response.id,
+)
+print(response.output_text)
+
+# If you manually manage conversation history instead of using
+# previous_response_id, append response.output items in order. Reasoning models
+# may return reasoning items together with assistant messages, and filtering
+# those items down to only messages can break the next request.
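The manual-history caveat at the end of the patch can be sketched as follows. This is a minimal illustration of the bookkeeping only: `fake_create` is a hypothetical stand-in for `client.responses.create` (not an SDK function), returning the kind of item list `response.output` can contain, so the history handling runs without network access.

```python
# Sketch of manually managed conversation history for the Responses API.
# fake_create is a stand-in for client.responses.create so this runs offline;
# with the real SDK you would pass `history` as the `input` parameter and
# append the items from `response.output`.

def fake_create(history):
    # A real output list may include a reasoning item alongside the assistant
    # message; every item must be carried forward, in its original order.
    return [
        {"type": "reasoning", "summary": []},
        {"type": "message", "role": "assistant",
         "content": "Loops call themselves / ..."},
    ]

history = [
    {"role": "user", "content": "Write a haiku about recursion in programming."},
]

# Turn 1: send, then append *all* output items -- do not filter to messages.
history += fake_create(history)

# Turn 2: add the next user message on top of the preserved history.
history.append({"role": "user", "content": "Now explain it in plain English."})
```

Dropping the reasoning item before the second request is exactly the filtering the patch warns against.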