
fix(js): do not append prompt when resuming generation (#4652)#4657

Draft
BenInvertase wants to merge 4 commits intofirebase:mainfrom
BenInvertase:fix/4652-prompt-appends-on-resume

Conversation


@BenInvertase BenInvertase commented Feb 13, 2026

Fixes #4652

When resuming an interrupted generation (passing both `messages` and `resume`), the SDK was still appending the prompt as a new user message. The last message was then no longer the model message carrying the tool request, so the resume precondition failed. This change skips appending the prompt when `resume` is set, in both `generate.ts` (for raw `generate` calls) and `prompt.ts` (for `definePrompt`).
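
The resume flow described above can be sketched as follows. The types and function below are simplified stand-ins for Genkit's real `GenerateOptions`/`MessageData` shapes, written only to illustrate the fix, not the SDK's actual implementation:

```typescript
// Hypothetical, simplified stand-ins for Genkit's message/option types.
interface MessageData {
  role: 'user' | 'model' | 'tool';
  content: { text?: string; toolRequest?: unknown }[];
}

interface GenerateOptions {
  prompt?: string;
  messages?: MessageData[];
  resume?: { respond?: unknown[] };
}

function messagesFromOptions(options: GenerateOptions): MessageData[] {
  const messages: MessageData[] = [];
  if (options.messages) {
    messages.push(...options.messages);
  }
  // The fix: when resuming, the prompt is NOT appended, so the last
  // message stays the model message that carries the tool request.
  if (options.prompt && !options.resume) {
    messages.push({ role: 'user', content: [{ text: options.prompt }] });
  }
  return messages;
}
```

With `resume` set, the returned history ends on the model's tool-request message, which is exactly what the resume precondition checks for.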

@gemini-code-assist

Summary of Changes

Hello @BenInvertase, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the generation resumption logic. Previously, when attempting to resume an AI generation, the system would erroneously re-append the initial prompt, disrupting the message sequence and causing the resume operation to fail. The changes ensure that prompts are only added when appropriate, maintaining the integrity of the conversation history during resumed generations.

Highlights

  • Bug Fix: Resolved an issue where the SDK incorrectly appended the prompt as a new user message when resuming an interrupted generation, leading to failed preconditions.
  • Conditional Prompt Appending: Modified the generate.ts and prompt.ts files to conditionally skip appending the prompt when the resume option is set.
  • Unit Tests: Added a new unit test to verify that prompts are not appended when resuming generation.


Changelog
  • js/ai/src/generate.ts
    • Updated toGenerateRequest to prevent appending the prompt if options.resume is true.
    • Modified messagesFromOptions to prevent appending the prompt if options.resume is true.
  • js/ai/src/prompt.ts
    • Added a conditional return in renderUserPrompt to skip prompt rendering if renderOptions.resume is true.
  • js/ai/tests/generate/generate_test.ts
    • Added a new test case to validate that prompts are not appended when the resume option is active.
Activity
  • Unit tests were added to cover the fix.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@google-cla

google-cla bot commented Feb 13, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@BenInvertase BenInvertase marked this pull request as draft February 13, 2026 20:57

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly addresses an issue where the prompt was being appended during a resumed generation, which caused precondition failures. The fix is applied consistently in both generate.ts and prompt.ts to handle raw generate calls and definePrompt cases. The addition of a unit test for toGenerateRequest effectively validates the change.

I've added one suggestion in generate.ts to improve the consistency of message processing, which I believe will make the implementation more robust.

@@ -343,7 +343,7 @@ function messagesFromOptions(options: GenerateOptions): MessageData[] {
   if (options.messages) {
     messages.push(...options.messages);
Severity: medium

There's an inconsistency in how messages are processed here compared to toGenerateRequest. This function pushes messages from options.messages directly, while toGenerateRequest uses Message.parseData(m) to normalize each message. The type definition for options.messages allows for lenient message formats (e.g., content as a string), which Message.parseData handles. Without parsing, this could lead to incorrectly formatted messages being sent to the model. To ensure consistency and robustness, it would be better to parse the messages here as well.

Suggested change:

-    messages.push(...options.messages);
+    messages.push(...options.messages.map((m) => Message.parseData(m)));
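
For context, here is a minimal sketch of the lenient-to-strict normalization the reviewer is referring to. `parseData` below is a hypothetical stand-in for Genkit's `Message.parseData`, shown only to illustrate why unparsed messages can differ in shape:

```typescript
// Hypothetical stand-in types: Genkit accepts a lenient message shape
// (content may be a plain string) and normalizes it to structured parts.
type Part = { text: string };
interface MessageData { role: string; content: Part[] }
type LenientMessage = { role: string; content: string | Part[] };

function parseData(m: LenientMessage): MessageData {
  return {
    role: m.role,
    // Wrap string content into a single text part; pass structured
    // content through unchanged.
    content: typeof m.content === 'string' ? [{ text: m.content }] : m.content,
  };
}
```

Without this step, a message like `{ role: 'user', content: 'hi' }` would be pushed to the model with string content instead of a parts array.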


this seems preexisting? not something introduced here.

     messages.push(...options.messages.map((m) => Message.parseData(m)));
   }
-  if (options.prompt) {
+  if (options.prompt && !options.resume) {

(same as other comment in this file i think)

     messages.push(...options.messages);
   }
-  if (options.prompt) {
+  if (options.prompt && !options.resume) {
@cabljac cabljac Feb 13, 2026


only thing i'm thinking is if a user explicitly does e.g

ai.generate({
  prompt: 'count to 10',
  messages: [...],
  resume: { respond: [...] },
})

then the prompt is silently ignored, which might be confusing -

might be better to be defensive here and do:

  if (options.prompt && options.resume) {
    throw new GenkitError({
      status: 'INVALID_ARGUMENT',
      message: 'prompt is not supported when resume is set. The message history in messages is used instead.',
    });
  }
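
The defensive variant suggested here can be sketched as a standalone check; `GenkitError` below is a local stand-in for Genkit's real error class, used only to make the sketch self-contained:

```typescript
// Local stand-in for Genkit's GenkitError, for illustration only.
class GenkitError extends Error {
  status: string;
  constructor(opts: { status: string; message: string }) {
    super(`${opts.status}: ${opts.message}`);
    this.status = opts.status;
  }
}

// Fail loudly instead of silently dropping the prompt when both
// `prompt` and `resume` are supplied.
function assertResumeOptions(options: { prompt?: string; resume?: unknown }): void {
  if (options.prompt && options.resume) {
    throw new GenkitError({
      status: 'INVALID_ARGUMENT',
      message:
        'prompt is not supported when resume is set. The message history in messages is used instead.',
    });
  }
}
```

The trade-off is surfacing a programming error immediately versus quietly ignoring the prompt, which is the confusion the comment above is calling out.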

@BenInvertase BenInvertase force-pushed the fix/4652-prompt-appends-on-resume branch from b1a3415 to 79eff1f Compare February 14, 2026 09:36
@BenInvertase BenInvertase force-pushed the fix/4652-prompt-appends-on-resume branch from 79eff1f to 799bc7c Compare February 14, 2026 09:40
@BenInvertase (Author)

@cabljac Thanks for the review. I've updated based on your suggestion.



Development

Successfully merging this pull request may close these issues.

[JS] Prompts always appends user message on resume, breaking interrupt resume

2 participants