
Conversation

@kxz2002 (Contributor) commented Nov 20, 2025

Motivation

When triggering multiple rounds with RLC, we want to use the prompt_ids and completion_ids from the previous round directly as input for the current round, without requiring concatenation. Additionally, we want to pass image information via the messages field.

Modifications

The process_request_dict function in ernie4_5_vl_processor now supports reading the prompt_token_ids field from the request as input. It also supports reading multimodal information from the messages field in this scenario.
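Based on the description above, the new input path might look roughly like the following sketch. The function body, the `outputs` structure, and the message filtering are illustrative assumptions only, not the actual FastDeploy implementation:

```python
def process_request_dict(request, outputs):
    """Illustrative only: prefer caller-supplied token ids over re-tokenizing.

    `request` carries the previous round's prompt/completion ids in
    `prompt_token_ids`; image info still travels in `messages`.
    """
    prompt_token_ids = request.get("prompt_token_ids", [])
    if prompt_token_ids:
        # Reuse the previous round's ids directly; no concatenation of
        # prompt and completion text is needed.
        outputs.setdefault("input_ids", []).extend(prompt_token_ids)
        # Multimodal (image) content is still read from `messages`.
        messages = request.get("messages") or []
        outputs["images"] = [m for m in messages if m.get("type") == "image_url"]
    return outputs

out = process_request_dict(
    {
        "prompt_token_ids": [101, 102, 103],
        "messages": [{"type": "image_url", "image_url": "frame.png"}],
    },
    {},
)
print(out["input_ids"])  # [101, 102, 103]
```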

Usage or Command

No changes to manual commands.

Accuracy Tests

Not needed.

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain why in this PR.
  • Provide accuracy results.
  • If the current PR is being submitted to the release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot paddle-bot bot commented Nov 20, 2025

Thanks for your contribution!

@paddle-bot paddle-bot bot added the contributor External developers label Nov 20, 2025
@codecov-commenter commented Nov 24, 2025

Codecov Report

❌ Patch coverage is 71.81818% with 31 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@b9bdf82).

Files with missing lines                                 | Patch % | Lines
fastdeploy/input/ernie4_5_vl_processor/process.py        | 71.28%  | 21 Missing and 8 partials ⚠️
fastdeploy/entrypoints/openai/protocol.py                | 0.00%   | 0 Missing and 1 partial ⚠️
...put/ernie4_5_vl_processor/ernie4_5_vl_processor.py    | 87.50%  | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5148   +/-   ##
==========================================
  Coverage           ?   59.91%           
==========================================
  Files              ?      317           
  Lines              ?    38789           
  Branches           ?     5841           
==========================================
  Hits               ?    23242           
  Misses             ?    13709           
  Partials           ?     1838           
Flag Coverage Δ
GPU 59.91% <71.81%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.


prompt_token_ids = request.get("prompt_token_ids", [])
prompt_token_ids_len = len(prompt_token_ids)
if not request.get("messages"):
    outputs["input_ids"].append(prompt_token_ids)
A Collaborator commented:

This should use extend.
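The distinction behind this comment is the standard Python list API: `append` adds `prompt_token_ids` as a single nested element, while `extend` splices the ids in flat. A minimal, standalone illustration (unrelated to the FastDeploy code itself):

```python
ids = [101, 102]

appended = [1, 2]
appended.append(ids)   # nests the whole list as one element
print(appended)        # [1, 2, [101, 102]]

extended = [1, 2]
extended.extend(ids)   # splices the token ids in flat
print(extended)        # [1, 2, 101, 102]
```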

messages = request.get("messages")
if messages:
    self._check_mm_limits(messages)
request.setdefault("enable_thinking", True)
A Collaborator commented:

This shouldn't be a simple assignment here; it needs to account for the state of prompt_token_ids. Let's adjust this after PR #4302 is merged.
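For context on the line under discussion: `dict.setdefault` writes the default only when the key is absent, so a caller-supplied value always wins. A minimal illustration of that behavior (it does not address the prompt_token_ids-dependent logic the reviewer is asking for):

```python
req_missing = {}
req_missing.setdefault("enable_thinking", True)   # key absent: default applied
print(req_missing)   # {'enable_thinking': True}

req_explicit = {"enable_thinking": False}
req_explicit.setdefault("enable_thinking", True)  # key present: left untouched
print(req_explicit)  # {'enable_thinking': False}
```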

LiqinruiG
LiqinruiG previously approved these changes Nov 25, 2025
@LiqinruiG (Collaborator) left a comment:

LGTM

@LiqinruiG (Collaborator) left a comment:

LGTM

@LiqinruiG LiqinruiG merged commit 2d78759 into PaddlePaddle:develop Nov 25, 2025
15 of 17 checks passed

Labels

contributor External developers


3 participants