[Feature] The 45VL supports prompt_token_ids + messages input. #5148
Conversation
Thanks for your contribution!
Codecov Report

❌ Patch coverage is

@@ Coverage Diff @@
##           develop   #5148   +/- ##
==========================================
  Coverage         ?   59.91%
==========================================
  Files            ?      317
  Lines            ?    38789
  Branches         ?     5841
==========================================
  Hits             ?    23242
  Misses           ?    13709
  Partials         ?     1838
    prompt_token_ids = request.get("prompt_token_ids", [])
    prompt_token_ids_len = len(prompt_token_ids)
    if not request.get("messages"):
        outputs["input_ids"].append(prompt_token_ids)
This should use `extend` instead.
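The distinction the reviewer is pointing at can be shown in isolation (a minimal sketch; `outputs` and `prompt_token_ids` here are illustrative stand-ins, not the processor's actual state):

```python
# Minimal sketch of append vs. extend for token-id lists.
# `outputs` and `prompt_token_ids` are illustrative stand-ins.
outputs = {"input_ids": [1, 2, 3]}
prompt_token_ids = [4, 5]

# append would nest the whole list as a single element:
nested = outputs["input_ids"] + [prompt_token_ids]   # [1, 2, 3, [4, 5]]

# extend splices the ids into the flat token sequence:
flat = outputs["input_ids"] + prompt_token_ids       # [1, 2, 3, 4, 5]
```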
    messages = request.get("messages")
    if messages:
        self._check_mm_limits(messages)
    request.setdefault("enable_thinking", True)
This shouldn't be a simple unconditional assignment; it needs to account for the `prompt_token_ids` case. Let's adjust this after PR #4302 is merged.
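One possible shape for that guard, pending the follow-up PR (a hedged sketch only; the function name `apply_thinking_default` and the exact condition are assumptions, not the merged code):

```python
# Hypothetical guard: only default enable_thinking when the request is
# driven by chat messages rather than pre-tokenized input. The function
# name and the condition are illustrative assumptions.
def apply_thinking_default(request: dict) -> dict:
    if request.get("messages") and not request.get("prompt_token_ids"):
        request.setdefault("enable_thinking", True)
    return request
```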
LiqinruiG
left a comment
LGTM
Motivation
When triggering multiple rounds with RLC, we aim to use the prompt_ids and completion_ids from the previous round as input for the current round, without requiring concatenation. Additionally, we want to pass image information via the messages field.
Modifications
The `process_request_dict` function in `ernie4_5_vl_processor` now supports reading the `prompt_token_ids` field from the request as input. It also supports reading multimodal information from the `messages` field in this scenario.

Usage or Command
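As an illustration, a request in this scenario might look like the following (a sketch only; the token ids and the message schema are made up, with field names taken from the PR description):

```python
# Illustrative request combining pre-tokenized input with multimodal
# messages. Token ids are invented; the message schema is an assumption.
request = {
    # ids carried over from the previous round (prompt + completion)
    "prompt_token_ids": [101, 102, 201, 202],
    # image information still travels via the messages field
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "file:///example/image.png"}},
            ],
        },
    ],
}
```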
No change in manual command
Accuracy Tests
No need
Checklist
- Add at least one of the following tags: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
- Run `pre-commit` before commit.
- If the PR targets the `release` branch, make sure it has been submitted to the `develop` branch first, then cherry-pick it to the `release` branch with the `[Cherry-Pick]` PR tag.