Add dedicated VLM model configuration and wire it through multi-tenant VLM creation #10
src/config.py

```diff
@@ -8,7 +8,6 @@
 Refactoring rationale: unify configuration management, renaming from provider-oriented to capability-oriented
 """
-
 import os
 from typing import Optional
 from pydantic import Field
 from pydantic_settings import BaseSettings
@@ -22,6 +21,9 @@ class LLMConfig(BaseSettings):
     api_key: str = Field(..., description="LLM API Key")
     base_url: str = Field(..., description="LLM API Base URL")
     model: str = Field(default="seed-1-6-250615", description="LLM Model Name")
+    vlm_model: str = Field(..., description="VLM Model Name", alias="VLM_MODEL")
+    vlm_api_key: Optional[str] = Field(default=None, description="VLM API Key", alias="VLM_API_KEY")
+    vlm_base_url: Optional[str] = Field(default=None, description="VLM API Base URL", alias="VLM_BASE_URL")
     vlm_timeout: int = Field(default=120, description="VLM Image Understanding Timeout (seconds)")
     timeout: int = Field(default=60, description="General LLM Timeout (seconds)")

```

Comment on lines +24 to +26
The documentation says VLM_MODEL is required ("必填"), and in the code at src/config.py:24, vlm_model uses Field(...), which makes it required at the pydantic validation level. However, the comment also says it should be independent of LLM_MODEL ("独立于 LLM_MODEL"), yet the default value for LLM_MODEL and the example value for VLM_MODEL are both the same: seed-1-6-250615. This creates confusion about whether they are actually meant to be different models. Consider clarifying whether VLM_MODEL may use the same model as LLM_MODEL (for models that support both text and vision), or whether they must differ.