feat: 模型消耗管理与智能匹配 | Model consumption management and intelligent matching #149
base: master
Conversation
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Adds a JSON model-consumption file and enables JSON imports; implements fuzzy model matching/validation and cached usage refresh; integrates validation into chat/message handlers; sorts and augments model listings with consumption and max_context_length; adds v0 compatibility routes; improves error logging; expands startup options and watch script.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Handler as Chat/Messages Handler
    participant Usage as refreshUsage
    participant Validator as validateAndReplaceModel
    participant Matcher as findMatchingModel
    participant State as state.models
    Client->>Handler: POST with model in payload
    Handler->>Usage: await refreshUsage()
    Handler->>Validator: validateAndReplaceModel(requestedModel)
    Validator->>Matcher: normalize & attempt matches
    Matcher->>State: read available models
    State-->>Matcher: models list
    alt Match found
        Matcher-->>Validator: matchedModel
        Validator-->>Handler: {success:true, model: matchedModel}
        Handler->>Handler: replace payload.model and continue
        Handler->>Client: proceed to completion (200)
    else No match
        Matcher-->>Validator: null
        Validator-->>Handler: {success:false, error: model_not_supported}
        Handler->>Client: 400 Bad Request (error)
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–30 minutes

Areas to inspect closely:
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches

🧪 Generate unit tests (beta)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Pull Request Overview
This PR adds model consumption tracking and intelligent model matching capabilities to optimize API usage. The implementation introduces automatic model alias resolution with fuzzy matching and displays consumption metrics (0x, 0.33x, 1x) to help users select cost-effective models.
Key changes:
- Added intelligent model matching system with normalization, prefix matching, and fuzzy matching
- Integrated consumption tracking configuration with model display and sorting
- Enhanced error messages to list available models when invalid models are requested
Reviewed Changes
Copilot reviewed 10 out of 12 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| tsconfig.json | Enables JSON module imports for consumption config |
| start.bat | Updates dev script command |
| src/lib/model-consumption.json | Configuration file mapping model names to consumption multipliers |
| src/lib/model-matcher.ts | Core logic for intelligent model matching and validation |
| src/start.ts | Displays models sorted by consumption on startup |
| src/routes/models/route.ts | Returns models sorted by consumption via API |
| src/routes/chat-completions/handler.ts | Integrates model validation before processing requests |
| src/routes/messages/handler.ts | Integrates model validation for Anthropic-compatible endpoint |
| src/services/copilot/get-models.ts | Filters models by picker availability |
| src/services/copilot/create-chat-completions.ts | Enhanced error logging with available models |
| src/server.ts | Adds /api/v0/* route compatibility |
| bun.lock | Lock file version update |
src/lib/model-matcher.ts (outdated)
```typescript
  .toLowerCase()
  .replace(/_/g, "-")
  .replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
  .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
```
Copilot AI (Nov 12, 2025)
The regex pattern /(\d)-(\d)/g fires on hyphens between any adjacent digits, not just version numbers. For example, "gpt-4-1106-preview" would become "gpt-4.1106-preview" instead of remaining unchanged. Consider using a more specific pattern that only matches version-like fragments at word boundaries, or anchor it to specific positions.
```diff
- .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
+ .replace(/(\d+)-(\d+)(?=\D|$)/g, "$1.$2") // Replace 4-5 with 4.5, but not 3-5 in gpt-3-5-turbo
```
```typescript
  normalizedAvailable.startsWith(normalizedRequested) ||
  normalizedRequested.startsWith(normalizedAvailable)
```
Copilot AI (Nov 12, 2025)
Bidirectional prefix matching could produce ambiguous results when multiple models share prefixes. For example, if "gpt-4" is requested and both "gpt-4" and "gpt-4o" are available, this could match either one depending on iteration order. Consider matching only one direction or adding explicit preference logic.
```diff
- normalizedAvailable.startsWith(normalizedRequested) ||
- normalizedRequested.startsWith(normalizedAvailable)
+ normalizedAvailable.startsWith(normalizedRequested)
```
```typescript
// Create a map for quick consumption lookup
const consumptionMap = new Map(
  modelConsumptionData.models.map((m) => [m.name, m.consumption]),
)

// Helper function to convert consumption string to number for sorting
const consumptionToNumber = (consumption: string): number => {
  if (consumption === "N/A") return 999 // Put N/A at the end
  const match = consumption.match(/^([\d.]+)x$/)
  return match ? Number.parseFloat(match[1]) : 999
}
```
Copilot AI (Nov 12, 2025)
The consumption map creation and consumptionToNumber helper function are duplicated in both src/routes/models/route.ts (lines 17-27) and src/start.ts (lines 84-93). This duplicate logic should be extracted to a shared utility function to maintain consistency and reduce maintenance burden.
```typescript
// consola.info(
//   `Full Model Info:\n${
//     state.models?.data
//       ?.filter(model => model.model_picker_enabled === true)
//       .map(model => {
//         const {
//           capabilities,
//           policy,
//           vendor,
//           preview,
//           model_picker_enabled,
//           object,
//           ...rest } = model;
//         let fullInfo = JSON.stringify(rest, null, 2);
//         return `- ${model.id}\n${fullInfo}`;
//       }).join("\n")
//   }`
// )
```
Copilot AI (Nov 12, 2025)
Large block of commented-out debug code (lines 64-81) should be removed rather than kept in the production codebase. If this debug functionality is needed, consider moving it behind a debug flag or removing it entirely.
```diff
- // consola.info(
- //   `Full Model Info:\n${
- //     state.models?.data
- //       ?.filter(model => model.model_picker_enabled === true)
- //       .map(model => {
- //         const {
- //           capabilities,
- //           policy,
- //           vendor,
- //           preview,
- //           model_picker_enabled,
- //           object,
- //           ...rest } = model;
- //         let fullInfo = JSON.stringify(rest, null, 2);
- //         return `- ${model.id}\n${fullInfo}`;
- //       }).join("\n")
- //   }`
- // )
```
```typescript
created_at: new Date(0).toISOString(), // No date available from source
owned_by: item.model.vendor,
display_name: item.model.name,
max_context_length: item.model.capabilities?.limits?.max_context_window_tokens,
```
Copilot AI (Nov 12, 2025)
Adding max_context_length field to the models API response is a breaking change that may not be expected by API consumers. The field name also differs from the internal naming (max_context_window_tokens). Consider versioning this API change or documenting it clearly for consumers.
```diff
- max_context_length: item.model.capabilities?.limits?.max_context_window_tokens,
+ max_context_window_tokens: item.model.capabilities?.limits?.max_context_window_tokens,
```
```diff
- return (await response.json()) as ModelsResponse
+ const result = await response.json() as ModelsResponse
+ result.data = result.data.filter(
+   (model: any) =>
```
Copilot AI (Nov 12, 2025)
Using any type annotation defeats TypeScript's type safety. Since the Model interface has been updated to include model_picker_category (line 56), the filter should use the typed Model interface instead of any.
```diff
- (model: any) =>
+ (model: Model) =>
```
Actionable comments posted: 2
♻️ Duplicate comments (1)
src/routes/chat-completions/handler.ts (1)
24-35: Same concerns as messages/handler.ts. This code has the same issues noted in `src/routes/messages/handler.ts`:

- Non-null assertion on Line 35 can be avoided (see suggestion in messages/handler.ts review)
- Validation logic is duplicated between both handlers

Consider applying the same refactoring suggestions from the messages/handler.ts review to this file as well.
🧹 Nitpick comments (2)
src/routes/messages/handler.ts (2)
38-49: Model validation integration looks good, but avoid non-null assertion. The validation flow is correct, but the non-null assertion on Line 49 can be avoided with better type narrowing.
Apply this diff to remove the non-null assertion:
```diff
  // Validate and potentially replace model
  const validation = validateAndReplaceModel(openAIPayload.model)
  if (!validation.success) {
    return c.json({ error: validation.error }, 400)
  }

- // Replace model if a match was found
- openAIPayload.model = validation.model!
+ if (validation.model) {
+   openAIPayload.model = validation.model
+ }
```

Alternatively, refactor `validateAndReplaceModel` to return a discriminated union that makes the type system understand that `validation.model` is defined when `validation.success` is true.
38-49: Consider extracting shared validation logic. This validation pattern is duplicated in `src/routes/chat-completions/handler.ts` (lines 24-35). Consider extracting this into a shared middleware or helper function to reduce duplication.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- `src/lib/model-consumption.json` (1 hunks)
- `src/lib/model-matcher.ts` (1 hunks)
- `src/routes/chat-completions/handler.ts` (2 hunks)
- `src/routes/messages/handler.ts` (2 hunks)
- `src/routes/models/route.ts` (2 hunks)
- `src/server.ts` (1 hunks)
- `src/services/copilot/create-chat-completions.ts` (1 hunks)
- `src/services/copilot/get-models.ts` (2 hunks)
- `src/start.ts` (2 hunks)
- `start.bat` (1 hunks)
- `tsconfig.json` (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-11T04:33:30.513Z
Learnt from: caozhiyuan
Repo: ericc-ch/copilot-api PR: 142
File: src/routes/messages/handler.ts:50-52
Timestamp: 2025-11-11T04:33:30.513Z
Learning: In src/routes/messages/handler.ts, forcing anthropicPayload.model to getSmallModel() when no tools are present is intentional behavior to fix Claude Code 2.0.28 warmup requests consuming premium model tokens. This applies to all requests without tools, not just warmup requests, and is an accepted design decision.
Applied to files:
- `src/routes/messages/handler.ts`
- `src/services/copilot/create-chat-completions.ts`
- `src/routes/chat-completions/handler.ts`
- `src/start.ts`
- `src/routes/models/route.ts`
🧬 Code graph analysis (7)
src/routes/messages/handler.ts (1)
- src/lib/model-matcher.ts (1): `validateAndReplaceModel` (95-139)

src/server.ts (3)
- src/routes/models/route.ts (1): `modelRoutes` (8-8)
- src/routes/chat-completions/route.ts (1): `completionRoutes` (7-7)
- src/routes/embeddings/route.ts (1): `embeddingRoutes` (9-9)

src/lib/model-matcher.ts (1)
- src/lib/state.ts (1): `state` (20-25)

src/services/copilot/create-chat-completions.ts (1)
- src/lib/state.ts (1): `state` (20-25)

src/routes/chat-completions/handler.ts (1)
- src/lib/model-matcher.ts (1): `validateAndReplaceModel` (95-139)

src/start.ts (1)
- src/lib/state.ts (1): `state` (20-25)

src/routes/models/route.ts (1)
- src/lib/state.ts (1): `state` (20-25)
🪛 ESLint
src/routes/messages/handler.ts
[error] 49-49: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
src/services/copilot/get-models.ts
[error] 12-12: Replace await·response.json( with (await·response.json()
(prettier/prettier)
[error] 14-14: Unexpected any. Specify a different type.
(@typescript-eslint/no-explicit-any)
[error] 15-15: Replace ·&&·model.model_picker_enabled·===·true with ⏎······&&·model.model_picker_enabled·===·true,
(prettier/prettier)
src/lib/model-matcher.ts
[error] 12-12: Insert ⏎·····
(prettier/prettier)
[error] 33-33: 'normalizedRequested' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 35-35: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 36-36: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
[error] 37-37: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 57-57: Delete ·||
(prettier/prettier)
[error] 58-58: Insert ·||
(prettier/prettier)
[error] 106-106: Insert ⏎·····
(prettier/prettier)
src/services/copilot/create-chat-completions.ts
[error] 39-39: Replace Failed·to·create·chat·completions·for·model:·${payload.model} with ⏎······Failed·to·create·chat·completions·for·model:·${payload.model},⏎····
(prettier/prettier)
[error] 42-42: Delete ····
(prettier/prettier)
[error] 48-48: Delete ········
(prettier/prettier)
[error] 52-52: Replace (m)·=>·typeof·m.capabilities?.limits?.max_context_window_tokens·===·"number" with ⏎··············(m)·=>⏎················typeof·m.capabilities?.limits?.max_context_window_tokens⏎················===·"number",⏎············
(prettier/prettier)
[error] 60-60: Delete ····
(prettier/prettier)
src/routes/chat-completions/handler.ts
[error] 35-35: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
src/start.ts
[error] 16-16: Expected "./lib/model-consumption.json" to come before "./server".
(perfectionist/sort-imports)
[error] 69-69: Delete ·
(prettier/prettier)
[error] 70-70: Delete ·
(prettier/prettier)
[error] 71-71: Delete ·
(prettier/prettier)
[error] 72-72: Delete ·
(prettier/prettier)
[error] 73-73: Delete ·
(prettier/prettier)
[error] 85-85: Replace m·=>·[m.name,·m.consumption]) with (m)·=>·[m.name,·m.consumption]),
(prettier/prettier)
[error] 86-86: Delete ;
(prettier/prettier)
[error] 89-89: Move arrow function 'consumptionToNumber' to the outer scope.
(unicorn/consistent-function-scoping)
[error] 90-90: Delete ;
(prettier/prettier)
[error] 91-91: Delete ;
(prettier/prettier)
[error] 92-92: Delete ;
(prettier/prettier)
[error] 93-93: Delete ;
(prettier/prettier)
[error] 96-97: Delete ⏎······
(prettier/prettier)
[error] 98-98: Replace ··.map(model with .map((model)
(prettier/prettier)
[error] 99-99: Replace ··········let·maxTokens·=·model.capabilities?.limits?.max_context_window_tokens; with ········let·maxTokens·=·model.capabilities?.limits?.max_context_window_tokens
(prettier/prettier)
[error] 99-99: 'maxTokens' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 100-100: Replace ··········let·maxTokensStr·=·"N/A"; with ········let·maxTokensStr·=·"N/A"
(prettier/prettier)
[error] 101-101: Delete ··
(prettier/prettier)
[error] 102-102: Replace ············maxTokensStr·=·maxTokens·>=·1000·?·${maxTokens·/·1000}k·:·${maxTokens}; with ··········maxTokensStr·=⏎············maxTokens·>=·1000·?·${maxTokens·/·1000}k·:·${maxTokens}``
(prettier/prettier)
[error] 103-103: Delete ··
(prettier/prettier)
[error] 104-104: Replace ··const·consumption·=·consumptionMap.get(model.name)·||·"N/A"; with const·consumption·=·consumptionMap.get(model.name)·||·"N/A"
(prettier/prettier)
[error] 105-105: Replace ··········return·{·model,·maxTokensStr,·consumption·}; with ········return·{·model,·maxTokensStr,·consumption·}
(prettier/prettier)
[error] 106-106: Delete ··
(prettier/prettier)
[error] 107-107: Replace ··.filter(item with .filter((item)
(prettier/prettier)
[error] 108-108: Replace ··.sort((a,·b)·=>·consumptionToNumber(a.consumption)·-·consumptionToNumber(b.consumption)) with .sort(⏎········(a,·b)·=>⏎··········consumptionToNumber(a.consumption)⏎··········-·consumptionToNumber(b.consumption),
(prettier/prettier)
[error] 109-109: Replace ···· with ··)⏎······
(prettier/prettier)
[error] 110-110: Replace ··········const·consumptionStr·=·(${item.consumption}).padEnd(8,·"·"); with ········const·consumptionStr·=·(${item.consumption}).padEnd(8,·"·")
(prettier/prettier)
[error] 111-111: Replace ··········const·idStr·=·item.model.id.padEnd(24,·"·"); with ········const·idStr·=·item.model.id.padEnd(24,·"·")
(prettier/prettier)
[error] 112-112: Replace ··const·nameStr·=·item.model.name.padEnd(32,·"·"); with const·nameStr·=·item.model.name.padEnd(32,·"·")
(prettier/prettier)
[error] 113-113: Replace ··········const·contextStr·=·context:·${item.maxTokensStr}; with ········const·contextStr·=·context:·${item.maxTokensStr}``
(prettier/prettier)
[error] 114-114: Replace ··········return·${i·+·1}.\t${consumptionStr}${idStr}\t${nameStr}\t${contextStr}; with ········return·${i·+·1}.\t${consumptionStr}${idStr}\t${nameStr}\t${contextStr}``
(prettier/prettier)
[error] 115-116: Replace ········}).join("\n")⏎····}`` with ······})⏎······.join("\n")},
(prettier/prettier)
src/routes/models/route.ts
[error] 6-6: Expected "/lib/model-consumption.json" to come before "/lib/utils".
(perfectionist/sort-imports)
[error] 39-39: Replace (a,·b)·=>·consumptionToNumber(a.consumption)·-·consumptionToNumber(b.consumption) with ⏎········(a,·b)·=>⏎··········consumptionToNumber(a.consumption)⏎··········-·consumptionToNumber(b.consumption),⏎······
(prettier/prettier)
[error] 48-48: Insert ⏎·········
(prettier/prettier)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Agent
🔇 Additional comments (5)
tsconfig.json (1)
9-9: LGTM: Enables JSON module imports for consumption data. This change correctly enables JSON module imports, which is necessary for importing `model-consumption.json` throughout the codebase.

start.bat (1)

18-18: LGTM: Enhanced start command with watch mode. The updated command explicitly specifies the entry point and enables watch mode, providing a better development experience.

src/server.ts (1)

30-33: LGTM: Backward compatibility routes added. The `/api/v0/*` routes provide backward compatibility while reusing existing handlers, following the same pattern as the `/v1/` routes.

src/services/copilot/create-chat-completions.ts (1)

38-60: Enhanced error logging is safe and correct. Verification confirms that `state.models` is properly populated during server startup (src/start.ts:62) before any requests are handled. The code also uses safe optional chaining (`state.models?.data`), and a fallback caching mechanism exists in the routes layer. The error handling enhancement provides valuable debugging information without introducing any safety concerns.

src/services/copilot/get-models.ts (1)

12-17: Remove the `any` type annotation; the filtering behavior is intentional and working as expected. The `any` type on line 14 defeats TypeScript's type safety. Since the `Model` interface is properly defined in this file, you can safely remove the explicit `any` annotation and let TypeScript infer the type:

```diff
  const result = await response.json() as ModelsResponse
  result.data = result.data.filter(
-   (model: any) =>
+   (model) =>
      model.model_picker_category !== undefined
      && model.model_picker_enabled === true
  )
  return result
```

Regarding the filtering behavior: code analysis confirms this is intentional. All downstream callers (start.ts, routes, model-matcher.ts) either apply additional filters or don't attempt to access models outside the picker criteria. No caller expects or needs the unfiltered response, confirming the filtering here is the correct design.
src/lib/model-consumption.json (outdated)
```json
{
  "models": [
    { "name": "GPT-4.1", "consumption": "0x" },
    { "name": "GPT-4o", "consumption": "0x" },
    { "name": "GPT-5 mini", "consumption": "0x" },
    { "name": "Grok Code Fast 1", "consumption": "0x" },
    { "name": "Claude Haiku 4.5", "consumption": "0.33x" },
    { "name": "Claude Sonnet 4", "consumption": "1x" },
    { "name": "Claude Sonnet 4.5", "consumption": "1x" },
    { "name": "Gemini 2.5 Pro", "consumption": "1x" },
    { "name": "GPT-5", "consumption": "1x" },
    { "name": "GPT-5-Codex (Preview)", "consumption": "1x" }
  ]
}
```
🧩 Analysis chain
Verify model names match actual GitHub Copilot models.
The consumption data structure looks good, but some model names appear unusual (e.g., "GPT-5", "GPT-5 mini", "GPT-5-Codex (Preview)", "Grok Code Fast 1"). These may be placeholders or future models.
Please verify that these model names match actual GitHub Copilot model IDs by searching the web for the latest Copilot model documentation:
Minor note: The AI summary mentions 11 models, but the file contains 10 models.
🌐 Web query:
What are the current GitHub Copilot model names and IDs available as of November 2024?
💡 Result:
Here are the GitHub Copilot models announced/available around November 2024 (names as published during the GitHub Universe Oct 2024 multi‑model announcement). Sources below.
- Anthropic: Claude 3.5 Sonnet. [1][2]
- Google: Gemini 1.5 Pro. [1][2]
- OpenAI: GPT‑4o (and OpenAI “o1‑preview” / o1‑mini variants mentioned for Copilot). [1][2]
Sources:
[1] The Verge — “GitHub Copilot will support models from Anthropic, Google, and OpenAI” (Oct 29, 2024).
[2] GitHub Universe coverage / press summaries reporting the Oct 2024 Copilot multi‑model announcement.
Critical: Update model names to match actual GitHub Copilot models available as of November 2024.
The current GitHub Copilot models announced are: Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google), and GPT-4o (OpenAI), with o1-preview and o1-mini variants also available.
Most names in this file are inaccurate:
- Remove: GPT-4.1, GPT-5, GPT-5 mini, GPT-5-Codex, Grok Code Fast 1, Claude Haiku 4.5, Claude Sonnet 4, Claude Sonnet 4.5
- Fix: Gemini 2.5 Pro → Gemini 1.5 Pro
- Keep: GPT-4o (correct)
- Add missing: o1-preview, o1-mini
```typescript
let normalizedRequested = requestedModel
  .toLowerCase()
  .replace(/_/g, "-")
  .replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
  .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
```
Normalization breaks valid IDs like gpt-4-1106-preview
replace(/(\d)-(\d)/g, "$1.$2") also fires on multi-digit suffixes, so a request for gpt-4-1106-preview becomes gpt-4.1106-preview and can no longer match the real model ID. This makes validateAndReplaceModel reject legitimate models. Please constrain the normalization to single-digit version fragments only, e.g.:
```diff
- .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
+ .replace(/\b(\d)-(\d)\b/g, (_match, major, minor) => `${major}.${minor}`) // Replace 4-5 with 4.5
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
let normalizedRequested = requestedModel
  .toLowerCase()
  .replace(/_/g, "-")
  .replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
  .replace(/\b(\d)-(\d)\b/g, (_match, major, minor) => `${major}.${minor}`) // Replace 4-5 with 4.5
```
🧰 Tools
🪛 ESLint
[error] 33-33: 'normalizedRequested' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 35-35: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 36-36: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
[error] 37-37: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
🤖 Prompt for AI Agents
In src/lib/model-matcher.ts around lines 33 to 38, the normalization step's
pattern that replaces digit-dash-digit sequences also matches multi-digit
fragments (e.g. transforms "gpt-4-1106-preview" to "gpt-4.1106-preview");
restrict that replacement so it only converts single-digit version fragments (a
single digit, a dash, a single digit) and does not fire when the digit after the
dash is followed by additional digits (i.e. ensure the second digit is not part
of a multi-digit sequence or use a word boundary), so multi-digit suffixes
remain unchanged.
Actionable comments posted: 0
♻️ Duplicate comments (4)
src/routes/chat-completions/handler.ts (1)
28-39: Same non-null assertion issue as messages handler. The non-null assertion on line 39 has the same type safety concern as in `src/routes/messages/handler.ts` (line 53). Consider applying the same refactor to both handlers for consistency.

src/start.ts (1)
77-94: Remove commented-out debug code. This large commented-out block should be removed from the production codebase. If this debug functionality is needed, consider moving it behind a debug flag or removing it entirely.
Based on past review comments.
src/lib/model-matcher.ts (2)
94-99: CRITICAL: Regex breaks multi-digit model IDs. The regex pattern `/(\d)-(\d)/g` on line 98 will incorrectly transform all digit-hyphen-digit sequences, including multi-digit identifiers. For example, `gpt-4-1106-preview` becomes `gpt-4.1106-preview`, which no longer matches the actual model ID and causes validation to fail.

This issue was flagged in previous review comments but remains unresolved.
Apply this fix to restrict replacement to single-digit version patterns only:
```diff
  let normalizedRequested = requestedModel
    .toLowerCase()
-   .replace(/_/g, "-")
-   .replace(/-(\d{8})$/, "") // Remove -20251001 style suffix
-   .replace(/(\d)-(\d)/g, "$1.$2") // Replace 4-5 with 4.5
+   .replaceAll("_", "-")
+   .replace(/-\d{8}$/, "") // Remove -20251001 style suffix
+   .replace(/\b(\d)-(\d)\b/g, "$1.$2") // Replace 4-5 with 4.5 only at word boundaries
```

The `\b` word boundaries ensure the pattern only matches when the digits are standalone (e.g., `4-5`) and not part of longer sequences like `4-1106`.

Based on past review comments.
112-126: Bidirectional prefix matching may cause ambiguity. The bidirectional prefix check (lines 118-119) could produce ambiguous results when multiple models share prefixes. For example, if "gpt-4" is requested and both "gpt-4" and "gpt-4o" are available, the match depends on iteration order. Consider matching only one direction or adding explicit preference logic.

```diff
- if (
-   normalizedAvailable.startsWith(normalizedRequested) ||
-   normalizedRequested.startsWith(normalizedAvailable)
- ) {
+ if (normalizedAvailable.startsWith(normalizedRequested)) {
```

Based on past review comments.
🧹 Nitpick comments (4)
src/routes/messages/handler.ts (1)
42-53: Avoid non-null assertion for better type safety. While the non-null assertion on line 53 is safe due to the early return on validation failure, it bypasses TypeScript's type checking. Consider restructuring for clearer type safety.
Apply this diff to eliminate the non-null assertion:
```diff
- // Replace model if a match was found
- openAIPayload.model = validation.model!
+ // Use the validated/replaced model
+ if (validation.model) {
+   openAIPayload.model = validation.model
+ }
```

Or destructure for cleaner code:

```diff
  // Validate and potentially replace model
  const validation = validateAndReplaceModel(openAIPayload.model)
  if (!validation.success) {
    return c.json({ error: validation.error }, 400)
  }

- // Replace model if a match was found
- openAIPayload.model = validation.model!
+ const { model } = validation
+ openAIPayload.model = model!
```

src/start.ts (2)
96-107: Consider moving helper to outer scope (optional). The `consumptionToNumber` helper could be moved to module scope for potential reuse and to satisfy ESLint's `consistent-function-scoping` rule. However, keeping it local is also reasonable if it's only used here.

If you prefer module scope:

```typescript
// At module level, before runServer
function consumptionToNumber(consumption: string): number {
  if (consumption === "N/A") return 999
  const match = consumption.match(/^([\d.]+)x$/)
  return match ? Number.parseFloat(match[1]) : 999
}
```
108-130: Model listing logic is sound, but has numerous formatting issues. The consumption-based model listing and sorting logic is correct. However, ESLint has flagged numerous formatting issues (spacing, const vs. let on line 112, line breaks, etc.). While these don't affect functionality, running `prettier` or your formatter would clean up the code.

Key improvement:

```diff
- let maxTokens = model.capabilities?.limits?.max_context_window_tokens;
+ const maxTokens = model.capabilities?.limits?.max_context_window_tokens
```

Consider running your code formatter to address the remaining spacing and formatting issues flagged by ESLint.

src/lib/model-matcher.ts (1)
src/lib/model-matcher.ts (1)
53-158: High complexity: consider extracting matching strategies.ESLint reports a complexity of 21 (max 16) for this function. While the multi-strategy matching logic is inherently complex, consider extracting each matching strategy into separate functions for better readability and testability.
Example refactor:
```typescript
function tryExactMatch(requested: string, available: string[]): string | null {
  return available.includes(requested) ? requested : null
}

function tryNormalizedMatch(normalized: string, available: string[]): string | null {
  for (const id of available) {
    if (id.toLowerCase() === normalized) return id
  }
  return null
}

function tryPrefixMatch(normalized: string, available: string[]): string | null {
  // ... prefix logic
}

function tryBaseMatch(normalized: string, available: string[]): string | null {
  // ... base comparison logic
}
```

Then compose them in
`findMatchingModel`:

```typescript
export function findMatchingModel(requestedModel: string): string | null {
  // ... setup ...
  return (
    tryExactMatch(requestedModel, availableModelIds)
    || tryNormalizedMatch(normalizedRequested, availableModelIds)
    || tryPrefixMatch(normalizedRequested, availableModelIds)
    || tryBaseMatch(normalizedRequested, availableModelIds)
    || getFallback(highUsage, zeroConsumptionModels)
  )
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `src/lib/model-matcher.ts` (1 hunks)
- `src/lib/refresh-usage.ts` (1 hunks)
- `src/lib/state.ts` (2 hunks)
- `src/routes/chat-completions/handler.ts` (2 hunks)
- `src/routes/messages/handler.ts` (3 hunks)
- `src/start.ts` (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-11T04:33:30.513Z
Learnt from: caozhiyuan
Repo: ericc-ch/copilot-api PR: 142
File: src/routes/messages/handler.ts:50-52
Timestamp: 2025-11-11T04:33:30.513Z
Learning: In src/routes/messages/handler.ts, forcing anthropicPayload.model to getSmallModel() when no tools are present is intentional behavior to fix Claude Code 2.0.28 warmup requests consuming premium model tokens. This applies to all requests without tools, not just warmup requests, and is an accepted design decision.
Applied to files:
- `src/routes/messages/handler.ts`
- `src/routes/chat-completions/handler.ts`
- `src/lib/model-matcher.ts`
🧬 Code graph analysis (6)
src/routes/messages/handler.ts (2)
- src/lib/refresh-usage.ts (1)
  - `refreshUsage` (15-51)
- src/lib/model-matcher.ts (1)
  - `validateAndReplaceModel` (164-208)

src/lib/refresh-usage.ts (2)
- src/services/github/get-copilot-usage.ts (1)
  - `getCopilotUsage` (5-15)
- src/lib/state.ts (1)
  - `state` (24-29)

src/lib/state.ts (1)
- src/services/github/get-copilot-usage.ts (1)
  - `QuotaDetail` (17-26)

src/start.ts (2)
- src/lib/refresh-usage.ts (2)
  - `forceRefreshUsage` (56-59)
  - `getCurrentUsagePercent` (64-69)
- src/lib/state.ts (1)
  - `state` (24-29)

src/routes/chat-completions/handler.ts (2)
- src/lib/refresh-usage.ts (1)
  - `refreshUsage` (15-51)
- src/lib/model-matcher.ts (1)
  - `validateAndReplaceModel` (164-208)

src/lib/model-matcher.ts (1)
- src/lib/state.ts (1)
  - `state` (24-29)
🪛 ESLint
src/routes/messages/handler.ts
[error] 53-53: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
src/lib/refresh-usage.ts
[error] 4-4: Expected "~/services/github/get-copilot-usage" (value-internal) to come before "./state" (value-sibling).
(perfectionist/sort-imports)
[error] 39-39: Possible race condition: lastUsageFetchTime might be reassigned based on an outdated value of lastUsageFetchTime.
(require-atomic-updates)
[error] 49-49: Possible race condition: isFetching might be reassigned based on an outdated value of isFetching.
(require-atomic-updates)
src/start.ts
[error] 16-16: Expected "./lib/model-consumption.json" to come before "./server".
(perfectionist/sort-imports)
[error] 82-82: Delete ·
(prettier/prettier)
[error] 83-83: Delete ·
(prettier/prettier)
[error] 84-84: Delete ·
(prettier/prettier)
[error] 85-85: Delete ·
(prettier/prettier)
[error] 86-86: Delete ·
(prettier/prettier)
[error] 98-98: Replace m·=>·[m.name,·m.consumption]) with (m)·=>·[m.name,·m.consumption]),
(prettier/prettier)
[error] 99-99: Delete ;
(prettier/prettier)
[error] 102-102: Move arrow function 'consumptionToNumber' to the outer scope.
(unicorn/consistent-function-scoping)
[error] 103-103: Delete ;
(prettier/prettier)
[error] 104-104: Delete ;
(prettier/prettier)
[error] 105-105: Delete ;
(prettier/prettier)
[error] 106-106: Delete ;
(prettier/prettier)
[error] 109-110: Delete ⏎······
(prettier/prettier)
[error] 111-111: Replace ··.map(model with .map((model)
(prettier/prettier)
[error] 112-112: Replace ··········let·maxTokens·=·model.capabilities?.limits?.max_context_window_tokens; with ········let·maxTokens·=·model.capabilities?.limits?.max_context_window_tokens
(prettier/prettier)
[error] 112-112: 'maxTokens' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 113-113: Replace ··········let·maxTokensStr·=·"N/A"; with ········let·maxTokensStr·=·"N/A"
(prettier/prettier)
[error] 114-114: Delete ··
(prettier/prettier)
[error] 115-115: Replace ············maxTokensStr·=·maxTokens·>=·1000·?·${maxTokens·/·1000}k·:·${maxTokens}; with ··········maxTokensStr·=⏎············maxTokens·>=·1000·?·${maxTokens·/·1000}k·:·${maxTokens}``
(prettier/prettier)
[error] 116-116: Delete ··
(prettier/prettier)
[error] 117-117: Replace ··const·consumption·=·consumptionMap.get(model.name)·||·"N/A"; with const·consumption·=·consumptionMap.get(model.name)·||·"N/A"
(prettier/prettier)
[error] 118-118: Replace ··········return·{·model,·maxTokensStr,·consumption·}; with ········return·{·model,·maxTokensStr,·consumption·}
(prettier/prettier)
[error] 119-119: Delete ··
(prettier/prettier)
[error] 120-120: Replace ··.filter(item with .filter((item)
(prettier/prettier)
[error] 121-121: Replace ··.sort((a,·b)·=>·consumptionToNumber(a.consumption)·-·consumptionToNumber(b.consumption)) with .sort(⏎········(a,·b)·=>⏎··········consumptionToNumber(a.consumption)⏎··········-·consumptionToNumber(b.consumption),
(prettier/prettier)
[error] 122-122: Replace ···· with ··)⏎······
(prettier/prettier)
[error] 123-123: Replace ··········const·consumptionStr·=·(${item.consumption}).padEnd(8,·"·"); with ········const·consumptionStr·=·(${item.consumption}).padEnd(8,·"·")
(prettier/prettier)
[error] 124-124: Replace ··········const·idStr·=·item.model.id.padEnd(24,·"·"); with ········const·idStr·=·item.model.id.padEnd(24,·"·")
(prettier/prettier)
[error] 125-125: Replace ··const·nameStr·=·item.model.name.padEnd(32,·"·"); with const·nameStr·=·item.model.name.padEnd(32,·"·")
(prettier/prettier)
[error] 126-126: Replace ··········const·contextStr·=·context:·${item.maxTokensStr}; with ········const·contextStr·=·context:·${item.maxTokensStr}``
(prettier/prettier)
[error] 127-127: Replace ··········return·${i·+·1}.\t${consumptionStr}${idStr}\t${nameStr}\t${contextStr}; with ········return·${i·+·1}.\t${consumptionStr}${idStr}\t${nameStr}\t${contextStr}``
(prettier/prettier)
[error] 128-129: Replace ········}).join("\n")⏎····}`` with ······})⏎······.join("\n")},
(prettier/prettier)
src/routes/chat-completions/handler.ts
[error] 39-39: Forbidden non-null assertion.
(@typescript-eslint/no-non-null-assertion)
src/lib/model-matcher.ts
[error] 35-35: Array type using 'string[]' is forbidden. Use 'Array' instead.
(@typescript-eslint/array-type)
[error] 37-37: Insert ⏎·····
(prettier/prettier)
[error] 53-53: Function 'findMatchingModel' has a complexity of 21. Maximum allowed is 16.
(complexity)
[error] 55-55: Insert ⏎·····
(prettier/prettier)
[error] 94-94: 'normalizedRequested' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 96-96: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 97-97: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
[error] 98-98: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 118-118: Delete ·||
(prettier/prettier)
[error] 119-119: Insert ·||
(prettier/prettier)
[error] 175-175: Insert ⏎·····
(prettier/prettier)
🔇 Additional comments (11)
src/lib/state.ts (1)
2-2: LGTM! Clean extension for usage tracking.

The addition of `premiumInteractions` as an optional property is appropriate since it's populated asynchronously by the refresh mechanism. The type import and interface extension are clean and maintain type safety.

Also applies to: 19-21
src/routes/messages/handler.ts (1)
28-30: LGTM! Usage refresh properly integrated.

Refreshing usage information before processing the request ensures consumption decisions are based on current data. The async call is properly awaited.
src/routes/chat-completions/handler.ts (1)
20-22: LGTM! Consistent usage refresh pattern.

Usage refresh is consistently applied across both chat-completions and messages routes, ensuring up-to-date consumption data before processing requests.
src/lib/refresh-usage.ts (3)
6-10: LGTM! Reasonable cache configuration.

The 1-minute TTL balances freshness with API call reduction. Module-level state variables are appropriate for this singleton caching pattern.
15-51: Effective caching implementation with good error handling.

The cache and concurrency guard work well for reducing redundant API calls. ESLint's race condition warnings on lines 39 and 49 are false positives in JavaScript's single-threaded execution model — the pattern is adequate for its intended purpose.
Error handling gracefully continues with existing state rather than blocking the main flow, which is the right approach for non-critical usage tracking.
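The caching pattern described above can be sketched as follows; the names (`fetchUsage`, `CACHE_TTL_MS`) and the return shape are illustrative assumptions, not the file's actual exports:

```typescript
// Minimal sketch of a TTL cache with a concurrency guard, assuming a
// hypothetical fetchUsage() standing in for the upstream API call.
const CACHE_TTL_MS = 60_000 // 1-minute TTL, as in the review
let lastFetchTime = 0
let isFetching = false
let cached: { premiumPercent: number } | undefined

async function fetchUsage(): Promise<{ premiumPercent: number }> {
  // stand-in for the real API call
  return { premiumPercent: 42 }
}

async function refreshUsage(): Promise<{ premiumPercent: number } | undefined> {
  const now = Date.now()
  // Serve the cache while it is fresh, and skip if a fetch is in flight
  if (cached && now - lastFetchTime < CACHE_TTL_MS) return cached
  if (isFetching) return cached
  isFetching = true
  try {
    cached = await fetchUsage()
    lastFetchTime = Date.now()
  } catch {
    // Non-critical: fall back to whatever usage data we already have
  } finally {
    isFetching = false
  }
  return cached
}
```

Because the guard returns the stale value instead of awaiting the in-flight fetch, callers never block on the upstream API — matching the "continue with existing state" behavior noted above.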
56-69: LGTM! Clean utility functions.

Both `forceRefreshUsage` and `getCurrentUsagePercent` are straightforward helpers with appropriate null handling.

src/start.ts (1)
64-75: LGTM! Initial usage fetch on startup.

Fetching usage information at startup provides immediate visibility into premium interaction consumption. The dynamic import and conditional logging are well-implemented.
src/lib/model-matcher.ts (4)
9-18: LGTM! Clean consumption lookup.

The consumption lookup and parsing logic is straightforward. Using 999 as a sentinel for N/A values works well for sorting purposes.
23-30: LGTM! Clear usage threshold check.

The 50% threshold check is clear and handles the case when usage data is unavailable appropriately.
35-45: LGTM! Clear filtering for 0x models.

The two-step filtering (valid context window, then 0x consumption) is clear and correct.
164-208: LGTM! Clean validation wrapper.

The `validateAndReplaceModel` function provides a clean, structured API around `findMatchingModel` with appropriate error handling and logging.
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/lib/model-matcher.ts (2)
132-136: Critical: Regex pattern breaks valid model IDs.

This issue was flagged in previous reviews but remains unresolved. The pattern `/(\d)-(\d)/g` matches ALL digit-dash-digit sequences, including multi-digit version strings. For example:

- `gpt-4-1106-preview` → `gpt-4.1106-preview` (mangles the version suffix)

This causes legitimate model requests to fail validation.
Apply this diff to restrict the pattern to single-digit versions only:
```diff
-  let normalizedRequested = requestedModel
+  const normalizedRequested = requestedModel
     .toLowerCase()
-    .replace(/_/g, "-") // 下划线转连字符
-    .replace(/-(\d{8})$/, "") // 移除 -20251001 风格的日期后缀
-    .replace(/(\d)-(\d)/g, "$1.$2") // 版本号:4-5 -> 4.5
+    .replace(/_/g, "-") // 下划线转连字符
+    .replace(/-\d{8}$/, "") // 移除 -20251001 风格的日期后缀
+    .replace(/\b(\d)-(\d)\b/g, "$1.$2") // 版本号:4-5 -> 4.5(仅单个数字)
```
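The difference between the broad and the word-boundary-scoped pattern can be checked directly; the inputs here are illustrative:

```typescript
// Broad pattern: any digit-dash-digit, including multi-digit version suffixes
const broad = (s: string) => s.replace(/(\d)-(\d)/g, "$1.$2")
// Scoped pattern: \b restricts the rewrite to single digits on both sides
const scoped = (s: string) => s.replace(/\b(\d)-(\d)\b/g, "$1.$2")

console.log(broad("gpt-4-1106-preview"))  // "gpt-4.1106-preview" (suffix mangled)
console.log(scoped("gpt-4-1106-preview")) // "gpt-4-1106-preview" (left intact)
console.log(scoped("claude-haiku-4-5"))   // "claude-haiku-4.5"
```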
158-159: Bidirectional prefix matching can produce ambiguous results.

This issue was flagged in previous reviews. When multiple models share prefixes (e.g., `gpt-4` and `gpt-4o`), the bidirectional check can match either depending on iteration order. If a user requests `gpt-4`, it might incorrectly match `gpt-4o` instead.

Apply this diff to use unidirectional matching:
```diff
   // 双向前缀检查:请求模型是可用模型的前缀,或可用模型是请求模型的前缀
   if (
-    normalizedAvailable.startsWith(normalizedRequested) ||
-    normalizedRequested.startsWith(normalizedAvailable)
+    normalizedAvailable.startsWith(normalizedRequested)
   ) {
     consola.info(
       `🔄 前缀匹配成功:'${requestedModel}' -> '${availableId}'`,
     )
```
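The order dependence is easy to reproduce with a bidirectional check; the model lists here are illustrative:

```typescript
// Bidirectional prefix match: returns the first model matching in either direction
const matchBidirectional = (requested: string, available: string[]) =>
  available.find(
    (id) => id.startsWith(requested) || requested.startsWith(id),
  ) ?? null

// The result for "gpt-4" flips with iteration order:
console.log(matchBidirectional("gpt-4", ["gpt-4o", "gpt-4"])) // "gpt-4o"
console.log(matchBidirectional("gpt-4", ["gpt-4", "gpt-4o"])) // "gpt-4"
```

With the unidirectional check (`available.startsWith(requested)` only), both orders would still match — but an exact-match pass run before the prefix pass is what actually guarantees `gpt-4` wins when it exists.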
🧹 Nitpick comments (2)
src/lib/model-matcher.ts (2)
18-33: Cache the consumption map at module level to avoid repeated allocations.

The Map is recreated on every function call, but since `modelConsumptionData` is static, the Map should be created once and reused.

Apply this diff to cache the Map at module level:

```diff
+// 模块级别缓存消耗映射,避免重复创建
+const consumptionMap = new Map(
+  modelConsumptionData.models.map((m) => [m.name, m.consumption]),
+)
+
 /**
  * 获取模型消耗值
  * 从配置文件中查询指定模型的消耗系数
  *
  * @param modelName - 模型名称
  * @returns 消耗系数(如 1.0, 2.0 等),未找到或解析失败返回 999
  *
  * @example
  * getModelConsumption("claude-3.5-sonnet") // 返回 1.0
  * getModelConsumption("gpt-4") // 返回 2.0
  * getModelConsumption("unknown-model") // 返回 999
  */
 function getModelConsumption(modelName: string): number {
-  // 将模型消耗数据转换为 Map 结构,方便快速查询
-  const consumptionMap = new Map(
-    modelConsumptionData.models.map((m) => [m.name, m.consumption]),
-  )
-
   // 获取消耗值,未找到则返回 "N/A"
   const consumption = consumptionMap.get(modelName) || "N/A"
```
94-202: Function complexity exceeds maximum allowed.

The function has a cyclomatic complexity of 18, which exceeds the maximum allowed complexity of 16. Consider extracting the matching strategies into separate helper functions.

For example, you could extract each strategy:

```typescript
function tryExactMatch(requestedModel: string, availableIds: string[]): string | null {
  // Strategy 1 logic
}

function tryPrefixMatch(requestedModel: string, availableIds: string[]): string | null {
  // Strategy 2 logic
}

function tryBaseNameMatch(requestedModel: string, availableIds: string[]): string | null {
  // Strategy 3 logic
}
```

Then simplify `findMatchingModel`:

```typescript
export function findMatchingModel(requestedModel: string): string | null {
  // ... setup code ...
  return (
    tryExactMatch(normalizedRequested, availableModelIds)
    || tryPrefixMatch(normalizedRequested, availableModelIds)
    || tryBaseNameMatch(normalizedRequested, availableModelIds)
    || getFallbackModel(highUsage, zeroConsumptionModels)
  )
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `src/lib/model-matcher.ts` (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-11-11T04:33:30.513Z
Learnt from: caozhiyuan
Repo: ericc-ch/copilot-api PR: 142
File: src/routes/messages/handler.ts:50-52
Timestamp: 2025-11-11T04:33:30.513Z
Learning: In src/routes/messages/handler.ts, forcing anthropicPayload.model to getSmallModel() when no tools are present is intentional behavior to fix Claude Code 2.0.28 warmup requests consuming premium model tokens. This applies to all requests without tools, not just warmup requests, and is an accepted design decision.
Applied to files:
src/lib/model-matcher.ts
🧬 Code graph analysis (1)
src/lib/model-matcher.ts (1)
- src/lib/state.ts (1)
  - `state` (24-29)
🪛 ESLint
src/lib/model-matcher.ts
[error] 9-9: Delete ·
(prettier/prettier)
[error] 12-12: Delete ·
(prettier/prettier)
[error] 23-23: Delete ··
(prettier/prettier)
[error] 29-29: Delete ··
(prettier/prettier)
[error] 37-37: Delete ·
(prettier/prettier)
[error] 40-40: Delete ·
(prettier/prettier)
[error] 56-56: Delete ·
(prettier/prettier)
[error] 58-58: Delete ·
(prettier/prettier)
[error] 61-61: Array type using 'string[]' is forbidden. Use 'Array' instead.
(@typescript-eslint/array-type)
[error] 64-64: Insert ⏎·····
(prettier/prettier)
[error] 77-77: Delete ·
(prettier/prettier)
[error] 82-82: Delete ·
(prettier/prettier)
[error] 86-86: Delete ·
(prettier/prettier)
[error] 89-89: Delete ·
(prettier/prettier)
[error] 94-94: Function 'findMatchingModel' has a complexity of 18. Maximum allowed is 16.
(complexity)
[error] 97-97: Insert ⏎·····
(prettier/prettier)
[error] 107-107: Delete ··
(prettier/prettier)
[error] 110-110: Delete ··
(prettier/prettier)
[error] 120-122: Replace ⏎······,⏎···· with ⚠️··高级交互使用率·>50%,模糊匹配仅限·0x·消耗模型
(prettier/prettier)
[error] 132-132: 'normalizedRequested' is never reassigned. Use 'const' instead.
(prefer-const)
[error] 134-134: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 134-134: Delete ················
(prettier/prettier)
[error] 135-135: Capturing group number 1 is defined but never used.
(regexp/no-unused-capturing-group)
[error] 135-135: Delete ··········
(prettier/prettier)
[error] 136-136: Prefer String#replaceAll() over String#replace().
(unicorn/prefer-string-replace-all)
[error] 136-136: Delete ····
(prettier/prettier)
[error] 143-145: Replace ⏎········🔄·标准化匹配成功:'${requestedModel}'·->·'${availableId}',⏎······ with 🔄·标准化匹配成功:'${requestedModel}'·->·'${availableId}'
(prettier/prettier)
[error] 158-158: Delete ·||
(prettier/prettier)
[error] 159-159: Insert ·||
(prettier/prettier)
[error] 161-163: Replace ⏎········🔄·前缀匹配成功:'${requestedModel}'·->·'${availableId}',⏎······ with 🔄·前缀匹配成功:'${requestedModel}'·->·'${availableId}'
(prettier/prettier)
[error] 206-206: Delete ·
(prettier/prettier)
[error] 211-211: Delete ·
(prettier/prettier)
[error] 217-217: Delete ·
(prettier/prettier)
[error] 222-222: Delete ·
(prettier/prettier)
[error] 239-239: Insert ⏎·····
(prettier/prettier)
[error] 265-267: Replace ⏎······✓·模型匹配并替换:${requestedModel}·->·${matchedModel},⏎···· with ✓·模型匹配并替换:${requestedModel}·->·${matchedModel}
(prettier/prettier)
🔇 Additional comments (3)
src/lib/model-matcher.ts (3)
43-52: LGTM!

The logic correctly checks if premium usage exceeds 50% and handles the case when `premiumInteractions` is undefined.
61-73: LGTM!

The function correctly filters models with 0 consumption that have valid context window tokens.
227-277: LGTM!

The validation wrapper correctly handles both success and failure cases, provides clear error messages with available models listed, and logs the results appropriately.
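The success/failure shape such a wrapper returns can be sketched as a discriminated union, which also removes the need for the non-null assertions flagged earlier; the field names here are assumed from the review, not verified against the file:

```typescript
type ValidationResult =
  | { success: true; model: string }
  | { success: false; error: string }

// Hypothetical wrapper around a matcher function: on a match, the caller
// replaces the payload's model; on a miss, it returns a 400-style error.
function validateAndReplaceModel(
  requested: string,
  findMatchingModel: (m: string) => string | null,
): ValidationResult {
  const matched = findMatchingModel(requested)
  if (matched === null) {
    return { success: false, error: `Model '${requested}' is not available` }
  }
  return { success: true, model: matched }
}
```

With this shape, `if (!validation.success) return ...` narrows the type, so `validation.model` is safely accessible afterward without `!`.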
功能概述 | Overview
添加模型消耗跟踪和智能模型匹配功能,支持模型别名自动替换和消耗优化。
Add model consumption tracking and intelligent model matching features, with support for automatic model alias replacement and consumption optimization.
主要变更 | Main Changes
新增功能 | New Features
- 模型消耗配置(`src/lib/model-consumption.json`)| Model consumption config
- 智能模型匹配(`src/lib/model-matcher.ts`)| Intelligent model matcher
- 为 `/models` API 添加按消耗排序 | Add consumption-based sorting for the `/models` API
- 添加 `/api/v0/*` 路由支持 | Add `/api/v0/*` route support

使用率监控 | Usage Monitoring
模型匹配功能 | Model Matching Features
- 版本号标准化(`4-5` → `4.5`)| Version normalization
- 日期后缀处理(`claude-haiku-4-5-20251001` → `claude-haiku-4.5`)| Date suffix handling

用户体验改进 | UX Improvements
文件变更 | File Changes
Summary by CodeRabbit
New Features
Improvements
Chores
✏️ Tip: You can customize this high-level summary in your review settings.