
Conversation


@jmagar jmagar commented Feb 4, 2026

Implements custom providers with multi-model support, adds IPC model discovery, and updates onboarding to encrypt tokens immediately.

Copilot AI review requested due to automatic review settings February 4, 2026 13:10

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 961b3797d8


Comment on lines 489 to 493
const handleDelete = (id: string) => {
if (window.confirm("Are you sure you want to delete this provider?")) {
if (activeProviderId === id) {
setActiveProviderId(null)
}


P2: Reset selected model when deleting active provider

When a user deletes the currently active custom provider, you clear activeProviderId but leave lastSelectedModelId pointing at the custom model. The transport fallback uses lastSelectedModelId even when no provider is active (see ipc-chat-transport.ts modelString construction), so after deletion the app can still try to send a model id that only existed in the removed provider, which can fail for default Claude Code. This happens if a user deletes the active provider without first switching to a standard model. Consider resetting lastSelectedModelId to a standard model (or the first available base model) when removing the active provider.

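A minimal sketch of the fix Codex suggests, written as a pure state transition so it can be unit tested. The names ModelProvider, DEFAULT_MODEL_ID, and stateAfterDelete are illustrative assumptions, not identifiers from this PR:

```typescript
// Illustrative types; the PR's real ModelProvider shape may differ.
interface ModelProvider {
  id: string
  models: { id: string }[]
}

// Assumed standard fallback model id, not taken from the PR.
const DEFAULT_MODEL_ID = "claude-sonnet-4"

// Compute the selection state after deleting a provider. If the deleted
// provider was active, clear it and fall back to a model that still exists.
function stateAfterDelete(
  providers: ModelProvider[],
  activeProviderId: string | null,
  lastSelectedModelId: string | null,
  deletedId: string,
): { activeProviderId: string | null; lastSelectedModelId: string | null } {
  if (activeProviderId !== deletedId) {
    return { activeProviderId, lastSelectedModelId }
  }
  const remaining = providers.filter(p => p.id !== deletedId)
  const fallback = remaining[0]?.models[0]?.id ?? DEFAULT_MODEL_ID
  return { activeProviderId: null, lastSelectedModelId: fallback }
}
```

In the actual handler, the returned values would be written back with setActiveProviderId and setLastSelectedModelId.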

Comment on lines +259 to 263
// New: Model providers storage
export const modelProvidersAtom = atomWithStorage<ModelProvider[]>(
"agents:model-providers",
[OFFLINE_PROVIDER], // Start with offline provider
undefined,


P2: Migrate legacy custom Claude config to providers

The new model provider storage only initializes from agents:model-providers and drops the previous agents:claude-custom-config key entirely. There’s no migration path in this commit, so existing users who set a custom API key/model via the legacy storage will lose that configuration on upgrade and fall back to the default Claude Code setup until they re‑enter credentials. Consider migrating the old customClaudeConfig into a provider entry (or reading it as a fallback) to avoid silent regression for existing installs.

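A hedged sketch of what such a migration could look like, as a pure function over the raw stored values. The storage keys come from the comment above; the LegacyConfig shape, the provider id, and all other names are assumptions:

```typescript
// Assumed shape of the legacy "agents:claude-custom-config" value.
interface LegacyConfig {
  baseUrl: string
  token?: string
  model?: string
}

interface ModelProvider {
  id: string
  name: string
  baseUrl: string
  token?: string
  models: string[]
}

// Fold the legacy config, if present and parseable, into the new provider
// list. Malformed or missing legacy data leaves the list untouched.
function migrateLegacyConfig(
  legacyJson: string | null,
  providers: ModelProvider[],
): ModelProvider[] {
  if (!legacyJson) return providers
  try {
    const legacy = JSON.parse(legacyJson) as LegacyConfig
    if (!legacy.baseUrl) return providers
    const migrated: ModelProvider = {
      id: "provider-migrated-legacy",
      name: "Custom (migrated)",
      baseUrl: legacy.baseUrl,
      token: legacy.token,
      models: legacy.model ? [legacy.model] : [],
    }
    return [...providers, migrated]
  } catch {
    return providers
  }
}
```

This would run once during app initialization, after which the old storage key can be cleared.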


Copilot AI left a comment


Pull request overview

This PR implements a comprehensive provider-based model configuration system replacing the previous single custom configuration approach. The changes enable users to configure multiple model providers (e.g., Ollama, LM Studio, custom endpoints) with multi-model support per provider.

Changes:

  • Refactored model configuration from single customClaudeConfigAtom to multiple ModelProvider entries in modelProvidersAtom
  • Added immediate token encryption during onboarding flow using Electron's safeStorage API
  • Implemented IPC model discovery endpoint that supports both Ollama and OpenAI-compatible APIs
  • Updated UI components to display and select from multiple providers and their models
  • Added provider management dialog in settings with model discovery feature

Reviewed changes

Copilot reviewed 10 out of 11 changed files in this pull request and generated 13 comments.

Summary per file:

  • src/renderer/lib/atoms/index.ts: Replaced single custom config with ModelProvider type and provider array storage
  • src/renderer/features/onboarding/api-key-onboarding-page.tsx: Updated onboarding to encrypt tokens immediately and create provider entries
  • src/renderer/features/agents/main/new-chat-form.tsx: Added provider-aware model dropdown with sections for standard and custom models
  • src/renderer/features/agents/main/chat-input-area.tsx: Updated chat input model selector to support custom providers
  • src/renderer/features/agents/main/active-chat.tsx: Updated active chat display to show custom model indicator
  • src/renderer/features/agents/lib/models.ts: Added getAvailableModels function to combine standard and custom provider models
  • src/renderer/features/agents/lib/ipc-chat-transport.ts: Updated transport to use provider-based configuration and encrypt tokens on the fly
  • src/renderer/components/dialogs/settings-tabs/agents-models-tab.tsx: Replaced simple override fields with full provider management UI including add/edit/delete
  • src/main/lib/trpc/routers/claude.ts: Added encryptToken and fetchModels endpoints, updated query to decrypt tokens
  • package.json: Version bump to 0.0.55 and added TypeScript native preview dependency
  • bun.lock: Updated lockfile with new dependencies and version changes


Comment on lines 2414 to 2446
try {
const ollamaRes = await fetch(`${cleanUrl}/api/tags`)
if (ollamaRes.ok) {
const data = await ollamaRes.json()
if (Array.isArray(data.models)) {
const models = data.models
.map((model: { name?: string }) => model.name)
.filter((name: string | undefined): name is string => Boolean(name))
return { models }
}
}
} catch (error) {
console.warn("[models] Failed to fetch Ollama tags:", error)
}

try {
const res = await fetch(`${cleanUrl}/v1/models`, {
headers: authToken ? { Authorization: `Bearer ${authToken}` } : {},
})
if (res.ok) {
const data = await res.json()
if (Array.isArray(data.data)) {
const models = data.data
.map((model: { id?: string }) => model.id)
.filter((id: string | undefined): id is string => Boolean(id))
return { models }
}
}
} catch (error) {
console.warn("[models] Failed to fetch OpenAI-compatible models:", error)
}

return { models: [] as string[] }

Copilot AI Feb 4, 2026


The fetchModels endpoint tries Ollama format first, then falls back to OpenAI-compatible format. However, both attempts swallow errors and return empty models array on failure. This makes it difficult to distinguish between:

  1. A server that responds but has no models
  2. A server that's unreachable
  3. A server that returns an error

Consider returning more detailed information about what failed, especially for debugging purposes. At minimum, the response could include a status field indicating whether the request succeeded or failed and why.

Suggested change (replacing the block above):
const debugInfo: {
ollama?: { ok: boolean; status?: number; error?: string }
openai?: { ok: boolean; status?: number; error?: string }
} = {}
// Try Ollama-compatible endpoint first
try {
const ollamaRes = await fetch(`${cleanUrl}/api/tags`)
debugInfo.ollama = {
ok: ollamaRes.ok,
status: ollamaRes.status,
}
if (ollamaRes.ok) {
const data = await ollamaRes.json()
if (Array.isArray((data as any).models)) {
const models = (data as any).models
.map((model: { name?: string }) => model.name)
.filter((name: string | undefined): name is string => Boolean(name))
if (models.length > 0) {
return {
models,
status: {
success: true,
source: "ollama" as const,
details: debugInfo,
},
}
}
debugInfo.ollama.error = "No models found in Ollama response"
} else {
debugInfo.ollama.error = "Ollama response missing 'models' array"
}
} else {
debugInfo.ollama.error = `Ollama endpoint returned HTTP ${ollamaRes.status}`
}
} catch (error) {
const message =
error instanceof Error ? error.message : String(error)
debugInfo.ollama = {
ok: false,
error: `Failed to fetch Ollama tags: ${message}`,
}
console.warn("[models] Failed to fetch Ollama tags:", error)
}
// Fallback: try OpenAI-compatible /v1/models endpoint
try {
const res = await fetch(`${cleanUrl}/v1/models`, {
headers: authToken ? { Authorization: `Bearer ${authToken}` } : {},
})
debugInfo.openai = {
ok: res.ok,
status: res.status,
}
if (res.ok) {
const data = await res.json()
if (Array.isArray((data as any).data)) {
const models = (data as any).data
.map((model: { id?: string }) => model.id)
.filter((id: string | undefined): id is string => Boolean(id))
if (models.length > 0) {
return {
models,
status: {
success: true,
source: "openai" as const,
details: debugInfo,
},
}
}
debugInfo.openai.error = "No models found in OpenAI-compatible response"
} else {
debugInfo.openai.error = "OpenAI-compatible response missing 'data' array"
}
} else {
debugInfo.openai.error = `OpenAI-compatible endpoint returned HTTP ${res.status}`
}
} catch (error) {
const message =
error instanceof Error ? error.message : String(error)
debugInfo.openai = {
ok: false,
error: `Failed to fetch OpenAI-compatible models: ${message}`,
}
console.warn("[models] Failed to fetch OpenAI-compatible models:", error)
}
// Both attempts failed to return any models
return {
models: [] as string[],
status: {
success: false,
source: "none" as const,
reason:
"Failed to fetch models from both Ollama and OpenAI-compatible endpoints",
details: debugInfo,
},
}

Comment on lines +520 to +557
<div className="bg-background rounded-lg border border-border overflow-hidden divide-y divide-border">
{providers.map(provider => (
<div key={provider.id} className="flex items-center justify-between p-3 hover:bg-muted/50">
<div className="flex items-center gap-3">
<div className="flex-shrink-0">
{provider.isOffline ? (
<div className="h-8 w-8 rounded bg-blue-500/10 flex items-center justify-center text-blue-500">
<Server className="h-4 w-4" />
</div>
) : (
<div className="h-8 w-8 rounded bg-purple-500/10 flex items-center justify-center text-purple-500">
<Globe className="h-4 w-4" />
</div>
)}
</div>
<div>
<div className="text-sm font-medium flex items-center gap-2">
{provider.name}
{provider.isOffline && <Badge variant="secondary" className="text-[10px] h-4">OFFLINE</Badge>}
</div>
<div className="text-xs text-muted-foreground">
{provider.models.length} model{provider.models.length === 1 ? "" : "s"} • {provider.baseUrl}
</div>
</div>
</div>
<div className="flex items-center gap-1">
<Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-foreground" onClick={() => handleEdit(provider)}>
<Pencil className="h-3.5 w-3.5" />
</Button>
{!provider.isOffline && (
<Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-red-500" onClick={() => handleDelete(provider.id)}>
<Trash className="h-3.5 w-3.5" />
</Button>
)}
</div>
</div>
))}
</div>

Copilot AI Feb 4, 2026


The provider list includes the offline Ollama provider by default (OFFLINE_PROVIDER). However, the settings UI shows all providers in the list including the offline one. Users might be confused why they can't delete the offline provider (line 549 prevents deletion with !provider.isOffline check). Consider adding a visual indicator or help text explaining that the offline provider is built-in and cannot be removed.

Suggested change (replacing the block above):
<>
<div className="bg-background rounded-lg border border-border overflow-hidden divide-y divide-border">
{providers.map(provider => (
<div key={provider.id} className="flex items-center justify-between p-3 hover:bg-muted/50">
<div className="flex items-center gap-3">
<div className="flex-shrink-0">
{provider.isOffline ? (
<div className="h-8 w-8 rounded bg-blue-500/10 flex items-center justify-center text-blue-500">
<Server className="h-4 w-4" />
</div>
) : (
<div className="h-8 w-8 rounded bg-purple-500/10 flex items-center justify-center text-purple-500">
<Globe className="h-4 w-4" />
</div>
)}
</div>
<div>
<div className="text-sm font-medium flex items-center gap-2">
{provider.name}
{provider.isOffline && <Badge variant="secondary" className="text-[10px] h-4">OFFLINE</Badge>}
</div>
<div className="text-xs text-muted-foreground">
{provider.models.length} model{provider.models.length === 1 ? "" : "s"} • {provider.baseUrl}
</div>
</div>
</div>
<div className="flex items-center gap-1">
<Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-foreground" onClick={() => handleEdit(provider)}>
<Pencil className="h-3.5 w-3.5" />
</Button>
{!provider.isOffline && (
<Button size="icon" variant="ghost" className="h-8 w-8 text-muted-foreground hover:text-red-500" onClick={() => handleDelete(provider.id)}>
<Trash className="h-3.5 w-3.5" />
</Button>
)}
</div>
</div>
))}
</div>
{providers.some(p => p.isOffline) && (
<div className="text-[11px] text-muted-foreground">
The offline provider is built in and cannot be removed.
</div>
)}
</>

Comment on lines +2456 to +2458
const encrypted = safeStorage.isEncryptionAvailable()
? safeStorage.encryptString(input.token).toString("base64")
: Buffer.from(input.token, "utf-8").toString("base64")

Copilot AI Feb 4, 2026


The fallback encryption when safeStorage is unavailable uses plain base64 encoding, which provides no actual encryption. While it's prefixed with "enc:", this gives a false sense of security. Consider either:

  1. Refusing to store tokens when encryption is unavailable and displaying an error to the user
  2. Clearly documenting this limitation and warning users
  3. Using an alternative encryption method as fallback

This is critical because tokens stored with base64 can be easily decoded by anyone with file system access.

Suggested change (replacing the block above):
if (!safeStorage.isEncryptionAvailable()) {
throw new Error(
"Secure token storage is not available on this system; cannot encrypt token.",
)
}
const encrypted = safeStorage.encryptString(input.token).toString("base64")

appStore.set(modelProvidersAtom, updatedProviders)
}
} catch (err) {
console.error("[models] Failed to encrypt custom model token:", err)

Copilot AI Feb 4, 2026


The encryption error is caught and logged but the function continues with an unencrypted token in the customConfig. This could lead to tokens being sent in plain text. After a failed encryption attempt, the token should either:

  1. Not be used at all (abort the operation)
  2. Be set to undefined/empty to prevent accidental plain text transmission
  3. Re-throw the error to prevent the chat from proceeding

Additionally, the user should be notified that their token could not be secured.

Suggested change (replacing the line above):
console.error("[models] Failed to encrypt custom model token:", err)
Sentry.captureException(err)
// Ensure we do not continue with a plaintext token after a failed encryption
customConfig = { ...customConfig, token: undefined }
if (activeProviderId) {
const providers = appStore.get(modelProvidersAtom)
const updatedProviders = providers.map((provider) =>
provider.id === activeProviderId
? { ...provider, token: undefined }
: provider,
)
appStore.set(modelProvidersAtom, updatedProviders)
}
toast.error(
"Your API token could not be securely encrypted. It has been cleared. Please re-enter it in settings.",
)

const { encrypted } = await encryptTokenMutation.mutateAsync({
token: trimmedToken,
})
const providerId = `provider-${Date.now()}`

Copilot AI Feb 4, 2026


Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession (e.g., within the same millisecond). Consider using a more robust ID generation method such as:

  • crypto.randomUUID() for truly unique IDs
  • A counter combined with timestamp
  • nanoid or similar library

This is particularly important in the onboarding flow where users might submit quickly or the same code path could be triggered multiple times.

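A minimal sketch of the collision-free alternative, assuming a runtime where crypto.randomUUID is available (Node and Electron renderers both provide it):

```typescript
import { randomUUID } from "node:crypto"

// randomUUID provides 122 bits of randomness, so ids stay unique even
// when several providers are created within the same millisecond.
function newProviderId(): string {
  return `provider-${randomUUID()}`
}
```

In renderer code, the global `crypto.randomUUID()` works without the import.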
)
.mutation(async ({ input }) => {
const cleanUrl = input.baseUrl.replace(/\/$/, "")
const authToken = input.token ? decryptIfNeeded(input.token) : ""

Copilot AI Feb 4, 2026


The token field is optional (z.string().optional()) but the decryptIfNeeded function is called on it without checking if it exists first. While decryptIfNeeded does handle falsy values, the type signature suggests the token could be undefined. Consider making the validation more explicit by using .nullable() or adding a default value to make the intent clearer.

Comment on lines +260 to 265
export const modelProvidersAtom = atomWithStorage<ModelProvider[]>(
"agents:model-providers",
[OFFLINE_PROVIDER], // Start with offline provider
undefined,
{ getOnInit: true },
)

Copilot AI Feb 4, 2026


The old customClaudeConfigAtom (stored at "agents:claude-custom-config") has been completely removed without providing a migration path for existing users. Users who configured custom models in previous versions will lose their configuration after this update. Consider adding a one-time migration that:

  1. Checks for existing data in the old storage key
  2. Converts it to the new ModelProvider format
  3. Adds it to modelProvidersAtom
  4. Clears the old storage key

This should be done during app initialization to ensure a smooth upgrade experience.

const { encrypted } = await encryptTokenMutation.mutateAsync({
token: key.trim(),
})
const providerId = `provider-${Date.now()}`

Copilot AI Feb 4, 2026


Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession (e.g., within the same millisecond). Consider using a more robust ID generation method such as:

  • crypto.randomUUID() for truly unique IDs
  • A counter combined with timestamp
  • nanoid or similar library

This is particularly important in the onboarding flow where users might submit quickly or the same code path could be triggered multiple times.

Suggested change (replacing the line above):
const providerId = `provider-${
typeof crypto !== "undefined" && typeof crypto.randomUUID === "function"
? crypto.randomUUID()
: Date.now()
}`

}

const newProvider: ModelProvider = {
id: provider?.id || `provider-${Date.now()}`,

Copilot AI Feb 4, 2026


Using Date.now() for ID generation can lead to collisions if multiple providers are created in rapid succession. Consider using a more robust ID generation method such as crypto.randomUUID(), a counter combined with timestamp, or nanoid.

Suggested change (replacing the line above):
id: provider?.id || `provider-${crypto.randomUUID()}`,


import { useAtom, useAtomValue, useSetAtom } from "jotai"
import { ChevronDown, Loader2, RefreshCw, Zap } from "lucide-react"
import { ChevronDown, Loader2, RefreshCw, Zap, Globe } from "lucide-react"

Copilot AI Feb 4, 2026


Unused imports: Loader2 and RefreshCw.

Suggested change (replacing the line above):
import { ChevronDown, Zap, Globe } from "lucide-react"

Author

jmagar commented Feb 4, 2026

Addressed review feedback: added legacy custom config migration, reset lastSelectedModelId on provider deletion, improved fetchModels status/debug info, switched provider id to randomUUID, and removed unused imports.
