fix: forward fetch and headers options to AI SDK providers #1297
Conversation
🦋 Changeset detected. Latest commit: c438746. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
Greptile Summary: fixed custom `fetch` and `headers` options from `ClientOptions` being silently ignored by AI SDK providers; both are now forwarded to the provider config.

Confidence Score: 5/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Stagehand
    participant LLMProvider
    participant getAISDKLanguageModel
    participant AISDKCreator
    participant AIProvider
    User->>Stagehand: new Stagehand({model: {modelName, fetch, headers}})
    User->>Stagehand: act() / extract() / observe()
    Stagehand->>LLMProvider: getClient(modelName, clientOptions)
    alt modelName contains "/"
        LLMProvider->>LLMProvider: Parse subProvider and subModelName
        LLMProvider->>getAISDKLanguageModel: call(subProvider, subModelName, apiKey, baseURL, headers, fetch)
        getAISDKLanguageModel->>getAISDKLanguageModel: Build providerConfig object
        Note over getAISDKLanguageModel: Add optional fields:<br/>apiKey, baseURL, headers, fetch
        getAISDKLanguageModel->>AISDKCreator: creator(providerConfig)
        Note over getAISDKLanguageModel: Type assertion: providerConfig as {apiKey: string}
        AISDKCreator->>AIProvider: Initialize with config
        AIProvider-->>getAISDKLanguageModel: provider instance
        getAISDKLanguageModel->>AIProvider: provider(subModelName)
        AIProvider-->>getAISDKLanguageModel: languageModel
        getAISDKLanguageModel-->>LLMProvider: languageModel
        LLMProvider->>LLMProvider: new AISdkClient({model: languageModel})
    else predefined model
        LLMProvider->>LLMProvider: Create provider-specific client
    end
    LLMProvider-->>Stagehand: LLMClient
    Note over Stagehand,AIProvider: When AI SDK makes API requests,<br/>custom fetch and headers are used
```
Additional Comments (1)
- `packages/core/lib/v3/llm/LLMProvider.ts`, lines 136-145: logic: the `else` path (when no apiKey is provided) doesn't receive `headers` or `fetch` options - users without explicit apiKeys will have their options silently ignored
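The branching the reviewer flags can be sketched as follows. This is a minimal illustration with made-up names, not the actual `LLMProvider.ts` source: only the explicit-apiKey branch forwards the extra options.

```typescript
// Illustrative sketch of the flagged bug (hypothetical names, not the
// real LLMProvider internals): options survive only one branch.
interface Options {
  apiKey?: string;
  headers?: Record<string, string>;
  fetch?: unknown;
}

function buildConfigBuggy(opts: Options): Record<string, unknown> {
  if (opts.apiKey) {
    // Explicit-key path: headers/fetch are forwarded.
    return { apiKey: opts.apiKey, headers: opts.headers, fetch: opts.fetch };
  }
  // Env-var path: headers and fetch are silently dropped.
  return {};
}
```

A user relying on `OPENAI_API_KEY` from the environment would hit the second branch and lose both options without any error.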
2 files reviewed, 1 comment
Force-pushed from 6e684de to 8384272
Updated getAISDKLanguageModel() to always use creator functions with an optional config object. This ensures custom fetch/headers work for ALL users, including those relying on environment variables.

Changes:
- Removed if/else branching (addresses bot feedback)
- Build provider config with optional fields only
- Creator functions automatically use env vars when apiKey not provided
- Custom fetch/headers now forwarded in all scenarios

Testing:
- Verified with real website without explicit apiKey
- Custom fetch called successfully
- All custom headers forwarded correctly
- Environment variable fallback works as expected

Fixes browserbase#1296
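The single-path approach the commit describes can be sketched like this (an illustrative helper, not the exact `getAISDKLanguageModel()` code): build one config object, adding only the fields that are present, so the AI SDK creator falls back to environment variables when `apiKey` is absent.

```typescript
// Sketch of "build provider config with optional fields only"
// (hypothetical helper, not the actual source).
interface ProviderOptions {
  apiKey?: string;
  baseURL?: string;
  headers?: Record<string, string>;
  fetch?: unknown;
}

function buildProviderConfig(opts: ProviderOptions): Record<string, unknown> {
  const config: Record<string, unknown> = {};
  // Only set fields that were actually provided, so the AI SDK creator
  // can apply its own env-var defaults for anything missing.
  if (opts.apiKey !== undefined) config.apiKey = opts.apiKey;
  if (opts.baseURL !== undefined) config.baseURL = opts.baseURL;
  if (opts.headers !== undefined) config.headers = opts.headers;
  if (opts.fetch !== undefined) config.fetch = opts.fetch;
  return config;
}
```

Because no branch ever discards `headers` or `fetch`, the options reach the provider in every scenario, with or without an explicit `apiKey`.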
2 files reviewed, no comments
fix: forward fetch and headers options to AI SDK providers
Fixes #1296
why
When using AI SDK provider models (e.g., `openai/gpt-4o-mini`), custom `fetch` and `headers` options from `ClientOptions` are silently ignored, even though:

- the `ClientOptions` TypeScript interface includes them
- the `baseURL` option IS forwarded (inconsistent behavior)

This blocks important use cases, such as routing LLM requests through an authenticated proxy. Users currently receive no error when these options are ignored, making this bug difficult to discover.
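As one concrete example of the blocked use case, a custom `fetch` that injects proxy credentials might look like the following. The wrapper shape and simplified types are illustrative, not part of the PR.

```typescript
// Illustrative proxy-auth wrapper: inject an Authorization header into
// every outgoing LLM request. Types are simplified for the sketch.
type SimpleFetch = (
  url: string,
  init?: { headers?: Record<string, string> },
) => Promise<{ status: number }>;

function withProxyAuth(inner: SimpleFetch, token: string): SimpleFetch {
  return (url, init = {}) =>
    inner(url, {
      ...init,
      // Merge the caller's headers, then add the proxy credential.
      headers: { ...init.headers, Authorization: `Bearer ${token}` },
    });
}
```

If the library drops the custom `fetch` on the floor, the proxy never sees the credential and every request fails authentication, with no error surfaced at configuration time.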
what changed
Modified `packages/core/lib/v3/llm/LLMProvider.ts`:

- Added an `ExtendedClientOptions` interface for type-safe property access (lines 20-23)
- Extended the `getAISDKLanguageModel()` function signature to accept `headers` and `fetch` parameters
- Forwarded the new options via the `providerConfig` object passed to the AI SDK provider
- Updated `getClient()` to pass the options with type assertions

Type Safety:
- The `ExtendedClientOptions` interface avoids `@typescript-eslint/no-explicit-any` errors

Compatibility:
- Follows the existing forwarding pattern used for the `baseURL` parameter

test plan
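The `modelName contains "/"` routing step from the sequence diagram, which decides whether a request goes through `getAISDKLanguageModel()` at all, can be sketched like this (an illustrative helper, not the actual `getClient()` code):

```typescript
// Hypothetical sketch of splitting "provider/model" strings, as in the
// alt branch of the sequence diagram.
function parseModelName(
  modelName: string,
): { subProvider: string; subModelName: string } | null {
  const idx = modelName.indexOf("/");
  if (idx === -1) return null; // predefined model: provider-specific client
  return {
    subProvider: modelName.slice(0, idx),
    subModelName: modelName.slice(idx + 1),
  };
}
```

Only names that parse into a provider/model pair reach the AI SDK path, which is why the fix matters specifically for models like `openai/gpt-4o-mini`.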
Manual Testing
Run the provided test example:
Expected output:
Runtime Verification ✅
Verified with actual OpenAI API call:
This confirms:
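The verification details are elided above, but a counting-fetch check along these lines could confirm the custom implementation is actually invoked (a sketch under assumed shapes, not the PR's actual test code):

```typescript
// Minimal counting-fetch sketch: proves whichever client receives this
// fetch implementation really calls it, rather than a built-in default.
function makeCountingFetch() {
  let calls = 0;
  const fetchImpl = async (_url: string): Promise<{ ok: boolean }> => {
    calls += 1; // incremented synchronously on each invocation
    return { ok: true };
  };
  return { fetchImpl, callCount: () => calls };
}
```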
Test File
Added `examples/test-custom-fetch.ts`, which verifies that the custom fetch is called and custom headers are forwarded.

Real-World Use Case
This fix enables our production LLM proxy integration:
Code Quality
- Type-safe property access (`ExtendedClientOptions`)
- Follows the existing forwarding pattern (`baseURL`)

Existing Tests
All existing tests continue to pass (no breaking changes).
Context
We currently use a runtime patch in production that modifies the compiled `dist/index.js` to work around this bug. This PR provides a proper source-code fix. It enables LLM proxy authentication, which is critical for production deployments where all LLM requests are routed through an authenticated proxy for billing, monitoring, and security.
Happy to help test and refine this fix!