ClientAgentJS is a plain JavaScript browser library for adding AI agent capabilities to web applications without building your own backend.
ClientAgentJS uses a Zero-Backend Architecture. Unlike traditional AI applications that proxy requests through a server, this library runs entirely in the browser. It follows a Direct Client-to-Provider model where the user's credentials never leave their device and requests go directly from the browser to the AI provider.
- User Form Assistance: Enhance public-facing web forms with AI capabilities for text generation, translation, or content improvement—helping end users fill out profiles, reviews, or applications without changing your backend.
- Admin & Editor Form Assistance: Restrict AI-powered form assistance to internal users, administrators, or content editors. Streamline product descriptions, service listings, CRM entries, and internal tool workflows while maintaining control over who accesses the AI features.
- Internal & Productivity Tools: Build tools for teams that already have their own AI keys, avoiding infrastructure costs and privacy concerns.
- Contextual Web Assistants: Create agents that read the current page content or user selection to provide summaries or answer questions.
- Creative Editors: Integrate AI help directly into writing tools, CMS editors, or image generators where the user controls their own consumption.
- Rapid Prototyping: Validate AI-powered ideas in minutes without setting up any server-side environment.
- Multi-Provider Apps: Allow users to choose between OpenAI, Anthropic, or local models (via Ollama) within the same interface.
Try the interactive examples online:
- Chat Explorer - Full-featured chat interface
- Form Assistance - AI-powered form helpers
- MCP Tools - Model Context Protocol integration
- Zero-Backend: No proxy servers, no hidden costs. Your browser talks directly to the AI provider.
- Multi-Provider: Native support for OpenAI, Anthropic (Claude), and Google (Gemini).
- Automatic Retries: Built-in resilience against transient network errors and rate limiting (429) with exponential backoff.
- Detailed Error Reporting: Captures and displays descriptive error messages directly from the AI providers to simplify debugging.
- Global Distribution: Available as both ESM modules and a global `ClientAgent` object for classic web environments.
- MCP Compatible: Connect your agent to the vast ecosystem of Model Context Protocol tools and servers.
- Privacy First: API keys and conversation history stay in your browser's local/session storage.
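The retry feature above can be sketched as an exponential-backoff loop around `fetch`. This is an illustrative pattern only (the function names, base delay, and retry count are arbitrary choices for the example), not the library's internal code:

```javascript
// Illustrative exponential-backoff pattern, not the library's internals.
// Base delay and retry count are arbitrary values chosen for the example.
const BASE_DELAY_MS = 500;
const MAX_RETRIES = 4;

// Delay doubles on each attempt: 500 ms, 1000 ms, 2000 ms, ...
function backoffDelay(attempt) {
  return BASE_DELAY_MS * 2 ** attempt;
}

async function fetchWithRetry(url, options = {}) {
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    try {
      const response = await fetch(url, options);
      // Retry only on rate limiting; other responses are returned as-is
      if (response.status !== 429) return response;
    } catch (err) {
      // Network error: rethrow once the retry budget is exhausted
      if (attempt === MAX_RETRIES) throw err;
    }
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
  }
  throw new Error(`Still rate limited after ${MAX_RETRIES + 1} attempts`);
}
```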
ClientAgentJS supports the Model Context Protocol, allowing agents to use external tools and data sources.
- External Tools: Connect to any MCP server that supports the `streamable-http` transport (or compatible SSE/JSON-RPC over HTTP).
- The CORS Challenge: Since the library runs entirely in the browser, external MCP servers must support CORS (Cross-Origin Resource Sharing).
- Hosting Options:
- Local Servers: Users can run MCP servers on their own machines (localhost) to interact with local files or databases.
- Provider Servers: Application developers can host and provide their own MCP servers pre-configured for their users.
- Shared Servers: Reuse the same MCP configuration across different profiles and sessions.
For more details on implementing or connecting to MCP servers, see the MCP Tools example.
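At the wire level, a `streamable-http` MCP exchange is JSON-RPC 2.0 over HTTP POST. The following hand-rolled sketch is for illustration only — the endpoint URL is a placeholder and the helper names are hypothetical; the library's MCP client performs this for you:

```javascript
// Wire-level sketch only — the library's MCP client handles this for you.
// The endpoint URL is a placeholder; helper names are hypothetical.
function buildMcpRequest(method, params = {}, id = 1) {
  return { jsonrpc: "2.0", id, method, params };
}

async function listMcpTools(endpoint) {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Streamable HTTP servers may reply with JSON or an SSE stream
      "Accept": "application/json, text/event-stream",
    },
    body: JSON.stringify(buildMcpRequest("tools/list")),
  });
  return response.json();
}
```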
- no backend owned by the application developer
- no framework
- no Node.js at runtime
- no bundler to consume the distributed library
- `createAgent()` with no required arguments
- profile storage in `sessionStorage` or `localStorage`
- `ask()` for one-shot requests
- `stream()` for streaming responses
- `createSession()` for multi-turn conversations
- provider adapters for OpenAI-compatible, Anthropic-native, and Google-native profiles
- optional MCP server configuration per profile
- optional local tool registration
- a browser configuration panel for profiles and MCP servers
Build the distributable files:

```bash
npm run build
```

Run tests:

```bash
npm test
```

Since the project is not yet published on npm, use the pre-built files from the dist/ folder:

ESM (recommended for modern projects):

```javascript
import { createAgent } from './dist/clientagentjs.esm.js';
```

Global script (for classic web environments):
```html
<script src="dist/clientagentjs.global.js"></script>
<script>
  // Access via the global variable ClientAgent
  const agent = ClientAgent.createAgent({
    storageKey: "my-app-unique-key"
  });
</script>
```

```javascript
import { createAgent } from './dist/clientagentjs.esm.js';

const agent = createAgent();

// Check if a profile is configured
if (!agent.isReady()) {
  agent.openConfigPanel();
}

// Simple request
const response = await agent.ask("Hello, who are you?");
console.log(response.text);

// Streaming request (in the browser, log or append each delta as it arrives)
for await (const chunk of agent.stream("Tell me a story")) {
  console.log(chunk.delta);
}
```

```javascript
const controller = new AbortController();

for await (const chunk of agent.stream("Write a headline", {
  signal: controller.signal
})) {
  console.log(chunk.text);
}
```

```javascript
const session = agent.createSession();

await session.ask("Who won the previous round?");
await session.ask("Now summarize it in one sentence.");
```

Profiles store provider, model, credentials, prompt defaults, and related options. MCP server configuration is stored separately and referenced from profiles.
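As an illustration, a stored profile might look like the following. Only `providerType` is a field documented here; the other field names are hypothetical and not the library's actual schema:

```javascript
// Hypothetical profile shape — only providerType is a documented field;
// the other field names are illustrative, not the library's actual schema.
const exampleProfile = {
  providerType: "openai-compatible",
  model: "gpt-4o-mini",              // placeholder model id
  apiKey: "<user-provided-key>",     // kept in local/session storage only
  systemPrompt: "You are a concise assistant.",
};
```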
```javascript
const backup = agent.exportProfiles();
agent.importProfiles(backup);
```

Calling `importProfiles()` always shows a confirmation warning before replacing the current profiles and MCP configuration.
The agent can register developer-defined tools that run in the browser and can be used automatically by compatible providers.
```javascript
agent.registerTool("calculate_sum", ({ numbers }) => {
  return {
    total: numbers.reduce((acc, value) => acc + Number(value || 0), 0)
  };
});
```

Public methods:

- `registerTool(name, handler)`
- `unregisterTool(name)`
- `listTools()`
- `runTool(name, input)`
Tool handlers receive one input object and can return plain JSON-compatible data or a string. If the tool is used through model tool-calling, the result is serialized and injected back into the model flow.
Use local tools for lightweight client-side actions. For advanced or external integrations, prefer MCP servers.
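To illustrate the serialization rule above, here is a standalone sketch of how a tool result might be turned into the string that re-enters the model flow. The `handlers` map and `runToolCall` helper are hypothetical, not the library's internals:

```javascript
// Standalone illustration of the serialization rule: object results are
// JSON-stringified, string results pass through unchanged. The helper
// names (handlers, runToolCall) are hypothetical, not the library's API.
const handlers = {
  calculate_sum: ({ numbers }) => ({
    total: numbers.reduce((acc, value) => acc + Number(value || 0), 0),
  }),
};

function runToolCall(name, input) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  const result = handler(input);
  // The serialized string is what gets injected back into the model flow
  return typeof result === "string" ? result : JSON.stringify(result);
}
```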
The built-in UI opens from the public API:
```javascript
agent.openConfigPanel();
agent.openMcpPanel();
```

The built-in panels use English translations.
The profile field `providerType` selects the adapter implementation:

- `openai-compatible`
- `anthropic-native`
- `google-native`
Since ClientAgentJS runs entirely in the browser, it requires AI providers to support CORS (Cross-Origin Resource Sharing). Not all providers enable CORS by default, and some require special configuration.
- Anthropic: Supports browser access when the `anthropic-dangerous-direct-browser-access` header is included. Handled automatically by the library.
- Google (Gemini): Supports CORS on the generative language API. No additional configuration needed.
- OpenAI: Does not support direct browser access. Use a backend proxy or OpenRouter instead.
- Ollama: Requires manual CORS configuration (see below).
- Other OpenAI-compatible providers: CORS support varies. Check their documentation, or use OpenRouter as a unified alternative.
- Use your own backend proxy: Route API requests through a server you control, then point ClientAgentJS at your proxy endpoint using an OpenAI-compatible configuration.
- Use OpenRouter: OpenRouter supports CORS and provides access to models from OpenAI, Anthropic, Mistral, Meta, and many others through a single endpoint — useful when you need browser-compatible access to models whose native APIs do not support it.
- Use a public CORS proxy (development only): Services like `cors-anywhere` can add CORS headers to proxied requests, but are unreliable and should not be used in production.
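As a concrete illustration of the OpenRouter route, a direct browser call to its OpenAI-compatible chat completions endpoint can look like the sketch below. The model id and API key are placeholders; the library's openai-compatible adapter targets this same API shape for you:

```javascript
// Illustrative direct browser call to OpenRouter's OpenAI-compatible
// endpoint. The model id and API key are placeholders.
function buildChatPayload(model, prompt) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
  };
}

async function askOpenRouter(apiKey, prompt) {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatPayload("openai/gpt-4o-mini", prompt)),
  });
  if (!response.ok) throw new Error(`OpenRouter error: HTTP ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}
```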
Ollama does not enable CORS by default. To use it with ClientAgentJS, start Ollama with the appropriate CORS headers:
```bash
OLLAMA_ORIGINS="*" ollama serve
```

Or set the environment variable permanently:
```bash
# Linux/macOS
export OLLAMA_ORIGINS="*"
ollama serve

# Windows
set OLLAMA_ORIGINS=*
ollama serve
```

For production environments, restrict origins to your specific domain instead of `*`:

```bash
OLLAMA_ORIGINS="http://localhost:*" ollama serve
```

MCP servers also need to support CORS when accessed from the browser. Many MCP servers are designed to run as local processes and are not built for direct browser access.
Options:
- Run MCP servers locally: A server on `localhost` accessed from a page also on `localhost` is same-origin and avoids cross-origin restrictions entirely.
- Host your own MCP servers: Configure them to include the appropriate `Access-Control-Allow-Origin` response headers.
- Route through a backend proxy: Forward browser requests to MCP servers through your own proxy, keeping the MCP servers unexposed.
When adding an MCP server in the configuration panel, verify that the endpoint either supports CORS or is running on localhost.
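The backend-proxy option can be as small as the following Node.js sketch, which forwards JSON-RPC requests to an MCP server and attaches CORS headers. The upstream URL, port, and allowed origin are placeholders; it assumes Node 18+ for the global `fetch`:

```javascript
// Minimal Node.js sketch of a backend proxy in front of an MCP server.
// The upstream URL, port, and allowed origin are placeholders.
// Assumes Node 18+ for the global fetch.
const ALLOWED_ORIGIN = "https://app.example.com"; // restrict to your domain
const MCP_UPSTREAM = "http://127.0.0.1:8080/mcp"; // placeholder upstream

// CORS headers attached to every response the browser sees
function corsHeaders(origin) {
  return {
    "Access-Control-Allow-Origin": origin,
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Allow-Methods": "POST, OPTIONS",
  };
}

async function startProxy(port = 3000) {
  const { createServer } = await import("node:http");
  const server = createServer(async (req, res) => {
    const headers = corsHeaders(ALLOWED_ORIGIN);
    if (req.method === "OPTIONS") {
      res.writeHead(204, headers);
      res.end(); // answer the CORS preflight with no body
      return;
    }
    // Collect the JSON-RPC body and forward it to the MCP server
    const chunks = [];
    for await (const chunk of req) chunks.push(chunk);
    const upstream = await fetch(MCP_UPSTREAM, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: Buffer.concat(chunks),
    });
    res.writeHead(upstream.status, { ...headers, "Content-Type": "application/json" });
    res.end(await upstream.text());
  });
  server.listen(port);
  return server;
}
```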
- Chat Explorer: A full-featured chat interface demonstrating sessions, streaming, event logs, and configuration portability.
- Form Assistance: Using the agent to help users fill out and improve content in standard web forms.
- MCP Tools: Connecting the agent to external services via the Model Context Protocol.
- Direct Client-to-Provider: The browser connects directly to OpenAI, Anthropic, or Ollama using credentials stored locally. This ensures maximum privacy and zero infrastructure costs for the developer.
- Managed Proxy: The developer provides an endpoint that acts as a thin proxy to inject keys or manage quotas. The library still manages the agent logic, history, and tools in the browser.
- Local-Only: The agent communicates with a local model (like Ollama or LM Studio) for a 100% private, offline-capable experience.
This architecture makes the project ideal for internal tools, developer-focused products, and any application where data privacy and user control are priorities.
See license: ClientAgentJS