MetaCortex is a serverless MCP memory service backed by Firestore vector search and deployed through Firebase Cloud Functions 2nd Gen.
> **Tip** — Brain & Body: MetaCortex resides in the cloud as the intelligence core, while Nanobot acts as its local manifestation (body). See the full Architecture & Use Cases for details.
The practical target is a remote MCP server that chat clients such as ChatGPT web or Claude web can use for:
- searching what the project already knows
- saving new durable memories from chat
- fetching the full stored memory behind a search result
As of March 10, 2026, Cloud Functions production deployment requires the Firebase Blaze plan. The original Spark-only production target from the initial spec is not compatible with current Firebase Functions deployment rules, though low-traffic usage can still remain close to zero cost within Blaze no-cost quotas.
This project is set up for these workflows:
- A chat client asks, "What do we already know about auth/session handling?" The model calls `search_context`.
- The search results include document ids and external artifact refs when available. The model can call `fetch_context` for the one result it wants in full.
- A user says, "Remember that we use Ktor for shared Android and iOS networking." The model calls `remember_context`.
- A user shares a screenshot and says to save it for later retrieval. The model calls `remember_context` with image input plus `artifact_refs` if the real asset lives in storage.
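Over Streamable HTTP, each workflow step above is an MCP `tools/call` JSON-RPC request. As a sketch, this is the request body a client would send for the Ktor example; the `buildToolCall` helper and the argument interface are illustrative, not part of the server API:

```typescript
// Illustrative helper: build an MCP tools/call JSON-RPC request body for
// remember_context. The argument shape follows the JSON examples later in
// this document; the helper name and id scheme are not part of the server.
interface RememberArgs {
  content: string;
  topic?: string;
  draft?: boolean;
  artifact_refs?: string[];
}

function buildToolCall(id: number, name: string, args: RememberArgs) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const req = buildToolCall(1, "remember_context", {
  content: "We use Ktor for shared Android and iOS networking.",
  topic: "kmp-networking",
});
```

The same envelope works for `search_context` and `fetch_context`; only `params.name` and `params.arguments` change.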
The current MCP surface is intentionally split between:
- a 3-tool client-facing contract for browser-hosted chat clients
- a 1-tool admin-only maintenance surface for operators
That means the server currently exposes 4 MCP tools total, but normal browser clients should only see 3 of them.
This is the public/browser contract:
- `remember_context` — The single write tool for normal chat use and advanced admin writes. The client supplies the memory text, an optional topic, optional `draft=true` or an explicit `branch_state`, optional image input, and optional `artifact_refs`. The server fills in sensible defaults.
- `search_context` — Vector search over stored memories. Results include document ids and artifact refs when available.
- `fetch_context` — Fetch one memory by document id after `search_context`.
This remains on the server, but it should stay off browser-hosted client profiles:
- `deprecate_context` — Soft-delete obsolete memories.
WIP consolidation is currently an internal maintenance workflow, not a public MCP tool.
`remember_context` keeps the public write surface simple while still supporting advanced lifecycle control when needed:
- `topic` is the public label and maps to the stored `module_name` internally
- an omitted `branch_state` stores canonical memory as `active`
- `draft=true` stores draft material as `wip`
- an explicit `branch_state` is available for advanced admin workflows such as `merged`
These fields exist in stored records because they help maintenance and filtering.
Lifecycle state for a stored memory:
- `active` — Canonical memory that normal search should return
- `wip` — Draft memory awaiting consolidation
- `merged` — Incorporated memory that is no longer the main active record
- `deprecated` — Obsolete memory kept only for history/audit
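The four states suggest a small set of sensible transitions. A hypothetical guard, encoding only the movements this document describes (drafts get consolidated or discarded, active records get superseded or retired) — the server's actual transition rules are internal:

```typescript
// Hypothetical lifecycle-transition guard over the four branch states.
// This encodes only transitions implied by the document; the server's
// real rules are internal and may differ.
type BranchState = "active" | "wip" | "merged" | "deprecated";

const allowedTransitions: Record<BranchState, BranchState[]> = {
  wip: ["active", "deprecated"],    // draft consolidated or discarded
  active: ["merged", "deprecated"], // superseded or made obsolete
  merged: ["deprecated"],           // historical record retired
  deprecated: [],                   // terminal: kept only for history/audit
};

function canTransition(from: BranchState, to: BranchState): boolean {
  return allowedTransitions[from].includes(to);
}
```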
For browser clients, prefer `remember_context` with its defaults and use `draft=true` only when the user is explicitly saving rough notes. Admin flows can set `branch_state` explicitly when needed.
Public topic or subsystem label for MCP clients. Internally this is stored as `module_name`.
Examples:
`auth`, `billing`, `kmp-networking`, `ui-settings`
If omitted, the server defaults it to `general`.
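That defaulting can be sketched as a tiny resolver (the helper name is illustrative; only the `general` fallback comes from the document):

```typescript
// Sketch of topic defaulting: the public `topic` label maps to the stored
// `module_name`, falling back to "general" when omitted or blank.
// Trimming blank input is an assumption, not a documented behavior.
function resolveModuleName(topic?: string): string {
  const trimmed = topic?.trim();
  return trimmed && trimmed.length > 0 ? trimmed : "general";
}
```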
This project supports image-backed memories, but it does not store raw image bytes for later download.
What happens today:
- the image is normalized into retrieval text by Gemini
- that text is embedded and stored
- optional `artifact_refs` can point to the real asset, for example `gs://bucket/path.png`
- search results and fetched records return those artifact refs when they exist
That means the practical image flow is:
- save a screenshot with `remember_context`
- store the real asset elsewhere
- include its `artifact_refs`
- let semantic search find the memory
- let the client follow the returned artifact ref to the actual screenshot
- Default Streamable HTTP MCP endpoint: `/metaCortexMcp/mcp`
- Client-scoped Streamable HTTP MCP endpoint: `/metaCortexMcp/clients/<clientId>/mcp`
Security model:
- the default `/mcp` endpoint is the admin endpoint
- `clients/<clientId>` endpoints let you expose smaller toolsets to specific consumers
- `MCP_ALLOWED_ORIGINS` applies only to the default admin endpoint
- browser CORS should be configured per client profile through `MCP_CLIENT_PROFILES_JSON[].allowedOrigins`
- leave `MCP_ALLOWED_ORIGINS` empty unless you intentionally want browser access to the admin endpoint
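Assuming profiles shaped like the `MCP_CLIENT_PROFILES_JSON` entries shown later in this document, the per-client routing and origin check can be sketched as follows (helper names are illustrative):

```typescript
// Sketch: resolve a client profile from the request path, then check the
// browser Origin header against that profile's allowedOrigins.
// The profile shape mirrors MCP_CLIENT_PROFILES_JSON; helper names are
// illustrative, not the server's actual implementation.
interface ClientProfile {
  id: string;
  token: string;
  allowedTools: string[];
  allowedOrigins: string[];
}

function clientIdFromPath(path: string): string | null {
  const m = path.match(/\/clients\/([^/]+)\/mcp$/);
  return m ? m[1] : null;
}

function originAllowed(profile: ClientProfile, origin: string): boolean {
  return profile.allowedOrigins.includes(origin);
}

const profiles: ClientProfile[] = [
  {
    id: "chatgpt-web",
    token: "replace-chatgpt-token",
    allowedTools: ["remember_context", "search_context", "fetch_context"],
    allowedOrigins: ["https://chatgpt.com"],
  },
];

const id = clientIdFromPath("/metaCortexMcp/clients/chatgpt-web/mcp");
const profile = profiles.find((p) => p.id === id);
```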
Recommended browser read/write toolset:
- `remember_context`
- `search_context`
- `fetch_context`
This 3-tool browser contract is the intended v1 public surface.
For browser-hosted MCP clients, register the scoped endpoint, not the admin endpoint:
- ChatGPT web URL: `https://<FUNCTION_BASE_URL>/clients/chatgpt-web/mcp?auth_token=<YOUR_CHATGPT_TOKEN>`
- Claude web URL: `https://<FUNCTION_BASE_URL>/clients/claude-web/mcp`
- bearer token: the `token` value from the matching client profile
- allowed browser origins: the matching profile's `allowedOrigins`
Do not register `https://<FUNCTION_BASE_URL>/mcp` with ChatGPT web or Claude web. That endpoint is the admin surface and uses `MCP_ADMIN_TOKEN`.
Use separate client profiles per browser client:
- `chatgpt-web` with `allowedOrigins=["https://chatgpt.com"]`
- `claude-web` with `allowedOrigins=["https://claude.ai"]`
ChatGPT's current MCP UI does not support configuring a custom `Authorization: Bearer` header. To work around this limitation, MetaCortex also accepts the token as an `auth_token` URL query parameter.
- Open ChatGPT Web or Desktop.
- Open Settings -> Connected Apps (or MCP Settings).
- Click "Add new App" or "Connect MCP Server".
- Set Auth Type to No Authentication.
- Set the MCP URL to your tokenized endpoint: `https://<FUNCTION_BASE_URL>/clients/chatgpt-web/mcp?auth_token=<YOUR_CHATGPT_TOKEN>`
MetaCortex will validate the token from the URL and reject unauthenticated requests even though ChatGPT is configured for "No Auth".
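The tokenized-URL validation can be sketched as below. The helper names are illustrative, and the constant-time comparison is a recommended practice rather than a documented server detail:

```typescript
import { timingSafeEqual } from "node:crypto";

// Sketch: accept the token either from an Authorization: Bearer header or
// from the auth_token query parameter, then compare it to the profile token
// in constant time. Helper names are illustrative, not the server's API.
function extractToken(url: URL, authHeader?: string): string | null {
  if (authHeader?.startsWith("Bearer ")) return authHeader.slice(7);
  return url.searchParams.get("auth_token");
}

function tokenMatches(presented: string | null, expected: string): boolean {
  if (!presented) return false;
  const a = Buffer.from(presented);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```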
Depending on your Claude client (e.g., experimental web extensions or custom UIs), you can configure the connection in two ways:
Option 1: Standard Headers (Preferred)
- MCP URL: `https://<FUNCTION_BASE_URL>/clients/claude-web/mcp`
- Auth Type: Bearer Token / Service Token
- Token: `Bearer <YOUR_CLAUDE_TOKEN>`
Option 2: Tokenized URL (If headers are unsupported)
- Auth Type: No Authentication
- MCP URL: `https://<FUNCTION_BASE_URL>/clients/claude-web/mcp?auth_token=<YOUR_CLAUDE_TOKEN>`
The v1 client-facing tools return one TextContent block whose text is a single JSON object.
Minimal text memory:
```json
{
  "content": "We use Ktor for shared Android and iOS networking.",
  "topic": "kmp-networking"
}
```

Typical result:
```json
{
  "item": {
    "id": "abc123",
    "content": "We use Ktor for shared Android and iOS networking.",
    "metadata": {
      "topic": "kmp-networking",
      "branch_state": "active",
      "modality": "text",
      "created_at": "2026-03-14T12:00:00.000Z",
      "updated_at": "2026-03-14T12:00:00.000Z"
    }
  },
  "write_status": "created"
}
```

Image-backed memory with an external asset reference:
```json
{
  "content": "Settings screen screenshot for the Compose UI.",
  "topic": "ui-settings",
  "artifact_refs": ["gs://your-bucket/settings-screen.png"],
  "image_base64": "<base64 image bytes>",
  "image_mime_type": "image/png"
}
```

Example input:
```json
{
  "query": "shared networking for android and ios",
  "filter_topic": "kmp-networking",
  "filter_state": "active"
}
```

Typical result:
```json
{
  "matches": [
    {
      "id": "abc123",
      "summary": "We use Ktor for shared Android and iOS networking.",
      "score": 0.92,
      "content_preview": "We use Ktor for shared Android and iOS networking.",
      "metadata": {
        "topic": "kmp-networking",
        "branch_state": "active",
        "modality": "text",
        "created_at": "2026-03-14T12:00:00.000Z",
        "updated_at": "2026-03-14T12:00:00.000Z"
      }
    }
  ],
  "applied_filters": {
    "filter_topic": "kmp-networking",
    "filter_state": "active"
  }
}
```

If an item has external refs, they appear in `metadata.artifact_refs`.
If nothing matches, the result is:
```json
{
  "matches": [],
  "applied_filters": {
    "filter_topic": null,
    "filter_state": "active"
  }
}
```

Example input:
```json
{
  "document_id": "abc123"
}
```

Typical result:
```json
{
  "item": {
    "id": "abc123",
    "content": "We use Ktor for shared Android and iOS networking.",
    "metadata": {
      "topic": "kmp-networking",
      "branch_state": "active",
      "modality": "text",
      "created_at": "2026-03-14T12:00:00.000Z",
      "updated_at": "2026-03-14T12:00:00.000Z"
    }
  }
}
```

`search_context` does one exact metadata filter step and one vector step:

- `filter_state` is always applied before the nearest-neighbor search
- `filter_topic`, when present, is an exact match on the stored topic label
- vector search then runs Firestore `findNearest()` with cosine distance
- the result count is `limit` when provided, otherwise `SEARCH_RESULT_LIMIT`
- the default state is `active` unless the client profile allows and requests another state
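The ordering can be illustrated locally with plain arrays. In production the vector step is Firestore `findNearest()`; this sketch only reproduces the semantics (exact filters first, then cosine nearest-neighbor, then the limit):

```typescript
// Illustrative re-creation of the search pipeline ordering. In production
// the vector step runs server-side via Firestore findNearest(); this local
// version over plain number[] embeddings only demonstrates the semantics.
interface StoredMemory {
  id: string;
  embedding: number[];
  topic: string;
  branch_state: string;
}

function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function searchContext(
  corpus: StoredMemory[],
  query: number[],
  filterState: string,
  filterTopic: string | undefined,
  limit: number,
): StoredMemory[] {
  return corpus
    .filter((m) => m.branch_state === filterState)          // state filter first
    .filter((m) => !filterTopic || m.topic === filterTopic) // exact topic match
    .sort((x, y) =>
      cosineDistance(x.embedding, query) - cosineDistance(y.embedding, query))
    .slice(0, limit);                                       // then the result cap
}
```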
`fetch_context` can still fail with 403 if the document exists but its `branch_state` is outside that client profile's `allowedFilterStates`.
Write behavior that matters in production:
- request bodies are limited to `1mb`, including base64 image data
- `content` or `image_base64` is required
- `image_mime_type` is required whenever `image_base64` is provided
- images are normalized into retrieval text and embedded as text; raw image bytes are not stored for download
- if you want the real asset later, store it elsewhere and include `artifact_refs`
- exact duplicate writes within the current idempotency window are replay-safe and reuse the existing document id
- duplicate suppression is intentionally light and based on the normalized write fingerprint, not semantic similarity
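A hypothetical shape for that fingerprint, assuming a SHA-256 hash over the normalized write fields — the server's real normalization and field set are internal:

```typescript
import { createHash } from "node:crypto";

// Hypothetical write fingerprint: hash the normalized write fields so an
// exact duplicate within the idempotency window maps to the same key.
// The actual normalization and field set are server internals; this only
// illustrates why trivially-identical writes collapse to one document.
function writeFingerprint(input: {
  content: string;
  topic?: string;
  artifact_refs?: string[];
}): string {
  const normalized = JSON.stringify({
    content: input.content.trim(),
    topic: (input.topic ?? "general").trim(),
    artifact_refs: [...(input.artifact_refs ?? [])].sort(),
  });
  return createHash("sha256").update(normalized).digest("hex");
}
```

Because the fingerprint is exact rather than semantic, a reworded memory is a new document, which matches the "intentionally light" suppression described above.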
`remember_context` defaults:

- an omitted `topic` becomes `general`
- omitted `draft` and omitted `branch_state` store `branch_state=active`
- `draft=true` stores `branch_state=wip`
- an explicit `branch_state` overrides the default lifecycle state
- `draft` and `branch_state` are mutually exclusive
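Those rules can be condensed into one resolution function (illustrative; a thrown error stands in for whatever validation response the server actually returns):

```typescript
// Sketch of the remember_context lifecycle defaults: draft and branch_state
// are mutually exclusive, draft=true means wip, and the omitted pair falls
// back to active. The error is a stand-in for the server's real validation.
function resolveBranchState(opts: {
  draft?: boolean;
  branch_state?: "active" | "wip" | "merged" | "deprecated";
}): string {
  if (opts.draft !== undefined && opts.branch_state !== undefined) {
    throw new Error("draft and branch_state are mutually exclusive");
  }
  if (opts.branch_state !== undefined) return opts.branch_state;
  return opts.draft ? "wip" : "active";
}
```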
Recommended usage:
- Browser clients save durable memories with `remember_context`.
- Use `draft=true` only for provisional notes that should not appear in normal active search.
- WIP review and consolidation stay in internal maintenance workflows.
- After writing the canonical replacement, admins can mark obsolete records with the admin-only `deprecate_context` tool.
Current lifecycle behavior:
- `remember_context` defaults to `active`, supports `draft=true` for `wip`, and also accepts an explicit `branch_state` for advanced writes
- `deprecate_context` does not delete data; it sets `branch_state=deprecated` and records `superseded_by`
- `merged` exists as a searchable historical state for explicit admin writes
After deployment, there are three places to look:
- `memory_vectors` in Firestore shows the current memory corpus
- `memory_events` in Firestore shows client-attributed tool usage over time
- Cloud Logging shows request failures and structured tool-event logs
`memory_events` records one document per tool call and one document per ingress rejection. Events include:

- `client_id`
- `event_type`
- `status`
- `timestamp`
- `latency_ms`
- a compact `request` summary
- either a compact `response` summary, an `error`, or a request rejection reason
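Put together, a `memory_events` document for a search call might look like the sketch below; field names beyond those listed above, and the exact value shapes, are assumptions:

```typescript
// Illustrative memory_events document assembled from the fields listed in
// this document. The status values and the request/response value shapes
// are assumptions, not the server's documented schema.
interface MemoryEvent {
  client_id: string;
  event_type: string;
  status: "ok" | "error" | "rejected";
  timestamp: string;
  latency_ms: number;
  request: Record<string, unknown>;
  response?: Record<string, unknown>;
  error?: string;
  reason?: string;
}

const event: MemoryEvent = {
  client_id: "chatgpt-web",
  event_type: "search_context",
  status: "ok",
  timestamp: "2026-03-14T12:00:00.000Z",
  latency_ms: 42,
  request: { filter_topic: "kmp-networking", filter_state: "active" },
  response: { result_count: 1, result_ids: ["abc123"] },
};
```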
Examples:
- `remember_context` events record the written `document_id`, `topic`, `branch_state`, and `modality`
- `search_context` events record the requested filters, `result_count`, and returned `result_ids`
- `fetch_context` events record which `document_id` was read
- `deprecate_context` events record `document_id`, `superseding_document_id`, and `previous_state`
- rejected browser/admin requests record `reason=origin_not_allowed` or `reason=unauthorized`

Traceability is by client profile id, so:

- admin endpoint traffic is attributed to `client_id=default`
- ChatGPT web traffic is attributed to `client_id=chatgpt-web`
- Claude web traffic is attributed to `client_id=claude-web`
What is intentionally not stored in observability events:
- full memory bodies
- full image bytes
- raw image downloads
Search events do include a short `query_preview`, but the observability collection is designed to track behavior, not duplicate the corpus.
1. Install dependencies:

   ```shell
   npm --prefix functions install
   ```

2. Create local env vars:

   ```shell
   cp functions/.env.example functions/.env
   ```

   For browser-hosted clients, set a scoped client profile in `functions/.env` or `functions/.env.prod`:

   ```
   MCP_CLIENT_PROFILES_JSON=[{"id":"chatgpt-web","token":"replace-chatgpt-token","allowedTools":["remember_context","search_context","fetch_context"],"allowedFilterStates":["active"],"allowedOrigins":["https://chatgpt.com"]},{"id":"claude-web","token":"replace-claude-token","allowedTools":["remember_context","search_context","fetch_context"],"allowedFilterStates":["active"],"allowedOrigins":["https://claude.ai"]}]
   ```

3. Run verification:

   ```shell
   npm --prefix functions test
   npm --prefix functions run build
   ```

4. Start emulators:

   ```shell
   npm --prefix functions run serve
   ```

5. Optional MCP smoke test:

   ```shell
   cd functions
   MCP_BASE_URL="http://127.0.0.1:5001/demo-open-brain/us-central1/metaCortexMcp/mcp" \
   MCP_ADMIN_TOKEN="replace-me" \
   MCP_SMOKE_MODE="admin-read-write" \
   npm run smoke
   ```

Browser-client flow:

```shell
cd functions
MCP_BASE_URL="http://127.0.0.1:5001/demo-open-brain/us-central1/metaCortexMcp/clients/chatgpt-web/mcp" \
MCP_ADMIN_TOKEN="replace-chatgpt-token" \
MCP_SMOKE_MODE="browser-read-write" \
npm run smoke
```

Repeat with `/clients/claude-web/mcp` and the Claude token to verify Claude separately.
Deployment playbook: docs/DEPLOYMENT.md
For the next production deployment session, start with:
```shell
cd /Users/nick/git/metacortex
./scripts/deploy-session-preflight.sh
```