From 0f26dd85ed385868d90b27c415087c7560d65731 Mon Sep 17 00:00:00 2001
From: TylerJNewman
Date: Thu, 26 Feb 2026 17:41:38 -0800
Subject: [PATCH 1/5] docs: add deployment architecture, plan, and system research

Research documents covering Basic Memory's full architecture, a sidecar
deployment strategy with R2 sync hub, and phased deployment plan for Railway
with local bisync.

Co-Authored-By: Claude Opus 4.6
Signed-off-by: TylerJNewman
---
 .gitignore                 |   1 +
 deployment-architecture.md |  96 ++++++++++
 deployment-plan.md         | 340 +++++++++++++++++++++++++++++++++
 research.md                | 376 +++++++++++++++++++++++++++++++++++++
 4 files changed, 813 insertions(+)
 create mode 100644 deployment-architecture.md
 create mode 100644 deployment-plan.md
 create mode 100644 research.md

diff --git a/.gitignore b/.gitignore
index 78ab38d90..d1be40351 100644
--- a/.gitignore
+++ b/.gitignore
@@ -58,3 +58,4 @@ claude-output
 .mcpregistry_*
 /.testmondata
 .benchmarks/
+memories/

diff --git a/deployment-architecture.md b/deployment-architecture.md
new file mode 100644
index 000000000..45290b942
--- /dev/null
+++ b/deployment-architecture.md
@@ -0,0 +1,96 @@
# Deployment Architecture: Basic Memory as a Sidecar

## Overview

Basic Memory runs as a single sidecar container alongside a chat app/agent on a server (e.g., Railway). Multiple local users can sync their files to the remote instance. The architecture is intentionally simple.
## Architecture

```
Local User A                        Railway / Server
┌──────────────┐                    ┌─────────────────────────┐
│ Basic Memory │──bisync───────────▶│                         │
│   (local)    │                    │   Chat App / Agent      │
└──────────────┘                    │         │               │
                                    │         │ HTTP (:8000)  │
Local User B                        │         ▼               │
┌──────────────┐                    │  Basic Memory Sidecar   │
│ Basic Memory │──bisync───────────▶│  (single instance,      │
│   (local)    │                    │   single project,       │
└──────────────┘                    │   SQLite + WAL)         │
                                    └─────────────────────────┘
```

## Why a Sidecar

- ~5ms overhead vs in-process, negligible compared to ~2000ms LLM API calls
- Independent deployability — update Basic Memory without touching the app
- Clean separation of concerns
- Same FastAPI + ASGI transport the system already uses

## Single Project, Shared Knowledge

No need for per-user projects. One project, one shared knowledge base.

### Shared Knowledge (the default)

All users contribute to and benefit from the same pool:

```
notes/product-knowledge.md
notes/troubleshooting.md
decisions/architecture.md
```

Every useful insight from any user interaction goes here. Knowledge compounds across all users.

### Per-User Preferences (rare, directory-scoped)

```
users/alice.md   →   - [preference] Prefers detailed explanations
users/bob.md     →   - [preference] Wants concise answers #communication
```

The agent reads user context with `read_note("users/alice")` when needed, writes shared knowledge everywhere else. No project juggling required.

## Concurrency

- SQLite WAL mode handles concurrent reads without blocking
- Writes are serialized but fast — a few users will never notice
- Different users writing different files means no contention in practice
- If contention ever becomes real (dozens of concurrent writers), switch to Postgres backend (already supported, zero code changes)

## Persistence & Backups

### Markdown files are the source of truth

The database is a disposable index. Delete it, run `bm reset`, it rebuilds from files.
Protect the files, not the DB.

### Railway volumes

- Persistent across deploys and restarts (network-attached storage)
- Single-node, no replication — hardware failure could lose data (rare)
- No built-in volume snapshots

### Backup strategy (layered)

1. **Local sync as primary backup** — each local user has a copy of the files via `bm project bisync`. If the remote volume dies, files exist locally. This is the best safety net.
2. **Periodic file backup** — cron job in the container that tars markdown files to S3/R2/Tigris. Cheap and reliable.
3. **Git as version control** — commit markdown files to a repo periodically. Get history + offsite backup. Diffs are meaningful since files are plain markdown.

### Recovery

- Volume lost? Restore from local sync or object storage backup.
- DB corrupted? Delete it, `bm reset` rebuilds from files.
- Files corrupted? Restore from git history or local copies.

## Scaling Thresholds

| Concern | When to worry | Solution |
|---------|--------------|----------|
| Write contention | Dozens of concurrent writers to same project | Switch to Postgres backend |
| Search latency | Heavy vector search under many simultaneous queries | Switch to Postgres + pgvector |
| Storage | Thousands of large files | Volume size increase, standard ops |
| Availability | Need zero-downtime guarantees | Multiple instances + Postgres + shared object storage |

For a few users: none of these apply. Start with SQLite, revisit if you hit actual problems.

diff --git a/deployment-plan.md b/deployment-plan.md
new file mode 100644
index 000000000..394b74713
--- /dev/null
+++ b/deployment-plan.md
@@ -0,0 +1,340 @@
# Deployment Plan: Basic Memory Sidecar with Local Sync

## Goal

Deploy Basic Memory as a sidecar container on Railway. Local Basic Memory instances sync markdown files via object storage (Cloudflare R2). Either local or remote works independently — sync keeps them converged.
+ +``` +Local Machine Cloudflare R2 Railway +┌─────────────┐ ┌─────────────┐ ┌─────────────────┐ +│ Basic Memory │──rclone bisync─│ R2 Bucket │─rclone───│ Basic Memory │ +│ (local files)│ │ (sync hub) │ bisync │ Sidecar │ +│ │ └─────────────┘ │ (SSE on :8000) │ +│ Works offline│ │ │ +│ Full MCP │ │ Chat App/Agent │ +└─────────────┘ └─────────────────┘ +``` + +--- + +## Phase 1: Remote Sidecar on Railway + +### 1.1 Prepare the Docker Image + +Basic Memory already has a Dockerfile. Key settings: + +```dockerfile +# Existing Dockerfile runs: +# basic-memory mcp --transport sse --host 0.0.0.0 --port 8000 +# Exposes port 8000, non-root user, Python 3.12-slim +``` + +Additions needed for sync support: + +```dockerfile +# Install rclone for bisync +RUN curl https://rclone.org/install.sh | bash +``` + +### 1.2 Deploy to Railway + +1. Create a new Railway project +2. Add a service from the Basic Memory repo (or Docker image) +3. Configure a **persistent volume** mounted at `/app/data` (markdown files) +4. Set environment variables: + +```env +BASIC_MEMORY_SYNC_CHANGES=true +BASIC_MEMORY_SYNC_DELAY=1000 +``` + +5. Expose port 8000 (internal or public with auth, depending on chat app setup) +6. Verify: `curl https:///health` or similar + +### 1.3 Initialize the Project on the Sidecar + +SSH into the Railway container or run via Railway CLI: + +```bash +# Create the default project +basic-memory project add "shared" /app/data +basic-memory project default "shared" + +# Verify +basic-memory status +``` + +--- + +## Phase 2: Object Storage Sync Hub (Cloudflare R2) + +### Why R2 + +- S3-compatible (rclone works out of the box) +- Free egress (no cost for syncing down) +- 10 GB free tier (plenty for markdown files) +- No proprietary lock-in + +### 2.1 Create the R2 Bucket + +1. Cloudflare dashboard → R2 → Create bucket +2. Name: `basic-memory-sync` (or similar) +3. Create an API token with read/write access +4. 
Note the Account ID, Access Key ID, and Secret Access Key

### 2.2 Configure rclone on the Remote Sidecar

Create rclone config on the Railway container:

```ini
# ~/.config/rclone/rclone.conf
[r2]
type = s3
provider = Cloudflare
access_key_id = <access-key-id>
secret_access_key = <secret-access-key>
endpoint = https://<account-id>.r2.cloudflarestorage.com
acl = private
```

Test connectivity:

```bash
rclone ls r2:basic-memory-sync
```

### 2.3 Configure rclone on Local Machine

Same rclone config locally:

```bash
rclone config
# Add remote "r2" with same credentials
# Test: rclone ls r2:basic-memory-sync
```

---

## Phase 3: Bidirectional Sync

### 3.1 Establish Baseline (First Sync)

Pick one side as the source of truth for initial sync. If starting fresh, local is the source:

```bash
# From local machine — push local files to R2
rclone sync ~/basic-memory/shared r2:basic-memory-sync/shared \
  --exclude ".basic-memory/**" \
  --exclude ".obsidian/**" \
  --exclude "*.pyc"

# From Railway sidecar — pull R2 files to container
rclone sync r2:basic-memory-sync/shared /app/data \
  --exclude ".basic-memory/**"

# Rebuild the DB index on the sidecar
basic-memory reset
```

### 3.2 Establish Bisync Baseline

rclone bisync requires a one-time `--resync` to establish tracking state:

```bash
# On local machine
rclone bisync ~/basic-memory/shared r2:basic-memory-sync/shared \
  --resync \
  --exclude ".basic-memory/**" \
  --exclude ".obsidian/**" \
  --create-empty-src-dirs

# On Railway sidecar
rclone bisync /app/data r2:basic-memory-sync/shared \
  --resync \
  --exclude ".basic-memory/**" \
  --create-empty-src-dirs
```

### 3.3 Ongoing Sync

After baseline, regular bisync (no `--resync`):

```bash
# Local → R2 → Remote (two-step)

# Step 1: Local bisync with R2
rclone bisync ~/basic-memory/shared r2:basic-memory-sync/shared \
  --exclude ".basic-memory/**" \
  --exclude ".obsidian/**"

# Step 2: Remote bisync with R2
rclone bisync /app/data \
  r2:basic-memory-sync/shared \
  --exclude ".basic-memory/**"
```

### 3.4 Automate Sync

**On the Railway sidecar** — cron job or supervisor process:

```bash
# Sync every 5 minutes (crontab entries must be a single line)
*/5 * * * * rclone bisync /app/data r2:basic-memory-sync/shared --exclude ".basic-memory/**" 2>&1 | logger -t bm-sync
```

After each sync, Basic Memory's file watcher (`BASIC_MEMORY_SYNC_CHANGES=true`) detects changes and reindexes automatically.

**On local machine** — same cron, or on-demand:

```bash
# Manual sync when you want to push/pull
rclone bisync ~/basic-memory/shared r2:basic-memory-sync/shared \
  --exclude ".basic-memory/**" \
  --exclude ".obsidian/**"
```

Or use a launchd plist (macOS) / systemd timer (Linux) for automatic periodic sync.

---

## Phase 4: Chat App Integration

### 4.1 Connect Agent to Sidecar

The chat app/agent connects to the sidecar via HTTP on port 8000 (SSE transport). Within Railway, this is internal networking — no public exposure needed.

```
Chat App Service → http://basic-memory-sidecar.railway.internal:8000
```

The agent uses MCP tools (`write_note`, `search_notes`, `build_context`, etc.) over this connection.

### 4.2 Knowledge Organization

```
/app/data/
├── notes/             # Shared knowledge (all users benefit)
│   ├── product.md
│   ├── troubleshooting.md
│   └── decisions.md
├── conversations/     # Conversation summaries
│   └── 2026-02-26-alice-onboarding.md
└── users/             # Per-user preferences (rare)
    ├── alice.md
    └── bob.md
```

The agent:
- Reads `users/{name}.md` at conversation start for preferences
- Writes shared knowledge to `notes/` by default
- Records conversation summaries to `conversations/`

### 4.3 Agent System Prompt Guidance

```
You have access to Basic Memory via MCP tools.
+ +- Search before creating: always check if knowledge exists before writing new notes +- Write shared knowledge to "notes/" directory +- Read user preferences from "users/{username}.md" at conversation start +- Record important decisions and discoveries to "notes/" or "decisions/" +- Use observations: - [category] content #tags (context) +- Use relations: - relates_to [[Other Note]] to build the knowledge graph +``` + +--- + +## Phase 5: Local Development & Switching + +### 5.1 Work Locally (Offline) + +Local Basic Memory works independently with its own SQLite DB: + +```bash +# Local MCP server (stdio transport, used by Claude Desktop etc.) +basic-memory mcp + +# Everything works offline — search, write, read, build_context +``` + +### 5.2 Sync Before/After Local Work + +```bash +# Pull latest from R2 before starting +rclone bisync ~/basic-memory/shared r2:basic-memory-sync/shared \ + --exclude ".basic-memory/**" --exclude ".obsidian/**" + +# Work locally... + +# Push changes when done +rclone bisync ~/basic-memory/shared r2:basic-memory-sync/shared \ + --exclude ".basic-memory/**" --exclude ".obsidian/**" +``` + +### 5.3 Conflict Handling + +rclone bisync handles conflicts by renaming: +- If both sides changed the same file, the remote version gets a `.conflict` suffix +- Review and merge manually (rare with a few users) +- Avoid by convention: shared notes are append-mostly, user notes are single-owner + +--- + +## Phase 6: Backups + +### 6.1 R2 IS the Backup + +With bisync running, R2 always has a copy of all files. Three copies exist: +1. Local machine filesystem +2. Cloudflare R2 bucket +3. Railway persistent volume + +### 6.2 Git Snapshots (Optional, Recommended) + +On the Railway sidecar, periodically commit to a git repo: + +```bash +# Cron: daily git snapshot +cd /app/data && git add -A && git commit -m "snapshot $(date +%Y-%m-%d)" || true +git push origin main +``` + +This gives you versioned history of all knowledge changes. 
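The snapshot cron above can be wrapped in a small helper so a no-op day produces no empty commit and a failed push does not abort the job. A sketch under the assumption that the data directory is already a git repo with an `origin` remote; the `snapshot` function and demo paths are illustrative, not part of Basic Memory:

```shell
#!/usr/bin/env bash
set -euo pipefail

# snapshot: commit (and best-effort push) everything under a git-managed dir.
snapshot() {
  local dir="$1"
  git -C "$dir" add -A
  # Commit only if something actually changed; otherwise do nothing
  if ! git -C "$dir" diff --cached --quiet; then
    git -C "$dir" commit -q -m "snapshot $(date +%Y-%m-%d)"
    # Push is best-effort: keep the local snapshot even if the remote is down
    git -C "$dir" push -q origin main 2>/dev/null \
      || echo "push failed; snapshot kept locally" >&2
  fi
}

# Demo against a throwaway repo (the real cron would call: snapshot /app/data)
demo=$(mktemp -d)
git -C "$demo" init -q
git -C "$demo" config user.email bot@example.com
git -C "$demo" config user.name bot
echo "- [note] hello" > "$demo/note.md"
snapshot "$demo"
git -C "$demo" log --oneline
```

Running it twice in a row leaves exactly one commit, since the second run stages nothing.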
### 6.3 Recovery Scenarios

| Scenario | Recovery |
|----------|----------|
| Railway volume lost | `rclone sync r2:basic-memory-sync/shared /app/data` then `bm reset` |
| R2 bucket lost | `rclone sync /local/path r2:basic-memory-sync/shared` (restore from local) |
| Local machine lost | `rclone sync r2:basic-memory-sync/shared ~/basic-memory/shared` (restore from R2) |
| DB corrupted (either side) | `bm reset` — rebuilds from markdown files |
| File conflict | Check `.conflict` files, merge manually, re-sync |

---

## Checklist

### Setup (One-Time)
- [ ] Create Cloudflare R2 bucket and API token
- [ ] Deploy Basic Memory container to Railway with persistent volume
- [ ] Configure rclone on Railway sidecar (R2 credentials)
- [ ] Configure rclone on local machine (same R2 credentials)
- [ ] Initialize Basic Memory project on sidecar (`bm project add`)
- [ ] Run initial sync: local → R2 → remote
- [ ] Establish bisync baseline (`--resync`) on both sides
- [ ] Set up sync cron on Railway sidecar
- [ ] Verify: write a note locally, sync, confirm it appears on sidecar
- [ ] Verify: write a note via agent on sidecar, sync, confirm it appears locally

### Chat App Integration
- [ ] Connect chat app to sidecar on Railway internal network
- [ ] Configure agent system prompt with Basic Memory guidance
- [ ] Create directory structure (`notes/`, `users/`, `conversations/`)
- [ ] Test: agent writes a note, verify it syncs to local

### Ongoing
- [ ] Set up local sync automation (launchd/systemd or manual habit)
- [ ] Optional: git snapshot cron on sidecar
- [ ] Optional: R2 lifecycle rules for old versions
- [ ] Monitor Railway volume usage

diff --git a/research.md b/research.md
new file mode 100644
index 000000000..f7530cd4f
--- /dev/null
+++ b/research.md
@@ -0,0 +1,376 @@
# Basic Memory: Complete System Research

## What Is Basic Memory?
Basic Memory is a local-first knowledge management system built on the Model Context Protocol (MCP). It gives LLMs persistent memory across conversations through a personal knowledge graph stored as plain markdown files.

**Core insight:** Markdown files are the source of truth. The database (SQLite or Postgres) is a disposable index for fast search and graph traversal. Delete the DB, rebuild it from files. Edit a file in Obsidian or vim, the sync system picks up the change.

---

## System Architecture

```mermaid
graph TB
    User["User / LLM"]

    subgraph Entrypoints
        MCP["MCP Server<br/>(bm mcp)"]
        CLI["CLI<br/>(bm)"]
        API["REST API<br/>(FastAPI)"]
    end

    subgraph "MCP Layer"
        Tools["MCP Tools<br/>(20 tools)"]
        Prompts["MCP Prompts<br/>(4 prompts)"]
        Clients["Typed Clients<br/>(Knowledge, Search,<br/>Memory, Resource,<br/>Directory, Project)"]
    end

    subgraph "Application Layer"
        Services["Services<br/>(Entity, Search, File,<br/>Context, Link, Project)"]
        DI["Dependency Injection<br/>(deps/)"]
    end

    subgraph "Data Layer"
        Repos["Repositories<br/>(project-scoped)"]
        FileSystem["File System<br/>(.md files)"]
        SearchIdx["Search Index<br/>(FTS5 / tsvector<br/>+ vectors)"]
        DB["Database<br/>(SQLite / Postgres)"]
    end

    subgraph "Sync Layer"
        Watch["WatchService<br/>(watchfiles)"]
        Sync["SyncService<br/>(parse, index, resolve)"]
        Coord["SyncCoordinator<br/>(lifecycle)"]
    end

    User --> MCP & CLI
    MCP --> Tools & Prompts
    Tools --> Clients
    Clients -->|"ASGI (local) or<br/>HTTP (cloud)"| API
    CLI --> API
    API --> DI --> Services
    Services --> Repos & FileSystem & SearchIdx
    Repos --> DB
    FileSystem -->|"file changes"| Watch
    Watch --> Sync
    Sync --> Services
    Coord --> Watch & Sync
```

Each entrypoint has a **composition root** (`container.py`) that reads config once and wires dependencies explicitly. No global singletons.

---

## The Knowledge Model

```mermaid
graph LR
    Entity["Entity<br/>(markdown file)"]
    Obs["Observations<br/>[category] content #tags (context)"]
    Rel["Relation<br/>relation_type → Target"]
    Target["Entity<br/>(another file)"]

    Entity -->|"has many"| Obs
    Entity -->|"connects via"| Rel
    Rel -->|"resolves to"| Target
```

### Entities
A markdown file = an entity. Has: title, type (default "note"), permalink, external UUID, file path, checksum.

### Relations
Directed edges via WikiLink syntax: `- depends_on [[Database Layer]]`. Relation type is active voice. Target resolved by LinkResolver (UUID → permalink → title → path → FTS fallback). Unresolved relations retry on future syncs.

### Observations: The Four Components

Each observation line carries up to four pieces of information with different indexing weights:

```
- [Decision] Migrate to PostgreSQL #architecture #infra (Approved Q4 planning)
     │               │                    │                     │
     ▼               ▼                    ▼                     ▼
  category        content              tags                 context
```

| Component | Storage | Index Weight | Purpose |
|-----------|---------|-------------|---------|
| `[category]` | Dedicated DB column + index | **Heavy** — column index + vector embedding | "What type of fact?" Filter by decision/bug/feature |
| content | Text in `Observation.content` | **Heavy** — FTS + vector embedding | The actual fact |
| `#tags` | JSON array in `Observation.tags` + embedded in content | **Moderate** — metadata filter + FTS | Cross-cutting keywords across categories |
| `(context)` | Optional text in `Observation.context` | **Light** — DB column only, excluded from search index + vectors | Provenance: who/when/where |

**Why three annotation mechanisms?** They answer different questions at different indexing weights. Category gives structural type (one per observation, heavily indexed). Tags enable cross-cutting discovery (many per observation, moderately indexed). Context adds provenance without polluting search relevance (lightly indexed). A flat tagging system would collapse all three into one bucket.

**Finding: `(context)` documentation gap.** The `write_note` tool docstring documents context annotations with examples like `(All state comes from files)`.
But the `ai_assistant_guide` resource — the primary document teaching LLMs how to use Basic Memory — never mentions `(context)` in any of its observation examples or templates. This means LLMs likely underutilize context annotations in practice. + +### The Markdown Format + +```markdown +--- +title: Search Feature +type: spec +permalink: specs/search-feature +tags: [search, v1] +--- + +Free-form markdown content (preserved exactly). + +- [feature] Boolean operators like AND, OR, NOT #search #query +- [feature] FTS5 backend with BM25 scoring #performance (added v0.12) +- [decision] Hybrid search by default #architecture (approved Q3) + +- depends_on [[Database Layer]] +- implements [[Search Requirements]] +``` + +**Parsing:** `observation_plugin` in `markdown/plugins.py` uses markdown-it + regex. Extracts `[category]`, content, `#tags` (extracted to array but left in content), and trailing `(context)` (removed from content, stored separately). `MarkdownProcessor` handles the reverse: serializing back to `- [category] content (context)`. 
+ +--- + +## Request Flow + +```mermaid +sequenceDiagram + participant LLM as LLM / Claude + participant Tool as MCP Tool + participant PC as get_project_client() + participant Client as Typed Client + participant API as FastAPI Router + participant Svc as EntityService + participant File as FileService + participant DB as Repository + participant Search as SearchService + + LLM->>Tool: write_note("Title", content, "dir/") + Tool->>PC: resolve project + create client + PC-->>Tool: (client, ProjectItem) + Tool->>Client: KnowledgeClient.create_entity() + Client->>API: POST /v2/projects/{uuid}/knowledge/entities + API->>Svc: create entity (DI injects scoped repos) + Svc->>Svc: generate unique permalink + Svc->>File: atomic write (.md to disk) + Svc->>DB: INSERT entity + observations + relations + Svc->>Search: index (FTS + optional vectors) + Svc-->>API: EntityResponse + API-->>Client: HTTP 201 + Client-->>Tool: parsed response + Tool-->>LLM: formatted summary +``` + +**Key detail:** Local tools use ASGI transport (in-process, zero network). Cloud tools use HTTP with Bearer token. Same API layer handles both. + +--- + +## All MCP Tools (20) + +### Content Management + +| Tool | Description | Key Parameters | +|------|-------------|----------------| +| `write_note` | Create or update a markdown note with semantic observations and relations | `title`, `content`, `directory`, `tags`, `note_type`, `metadata` | +| `read_note` | Read note by title, permalink, or memory:// URL. 
Fallback: direct → title → FTS | `identifier`, `page`, `page_size`, `include_frontmatter` | +| `view_note` | Read note as formatted artifact for better readability | `identifier`, `page`, `page_size` | +| `edit_note` | Incremental edits without full rewrite | `identifier`, `operation` (append/prepend/find_replace/replace_section), `content`, `find_text`, `section`, `expected_replacements` | +| `delete_note` | Delete a note or directory | `identifier`, `is_directory` | +| `move_note` | Move note/directory, update DB and links | `identifier`, `destination_path`, `destination_folder`, `is_directory` | +| `read_content` | Read raw file content (text, images, binary) without knowledge graph processing | `path` | + +### Search & Discovery + +| Tool | Description | Key Parameters | +|------|-------------|----------------| +| `search_notes` | Full-text, semantic, or hybrid search with advanced filtering | `query`, `search_type` (text/title/permalink/vector/semantic/hybrid), `note_types`, `entity_types`, `after_date`, `metadata_filters`, `tags`, `status`, `min_similarity` | +| `build_context` | Navigate knowledge graph via memory:// URLs | `url`, `depth`, `timeframe`, `page`, `page_size`, `max_related` | +| `recent_activity` | Recent changes; discovery mode (all projects) or project-specific | `type`, `depth`, `timeframe` | +| `list_directory` | Browse directory structure with depth and glob filtering | `dir_name`, `depth`, `file_name_glob` | + +### Project Management + +| Tool | Description | Key Parameters | +|------|-------------|----------------| +| `list_memory_projects` | List all projects (local + cloud merged view) | `workspace` | +| `create_memory_project` | Create new project | `project_name`, `project_path`, `set_default` | +| `delete_project` | Remove project from config (files stay on disk) | `project_name` | +| `list_workspaces` | List available cloud workspaces | — | + +### Visualization & Compatibility + +| Tool | Description | Key Parameters | 
+|------|-------------|----------------| +| `canvas` | Generate Obsidian canvas files (JSON Canvas 1.0) | `nodes`, `edges`, `title`, `directory` | +| `search` | ChatGPT/OpenAI actions compatible search | `query` | +| `fetch` | ChatGPT/OpenAI actions compatible document fetch | `id` | + +### Utility + +| Tool | Description | +|------|-------------| +| `cloud_info` | Cloud setup guidance | +| `release_notes` | Latest product release notes | + +**Common patterns across all tools:** +- `project: Optional[str]` — resolved via hierarchy: env constraint > explicit > default > single project +- `workspace: Optional[str]` — for cloud multi-tenancy +- `output_format: Literal["text", "json"]` — human-readable or machine-readable +- `context: Context` — FastMCP session caching +- Path traversal security validation on all file operations + +--- + +## MCP Prompts (4) + +Prompts return formatted output with instructions for the LLM: + +| Prompt | Purpose | Key Parameters | +|--------|---------|----------------| +| `continue_conversation` | Resume previous work; searches topic or shows recent activity with next steps | `topic`, `timeframe` | +| `search_prompt` | Search with detailed formatted results and context | `query`, `timeframe` | +| `recent_activity_prompt` | Recently changed items with formatted output | `timeframe`, `project` | +| `ai_assistant_guide` | Resource teaching LLMs how to use Basic Memory effectively; adapts to config (default project vs multi-project) | — | + +--- + +## Search: Three Modes + +### Full-Text Search (FTS) +SQLite FTS5 with BM25 / Postgres tsvector with ts_rank. Boolean operators (AND, OR, NOT), prefix matching, pattern matching, filtering by note_type/entity_type/after_date/tags/metadata. 
### Semantic Search (Vector Embeddings)
**Enabled by default** — auto-detected from installed packages (`fastembed` + `sqlite_vec`):
- Content chunked into ~900-char segments with 120-char overlap
- Embedded via FastEmbed (bge-small-en-v1.5, 384 dimensions)
- Vectors stored in sqlite-vec (SQLite) or pgvector (Postgres)
- Observation category is included in the vector (semantic difference between "a decision about X" and "a question about X")
- Context annotations deliberately excluded from vectors
- Auto-backfill on migration upgrade (no manual `bm reindex` needed)

Config: `semantic_vector_k=100`, `semantic_min_similarity=0.55`, `semantic_embedding_batch_size=64`

### Hybrid Search (Default)
Fuses both: vector top-K candidates → FTS keyword scoring → `fts_score + 0.3 * vector_score`

### Backend
Protocol pattern: `SQLiteSearchRepository` and `PostgresSearchRepository` implement the same interface. Factory picks at runtime.

---

## Sync System

```mermaid
graph TB
    File["File change on disk<br/>(any editor)"]
    Watch["WatchService<br/>(watchfiles)"]
    Filter["Filter<br/>(hidden, .tmp, .bmignore)"]
    Classify["Classify<br/>(add/modify/delete/move)"]
    Sync["SyncService"]
    Parse["EntityParser<br/>(markdown-it)"]
    DB["Entity + Obs + Rel<br/>(database)"]
    Index["SearchService<br/>(FTS + vectors)"]
    Resolve["LinkResolver<br/>(relation resolution)"]

    File --> Watch --> Filter --> Classify --> Sync
    Sync --> Parse --> DB
    DB --> Index
    DB --> Resolve
```

**Processing order:** Moves → Deletes → New files → Modified files → Relation resolution

**Watermark optimization:** Tracks `last_scan_timestamp` + `last_file_count` per project. Incremental scan via `find -newermt` when no deletions detected. 225x faster for the no-change case.

**Circuit breaker:** 3 consecutive failures → skip file. Resets on checksum change.

---

## Database & Repository Layer

**Dual backend:** SQLite (default, WAL mode, zero setup) / Postgres (asyncpg, NullPool, 30s Neon timeouts).

**Project-scoped repositories:** Every `Repository[T]` is instantiated with a `project_id` that auto-filters all queries. Cross-project leakage is structurally impossible.

**Two query modes:** Full loading (eager-loads observations/relations) vs lightweight (only needed fields for bulk ops).

**Migrations:** Alembic with 19+ versions. Auto-backfill semantic embeddings on upgrade.

---

## Project Routing

```mermaid
graph TD
    Tool["MCP Tool call"]
    Resolve["Resolve project name<br/>(env > param > default > single)"]
    Mode{"Project mode?"}
    ASGI["ASGI Client<br/>(in-process, zero latency)"]
    HTTP["HTTP Client<br/>(Bearer token)"]
    API["FastAPI Application"]

    Tool --> Resolve --> Mode
    Mode -->|LOCAL| ASGI --> API
    Mode -->|CLOUD| HTTP --> API
```

MCP server always runs locally. Individual projects can route to cloud independently. `--local`/`--cloud` CLI flags override. `BASIC_MEMORY_FORCE_LOCAL=true` forces local globally.

---

## CLI Commands

```
bm status           — file/DB sync status
bm doctor           — end-to-end consistency check
bm mcp              — start MCP server (stdio/HTTP/SSE)
bm watch            — background file watcher
bm reindex          — rebuild FTS and/or semantic indexes
bm reset            — reset database (rebuilds from files)
bm format           — run formatter on markdown files
bm import claude    — import Claude conversation exports
bm import chatgpt   — import ChatGPT exports
bm project list|add|remove|info|sync|bisync|default
bm cloud login|logout|status|setup
bm tool             — CLI access to MCP tools
```

---

## Key Design Decisions

| Decision | Rationale |
|----------|-----------|
| Files as source of truth | Never locked in; any editor works; git versioning; DB is disposable |
| MCP as integration layer | Standard protocol; any LLM client can use it; future-proof |
| In-process ASGI | Zero network overhead locally; same API layer for local and cloud |
| Per-project routing | Mix local and cloud projects; use Claude Desktop + cloud simultaneously |
| Composition roots | Config read once; explicit wiring; testable; no globals |
| Project-scoped repos | Auto-filter all queries by project_id; impossible to leak data |
| Two-phase relation resolution | Reference entities that don't exist yet; resolved on future sync |
| Fast path + background tasks | Write immediately; defer indexing/resolution; snappy UX |
| Three observation annotations | Different indexing weights for different query patterns |
| Vector embeddings by default | Auto-detected; hybrid search out of the box |

---

## Findings

1.
**`(context)` annotation gap in AI guide:** The `write_note` tool docstring documents `(optional context)` with examples. But the `ai_assistant_guide` resource — the primary document that teaches LLMs how to use Basic Memory — shows observation examples without `(context)` and its templates omit it. LLMs likely underutilize this feature.

2. **Vector embeddings are invisible to users:** Semantic search is enabled by default via package auto-detection, and the default search mode is `"hybrid"`. Users get vector-enhanced search without configuring anything, but may not realize it's happening or how to tune it.

---

## Configuration

Stored at `~/.basic-memory/config.json`, overridable via `BASIC_MEMORY_*` env vars.

Key: `database_backend` (sqlite/postgres), `semantic_search_enabled` (auto-detected), `semantic_embedding_model` (bge-small-en-v1.5), `semantic_vector_k` (100), `semantic_min_similarity` (0.55), `sync_delay` (1000ms), `cloud_api_key`, `projects` dict with per-project `path`, `mode`, `workspace_id`.

---

## Testing

- `tests/` — unit tests with mocks (fast)
- `test-int/` — integration tests, real implementations
- Coverage target: 100%
- SQLite default; Postgres via testcontainers in CI
- `just fast-check` → `just doctor` → `just test` development loop

From 4dc20ed26ab4bb1fd0 Mon Sep 17 00:00:00 2001
From: TylerJNewman
Date: Thu, 26 Feb 2026 17:47:56 -0800
Subject: [PATCH 2/5] docs: add phased deployment implementation guides

Three sequential phases for deploying Basic Memory as a sidecar:

- 00: Deploy sidecar container on Railway with persistent volume
- 01: Set up Cloudflare R2 sync hub with rclone bisync
- 02: Connect chat app agent with knowledge organization

Each phase includes verification checklists, troubleshooting, confidence
levels, and instructions to stop and ask if uncertain.
Co-Authored-By: Claude Opus 4.6
Signed-off-by: TylerJNewman
---
 00-deploy-railway-sidecar.md | 222 +++++++++++++++++
 01-setup-r2-sync.md          | 471 +++++++++++++++++++++++++++++++++++
 02-chat-app-integration.md   | 326 ++++++++++++++++++++++++
 3 files changed, 1019 insertions(+)
 create mode 100644 00-deploy-railway-sidecar.md
 create mode 100644 01-setup-r2-sync.md
 create mode 100644 02-chat-app-integration.md

diff --git a/00-deploy-railway-sidecar.md b/00-deploy-railway-sidecar.md
new file mode 100644
index 000000000..84dfde181
--- /dev/null
+++ b/00-deploy-railway-sidecar.md
@@ -0,0 +1,222 @@
# Phase 00: Deploy Basic Memory Sidecar on Railway

## Goal

Get a Basic Memory instance running on Railway with a persistent volume, accessible via HTTP (SSE transport on port 8000). No sync yet — just a working remote instance.

## Prerequisites

- Railway account
- Docker (for local testing before deploying)
- The Basic Memory repo cloned locally

---

## Step 1: Verify the Docker Image Locally

The existing Dockerfile works. Test it before deploying.

```bash
# Build locally
docker build -t basic-memory-sidecar .

# Run locally
docker run -d \
  --name bm-sidecar \
  -p 8000:8000 \
  -v bm-data:/app/data/basic-memory \
  -v bm-config:/app/.basic-memory \
  -e BASIC_MEMORY_DEFAULT_PROJECT=shared \
  -e BASIC_MEMORY_SYNC_CHANGES=true \
  -e BASIC_MEMORY_SYNC_DELAY=1000 \
  -e BASIC_MEMORY_LOG_LEVEL=INFO \
  basic-memory-sidecar
```

### What happens on startup (automatic, no manual steps):

1. `basic-memory mcp --transport sse --host 0.0.0.0 --port 8000` runs
2. `McpContainer.create()` reads/creates config at `/app/.basic-memory/config.json`
3. If no projects exist, auto-creates "main" project at `BASIC_MEMORY_HOME` (default `~/basic-memory`, but Dockerfile creates `/app/data/basic-memory`)
4. Database auto-created at `/app/.basic-memory/memory.db`
5. Alembic migrations run automatically
6. SyncCoordinator starts file watcher in background
7.
SSE server listens on port 8000
+
+### Verify locally:
+
+```bash
+# Check container is running
+docker logs bm-sidecar
+
+# Test MCP endpoint exists (SSE transport)
+# The SSE endpoint won't respond to plain curl like a REST API,
+# but you can check the container is healthy:
+docker exec bm-sidecar basic-memory --version
+docker exec bm-sidecar basic-memory status
+```
+
+### Verify via MCP client (optional):
+
+If you have an MCP client that supports SSE transport, connect to `http://localhost:8000/sse` and call `list_memory_projects()`.
+
+### Clean up local test:
+
+```bash
+docker stop bm-sidecar && docker rm bm-sidecar
+docker volume rm bm-data bm-config
+```
+
+---
+
+## Step 2: Deploy to Railway
+
+### Option A: From Docker Image (recommended)
+
+1. Push to a container registry (GitHub Container Registry, Docker Hub, etc.):
+   ```bash
+   docker tag basic-memory-sidecar ghcr.io/<your-username>/basic-memory-sidecar:latest
+   docker push ghcr.io/<your-username>/basic-memory-sidecar:latest
+   ```
+
+2. In Railway:
+   - Create new project
+   - Add service → Docker Image → `ghcr.io/<your-username>/basic-memory-sidecar:latest`
+
+### Option B: From Repo
+
+1. In Railway:
+   - Create new project
+   - Add service → GitHub Repo → select the basic-memory fork/repo
+   - Railway will detect the Dockerfile and build automatically
+
+### Configure Railway Service
+
+**Environment Variables** (set in Railway dashboard):
+
+```
+BASIC_MEMORY_DEFAULT_PROJECT=shared
+BASIC_MEMORY_SYNC_CHANGES=true
+BASIC_MEMORY_SYNC_DELAY=1000
+BASIC_MEMORY_LOG_LEVEL=INFO
+```
+
+**Volume** (create in Railway dashboard):
+
+- Mount path: `/app/data`
+- This persists markdown files across deploys
+
+**Note on config volume**: The config and SQLite DB live at `/app/.basic-memory/`. Railway allows one volume per service. Two options:
+1. Mount at `/app/data` only — config/DB regenerate on each deploy (safe, DB rebuilds from files)
+2.
Use a custom `BASIC_MEMORY_CONFIG_DIR=/app/data/.config` to store config alongside files in the same volume
+
+Recommended: Option 2 — keeps everything in one volume:
+
+```
+BASIC_MEMORY_CONFIG_DIR=/app/data/.config
+```
+
+**Port**: Railway auto-detects port 8000 from the Dockerfile `EXPOSE` directive.
+
+**Health Check**: The Dockerfile includes a health check (`basic-memory --version`). Railway will use this.
+
+---
+
+## Step 3: Verify Railway Deployment
+
+```bash
+# Check Railway logs for startup sequence:
+# - "Starting MCP server"
+# - "Database initialized"
+# - "SyncCoordinator started"
+# - No errors
+
+# If Railway provides a public URL, test connectivity:
+# (The SSE endpoint is at /sse on the Railway URL)
+curl -N https://<your-railway-url>/sse
+# Should return SSE stream headers (content-type: text/event-stream)
+```
+
+---
+
+## Step 4: Initialize the Project
+
+The container auto-creates a "main" project. We want it named "shared" instead.
+
+SSH into Railway or use `railway run`:
+
+```bash
+# Check current state
+basic-memory project list
+
+# If "main" exists but we want "shared":
+basic-memory project add shared /app/data/shared
+basic-memory project default shared
+
+# Create the directory structure
+mkdir -p /app/data/shared/notes
+mkdir -p /app/data/shared/users
+mkdir -p /app/data/shared/conversations
+mkdir -p /app/data/shared/decisions
+```
+
+Note: `BASIC_MEMORY_DEFAULT_PROJECT` does not rename the auto-created project; the auto-bootstrap always names it "main", and the env var only selects which existing project is the default. To place the auto-created project at `/app/data/shared`, set `BASIC_MEMORY_HOME`.
+ +Add this env var: +``` +BASIC_MEMORY_HOME=/app/data/shared +``` + +--- + +## Step 5: Write a Test Note + +From inside the container (or via MCP client): + +```bash +# Via CLI +basic-memory tool write-note \ + --title "Deployment Test" \ + --content "- [test] Sidecar deployment verified #deployment" \ + --directory "notes" + +# Verify +basic-memory tool read-note --identifier "Deployment Test" +basic-memory status +``` + +--- + +## Verification Checklist + +- [ ] Docker image builds locally without errors +- [ ] Container starts, logs show successful initialization +- [ ] `basic-memory status` shows project with 0 sync errors +- [ ] Can write and read a note via CLI inside the container +- [ ] Railway deployment starts successfully (check logs) +- [ ] Railway persistent volume is mounted at `/app/data` +- [ ] Config stored in volume (not lost on redeploy) +- [ ] Project "shared" exists with correct directory structure + +## What This Phase Does NOT Cover + +- No sync with local machines (Phase 01) +- No rclone or R2 setup (Phase 01) +- No chat app integration (Phase 02) +- No cron jobs or automation (Phase 01) + +## Troubleshooting + +**Container exits immediately**: Check logs for Python import errors. Ensure `uv sync --locked` completed during build. + +**"No project found" errors**: Set `BASIC_MEMORY_HOME` and `BASIC_MEMORY_DEFAULT_PROJECT` env vars. The auto-bootstrap creates a project from `BASIC_MEMORY_HOME`. + +**Database errors on startup**: Usually means the volume isn't mounted. Check that `/app/data` persists. If DB is corrupted, delete it — `bm reset` rebuilds from files. + +**Port not reachable on Railway**: Ensure Railway sees the `EXPOSE 8000` in the Dockerfile. Check the service's networking settings in Railway dashboard. + +--- + +## Confidence Level: 95% + +The Dockerfile, startup sequence, and auto-initialization are well-understood from code inspection. 
The only uncertainty is Railway-specific volume configuration (mount path behavior, single vs multiple volumes), which may need minor adjustments during deploy. If Railway behaves differently than expected, stop and investigate before proceeding. diff --git a/01-setup-r2-sync.md b/01-setup-r2-sync.md new file mode 100644 index 000000000..29841c499 --- /dev/null +++ b/01-setup-r2-sync.md @@ -0,0 +1,471 @@ +# Phase 01: Set Up Cloudflare R2 Sync Between Local and Railway + +## Goal + +Establish bidirectional file sync between your local Basic Memory and the Railway sidecar, using Cloudflare R2 as the sync hub. After this phase, changes on either side propagate to the other via R2. + +## Prerequisites + +- Phase 00 complete: Railway sidecar running with persistent volume +- Cloudflare account (free tier is sufficient) +- rclone installed locally (`brew install rclone` on macOS) +- Local Basic Memory project with markdown files + +--- + +## Step 1: Create Cloudflare R2 Bucket + +1. Go to Cloudflare dashboard → R2 Object Storage +2. Create a bucket: + - Name: `basic-memory-sync` (or your preference) + - Location: Auto (or nearest region) +3. Create an API token: + - R2 dashboard → Manage R2 API Tokens → Create API Token + - Permissions: Object Read & Write + - Scope: Apply to specific bucket → `basic-memory-sync` +4. Save these values: + - **Account ID** (from Cloudflare dashboard URL or overview page) + - **Access Key ID** (from the API token creation) + - **Secret Access Key** (from the API token creation) + +### Cost verification + +R2 free tier includes: +- 10 GB storage/month +- 1 million Class B (read) operations/month +- 10 million Class A (write) operations/month +- **$0 egress always** + +For markdown files from a few users, this is effectively free indefinitely. 
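The "effectively free" claim is easy to sanity-check with rough arithmetic. The workload figures below are assumptions, not measurements; the exact operation mix of a bisync run depends on rclone's listing strategy:

```python
# Back-of-envelope monthly R2 operation count for this sync topology.
# All workload numbers here are guesses; adjust for your own setup.
clients = 3                # e.g. two local users plus the Railway sidecar
syncs_per_day = 24 * 12    # one bisync run every 5 minutes
ops_per_sync = 10          # API calls per run (listings + reads, a guess)

monthly_ops = clients * syncs_per_day * 30 * ops_per_sync
print(monthly_ops)         # 259200, roughly a quarter of the 1M Class B allowance
```

If sync frequency or user count grows, rerun this check against the tier limits listed above; the 1 million Class B (read) allowance is the tighter bound.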
+
+---
+
+## Step 2: Configure rclone Locally
+
+```bash
+# Check rclone is installed
+rclone --version
+
+# If not installed:
+# macOS: brew install rclone
+# Linux: sudo apt install rclone (or curl https://rclone.org/install.sh | sudo bash)
+```
+
+Create the rclone remote configuration:
+
+```bash
+rclone config
+
+# Interactive setup:
+# n) New remote
+# name> r2
+# Storage> s3
+# provider> Cloudflare
+# access_key_id> <YOUR_ACCESS_KEY_ID>
+# secret_access_key> <YOUR_SECRET_ACCESS_KEY>
+# endpoint> https://<YOUR_ACCOUNT_ID>.r2.cloudflarestorage.com
+# (accept defaults for everything else)
+```
+
+Or write the config directly:
+
+```bash
+cat >> ~/.config/rclone/rclone.conf << 'EOF'
+[r2]
+type = s3
+provider = Cloudflare
+access_key_id = <YOUR_ACCESS_KEY_ID>
+secret_access_key = <YOUR_SECRET_ACCESS_KEY>
+endpoint = https://<YOUR_ACCOUNT_ID>.r2.cloudflarestorage.com
+acl = private
+no_check_bucket = true
+EOF
+```
+
+### Verify local rclone connectivity:
+
+```bash
+# List bucket contents (should be empty)
+rclone ls r2:basic-memory-sync
+
+# Write a test file
+echo "test" | rclone rcat r2:basic-memory-sync/test.txt
+
+# Read it back
+rclone cat r2:basic-memory-sync/test.txt
+
+# Clean up
+rclone delete r2:basic-memory-sync/test.txt
+```
+
+---
+
+## Step 3: Install rclone on Railway Sidecar
+
+The existing Dockerfile does NOT include rclone.
Two options:
+
+### Option A: Modify the Dockerfile (recommended if you control the image)
+
+Add to the Dockerfile after the system deps:
+
+```dockerfile
+# Install rclone for R2 sync
+RUN apt-get update && apt-get install -y --no-install-recommends \
+    curl unzip \
+    && curl -O https://downloads.rclone.org/current/rclone-current-linux-amd64.zip \
+    && unzip rclone-current-linux-amd64.zip \
+    && cp rclone-*-linux-amd64/rclone /usr/local/bin/ \
+    && rm -rf rclone-* \
+    && apt-get purge -y curl unzip \
+    && apt-get autoremove -y \
+    && rm -rf /var/lib/apt/lists/*
+```
+
+### Option B: Install at runtime (if you can't modify the image)
+
+In Railway, add a start command that installs rclone before starting Basic Memory:
+
+```bash
+# Railway custom start command:
+apt-get update && apt-get install -y rclone && basic-memory mcp --transport sse --host 0.0.0.0 --port 8000
+```
+
+This is slower (installs on every deploy) but doesn't require a custom image.
+
+### Configure rclone on Railway
+
+Set Railway environment variables for rclone config (avoids needing a config file):
+
+```
+RCLONE_CONFIG_R2_TYPE=s3
+RCLONE_CONFIG_R2_PROVIDER=Cloudflare
+RCLONE_CONFIG_R2_ACCESS_KEY_ID=<YOUR_ACCESS_KEY_ID>
+RCLONE_CONFIG_R2_SECRET_ACCESS_KEY=<YOUR_SECRET_ACCESS_KEY>
+RCLONE_CONFIG_R2_ENDPOINT=https://<YOUR_ACCOUNT_ID>.r2.cloudflarestorage.com
+RCLONE_CONFIG_R2_ACL=private
+RCLONE_CONFIG_R2_NO_CHECK_BUCKET=true
+```
+
+rclone reads `RCLONE_CONFIG_<REMOTE>_<OPTION>` env vars automatically — no config file needed.
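The env-var naming convention is mechanical (remote name and option name upper-cased), so the variables can be generated rather than hand-typed. A small illustrative helper, hypothetical and not part of Basic Memory or rclone:

```python
def rclone_env(remote: str, options: dict[str, str]) -> dict[str, str]:
    """Map rclone remote options to RCLONE_CONFIG_<REMOTE>_<OPTION> env var names."""
    return {
        f"RCLONE_CONFIG_{remote.upper()}_{key.upper()}": value
        for key, value in options.items()
    }

env = rclone_env("r2", {"type": "s3", "provider": "Cloudflare"})
print(env["RCLONE_CONFIG_R2_TYPE"])  # s3
```

This is handy when templating the same credentials into several deploy targets, but pasting the block above into the Railway dashboard works just as well.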
+ +### Verify rclone on Railway: + +```bash +# SSH into Railway or use railway run: +rclone ls r2:basic-memory-sync +# Should show empty or test files from Step 2 +``` + +--- + +## Step 4: Create Sync Filter File + +Create a filter file that both sides use to exclude non-markdown files: + +```bash +# On local machine, create the filter file +cat > ~/.config/rclone/bm-sync-filter.txt << 'EOF' +# Exclude database and config (each side maintains its own) +- .basic-memory/** +- .obsidian/** +- .git/** +- **/.DS_Store +- **/*.pyc +- **/__pycache__/** +- **/*.tmp +- **/*.swp +- **/*.swo +EOF +``` + +On Railway, store the same filter content. Either: +- Bake it into the Docker image +- Store in the persistent volume at `/app/data/.sync-filter.txt` +- Or pass excludes as env var / command flags + +--- + +## Step 5: Initial Sync — Local to R2 + +Push your local markdown files to R2 first (establishes R2 as the shared baseline): + +```bash +# Dry run first — see what would be synced +rclone sync ~/basic-memory r2:basic-memory-sync/shared \ + --filter-from ~/.config/rclone/bm-sync-filter.txt \ + --dry-run -v + +# If the output looks correct, do the real sync +rclone sync ~/basic-memory r2:basic-memory-sync/shared \ + --filter-from ~/.config/rclone/bm-sync-filter.txt \ + -v +``` + +Verify: + +```bash +rclone ls r2:basic-memory-sync/shared +# Should show your markdown files +``` + +--- + +## Step 6: Initial Sync — R2 to Railway + +Pull files from R2 into the Railway container: + +```bash +# On Railway (via SSH or railway run): +rclone sync r2:basic-memory-sync/shared /app/data/shared \ + --filter-from /app/data/.sync-filter.txt \ + -v + +# Rebuild the database index from the synced files +basic-memory reset +basic-memory status +# Should show files synced, 0 errors +``` + +--- + +## Step 7: Establish Bisync Baseline + +rclone bisync requires a one-time `--resync` to create tracking state. 
+
+### On local machine:
+
+```bash
+# Establish bisync baseline
+rclone bisync ~/basic-memory r2:basic-memory-sync/shared \
+  --filter-from ~/.config/rclone/bm-sync-filter.txt \
+  --create-empty-src-dirs \
+  --resync \
+  -v
+
+# Verify: check bisync state was created
+ls ~/.cache/rclone/bisync/
+# Should show state files
+```
+
+### On Railway:
+
+```bash
+rclone bisync /app/data/shared r2:basic-memory-sync/shared \
+  --filter-from /app/data/.sync-filter.txt \
+  --create-empty-src-dirs \
+  --resync \
+  -v
+```
+
+---
+
+## Step 8: Test Bidirectional Sync
+
+### Test 1: Local → R2 → Railway
+
+```bash
+# On local machine: create a test note
+# (printf, not echo: echo does not expand \n portably)
+printf '# Test Note\n\n- [test] Created locally\n' > ~/basic-memory/notes/sync-test-local.md
+
+# Sync local to R2
+rclone bisync ~/basic-memory r2:basic-memory-sync/shared \
+  --filter-from ~/.config/rclone/bm-sync-filter.txt \
+  -v
+
+# Verify on R2
+rclone cat r2:basic-memory-sync/shared/notes/sync-test-local.md
+
+# On Railway: sync R2 to container
+rclone bisync /app/data/shared r2:basic-memory-sync/shared \
+  --filter-from /app/data/.sync-filter.txt \
+  -v
+
+# Verify on Railway
+cat /app/data/shared/notes/sync-test-local.md
+basic-memory tool read-note --identifier "sync-test-local"
+```
+
+### Test 2: Railway → R2 → Local
+
+```bash
+# On Railway: create a test note via Basic Memory
+# (the CLI flag is --folder; the MCP tool parameter is named directory)
+basic-memory tool write-note \
+  --title "Sync Test Remote" \
+  --content "- [test] Created on Railway sidecar" \
+  --folder "notes"
+
+# Sync Railway to R2
+rclone bisync /app/data/shared r2:basic-memory-sync/shared \
+  --filter-from /app/data/.sync-filter.txt \
+  -v
+
+# On local: sync R2 to local
+rclone bisync ~/basic-memory r2:basic-memory-sync/shared \
+  --filter-from ~/.config/rclone/bm-sync-filter.txt \
+  -v
+
+# Verify locally
+cat ~/basic-memory/notes/sync-test-remote.md
+```
+
+### Clean up test files:
+
+```bash
+rm ~/basic-memory/notes/sync-test-local.md
+# Re-sync to propagate deletion
+```
+
+---
+
+## Step 9: Automate Sync
+
+### On Railway: cron via supervisor or entrypoint script + +Create a sync script on the Railway volume: + +```bash +cat > /app/data/.sync.sh << 'SCRIPT' +#!/bin/bash +while true; do + sleep 300 # 5 minutes + rclone bisync /app/data/shared r2:basic-memory-sync/shared \ + --filter-from /app/data/.sync-filter.txt \ + --resilient \ + --conflict-resolve newer \ + 2>&1 | logger -t bm-r2-sync +done +SCRIPT +chmod +x /app/data/.sync.sh +``` + +Modify the Docker entrypoint to run both the sync loop and the MCP server: + +```dockerfile +# Option: use a custom entrypoint +COPY entrypoint.sh /app/entrypoint.sh +CMD ["/app/entrypoint.sh"] +``` + +```bash +#!/bin/bash +# entrypoint.sh + +# Start R2 sync loop in background (if rclone is configured) +if command -v rclone &>/dev/null && [ -n "$RCLONE_CONFIG_R2_TYPE" ]; then + echo "Starting R2 sync loop (every 5 minutes)..." + while true; do + sleep 300 + rclone bisync /app/data/shared r2:basic-memory-sync/shared \ + --exclude ".basic-memory/**" \ + --exclude ".obsidian/**" \ + --resilient \ + --conflict-resolve newer \ + 2>&1 | head -20 + done & +fi + +# Start Basic Memory MCP server (foreground) +exec basic-memory mcp --transport sse --host 0.0.0.0 --port 8000 +``` + +### On local machine: launchd (macOS) or manual + +For manual sync (simplest): + +```bash +# Add to ~/.zshrc or create an alias +alias bm-sync='rclone bisync ~/basic-memory r2:basic-memory-sync/shared \ + --filter-from ~/.config/rclone/bm-sync-filter.txt \ + --resilient --conflict-resolve newer -v' +``` + +Then just run `bm-sync` before and after working locally. 
+
+For automated (macOS launchd):
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+    <key>Label</key>
+    <string>com.basicmemory.sync</string>
+    <key>ProgramArguments</key>
+    <array>
+        <string>/opt/homebrew/bin/rclone</string>
+        <string>bisync</string>
+        <string>/Users/tyler/basic-memory</string>
+        <string>r2:basic-memory-sync/shared</string>
+        <string>--filter-from</string>
+        <string>/Users/tyler/.config/rclone/bm-sync-filter.txt</string>
+        <string>--resilient</string>
+        <string>--conflict-resolve</string>
+        <string>newer</string>
+    </array>
+    <key>StartInterval</key>
+    <integer>300</integer>
+    <key>RunAtLoad</key>
+    <true/>
+    <key>StandardOutPath</key>
+    <string>/tmp/bm-sync.log</string>
+    <key>StandardErrorPath</key>
+    <string>/tmp/bm-sync.log</string>
+</dict>
+</plist>
+```
+
+```bash
+launchctl load ~/Library/LaunchAgents/com.basicmemory.sync.plist
+```
+
+---
+
+## Verification Checklist
+
+- [ ] R2 bucket created on Cloudflare
+- [ ] rclone configured locally with R2 credentials
+- [ ] rclone connectivity verified (ls, cat, rcat work)
+- [ ] rclone installed/available on Railway sidecar
+- [ ] rclone configured on Railway (via env vars)
+- [ ] Filter file created on both sides
+- [ ] Initial sync: local files pushed to R2
+- [ ] Initial sync: R2 files pulled to Railway
+- [ ] Bisync baseline established on both sides (--resync)
+- [ ] Test: local note appears on Railway after sync
+- [ ] Test: Railway note appears locally after sync
+- [ ] Basic Memory file watcher detects synced files (check `bm status`)
+- [ ] Automated sync running on Railway (cron/loop)
+- [ ] Local sync working (manual alias or launchd)
+
+## Conflict Behavior
+
+- **One side changed**: File copies to the other side (normal)
+- **Both sides changed same file**: rclone keeps both, loser gets `.conflict1` suffix
+- **File deleted on one side**: Deletion propagates to the other side
+- **`--conflict-resolve newer`**: Most recent timestamp wins, loser renamed
+
+With a few users and 5-minute sync intervals, conflicts should be extremely rare. If one occurs, check for `.conflict` files and merge manually.
+
+## Troubleshooting
+
+**"bisync not found"**: rclone version too old. Need v1.58+ for bisync, v1.64+ for `--create-empty-src-dirs`. Update rclone.
+
+**"Failed to bisync: must use --resync"**: Bisync state is corrupted or missing.
Run with `--resync` to re-establish baseline (safe, just recalculates state). + +**"Access Denied" on R2**: Check API token permissions (needs Object Read & Write), check bucket name matches, check account ID in endpoint URL. + +**Files sync but Basic Memory doesn't see them**: The file watcher (`BASIC_MEMORY_SYNC_CHANGES=true`) should detect changes. If not, run `basic-memory reset` to force reindex. Check that files land in the correct project directory. + +**R2 shows files but Railway is empty**: Verify the rclone remote path matches. `r2:basic-memory-sync/shared` must map to the Railway mount at `/app/data/shared`. + +--- + +## Confidence Level: 90% + +rclone bisync with R2 is well-documented and standard. The main uncertainties: +1. Railway's ability to run background processes alongside the main CMD (the entrypoint.sh approach should work but needs testing) +2. rclone env var config (`RCLONE_CONFIG_R2_*`) — well-documented but worth verifying on Railway specifically +3. File watcher picking up rclone-synced files — should work since watchfiles uses OS-level notifications, but bulk file drops might need a delay + +If any of these don't work as expected: stop, investigate, try alternative approaches before proceeding. diff --git a/02-chat-app-integration.md b/02-chat-app-integration.md new file mode 100644 index 000000000..e042887e9 --- /dev/null +++ b/02-chat-app-integration.md @@ -0,0 +1,326 @@ +# Phase 02: Chat App / Agent Integration + +## Goal + +Connect your chat application's agent to the Basic Memory sidecar on Railway. The agent uses MCP tools to read, write, and search the shared knowledge base. Per-user preferences are scoped by directory convention. 
+
+## Prerequisites
+
+- Phase 00 complete: Railway sidecar running, accessible on internal network
+- Phase 01 complete: R2 sync working between local and Railway
+- A chat application (or agent framework) that can make HTTP requests
+
+---
+
+## Step 1: Understand the Connection
+
+The Basic Memory sidecar runs an MCP server over SSE (Server-Sent Events) on port 8000. Your chat app connects to it as an MCP client.
+
+```
+Chat App / Agent
+    │
+    │ MCP over SSE (HTTP)
+    │ http://basic-memory-sidecar.railway.internal:8000/sse
+    │
+    ▼
+Basic Memory Sidecar
+    │
+    │ (auto-indexes, searches, builds context)
+    │
+    ▼
+Markdown Files ←──rclone bisync──→ R2 ←──rclone bisync──→ Local
+```
+
+### Connection details:
+
+- **Protocol**: SSE (Server-Sent Events) — standard MCP transport
+- **URL**: `http://<service>.railway.internal:8000/sse` (internal) or `https://<public-railway-domain>/sse` (public)
+- **Auth**: None by default. If exposing publicly, add auth at the Railway/proxy level
+- **No API key needed**: The sidecar runs in LOCAL mode (`BASIC_MEMORY_FORCE_LOCAL=true` is set automatically for SSE transport — see `src/basic_memory/cli/commands/mcp.py` lines 48-61)
+
+### If your agent framework supports MCP natively:
+
+Configure it as an MCP server connection:
+```json
+{
+  "mcpServers": {
+    "basic-memory": {
+      "transport": "sse",
+      "url": "http://basic-memory-sidecar.railway.internal:8000/sse"
+    }
+  }
+}
+```
+
+### If your agent framework uses HTTP/REST:
+
+Basic Memory also exposes a FastAPI REST API. The MCP tools are wrappers around these endpoints. You can call the API directly:
+
+```
+GET   /v2/projects                                      → list projects
+POST  /v2/projects/{id}/knowledge/entities              → create note
+GET   /v2/projects/{id}/knowledge/entities/{entity_id}  → read note
+PATCH /v2/projects/{id}/knowledge/entities/{entity_id}  → edit note
+GET   /v2/projects/{id}/search?query=...                → search
+GET   /v2/projects/{id}/memory/context?url=memory://... → build context
+```
+
+The API is the same layer that MCP tools use internally (via ASGI transport).
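Calling the REST layer is a plain HTTP request. A minimal sketch using only the Python standard library; the endpoint path is the search route listed above, and the internal hostname matches the diagram in Step 1 (adjust for your deployment, and note the response schema is not asserted here):

```python
import json
import urllib.parse
import urllib.request

# Assumed internal URL from Step 1; swap for the public Railway URL if exposed
BASE = "http://basic-memory-sidecar.railway.internal:8000"

def search_url(project_id: str, query: str) -> str:
    # GET /v2/projects/{id}/search?query=...
    return (
        f"{BASE}/v2/projects/{urllib.parse.quote(project_id)}/search?"
        + urllib.parse.urlencode({"query": query})
    )

def search_notes(project_id: str, query: str) -> dict:
    # Send the request and decode the JSON response body
    with urllib.request.urlopen(search_url(project_id, query)) as resp:
        return json.load(resp)

print(search_url("shared", "free trial"))
```

The other endpoints follow the same shape: substitute the path, switch the method to POST/PATCH for writes, and send a JSON body.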
+ +--- + +## Step 2: Set Up Knowledge Directory Structure + +If not already created in Phase 00: + +```bash +# On Railway sidecar (or let the agent create these via write_note) +mkdir -p /app/data/shared/notes +mkdir -p /app/data/shared/users +mkdir -p /app/data/shared/conversations +mkdir -p /app/data/shared/decisions +``` + +### Directory purposes: + +| Directory | Purpose | Who writes | +|-----------|---------|------------| +| `notes/` | Shared knowledge — product info, troubleshooting, guides | Agent (from any user's conversation) | +| `users/` | Per-user preferences and context | Agent (scoped to specific user) | +| `conversations/` | Conversation summaries worth keeping | Agent (after significant conversations) | +| `decisions/` | Important decisions with rationale | Agent or human | + +--- + +## Step 3: Configure Agent System Prompt + +Add Basic Memory instructions to your agent's system prompt. Adapt based on your agent framework: + +``` +You have access to a shared knowledge base via Basic Memory MCP tools. + +## Knowledge Base Usage + +**At conversation start:** +1. Read user preferences: read_note("users/{username}") + - If the note exists, follow the user's preferences + - If not found, proceed with defaults + +**During conversation:** +2. Search before creating: search_notes("relevant topic") before writing new notes +3. Build context when needed: build_context("memory://notes/topic", depth=2) + +**When you learn something worth keeping:** +4. Write shared knowledge to "notes/" directory: + write_note( + title="Descriptive Title", + content="# Title\n\n## Observations\n- [category] fact #tag\n\n## Relations\n- relates_to [[Other Note]]", + directory="notes" + ) + +**After significant conversations:** +5. Summarize to "conversations/" directory (only if the conversation produced lasting insights) + +**For user-specific preferences:** +6. 
Write to "users/" directory: + write_note( + title="{username}", + content="# {username}\n\n- [preference] User prefers concise answers\n- [preference] Works in Python primarily", + directory="users" + ) + +## Observation Categories +Use these categories in observations: +- [fact] — verified information +- [decision] — choices made with rationale +- [preference] — user-specific preferences +- [technique] — how-to knowledge +- [issue] — known problems +- [idea] — suggestions for future work + +## Relations +Link related notes with WikiLinks: +- relates_to [[Other Note]] +- depends_on [[Prerequisite]] +- implements [[Specification]] +- part_of [[Larger Topic]] + +## Important Rules +- Search before creating to avoid duplicates +- Use descriptive titles (they become the filename) +- Include 3-5 observations per note +- Include relations to connect knowledge +- Don't save trivial conversation details — focus on lasting insights +``` + +--- + +## Step 4: Implement User Scoping + +The agent identifies users by whatever mechanism your chat app uses (login, session, username). Pass the username to the agent context so it can read/write user-specific notes. 
+ +### Reading user preferences: + +```python +# At conversation start, the agent calls: +user_prefs = await read_note(f"users/{username}") +# Returns the user's preference note, or "not found" error +``` + +### Writing user preferences: + +```python +# When the agent learns a user preference: +await write_note( + title=username, + content=f"# {username}\n\n- [preference] {preference_text}", + directory="users" +) +``` + +### Example user note (`users/alice.md`): + +```markdown +--- +title: alice +type: note +permalink: users/alice +--- + +# alice + +- [preference] Prefers detailed technical explanations #communication +- [preference] Primary language: Python #tech +- [preference] Timezone: PST #scheduling +- [context] Onboarded 2026-02-26 +``` + +--- + +## Step 5: Test the Integration + +### Test 1: Agent writes a shared note + +Have a user ask the agent something that produces knowledge. Verify: + +```bash +# On Railway: +basic-memory tool search-notes --query "the topic discussed" +# Should find the note the agent created + +# Verify file exists: +ls /app/data/shared/notes/ +``` + +### Test 2: Agent reads existing knowledge + +Create a note manually, then ask the agent about that topic: + +```bash +# Create a note via CLI on Railway: +basic-memory tool write-note \ + --title "Product FAQ" \ + --content "# Product FAQ\n\n- [fact] Free trial is 14 days #pricing\n- [fact] Supports up to 10 team members on starter plan #pricing" \ + --directory "notes" +``` + +Ask the agent: "What's the free trial length?" — it should search and find the answer. + +### Test 3: User preferences work + +```bash +# Create a user preference note: +basic-memory tool write-note \ + --title "alice" \ + --content "# alice\n\n- [preference] Always respond in bullet points" \ + --directory "users" +``` + +Start a conversation as alice — the agent should read preferences and adjust its style. 
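On the agent side, the `- [preference]` observation format is easy to consume without any extra tooling. A hypothetical helper, not a Basic Memory API, that pulls preference lines out of a user note's markdown:

```python
import re

def extract_preferences(note_markdown: str) -> list[str]:
    """Collect the text of every '- [preference] ...' observation line."""
    prefs = []
    for line in note_markdown.splitlines():
        # Match only preference observations; other categories are ignored
        match = re.match(r"-\s*\[preference\]\s*(.+)", line.strip())
        if match:
            prefs.append(match.group(1).strip())
    return prefs

alice_note = """# alice

- [preference] Always respond in bullet points
- [context] Onboarded 2026-02-26
"""
print(extract_preferences(alice_note))  # ['Always respond in bullet points']
```

In practice the agent would run this (or equivalent prompt-side reasoning) over the `read_note("users/{username}")` result at conversation start, falling back to defaults when the note is missing.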
+ +### Test 4: Sync reaches local + +After the agent creates notes on Railway: + +```bash +# Trigger sync (or wait for cron) +bm-sync # your local alias from Phase 01 + +# Check locally +ls ~/basic-memory/notes/ +cat ~/basic-memory/notes/product-faq.md +``` + +--- + +## Step 6: Monitor and Iterate + +### Check Basic Memory health: + +```bash +# On Railway: +basic-memory status +basic-memory doctor # full consistency check +``` + +### Check sync status: + +```bash +# Verify files match between local and R2 +rclone check ~/basic-memory r2:basic-memory-sync/shared \ + --filter-from ~/.config/rclone/bm-sync-filter.txt +``` + +### Review knowledge quality: + +Periodically browse the knowledge base to ensure the agent is writing useful, well-structured notes: + +```bash +# List all notes +basic-memory tool list-directory --dir-name "notes" --depth 2 + +# Check recent activity +basic-memory tool recent-activity --timeframe "1 week" +``` + +--- + +## Verification Checklist + +- [ ] Chat app connects to sidecar MCP endpoint (SSE or REST) +- [ ] Agent can call `list_memory_projects()` and see the "shared" project +- [ ] Agent can `write_note()` — creates a markdown file on Railway volume +- [ ] Agent can `search_notes()` — finds existing knowledge +- [ ] Agent can `read_note()` — retrieves specific notes +- [ ] Agent can `build_context()` — traverses knowledge graph +- [ ] User preferences: agent reads `users/{name}` at conversation start +- [ ] User preferences: agent writes preferences when discovered +- [ ] Shared knowledge: agent writes to `notes/` after learning something +- [ ] Sync: agent-created notes appear locally after sync +- [ ] Sync: locally-created notes are findable by the agent after sync +- [ ] `basic-memory doctor` passes on Railway with no errors + +## Security Considerations + +- **Internal networking**: Keep the sidecar on Railway's internal network if possible. Only the chat app needs access. 
+- **No sensitive data**: Don't store API keys, passwords, or PII in notes. The knowledge base is for product knowledge and user preferences only. +- **R2 bucket access**: Keep the R2 API token scoped to the single bucket. Don't use a global Cloudflare API key. + +## What's Next + +After completing all three phases, you have: +1. A running Basic Memory sidecar on Railway +2. Bidirectional sync with local via R2 +3. An agent that reads/writes shared knowledge + +From here, iterate on: +- **Knowledge quality**: Review and refine what the agent captures +- **Agent prompting**: Tune when and how the agent uses Basic Memory +- **Backup verification**: Periodically test recovery from R2 +- **Monitoring**: Set up alerts for sync failures or Railway volume usage + +--- + +## Confidence Level: 90% + +The MCP SSE connection is standard and well-tested. The REST API fallback is available if SSE doesn't fit your agent framework. The main uncertainty is how your specific chat app/agent framework connects to MCP — this varies by framework. If your framework doesn't support MCP natively, use the REST API directly. If the SSE connection fails, check Railway networking (internal vs public URL) before investigating further. 
From 0c2ed78ccc0af6352020e41845d092dd2651a78e Mon Sep 17 00:00:00 2001 From: TylerJNewman Date: Thu, 26 Feb 2026 18:55:07 -0800 Subject: [PATCH 3/5] docs: complete Phase 00 - sidecar Docker verification - Add docker-compose.sidecar.yml for sidecar deployment pattern - Single volume at /app/data stores config, DB, and markdown files - Config dir override via BASIC_MEMORY_CONFIG_DIR=/app/data/.config - Semantic search disabled for local testing (sqlite_vec ARM compat) - Updated plan with all tasks completed and learnings documented Co-Authored-By: Claude Opus 4.6 Signed-off-by: TylerJNewman --- 00-deploy-railway-sidecar.md | 51 +++++++++++++++++++++++++++--------- docker-compose.sidecar.yml | 29 ++++++++++++++++++++ 2 files changed, 68 insertions(+), 12 deletions(-) create mode 100644 docker-compose.sidecar.yml diff --git a/00-deploy-railway-sidecar.md b/00-deploy-railway-sidecar.md index 84dfde181..9aa3ace8d 100644 --- a/00-deploy-railway-sidecar.md +++ b/00-deploy-railway-sidecar.md @@ -12,6 +12,30 @@ Get a Basic Memory instance running on Railway with a persistent volume, accessi --- +## Task List + +### Phase 00-A: Local Docker Verification +- [x] Task 1: Build Docker image locally (must use `--platform linux/amd64` on Apple Silicon) +- [x] Task 2: Run container with sidecar env vars +- [x] Task 3: Verify container starts (check logs for startup sequence) +- [x] Task 4: Verify `basic-memory --version` works inside container (v0.18.5) +- [x] Task 5: Verify `basic-memory status` shows a project with no errors +- [x] Task 6: Write a test note inside the container (CLI flag is `--folder` not `--directory`) +- [x] Task 7: Read the test note back to confirm persistence +- [x] Task 8: Verify SSE endpoint responds on port 8000 (path is `/mcp` not `/sse` in FastMCP 3.x) +- [x] Task 9: Clean up local container and volumes + +### Phase 00-B: Create Sidecar Docker Compose +- [x] Task 10: Create `docker-compose.sidecar.yml` with sidecar-specific config +- [x] Task 11: Test 
docker compose up with the sidecar compose file +- [x] Task 12: Verify sidecar compose works end-to-end (write/read/SSE all work) + +### Phase 00-C: Railway Deployment (manual — requires dashboard) +- [x] Task 13: Document Railway env vars and volume config for easy copy-paste +- [x] Task 14: Commit all changes + +--- + ## Step 1: Verify the Docker Image Locally The existing Dockerfile works. Test it before deploying. @@ -95,7 +119,9 @@ docker volume rm bm-data bm-config **Environment Variables** (set in Railway dashboard): ``` -BASIC_MEMORY_DEFAULT_PROJECT=shared +BASIC_MEMORY_HOME=/app/data/shared +BASIC_MEMORY_DEFAULT_PROJECT=main +BASIC_MEMORY_CONFIG_DIR=/app/data/.config BASIC_MEMORY_SYNC_CHANGES=true BASIC_MEMORY_SYNC_DELAY=1000 BASIC_MEMORY_LOG_LEVEL=INFO @@ -104,17 +130,8 @@ BASIC_MEMORY_LOG_LEVEL=INFO **Volume** (create in Railway dashboard): - Mount path: `/app/data` -- This persists markdown files across deploys - -**Note on config volume**: The config and SQLite DB live at `/app/.basic-memory/`. Railway allows one volume per service. Two options: -1. Mount at `/app/data` only — config/DB regenerate on each deploy (safe, DB rebuilds from files) -2. Use a custom `BASIC_MEMORY_CONFIG_DIR=/app/data/.config` to store config alongside files in the same volume - -Recommended: Option 2 — keeps everything in one volume: - -``` -BASIC_MEMORY_CONFIG_DIR=/app/data/.config -``` +- This single volume persists markdown files, config, and SQLite DB +- Config stored at `/app/data/.config` via `BASIC_MEMORY_CONFIG_DIR` **Port**: Railway auto-detects port 8000 from the Dockerfile `EXPOSE` directive. @@ -217,6 +234,16 @@ basic-memory status --- +## Learnings from Implementation + +1. **Apple Silicon requires `--platform linux/amd64`**: The `sqlite_vec` binary is x86-only. Build with `docker build --platform linux/amd64` or set `platform: linux/amd64` in compose. +2. 
**Config lives at `~appuser/.basic-memory/`**: The Dockerfile creates `/app/.basic-memory/` but the appuser's home is `/home/appuser/`. Use `BASIC_MEMORY_CONFIG_DIR=/app/data/.config` to put config in the persistent volume. +3. **Single volume is best**: Put config, DB, and data in one volume at `/app/data`. Avoids permission issues and Railway's one-volume-per-service limit. +4. **SSE path is `/mcp` not `/sse`**: FastMCP 3.x changed the endpoint path. +5. **CLI write-note uses `--folder` not `--directory`**: The MCP tool parameter is `directory` but the CLI flag is `--folder`. +6. **`BASIC_MEMORY_DEFAULT_PROJECT` doesn't rename auto-created project**: The auto-bootstrap always creates a project named "main". The env var just sets which project is default. +7. **Semantic search disabled for sidecar**: `sqlite_vec` has ELF compatibility issues under emulation. Set `BASIC_MEMORY_SEMANTIC_SEARCH_ENABLED=false` for local testing. Railway runs native amd64 so this can be re-enabled there. + ## Confidence Level: 95% The Dockerfile, startup sequence, and auto-initialization are well-understood from code inspection. The only uncertainty is Railway-specific volume configuration (mount path behavior, single vs multiple volumes), which may need minor adjustments during deploy. If Railway behaves differently than expected, stop and investigate before proceeding. diff --git a/docker-compose.sidecar.yml b/docker-compose.sidecar.yml new file mode 100644 index 000000000..2ceca2c1b --- /dev/null +++ b/docker-compose.sidecar.yml @@ -0,0 +1,29 @@ +services: + basic-memory: + build: . 
+ platform: linux/amd64 + container_name: basic-memory-sidecar + volumes: + - basic-memory-data:/app/data + environment: + - BASIC_MEMORY_HOME=/app/data/shared + - BASIC_MEMORY_DEFAULT_PROJECT=main + - BASIC_MEMORY_CONFIG_DIR=/app/data/.config + - BASIC_MEMORY_SYNC_CHANGES=true + - BASIC_MEMORY_SYNC_DELAY=1000 + - BASIC_MEMORY_LOG_LEVEL=INFO + - BASIC_MEMORY_SEMANTIC_SEARCH_ENABLED=false + ports: + - "8000:8000" + command: ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"] + restart: unless-stopped + healthcheck: + test: ["CMD", "basic-memory", "--version"] + interval: 30s + timeout: 10s + retries: 3 + start_period: 30s + +volumes: + basic-memory-data: + driver: local From 36c2eae0bc5849ed392ada7438c7018c3282aedf Mon Sep 17 00:00:00 2001 From: TylerJNewman Date: Thu, 26 Feb 2026 19:12:31 -0800 Subject: [PATCH 4/5] feat: deploy Basic Memory sidecar to Railway - Add Dockerfile.sidecar (root user for Railway volume compat, entrypoint) - Add entrypoint.sh to create config/data dirs before startup - Deploy to Railway: bm-sync service on exemplary-renewal project - Persistent volume at /app/data (50GB) - SSE endpoint live at robust-creation-production-70db.up.railway.app/mcp - Phase 00 fully complete with all tasks verified Co-Authored-By: Claude Opus 4.6 Signed-off-by: TylerJNewman --- 00-deploy-railway-sidecar.md | 24 +++++++++++++++++++++--- Dockerfile.sidecar | 31 +++++++++++++++++++++++++++++++ docker-compose.sidecar.yml | 4 +++- entrypoint.sh | 7 +++++++ 4 files changed, 62 insertions(+), 4 deletions(-) create mode 100644 Dockerfile.sidecar create mode 100755 entrypoint.sh diff --git a/00-deploy-railway-sidecar.md b/00-deploy-railway-sidecar.md index 9aa3ace8d..e6a834893 100644 --- a/00-deploy-railway-sidecar.md +++ b/00-deploy-railway-sidecar.md @@ -30,9 +30,24 @@ Get a Basic Memory instance running on Railway with a persistent volume, accessi - [x] Task 11: Test docker compose up with the sidecar compose file - [x] Task 12: 
Verify sidecar compose works end-to-end (write/read/SSE all work) -### Phase 00-C: Railway Deployment (manual — requires dashboard) -- [x] Task 13: Document Railway env vars and volume config for easy copy-paste -- [x] Task 14: Commit all changes +### Phase 00-C: Railway Deployment +- [x] Task 13: Create `Dockerfile.sidecar` (runs as root for Railway volume permissions, entrypoint.sh for dir creation) +- [x] Task 14: Link Railway project and service (`exemplary-renewal` / `bm-sync`) +- [x] Task 15: Set env vars via `railway variables --set` +- [x] Task 16: Deploy via `railway up --detach` +- [x] Task 17: Create persistent volume at `/app/data` (50GB) +- [x] Task 18: Assign public domain (`robust-creation-production-70db.up.railway.app`) +- [x] Task 19: Verify SSE endpoint returns HTTP 200 with `text/event-stream` +- [x] Task 20: Commit all changes + +### Railway Deployment Details + +- **Project**: exemplary-renewal +- **Service**: bm-sync (formerly robust-creation) +- **URL**: https://robust-creation-production-70db.up.railway.app +- **MCP Endpoint**: https://robust-creation-production-70db.up.railway.app/mcp +- **Volume**: 50GB at `/app/data` +- **Dockerfile**: `Dockerfile.sidecar` --- @@ -243,6 +258,9 @@ basic-memory status 5. **CLI write-note uses `--folder` not `--directory`**: The MCP tool parameter is `directory` but the CLI flag is `--folder`. 6. **`BASIC_MEMORY_DEFAULT_PROJECT` doesn't rename auto-created project**: The auto-bootstrap always creates a project named "main". The env var just sets which project is default. 7. **Semantic search disabled for sidecar**: `sqlite_vec` has ELF compatibility issues under emulation. Set `BASIC_MEMORY_SEMANTIC_SEARCH_ENABLED=false` for local testing. Railway runs native amd64 so this can be re-enabled there. +8. **Railway volumes mount as root**: The `USER appuser` directive conflicts with Railway volumes. `Dockerfile.sidecar` runs as root with an entrypoint that creates dirs. +9. 
**`PORT=8000` env var required**: Railway needs explicit PORT to route traffic to the container. +10. **`RAILWAY_DOCKERFILE_PATH=Dockerfile.sidecar`**: Tells Railway to use the sidecar Dockerfile instead of the default one. ## Confidence Level: 95% diff --git a/Dockerfile.sidecar b/Dockerfile.sidecar new file mode 100644 index 000000000..a25453cff --- /dev/null +++ b/Dockerfile.sidecar @@ -0,0 +1,31 @@ +FROM python:3.12-slim-bookworm + +COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/ + +ENV PYTHONUNBUFFERED=1 \ + PYTHONDONTWRITEBYTECODE=1 \ + UV_PYTHON_INSTALL_DIR=/python \ + UV_PYTHON_PREFERENCE=only-managed + +ADD . /app +WORKDIR /app + +RUN uv python install 3.13 +RUN uv sync --locked --python 3.13 + +# Create directories and ensure entrypoint is executable +RUN mkdir -p /app/data /app/data/.config /app/data/shared && \ + chmod +x /app/entrypoint.sh + +ENV BASIC_MEMORY_HOME=/app/data/shared \ + BASIC_MEMORY_PROJECT_ROOT=/app/data \ + BASIC_MEMORY_CONFIG_DIR=/app/data/.config \ + PATH="/app/.venv/bin:$PATH" + +EXPOSE 8000 + +HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ + CMD basic-memory --version || exit 1 + +ENTRYPOINT ["/app/entrypoint.sh"] +CMD ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"] diff --git a/docker-compose.sidecar.yml b/docker-compose.sidecar.yml index 2ceca2c1b..54bd62bce 100644 --- a/docker-compose.sidecar.yml +++ b/docker-compose.sidecar.yml @@ -1,6 +1,8 @@ services: basic-memory: - build: . + build: + context: . 
+ dockerfile: Dockerfile.sidecar platform: linux/amd64 container_name: basic-memory-sidecar volumes: diff --git a/entrypoint.sh b/entrypoint.sh new file mode 100755 index 000000000..ffe1a355c --- /dev/null +++ b/entrypoint.sh @@ -0,0 +1,7 @@ +#!/bin/bash +# Ensure config and data directories exist and are writable +# Railway volumes mount as root, so we need to create subdirs +mkdir -p "${BASIC_MEMORY_CONFIG_DIR:-/app/data/.config}" +mkdir -p "${BASIC_MEMORY_HOME:-/app/data/shared}" + +exec "$@" From 5880a74c440bcd6428eb4bd6fb60204e3f1fe7b8 Mon Sep 17 00:00:00 2001 From: TylerJNewman Date: Thu, 26 Feb 2026 19:38:46 -0800 Subject: [PATCH 5/5] feat: switch sidecar transport from SSE to Streamable HTTP SSE is deprecated in the MCP spec. Streamable HTTP is the current standard and is natively supported by Claude Code (`--transport http`). Co-Authored-By: Claude Opus 4.6 Signed-off-by: TylerJNewman --- Dockerfile.sidecar | 2 +- docker-compose.sidecar.yml | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/Dockerfile.sidecar b/Dockerfile.sidecar index a25453cff..33c5969d6 100644 --- a/Dockerfile.sidecar +++ b/Dockerfile.sidecar @@ -28,4 +28,4 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \ CMD basic-memory --version || exit 1 ENTRYPOINT ["/app/entrypoint.sh"] -CMD ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"] +CMD ["basic-memory", "mcp", "--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8000"] diff --git a/docker-compose.sidecar.yml b/docker-compose.sidecar.yml index 54bd62bce..d33deab48 100644 --- a/docker-compose.sidecar.yml +++ b/docker-compose.sidecar.yml @@ -17,7 +17,7 @@ services: - BASIC_MEMORY_SEMANTIC_SEARCH_ENABLED=false ports: - "8000:8000" - command: ["basic-memory", "mcp", "--transport", "sse", "--host", "0.0.0.0", "--port", "8000"] + command: ["basic-memory", "mcp", "--transport", "streamable-http", "--host", "0.0.0.0", "--port", "8000"] restart: 
unless-stopped healthcheck: test: ["CMD", "basic-memory", "--version"]
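
---

With the sidecar now serving Streamable HTTP at `/mcp`, a client can point at it directly. A minimal sketch of a project-level `.mcp.json` for Claude Code, assuming the deployed Railway URL above and the `type: "http"` server entry format (the server name `basic-memory` is just a label):

```json
{
  "mcpServers": {
    "basic-memory": {
      "type": "http",
      "url": "https://robust-creation-production-70db.up.railway.app/mcp"
    }
  }
}
```

Equivalently, using the `--transport http` flag mentioned in the commit message: `claude mcp add --transport http basic-memory https://robust-creation-production-70db.up.railway.app/mcp`.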