This repository provides a modular, multi-container AI environment for local development, agent orchestration, and high-performance Retrieval-Augmented Generation (RAG).
It is designed as a local AI infrastructure platform—a “local brain”—where all services operate within a unified Docker network, enabling low-latency communication, shared context, and controlled execution.
The system is structured into four layers:
**1. Core infrastructure** (persistent data stores and inference runtime):
- PostgreSQL, Redis, MongoDB, MySQL
- Qdrant (vector DB), SurrealDB, Infinity
- MinIO (object storage), CouchDB
- Ollama (LLM runtime)
**2. Applications** (user-facing tools and orchestration systems):
- Open WebUI, Dify, Flowise
- AnythingLLM, RAGFlow, Open-NotebookLM
- n8n (automation), Gitea (code), SearXNG (search)
- Obsidian Remote (knowledge interface)
**3. Agent layer:**
- OpenClaw: system-level AI agent with access to internal services
- Docker Sandbox (recommended): isolated execution environment for agent tools

**4. Security layer:**
- Enforced egress control via proxy allowlists
## Requirements
- Docker
- NVIDIA Container Toolkit (optional, for GPU acceleration)
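A quick sanity check before starting (the CUDA image tag below is an arbitrary example; the GPU probe only applies if the NVIDIA toolkit is installed):

```sh
# Confirm Docker and the Compose v2 plugin are available.
docker --version
docker compose version

# Optional: verify GPU passthrough via the NVIDIA Container Toolkit.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```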
## Clone & Prepare

```sh
git clone https://github.com/xormania/ai-docker-stack
cd ai-docker-stack
mkdir obsidian_vault
```
## Configure Environment

```sh
cp .env.example .env
```

Edit `.env` and change all credentials before exposing ports.
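Strong random values can be generated with `openssl`; the variable name below is illustrative, so use the names actually present in `.env.example`:

```sh
# Generate a strong random credential (variable name is a placeholder).
echo "POSTGRES_PASSWORD=$(openssl rand -hex 32)"
```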
## Start Core + Apps

```sh
docker compose -f compose.yaml -f compose.apps.yaml up -d
```
## Optional: Add OpenClaw

```sh
docker compose -f compose.yaml -f compose.apps.yaml -f compose.openclaw.yaml up -d
```
## Optional: Dev Tools

```sh
docker compose -f compose.yaml -f compose.apps.yaml -f compose.dev.yaml up -d
```
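To avoid repeating `-f` flags, Compose also honors the `COMPOSE_FILE` environment variable (colon-separated on Linux/macOS):

```sh
# Select the file stack once per shell session, then use plain commands.
export COMPOSE_FILE=compose.yaml:compose.apps.yaml:compose.dev.yaml
docker compose up -d
docker compose ps
```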
## Startup Ordering

Core services must pass their health checks before dependent services start.
The `ollama-init` service:

- waits for Ollama
- pulls `${OLLAMA_BOOTSTRAP_MODEL}`
- exits once complete
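To watch the bootstrap pull as it happens:

```sh
# Follow the init container's logs until the model pull completes.
docker compose logs -f ollama-init
```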
All LLM-dependent services wait for model readiness. This prevents race conditions across the stack.
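This gating is what Compose health checks provide; here is a minimal sketch of observing it from the shell, assuming the inference service is named `ollama` as in the dependency map below:

```sh
# Poll the ollama container until Docker reports it healthy.
# `docker compose ps -q ollama` resolves the service name to a container ID.
until [ "$(docker inspect --format '{{.State.Health.Status}}' \
        "$(docker compose ps -q ollama)")" = "healthy" ]; do
  echo "waiting for ollama..."
  sleep 5
done
```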
All services communicate via internal Docker DNS (a resolution check follows the map):

- `open-webui` → `ollama`
- `dify` → `postgres`, `redis`, `qdrant`, `mongodb`, `couchdb`
- `flowise` → `postgres`, `qdrant`, `ollama`
- `n8n` → `postgres`, `redis`, `ollama`, `gitea`, `searxng`, `mercure`
- `anythingllm` → `ollama`, `qdrant`
- `ragflow` → `mysql`, `redis`, `minio`, `infinity`, `ollama`
- `open-notebooklm` → `surrealdb`, `qdrant`, `ollama`
- `obsidian-remote` → `couchdb`
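Service names resolve from inside any container on the network. For example (`nslookup` ships with the BusyBox userland of most Alpine-based images, though availability can vary):

```sh
# Resolve two internal service names from inside the n8n container.
docker compose exec n8n sh -lc 'nslookup ollama && nslookup qdrant'
```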
## Security Model

- Services communicate freely within internal Docker networks
- Outbound traffic is deny-by-default
- HTTPS is allowed only via an allowlist (recommended)
- OpenClaw executes tools through the Docker Sandbox
- No direct host access
- No Docker socket exposure
Example allowlist:

```
github.com
api.github.com
raw.githubusercontent.com
pypi.org
files.pythonhosted.org
registry.npmjs.org
deb.debian.org
security.debian.org
```
All outbound traffic from agent execution should pass through a proxy enforcing this policy.
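As a sketch of what enforcement looks like in practice, assume a filtering proxy reachable as `egress-proxy` on port 3128 (both the service name and the port are assumptions, not part of this stack's compose files, and `curl` must exist in the calling container):

```sh
# Route test requests through the assumed egress proxy.
docker compose exec openclaw sh -lc '
  export HTTPS_PROXY=http://egress-proxy:3128
  curl -sI https://pypi.org | head -n1       # on the allowlist: should succeed
  curl -sI https://example.com | head -n1    # not listed: should be blocked
'
```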
## Verification

Render the merged configuration, check container status, and confirm the bootstrap model is present:

```sh
docker compose config > /tmp/stack.rendered.yml
docker compose ps
docker compose exec ollama ollama list
```

Quick health check:

```sh
docker compose exec n8n sh -lc \
  'wget -qO- http://qdrant:6333/healthz && echo && wget -qO- http://ollama:11434/api/tags'
```

## Included Services

- Ollama — local LLM runtime
- Open WebUI — primary UI
- Dify — full-stack LLM app platform
- Flowise — visual LLM pipelines
- OpenClaw — system-level agent (optional)
- Jupyter — experimentation
- Open-NotebookLM — document reasoning
- Gitea — local Git hosting
- AnythingLLM, RAGFlow — RAG platforms
- Unstructured API — document parsing
- SearXNG — privacy-first search
- n8n — workflow orchestration
- Obsidian Remote — knowledge interface
- Mercure — real-time updates
- PostgreSQL, MongoDB, Redis
- Qdrant, SurrealDB, Infinity
- MySQL, MinIO, CouchDB
This stack provides:

- **Performance:** internal networking eliminates external API overhead
- **Privacy:** all data, inference, and orchestration remain local
- **Shared context:** every service shares access to:
  - the `obsidian_vault` directory
  - local repositories (Gitea)
  - internal vector memory (Qdrant; see the query example below)
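For example, any container can read the shared vector memory over the internal network (Qdrant's REST API lists collections at `/collections`):

```sh
# List Qdrant collections from inside the n8n container, over internal DNS.
docker compose exec n8n sh -lc 'wget -qO- http://qdrant:6333/collections'
```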
With OpenClaw + n8n, agents can:

- trigger workflows from notes (see the webhook sketch below)
- perform autonomous reasoning
- update codebases
- query local and external data (controlled)
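One way to wire this up is an n8n webhook reachable over the internal network. The `note-updated` path below is a hypothetical example, not something the stack predefines, and `curl` availability in the `openclaw` container is an assumption:

```sh
# Hypothetical trigger: POST to n8n's webhook endpoint (default port 5678).
# Replace "note-updated" with the path configured in your n8n workflow.
docker compose exec openclaw sh -lc \
  'curl -s -X POST http://n8n:5678/webhook/note-updated \
     -H "Content-Type: application/json" \
     -d "{\"note\": \"daily.md\"}"'
```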
Design principles:

- No Docker socket exposure
- No privileged containers
- Internal-first networking
- Explicit egress control
- Composable architecture via multiple compose files
Roadmap:

- Agent API layer (OpenClaw integration)
- Dynamic egress policies
- Model routing and optimization
- Dataset versioning via MinIO
This repository is not just a collection of tools.
It is a local AI operating environment:
- inference
- memory
- automation
- orchestration
- development
All running within a controlled, extensible Docker system.