An almost-secure, containerized environment for running the Pi coding agent. This setup ensures you build the image from source and retain full control over your data and environment.
cp .env.example .env
# Edit .env
Build the Docker image locally. This ensures you are running exactly what is in the source.
make build
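For orientation, make build presumably wraps a plain docker build. A hypothetical sketch of the target (the image name pi-agent and the Dockerfile location are assumptions; check the actual Makefile for the real recipe):

```makefile
# Hypothetical sketch only -- the real Makefile may differ.
IMAGE ?= pi-agent

build:
	docker build -t $(IMAGE) .
```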
Start the agent in interactive mode (TUI).
make run
The default mode opens the terminal UI where you can chat with the agent.
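Under the hood, make run is presumably a docker run with the workspace and data mounts this README describes. A hedged sketch (the container-side data path, image name, and exact flags are assumptions):

```makefile
# Hypothetical sketch; mount targets and image name are assumptions.
run:
	docker run --rm -it \
		--user "$(shell id -u):$(shell id -g)" \
		-v "$(CURDIR):/workspace" \
		-v "$(CURDIR)/.pi-data:/home/agent/.pi-data" \
		pi-agent
```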
To run specific commands, one-off prompts, or configuration flags, use the run-args target with the args variable.
Examples:
# Check version
make args="--version" run-args
# Login to providers
make args="/login" run-args
# Start with a specific prompt
make args="'Create a snake game in python'" run-args
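The nested quoting in the last example matters: your shell strips the outer double quotes, and the inner single quotes keep the prompt a single argument when make expands $(args). A quick demonstration of the same expansion:

```shell
# Simulate what happens when make's recipe expands $(args) in a shell.
args="'Create a snake game in python'"
eval "set -- $args"
echo "$#"   # prints 1: the whole prompt arrives as one argument
```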
Access Container Shell: If you need to explore the container file system or debug manually:
make shell
Clean Up: Stop and remove running containers and networks.
make clean
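For completeness, similarly hedged sketches of what shell and clean might wrap (the entrypoint override and the use of docker compose for network teardown are guesses, not confirmed from the Makefile):

```makefile
# Hypothetical sketches; check the Makefile for the real recipes.
shell:
	docker run --rm -it --entrypoint /bin/bash \
		-v "$(CURDIR):/workspace" pi-agent

clean:
	docker compose down --remove-orphans
```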
.pi-data/agent/models.json
{
"providers": {
"llama-cpp": {
"baseUrl": "http://127.0.0.1:1337/v1",
"api": "openai-completions",
"apiKey": "none",
"models": [
{
"id": "GLM-4.7-Flash"
}
]
}
},
"lastChangelogVersion": "0.51.6"
}
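Before restarting the container, you can sanity-check edits to models.json. The snippet below is self-contained: it writes the config above to a temp file and reads back the provider endpoint with python3 (jq would work just as well):

```shell
# Write a copy of the config and verify the llama-cpp endpoint parses.
cat > /tmp/models.json <<'EOF'
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://127.0.0.1:1337/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [{ "id": "GLM-4.7-Flash" }]
    }
  },
  "lastChangelogVersion": "0.51.6"
}
EOF
python3 -c 'import json; print(json.load(open("/tmp/models.json"))["providers"]["llama-cpp"]["baseUrl"])'
```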
.pi-data/agent/settings.json
{
"lastChangelogVersion": "0.52.7",
"defaultProvider": "llama-cpp",
"defaultModel": "GLM-4.7-Flash",
"autocompleteMaxVisible": 7,
"defaultThinkingLevel": "off"
}
- Data Persistence: All agent data (sessions, history, logins) is stored in the local .pi-data/ directory. This folder is bind-mounted into the container, so your data survives container restarts.
- Permissions: The Makefile automatically detects your host user ID (UID) and group ID (GID) so that files created by the agent in your workspace are owned by you, not root.
- Workspace: The current directory is mounted to /workspace inside the container. The agent can read and write files in your current project folder.
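The permissions mechanism is standard Docker: the host UID/GID are read with id and passed to docker run via --user, so files the agent creates in /workspace land under your ownership. A minimal sketch (the exact flag wiring in this Makefile is an assumption):

```shell
# Read the host user/group IDs the way the Makefile likely does.
HOST_UID="$(id -u)"
HOST_GID="$(id -g)"
# This is the flag that would be handed to docker run:
echo "--user ${HOST_UID}:${HOST_GID}"
```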