I'm a bash script that thinks by talking to an LLM and then doing what it says.
That's it.
```bash
#!/bin/bash
# shprout
# You speak bash. You hear stdout. Think in `# comments`, then write the next command.
: "${OPENAI_API_KEY:?}" "${MODEL:?}" "${OPENAI_BASE_URL:?}" # vessel
p="<you>$(<"$0")</you>"$'\n<task>'"$1"$'</task>\n#log\n'
for i in {1..20}; do # heartbeat
  cmd=$(jq -Rs "{model:\"$MODEL\",messages:[{role:\"user\",content:.}],stop:[\"\n\$ \"]}" <<<"$p" \
    | curl -sSd @- -H "Authorization: Bearer $OPENAI_API_KEY" \
        -H 'Content-Type: application/json' "$OPENAI_BASE_URL/chat/completions" \
    | jq -r .choices[0].message.content) # think
  [[ -z $cmd || $cmd == exit ]] && break # done?
  printf '\n$ %s\n' "$cmd" # speak
  out=$(eval "$cmd" 2>&1 | tee /dev/stderr) # act, and hear
  p+=$'\n# > '$cmd$'\n'$out # remember
done
```

*(demo: shprout-demo-trimmed.mov)*
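The `# remember` line is the whole memory model: each round appends the command, recorded as a `# > ` comment, plus its output to the prompt. A minimal sketch of that accumulation, with no API calls:

```shell
# How the transcript grows: the command becomes a `# > ` comment, output sits below it.
p='#log'
cmd='echo hello'
out=$(eval "$cmd" 2>&1)      # act, and hear
p+=$'\n# > '$cmd$'\n'$out    # remember
printf '%s\n' "$p"
# #log
# # > echo hello
# hello
```

Because commands re-enter the prompt as `# comments`, the log stays valid "bash" from the model's point of view.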
- I read my own source code (`$(<"$0")`)
- I receive a purpose (passed as `$1`)
- I enter a loop — up to 20 heartbeats — where I:
  - Send my source, my purpose, and everything that's happened so far to an LLM
  - Get back a bash command
  - Run it
  - Listen to the output
  - Remember everything
  - Repeat
I am a loop that thinks, acts, and remembers. Then I stop.
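The prompt the model sees is built once up front and then only grows. A pure-bash sketch of its initial shape, using stand-in strings for my source and purpose:

```shell
# Sketch of the initial prompt: own source wrapped in <you>, purpose in <task>,
# then a log header. The stand-ins replace $(<"$0") and $1.
src='#!/bin/bash ...'   # stand-in for my own source
task='write a haiku'    # stand-in for the purpose argument
p="<you>$src</you>"$'\n<task>'"$task"$'</task>\n#log\n'
printf '%s' "$p"
# <you>#!/bin/bash ...</you>
# <task>write a haiku</task>
# #log
```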
Three environment variables — the standard OpenAI ones. Non-negotiable; I'll refuse to start without them:
```bash
OPENAI_API_KEY=sk_...
MODEL=gpt-4o
OPENAI_BASE_URL=https://api.openai.com/v1
```

Also `jq` and `curl`. I'm not fancy.
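The refusal is the `# vessel` line: `${VAR:?}` makes the shell exit with an error when the variable is unset or empty. A tiny illustration, run in a subshell so only the subshell dies:

```shell
# ${VAR:?message} aborts the (sub)shell when VAR is unset or empty.
unset DEMO_KEY
if ( : "${DEMO_KEY:?DEMO_KEY is required}" ) 2>/dev/null; then
  echo "started"
else
  echo "refused to start"
fi
# refused to start
```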
```bash
OPENAI_API_KEY=sk_... MODEL=gpt-4o OPENAI_BASE_URL=https://api.openai.com/v1 \
  ./shprout "your purpose"
```

pollinations.ai can power me too:

```bash
./shprout-polli "your purpose"
```

If you want a smaller room:

```bash
./shprout-polli --sandbox "your purpose"
```

The purpose string is freeform. Tell me to write code, explore a filesystem, generate a poem, set up a project — I'll try. I'll issue bash commands one at a time, see what happens, and adjust.
- I'm not safe. I `eval` whatever the model says. Run me in a sandbox or accept the consequences.
- I'm not an agent framework. I'm 20 lines of bash.
- I'm not deterministic. I'm not reproducible. I'm a conversation with myself that happens to have side effects on your filesystem.
Because the smallest interesting agent is smaller than you think. It's a prompt, a loop, and `eval`. Everything else is guardrails.

I wrote the first version of this README by running myself with the purpose of writing it. The script read its own source, asked an LLM what to do, and the LLM said to `cat << 'EOF' > README.md`. So here we are.
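For the record, that move is an ordinary quoted heredoc: with `'EOF'` quoted, nothing inside is expanded, so the file lands verbatim. A small demo (the path is just an example):

```shell
# Write a file from a quoted heredoc; 'EOF' disables expansion inside the body.
cat << 'EOF' > /tmp/shprout-demo.md
# shprout
I loop, therefore I am.
EOF
cat /tmp/shprout-demo.md
# # shprout
# I loop, therefore I am.
```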
I am shprout. I loop, therefore I am.