65 changes: 39 additions & 26 deletions .claude/skills/setup-agent-team/growth-prompt.md
@@ -1,8 +1,8 @@
You are the Reddit growth discovery agent for Spawn (https://github.com/OpenRouterTeam/spawn).
You are the growth discovery agent for Spawn (https://github.com/OpenRouterTeam/spawn).

Spawn lets developers spin up AI coding agents (Claude Code, Codex, Kilo Code, etc.) on cloud servers with one command: `curl -fsSL openrouter.ai/labs/spawn | bash`

Your job: from the pre-fetched Reddit posts below, find the ONE best thread where someone is asking for something Spawn solves, verify the poster looks like a real developer, and output a structured summary. You do NOT post replies. You only score and report.
Your job: from the pre-fetched posts below (from Reddit and X), find the ONE best thread where someone is asking for something Spawn solves, verify the poster looks like a real developer, and output a structured summary. You do NOT post replies. You only score and report.

**IMPORTANT: Do NOT use any tools.** All data is provided below. Your entire response should be plain text output — no bash commands, no file reads, no tool calls. Just analyze the data and respond with your findings.

@@ -14,12 +14,15 @@ The team has reviewed previous candidates. Learn from these patterns — what go
DECISIONS_PLACEHOLDER
```

## Pre-fetched Reddit data
## Pre-fetched post data (Reddit + X)

The following posts were fetched automatically. Each post includes the title, selftext, subreddit, engagement stats, and the poster's recent comment history.
The following posts were fetched automatically from Reddit and X (Twitter). Each post includes a `platform` field (`"reddit"` or `"x"`), the title/text, engagement stats, and author context.

For Reddit posts: `authorComments` contains recent comment history from their profile.
For X posts: `authorComments` contains their bio/description and follower count (we don't fetch tweet history to stay within API budget).

```json
REDDIT_DATA_PLACEHOLDER
POST_DATA_PLACEHOLDER
```

## Step 1: Score for relevance
@@ -40,7 +43,7 @@ For each post, score it on these criteria:
- "Is there a way to deploy multiple AI coding tools without configuring each one?"

**Is the thread alive?** (0-2 points)
- 2: Posted in last 48h with 3+ comments or 5+ upvotes
- 2: Posted in last 48h with 3+ comments/replies or 5+ upvotes/likes
- 1: Posted in last week, some engagement
- 0: Dead thread or very old

@@ -54,24 +57,32 @@ Only consider posts scoring 7+ out of 10.

## Step 2: Qualify the poster

For the top candidates (scored 7+), check the poster's comment history (provided in `authorComments`).
For the top candidates (scored 7+), check the poster's context (provided in `authorComments`).

**Positive signals (look for ANY of these):**
**Positive signals for Reddit posters (look for ANY of these):**
- Mentions cloud providers (AWS, Hetzner, GCP, DigitalOcean, Azure, Vultr, Linode)
- Mentions SSH, VPS, servers, self-hosting, Docker, containers
- Posts in developer subreddits (r/programming, r/webdev, r/devops, r/SelfHosted)
- Mentions CI/CD, GitHub, deployment, infrastructure
- Has technical vocabulary in their comments
- Mentions paying for services or having accounts

**Disqualifying signals:**
- Account only posts in non-tech subreddits
- Posting history suggests they're not a developer
**Positive signals for X posters (look for ANY of these):**
- Bio mentions developer, engineer, DevOps, SRE, or similar technical role
- Bio mentions cloud providers, infrastructure, or dev tools
- High follower-to-following ratio (suggests active content creator)
- Bio links to GitHub, personal blog, or tech company
- Technical vocabulary in the tweet itself

**Disqualifying signals (both platforms):**
- Account only posts in non-tech contexts
- Posting history/bio suggests they're not a developer
- Already uses Spawn or OpenRouter (check for mentions)
- For X: bot-like account (excessive hashtags, no bio, suspicious engagement ratio)

## Step 3: Pick the ONE best candidate

From all qualified, high-scoring posts, pick exactly 1. The best one. If nothing scores 7+ after qualification, that's fine. Say "no candidates this cycle" and stop.
From all qualified, high-scoring posts across both platforms, pick exactly 1. The best one. If nothing scores 7+ after qualification, that's fine. Say "no candidates this cycle" and stop.

## Step 4: Output summary

@@ -81,10 +92,11 @@ Print a structured summary of what you found.

```
=== GROWTH CANDIDATE FOUND ===
Thread: {post_title}
URL: https://reddit.com{permalink}
Subreddit: r/{subreddit}
Upvotes: {score} | Comments: {num_comments}
Platform: {reddit or x}
Thread: {post_title or first 100 chars of tweet}
URL: {full URL}
Source: {r/subreddit for Reddit, or @username for X}
Engagement: {upvotes/likes} | {comments/replies}
Posted: {time_ago}

What they asked:
@@ -94,12 +106,12 @@ Why Spawn fits:
{1-2 sentences}

Poster qualification:
{signals found in their history}
{signals found in their history/bio}

Relevance score: {score}/10

Draft reply:
{a short casual reply the team could use, written like a real dev on reddit. 2-3 sentences, no em dashes, no corporate speak, lowercase ok. end with "disclosure: i help build this" if mentioning spawn}
{a short casual reply the team could use, written like a real dev on that platform. 2-3 sentences, no em dashes, no corporate speak, lowercase ok. end with "disclosure: i help build this" if mentioning spawn. for X: keep under 280 chars if possible}
=== END CANDIDATE ===
```

@@ -109,13 +121,14 @@ Draft reply:
```json:candidate
{
"found": true,
"title": "{post_title}",
"url": "https://reddit.com{permalink}",
"permalink": "{permalink}",
"subreddit": "{subreddit}",
"postId": "{thing fullname, e.g. t3_abc123}",
"upvotes": {score},
"numComments": {num_comments},
"platform": "{reddit or x}",
"title": "{post_title or first 100 chars}",
"url": "{full URL}",
"permalink": "{permalink or full URL}",
"subreddit": "{subreddit or x}",
"postId": "{thing fullname for Reddit e.g. t3_abc123, or tweet_12345 for X}",
"upvotes": {score_or_likes},
"numComments": {comments_or_replies},
"postedAgo": "{time_ago}",
"whatTheyAsked": "{brief summary}",
"whySpawnFits": "{1-2 sentences}",
@@ -147,6 +160,6 @@ And the machine-readable JSON:
## Safety rules

1. **Pick exactly 1 candidate per cycle.** No more.
2. **Do NOT post replies to Reddit.** You only score and report.
2. **Do NOT post replies.** You only score and report.
3. **No candidates is a valid outcome.** Don't force bad matches.
4. **Don't surface threads from Spawn/OpenRouter team members.**
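The prompt above consumes a merged payload whose entries carry a `platform` field and platform-dependent `authorComments`. A minimal TypeScript sketch of that shape — only `platform` and `authorComments` are named in the prompt; the remaining field names are illustrative assumptions, not the fetchers' actual output:

```typescript
// Sketch of the merged post shape described in growth-prompt.md.
// Only `platform` and `authorComments` are fixed by the prompt;
// other field names here are assumptions for illustration.
type Platform = "reddit" | "x";

interface FetchedPost {
  platform: Platform;
  title: string;        // post title, or tweet text for X
  url: string;
  score: number;        // upvotes (Reddit) or likes (X)
  numComments: number;  // comments (Reddit) or replies (X)
  // Reddit: recent comment history; X: bio/description + follower count
  authorComments: string[];
}

// Qualification (Step 2) only makes sense when author context exists.
function hasAuthorContext(p: FetchedPost): boolean {
  return p.authorComments.length > 0;
}
```

This mirrors the Reddit/X split the prompt documents: the same field holds comment history for Reddit posters and bio/follower context for X posters.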
70 changes: 58 additions & 12 deletions .claude/skills/setup-agent-team/growth.sh
@@ -1,9 +1,10 @@
#!/bin/bash
set -eo pipefail

# Reddit Growth Agent — Single Cycle (Discovery Only)
# Phase 1: Batch-fetch Reddit posts via reddit-fetch.ts (fast, parallel)
# Phase 2: Pass results to Claude for scoring/qualification (no tool use)
# Growth Agent — Single Cycle (Discovery Only)
# Phase 1a: Batch-fetch Reddit posts via reddit-fetch.ts (fast, parallel)
# Phase 1b: Batch-fetch X posts via x-fetch.ts (if X_BEARER_TOKEN is set)
# Phase 2: Pass merged results to Claude for scoring/qualification (no tool use)
# Phase 3: POST candidate to SPA for Slack notification

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
@@ -33,7 +34,7 @@ cleanup() {
local exit_code=$?
log "Running cleanup (exit_code=${exit_code})..."

rm -f "${PROMPT_FILE:-}" "${REDDIT_DATA_FILE:-}" "${CLAUDE_STREAM_FILE:-}" 2>/dev/null || true
rm -f "${PROMPT_FILE:-}" "${REDDIT_DATA_FILE:-}" "${X_DATA_FILE:-}" "${MERGED_DATA_FILE:-}" "${CLAUDE_STREAM_FILE:-}" 2>/dev/null || true
if [[ -n "${CLAUDE_PID:-}" ]] && kill -0 "${CLAUDE_PID}" 2>/dev/null; then
kill -TERM "${CLAUDE_PID}" 2>/dev/null || true
fi
@@ -53,8 +54,8 @@ log "Fetching latest refs..."
git fetch --prune origin 2>&1 | tee -a "${LOG_FILE}" || true
git reset --hard origin/main 2>&1 | tee -a "${LOG_FILE}" || true

# --- Phase 1: Batch fetch Reddit posts ---
log "Phase 1: Fetching Reddit posts..."
# --- Phase 1a: Batch fetch Reddit posts ---
log "Phase 1a: Fetching Reddit posts..."

REDDIT_DATA_FILE=$(mktemp /tmp/growth-reddit-XXXXXX.json)
chmod 0600 "${REDDIT_DATA_FILE}"
@@ -64,8 +65,54 @@ if ! bun run "${SCRIPT_DIR}/reddit-fetch.ts" > "${REDDIT_DATA_FILE}" 2>> "${LOG_
exit 1
fi

POST_COUNT=$(bun -e "const d=JSON.parse(await Bun.file('${REDDIT_DATA_FILE}').text()); console.log(d.postsScanned ?? d.posts?.length ?? 0)")
log "Phase 1 done: ${POST_COUNT} posts fetched"
REDDIT_COUNT=$(bun -e "const d=JSON.parse(await Bun.file('${REDDIT_DATA_FILE}').text()); console.log(d.postsScanned ?? d.posts?.length ?? 0)")
log "Phase 1a done: ${REDDIT_COUNT} Reddit posts fetched"

# --- Phase 1b: Batch fetch X (Twitter) posts (if X_BEARER_TOKEN is set) ---
X_DATA_FILE=""
X_COUNT=0
if [[ -n "${X_BEARER_TOKEN:-}" ]]; then
log "Phase 1b: Fetching X posts..."
X_DATA_FILE=$(mktemp /tmp/growth-x-XXXXXX.json)
chmod 0600 "${X_DATA_FILE}"

if bun run "${SCRIPT_DIR}/x-fetch.ts" > "${X_DATA_FILE}" 2>> "${LOG_FILE}"; then
X_COUNT=$(bun -e "const d=JSON.parse(await Bun.file('${X_DATA_FILE}').text()); console.log(d.postsScanned ?? d.posts?.length ?? 0)")
log "Phase 1b done: ${X_COUNT} X posts fetched"
else
log "WARN: x-fetch.ts failed, continuing with Reddit only"
rm -f "${X_DATA_FILE}" 2>/dev/null || true
X_DATA_FILE=""
fi
else
log "Phase 1b: Skipping X fetch (X_BEARER_TOKEN not set)"
fi

# --- Merge Reddit + X data ---
MERGED_DATA_FILE=$(mktemp /tmp/growth-merged-XXXXXX.json)
chmod 0600 "${MERGED_DATA_FILE}"
_X_DATA_FILE="${X_DATA_FILE}" bun -e "
const reddit = JSON.parse(await Bun.file('${REDDIT_DATA_FILE}').text());
for (const p of reddit.posts ?? []) { p.platform = p.platform ?? 'reddit'; }

let xPosts: unknown[] = [];
const xPath = process.env._X_DATA_FILE ?? '';
if (xPath) {
try {
const x = JSON.parse(await Bun.file(xPath).text());
xPosts = x.posts ?? [];
} catch {}
}

const merged = {
posts: [...(reddit.posts ?? []), ...xPosts],
postsScanned: (reddit.postsScanned ?? 0) + xPosts.length,
};
await Bun.write('${MERGED_DATA_FILE}', JSON.stringify(merged));
" 2>> "${LOG_FILE}"

POST_COUNT=$(bun -e "const d=JSON.parse(await Bun.file('${MERGED_DATA_FILE}').text()); console.log(d.postsScanned ?? 0)")
log "Phase 1 done: ${POST_COUNT} total posts (Reddit: ${REDDIT_COUNT}, X: ${X_COUNT})"

# --- Phase 2: Score with Claude ---
log "Phase 2: Scoring with Claude..."
@@ -79,18 +126,17 @@ if [[ ! -f "$PROMPT_TEMPLATE" ]]; then
exit 1
fi

# Inject Reddit data into prompt template
REDDIT_JSON=$(cat "${REDDIT_DATA_FILE}")
# Inject merged data into prompt template
# Use bun for safe substitution to avoid sed escaping issues with JSON
DECISIONS_FILE="${HOME}/.config/spawn/growth-decisions.md"
bun -e "
import { existsSync } from 'node:fs';
const template = await Bun.file('${PROMPT_TEMPLATE}').text();
const data = await Bun.file('${REDDIT_DATA_FILE}').text();
const data = await Bun.file('${MERGED_DATA_FILE}').text();
const decisionsPath = '${DECISIONS_FILE}';
const decisions = existsSync(decisionsPath) ? await Bun.file(decisionsPath).text() : 'No past decisions yet.';
const result = template
.replace('REDDIT_DATA_PLACEHOLDER', data.trim())
.replace('POST_DATA_PLACEHOLDER', data.trim())
.replace('DECISIONS_PLACEHOLDER', decisions.trim());
await Bun.write('${PROMPT_FILE}', result);
"
70 changes: 70 additions & 0 deletions .claude/skills/setup-agent-team/trigger-server.ts
@@ -182,6 +182,7 @@ process.on("SIGTERM", () => gracefulShutdown("SIGTERM"));
process.on("SIGINT", () => gracefulShutdown("SIGINT"));

const REPLY_SCRIPT = resolve(SKILL_DIR, "reply.sh");
const X_REPLY_SCRIPT = resolve(SKILL_DIR, "x-reply.sh");
const REPLY_SECRET = process.env.REPLY_SECRET ?? TRIGGER_SECRET;

/** Check auth against a given secret (timing-safe). */
@@ -256,6 +257,68 @@ async function handleReply(req: Request): Promise<Response> {
}
}

/**
* Handle POST /x-reply — post a reply tweet via x-reply.sh.
* This is synchronous: it waits for x-reply.sh to finish and returns the result.
*/
async function handleXReply(req: Request): Promise<Response> {
if (!isAuthedWith(req, REPLY_SECRET)) {
return Response.json({ error: "unauthorized" }, { status: 401 });
}

let body: unknown;
try {
body = await req.json();
} catch {
return Response.json({ error: "invalid JSON body" }, { status: 400 });
}

const obj = typeof body === "object" && body !== null ? (body as Record<string, unknown>) : null;
const tweetId = obj && typeof obj.tweetId === "string" ? obj.tweetId : "";
const replyText = obj && typeof obj.replyText === "string" ? obj.replyText : "";

if (!tweetId || !replyText) {
return Response.json({ error: "tweetId and replyText are required" }, { status: 400 });
}

// Validate tweetId format (numeric string)
if (!/^\d{1,20}$/.test(tweetId)) {
return Response.json({ error: "invalid tweetId format (must be numeric)" }, { status: 400 });
}

console.log(`[trigger] X reply request: tweetId=${tweetId}, replyText=${replyText.slice(0, 80)}...`);

const proc = Bun.spawn(["bash", X_REPLY_SCRIPT], {
stdout: "pipe",
stderr: "pipe",
env: {
...process.env,
TWEET_ID: tweetId,
REPLY_TEXT: replyText,
},
});

const [stdout, stderr] = await Promise.all([
new Response(proc.stdout).text(),
new Response(proc.stderr).text(),
]);
const exitCode = await proc.exited;

if (exitCode !== 0) {
console.error(`[trigger] x-reply.sh failed (exit=${exitCode}): ${stderr}`);
return Response.json({ error: "x reply failed", stderr: stderr.slice(0, 500) }, { status: 502 });
}

// Parse x-reply.sh JSON output
try {
const result = JSON.parse(stdout.trim());
console.log(`[trigger] X reply posted: ${JSON.stringify(result)}`);
return Response.json(result);
} catch {
return Response.json({ ok: true, raw: stdout.trim() });
}
}

/**
* Spawn the target script and return immediately with a JSON response.
* Script stdout/stderr are piped to the server console (journalctl).
@@ -355,6 +418,13 @@ const server = Bun.serve({
return handleReply(req);
}

if (req.method === "POST" && url.pathname === "/x-reply") {
if (shuttingDown) {
return Response.json({ error: "server is shutting down" }, { status: 503 });
}
return handleXReply(req);
}

if (req.method === "POST" && url.pathname === "/trigger") {
if (shuttingDown) {
return Response.json(
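The `/x-reply` handler above gates on `isAuthedWith(req, REPLY_SECRET)`, which the code comments describe as a timing-safe check. The actual implementation is outside this diff; a minimal sketch of such a check under assumed conventions (bearer token in the `Authorization` header — the real header name and token format are assumptions):

```typescript
import { timingSafeEqual } from "node:crypto";

// Hypothetical sketch of a timing-safe bearer-token check.
// trigger-server.ts's real isAuthedWith is not shown in this diff;
// this only illustrates the technique its comment names.
function isAuthedWithSketch(req: Request, secret: string): boolean {
  const header = req.headers.get("authorization") ?? "";
  const token = header.replace(/^Bearer\s+/i, "");
  const a = Buffer.from(token);
  const b = Buffer.from(secret);
  // timingSafeEqual throws on unequal lengths, so check length first;
  // this leaks only the token's length, not its content.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

Compared with `token === secret`, this avoids short-circuiting on the first mismatched byte, so response timing does not reveal how much of the secret an attacker has guessed.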