From 6352c602430da14682171a04c0621c20a9a29d72 Mon Sep 17 00:00:00 2001 From: Mohammed Aqib Date: Wed, 15 Apr 2026 19:36:50 +0530 Subject: [PATCH 01/22] Revise README for PipelineMedic project Updated the README to reflect the new project name and details about PipelineMedic, including problem statement, proposed solution, features, and tech stack. --- README.md | 117 +++++++++++++++++++++++++++++------------------------- 1 file changed, 64 insertions(+), 53 deletions(-) diff --git a/README.md b/README.md index c5c886b3e..3c33cd851 100644 --- a/README.md +++ b/README.md @@ -1,86 +1,97 @@ -# HackToFuture 4.0 β€” Template +# πŸš€ PipelineMedic +### Self-Learning AI DevOps Agent for CI/CD Failure Diagnosis and Auto-Healing -Welcome to your official HackToFuture 4 repository. +--- -This repository template will be used for development, tracking progress, and final submission of your project. Ensure that all work is committed here within the allowed hackathon duration. +## 🧩 Problem Statement / Idea ---- +### What is the problem? +Modern CI/CD pipelines frequently fail due to dependency issues, configuration errors, missing environment variables, and failing tests. Developers must manually inspect logs and debug issues, which is time-consuming and repetitive. -### Instructions for the teams: +### Why is it important? +- Debugging CI/CD failures takes **30–60 minutes per issue** +- Slows down development and deployment cycles +- Requires experienced DevOps knowledge +- Increases operational overhead -- Fork the Repository and name the forked repo in this convention: hacktofuture4-team_id (for eg: hacktofuture4-A01) +### Who are the target users? +- Software Developers +- DevOps Engineers +- Site Reliability Engineers (SREs) +- Engineering teams using CI/CD pipelines --- -## Rules +## πŸ’‘ Proposed Solution -- Work must be done ONLY in the forked repository -- Only Four Contributors are allowed. -- After 36 hours, Please make PR to the Main Repository. 
A Form will be sent to fill the required information. -- Do not copy code from other teams -- All commits must be from individual GitHub accounts -- Please provide meaningful commits for tracking. -- Do not share your repository with other teams -- Final submission must be pushed before the deadline -- Any violation may lead to disqualification +### What are we building? +PipelineMedic is an **AI-powered DevOps agent** that automatically detects CI/CD failures, analyzes logs, generates fixes, and creates pull requests. ---- +### How does it solve the problem? +- Detects pipeline failures via GitHub Webhooks +- Fetches logs and analyzes them using AI (Llama3 / Mixtral) +- Identifies root cause and generates fix +- Creates pull request automatically +- Re-runs pipeline to validate fix -# The Final README Template +### What makes it unique? -## Problem Statement / Idea +πŸ”₯ **Self-Learning Failure Memory** +- Stores past failures and fixes +- Reuses solutions instantly -Clearly describe the problem you are solving. +⚑ **Auto-Heal Mode** +- Automatically resolves repeated failures -- What is the problem? -- Why is it important? -- Who are the target users? +🧠 **Confidence Scoring** +- Determines reliability of fixes ---- +πŸ›‘οΈ **Risk-Based Automation** +- Ensures safe deployment decisions -## Proposed Solution +--- -Explain your approach: +## βš™οΈ Features -- What are you building? -- How does it solve the problem? -- What makes your solution unique? 
+- πŸ” Automatic CI/CD failure detection +- 🧠 AI-based log analysis and root cause detection +- πŸ”§ Automated fix generation +- πŸ” Pull request creation +- ♻️ Self-learning failure memory *(unique)* +- ⚑ Auto-heal for repeated issues *(unique)* +- πŸ“Š Confidence scoring +- πŸ›‘οΈ Risk-based governance --- -## Features +## 🧰 Tech Stack -List the core features of your project: +### Frontend +- (Optional) React / Next.js -- Feature 1 -- Feature 2 -- Feature 3 +### Backend +- Python +- FastAPI ---- - -## Tech Stack +### Database +- SQLite / JSON -Mention all technologies used: +### APIs / Services +- Groq API (Llama3 / Mixtral) +- GitHub REST API +- GitHub Webhooks -- Frontend: -- Backend: -- Database: -- APIs / Services: -- Tools / Libraries: +### Tools / Libraries +- Uvicorn +- Requests +- Ngrok (for webhook testing) --- -## Project Setup Instructions +## ⚑ Project Setup Instructions -Provide clear steps to run your project: +### 1. Clone the repository ```bash -# Clone the repository git clone - -# Install dependencies -... - -# Run the project -... 
-``` +cd pipeline-medic From febf10312711f67cf270f858057f4b2f0accc8f0 Mon Sep 17 00:00:00 2001 From: aqib053 Date: Thu, 16 Apr 2026 02:45:40 +0530 Subject: [PATCH 02/22] Add PipelineMedic MVP: FastAPI webhook, Groq analysis, Telegram, optional GitHub PR Made-with: Cursor --- .env.example | 15 + .gitignore | 11 + README.md | 46 ++ app.py | 5 + demo.sh | 31 + examples/demo-repo/.github/workflows/ci.yml | 56 ++ examples/demo-repo/README.md | 22 + examples/demo-repo/app.py | 4 + examples/demo-repo/requirements.txt | 1 + main.py | 728 ++++++++++++++++++++ requirements.txt | 4 + vercel.json | 4 + 12 files changed, 927 insertions(+) create mode 100644 .env.example create mode 100644 .gitignore create mode 100644 README.md create mode 100644 app.py create mode 100755 demo.sh create mode 100644 examples/demo-repo/.github/workflows/ci.yml create mode 100644 examples/demo-repo/README.md create mode 100644 examples/demo-repo/app.py create mode 100644 examples/demo-repo/requirements.txt create mode 100644 main.py create mode 100644 requirements.txt create mode 100644 vercel.json diff --git a/.env.example b/.env.example new file mode 100644 index 000000000..375aa9343 --- /dev/null +++ b/.env.example @@ -0,0 +1,15 @@ +# Copy to .env and fill in. Hackathon MVP: at least Groq + Telegram for the full demo. 
+ +# --- MVP (recommended for judges) --- +GROQ_API_KEY= +GROQ_MODEL=llama-3.3-70b-versatile +TELEGRAM_BOT_TOKEN= +TELEGRAM_CHAT_ID= +TELEGRAM_ENABLED=true + +# --- Optional stretch goals --- +# GitHub autofix PR (leave blank to skip) +GITHUB_TOKEN= +GITHUB_DEFAULT_OWNER= +GITHUB_BASE_BRANCH=main +GITHUB_PR_REVIEWERS= diff --git a/.gitignore b/.gitignore new file mode 100644 index 000000000..b29203098 --- /dev/null +++ b/.gitignore @@ -0,0 +1,11 @@ +.env +.env.* +!.env.example +__pycache__/ +*.py[cod] +.pytest_cache/ +.venv/ +venv/ +data/ +*.egg-info/ +.DS_Store diff --git a/README.md b/README.md new file mode 100644 index 000000000..2ac688c68 --- /dev/null +++ b/README.md @@ -0,0 +1,46 @@ +# PipelineMedic (Hackathon MVP) + +**Push β†’ CI fails β†’ POST log to this API β†’ Groq explains the failure β†’ Telegram alerts your team.** + +## What ships in the MVP + +| Layer | What | +|--------|------| +| **Core** | `POST /webhook` with `repository` + `log` (or `log_text`) β†’ JSON with diagnosis + suggested fix | +| **LLM** | Groq when `GROQ_API_KEY` is set; otherwise fast **rule-based** fallback (good for offline demos) | +| **Alerts** | Structured message to **Telegram** (plus console); set `TELEGRAM_*` | +| **Autofix PR** | On **auto_fix** (fixable + confidence > 0.7): commits a patch and opens a **PR** when `GITHUB_TOKEN` is set; **Telegram** includes the PR link after the PR step | +| **Extra** | JSON memory under `data/` (optional for demos) | + +## Quick run (judges / local) + +```bash +python3 -m venv .venv && source .venv/bin/activate +pip install -r requirements.txt +cp .env.example .env # fill GROQ + Telegram at minimum +python main.py # http://127.0.0.1:8000 +``` + +In another terminal: + +```bash +chmod +x demo.sh && ./demo.sh +``` + +Open `GET http://127.0.0.1:8000/` β€” should return `{"status":"ok",...}`. 
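+## How routing works (auto_fix vs notify_only)
+
+The **auto_fix** threshold from the table above (fixable **and** confidence > 0.7) can be sketched as a tiny routing function — this mirrors the `decide()` logic in `main.py`; anything below the bar, or not safely patchable, only notifies:
+
+```python
+def decide(analysis: dict) -> str:
+    """Route a diagnosis: open an autofix PR only when it is both
+    fixable and high-confidence; otherwise just alert the team."""
+    try:
+        conf = float(analysis.get("confidence", 0))
+    except (TypeError, ValueError):
+        conf = 0.0  # malformed confidence never triggers an automated patch
+    if analysis.get("fixable") and conf > 0.7:
+        return "auto_fix"
+    return "notify_only"
+
+print(decide({"fixable": True, "confidence": 0.75}))   # auto_fix
+print(decide({"fixable": True, "confidence": 0.6}))    # notify_only (below threshold)
+print(decide({"fixable": False, "confidence": 0.95}))  # notify_only (not safely patchable)
+```
+
+Being conservative here is deliberate: a missed autofix costs one manual review, while a wrong automated patch costs trust in the whole pipeline.
+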
+ +## Deploy (public webhook) + +Use **Vercel**: set the same env vars in the project dashboard, deploy this repo, then call `https://.vercel.app/webhook` from GitHub Actions (store URL in `PIPELINEMEDIC_WEBHOOK_URL`). + +## Full judge demo (push β†’ fail β†’ rectify β†’ Telegram β†’ PR) + +Use the sample app in **`examples/demo-repo/`**: copy it into a **separate** GitHub repo, add the webhook secret, push. See that folder’s `README.md`. For **PR creation**, add `GITHUB_TOKEN` on Vercel with access to that demo repo. + +## Environment variables + +See `.env.example`. **LLM + alerts:** `GROQ_API_KEY`, `TELEGRAM_BOT_TOKEN`, `TELEGRAM_CHAT_ID`. **Real PRs:** add `GITHUB_TOKEN` (classic PAT or fine-grained with Contents + PRs on the target repo) and ensure `repository` is `owner/repo` (or set `GITHUB_DEFAULT_OWNER` + short name). + +## Out of scope (MVP) + +Auto-merge, Slack/Teams, multi-tenant DB, guaranteed correct fixes for every error type. diff --git a/app.py b/app.py new file mode 100644 index 000000000..05db44c36 --- /dev/null +++ b/app.py @@ -0,0 +1,5 @@ +"""Vercel entrypoint β€” re-exports ASGI app.""" + +from main import app + +__all__ = ["app"] diff --git a/demo.sh b/demo.sh new file mode 100755 index 000000000..ecf518042 --- /dev/null +++ b/demo.sh @@ -0,0 +1,31 @@ +#!/usr/bin/env bash +# Hackathon demo: POST a sample failing CI log to the local webhook. 
+# Usage: ./demo.sh [webhook_url] +# PIPELINEMEDIC_URL=http://127.0.0.1:8000 ./demo.sh + +set -euo pipefail +URL="${1:-${PIPELINEMEDIC_URL:-http://127.0.0.1:8000}/webhook}" + +exec python3 - "$URL" <<'PY' +import json, sys, urllib.request + +url = sys.argv[1] +body = json.dumps( + { + "repository": "demo/hackathon", + "log": "ModuleNotFoundError: No module named 'requests'", + } +).encode("utf-8") +req = urllib.request.Request( + url, + data=body, + headers={"Content-Type": "application/json"}, + method="POST", +) +with urllib.request.urlopen(req, timeout=30) as resp: + raw = resp.read().decode("utf-8", errors="replace") +try: + print(json.dumps(json.loads(raw), indent=2)) +except json.JSONDecodeError: + print(raw) +PY diff --git a/examples/demo-repo/.github/workflows/ci.yml b/examples/demo-repo/.github/workflows/ci.yml new file mode 100644 index 000000000..54adb4250 --- /dev/null +++ b/examples/demo-repo/.github/workflows/ci.yml @@ -0,0 +1,56 @@ +# Copy this folder into a GitHub repo, add secret PIPELINEMEDIC_WEBHOOK_URL, push β†’ fail β†’ webhook β†’ Telegram (+ PR if token set on Vercel). 
+ +name: demo-ci-pipelinemedic + +on: + push: + branches: [main, master] + workflow_dispatch: + +jobs: + demo: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v4 + + - uses: actions/setup-python@v5 + with: + python-version: "3.12" + + - name: Install deps + run: pip install -r requirements.txt + + - name: Run app (expected to fail until requests is added) + id: runapp + continue-on-error: true + run: python app.py 2>&1 | tee run.log + + - name: Notify PipelineMedic on failure + if: steps.runapp.outcome == 'failure' + env: + WEBHOOK_URL: ${{ secrets.PIPELINEMEDIC_WEBHOOK_URL }} + run: | + python3 << 'PY' + import json, os, urllib.request, pathlib + url = (os.environ.get("WEBHOOK_URL") or "").strip() + if not url: + print("Skip: set repo secret PIPELINEMEDIC_WEBHOOK_URL") + raise SystemExit(0) + repo = os.environ["GITHUB_REPOSITORY"] + log = pathlib.Path("run.log").read_text(encoding="utf-8", errors="replace") + if len(log) < 50: + log = "ModuleNotFoundError: No module named 'requests'" + body = json.dumps({"repository": repo, "log_text": log}).encode("utf-8") + req = urllib.request.Request( + url, + data=body, + headers={"Content-Type": "application/json"}, + method="POST", + ) + with urllib.request.urlopen(req, timeout=60) as resp: + print(resp.read().decode("utf-8", errors="replace")) + PY + + - name: Mark job failed (red X) if app failed + if: steps.runapp.outcome == 'failure' + run: exit 1 diff --git a/examples/demo-repo/README.md b/examples/demo-repo/README.md new file mode 100644 index 000000000..fd5a59d75 --- /dev/null +++ b/examples/demo-repo/README.md @@ -0,0 +1,22 @@ +# PipelineMedic demo app + +This mini repo is meant to be **copied into its own GitHub repository** (not necessarily the PipelineMedic service repo). + +## What happens + +1. `app.py` imports `requests`, but `requirements.txt` does not list it β†’ **CI fails** with `ModuleNotFoundError`. +2. The workflow POSTs the **real log** to your **PipelineMedic** `/webhook`. +3. 
PipelineMedic analyzes (Groq), may open a **PR** adding `requests` if `GITHUB_TOKEN` is set on the server, and sends **Telegram**. + +## Setup + +1. Create a new GitHub repo (e.g. `yourname/pipelinemedic-demo`). +2. Copy **these files** into it (same paths), commit, push. +3. In that repo: **Settings β†’ Secrets and variables β†’ Actions β†’ New repository secret** + - Name: `PIPELINEMEDIC_WEBHOOK_URL` + - Value: `https://.vercel.app/webhook` +4. On **Vercel** (PipelineMedic project): set `GITHUB_TOKEN` to a PAT that can push branches + open PRs **on this demo repo** (and redeploy). + +## After the first PR merges + +Add `requests` to `requirements.txt` (or merge PipelineMedic’s PR). The next push should go **green**. diff --git a/examples/demo-repo/app.py b/examples/demo-repo/app.py new file mode 100644 index 000000000..b6bd31bfd --- /dev/null +++ b/examples/demo-repo/app.py @@ -0,0 +1,4 @@ +# Demo: CI fails until `requests` is listed in requirements.txt (PipelineMedic can suggest a PR). +import requests + +print("ok", requests.__version__) diff --git a/examples/demo-repo/requirements.txt b/examples/demo-repo/requirements.txt new file mode 100644 index 000000000..ab0ae2199 --- /dev/null +++ b/examples/demo-repo/requirements.txt @@ -0,0 +1 @@ +# Demo: intentionally missing `requests` so the first CI run fails with ModuleNotFoundError. diff --git a/main.py b/main.py new file mode 100644 index 000000000..88bb5a2ce --- /dev/null +++ b/main.py @@ -0,0 +1,728 @@ +""" +PipelineMedic β€” intended flow: + + CI fails β†’ POST log here β†’ LLM/rules analyze β†’ if high-confidence fixable (auto_fix), + apply patch + open GitHub PR when GITHUB_TOKEN is set β†’ Telegram + console notify + (message includes PR link when a PR was opened). + +Env: GROQ_API_KEY, TELEGRAM_*, GITHUB_* (token required for real PRs). +CI: POST { "repository", "log" | "log_text" } to /webhook. 
+""" + +from __future__ import annotations + +import base64 +import json +import os +import re +from datetime import datetime, timezone +from pathlib import Path +from typing import Any, Literal + +import requests +from dotenv import load_dotenv +from fastapi import FastAPI, Request +from fastapi.middleware.cors import CORSMiddleware +from fastapi.responses import JSONResponse + +load_dotenv() + +# --- Groq + rule-based analysis ------------------------------------------------- + +DEFAULT_GROQ_MODEL = "llama-3.3-70b-versatile" +GROQ_URL = "https://api.groq.com/openai/v1/chat/completions" +SYSTEM_PROMPT = """You are a senior engineer diagnosing a failed CI run after a code push. +Respond with ONLY valid JSON, no markdown, with keys: +root_cause (string): one or two sentences β€” what failed and why. +fix (string): concrete steps the developer should take (commands, file edits, or checks). +confidence (number 0-1), risk ("LOW" or "HIGH"), +fixable (boolean): true only if a safe automated patch is realistic (e.g. missing dep in requirements.txt). +file (string): primary file to change, or empty string. 
+ +Be conservative: fixable true mainly for clear dependency / import / manifest issues.""" + + +def _rule_based_analysis(log_text: str) -> dict[str, Any]: + lower = log_text.lower() + m = re.search(r"No module named ['\"]([^'\"]+)['\"]", log_text, re.IGNORECASE) + if not m: + m = re.search( + r"ModuleNotFoundError:\s*No module named\s+([A-Za-z_][A-Za-z0-9_.]*)", + log_text, + re.IGNORECASE, + ) + mod = m.group(1).strip() if m else None + if mod: + pkg = mod.split(".")[0] + return { + "root_cause": f"Missing Python package/module: {mod}", + "fix": f"Add `{pkg}` to requirements.txt (or install in CI) and pin a version if needed.", + "confidence": 0.75, + "risk": "LOW", + "fixable": True, + "file": "requirements.txt", + } + if "pip: command not found" in lower or "pip: not found" in lower: + return { + "root_cause": "pip not available in CI environment", + "fix": "Ensure Python/pip is installed in the workflow or use a setup action.", + "confidence": 0.55, + "risk": "HIGH", + "fixable": False, + "file": "", + } + if "error: failed to solve" in lower or "could not find a version" in lower: + return { + "root_cause": "Dependency resolution failure", + "fix": "Check version constraints in requirements.txt / lockfile.", + "confidence": 0.6, + "risk": "HIGH", + "fixable": True, + "file": "requirements.txt", + } + return { + "root_cause": "Unclassified CI failure (rule-based fallback)", + "fix": "Inspect the full log near the error lines and reproduce locally.", + "confidence": 0.35, + "risk": "HIGH", + "fixable": False, + "file": "", + } + + +def _normalize_analysis(raw: dict[str, Any]) -> dict[str, Any]: + risk = str(raw.get("risk", "HIGH")).upper() + if risk not in ("LOW", "HIGH"): + risk = "HIGH" + try: + conf = float(raw.get("confidence", 0)) + except (TypeError, ValueError): + conf = 0.0 + conf = max(0.0, min(1.0, conf)) + return { + "root_cause": str(raw.get("root_cause", "")).strip() or "Unknown", + "fix": str(raw.get("fix", "")).strip() or "No suggestion", + 
"confidence": conf, + "risk": risk, + "fixable": bool(raw.get("fixable", False)), + "file": str(raw.get("file", "") or ""), + } + + +def _parse_json_content(content: str) -> dict[str, Any]: + content = content.strip() + if content.startswith("```"): + content = re.sub(r"^```[a-zA-Z]*\s*", "", content) + content = re.sub(r"\s*```$", "", content) + return json.loads(content) + + +def analyze_log(log_text: str) -> tuple[dict[str, Any], str]: + key = os.getenv("GROQ_API_KEY", "").strip() + if not key: + return _normalize_analysis(_rule_based_analysis(log_text)), "rules" + + model = os.getenv("GROQ_MODEL", DEFAULT_GROQ_MODEL).strip() + payload = { + "model": model, + "messages": [ + {"role": "system", "content": SYSTEM_PROMPT}, + {"role": "user", "content": f"Analyze this CI log excerpt:\n\n{log_text[:12000]}"}, + ], + "temperature": 0.2, + "response_format": {"type": "json_object"}, + } + try: + r = requests.post( + GROQ_URL, + headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"}, + json=payload, + timeout=60, + ) + r.raise_for_status() + data = r.json() + msg = data["choices"][0]["message"]["content"] + parsed = _parse_json_content(msg) + return _normalize_analysis(parsed), "groq" + except Exception: + return _normalize_analysis(_rule_based_analysis(log_text)), "rules" + + +# --- Decision ------------------------------------------------------------------- + +DecisionPath = Literal["auto_fix", "notify_only"] + + +def extract_error_line(log_text: str, max_len: int = 500) -> str: + lines = log_text.strip().splitlines() + patterns = ( + r"No module named", + r"ModuleNotFoundError", + r"ImportError", + r"cannot import name", + r"pip install", + r"ERROR", + r"Error:", + ) + for line in lines: + s = line.strip() + if not s: + continue + for p in patterns: + if re.search(p, s, re.IGNORECASE): + return s[:max_len] + for line in reversed(lines): + s = line.strip() + if s and ("error" in s.lower() or "failed" in s.lower()): + return s[:max_len] + joined 
= " ".join(lines[-5:]) if lines else "" + return (joined or log_text)[:max_len] + + +def decide(analysis: dict[str, Any]) -> DecisionPath: + fixable = bool(analysis.get("fixable")) + try: + conf = float(analysis.get("confidence", 0)) + except (TypeError, ValueError): + conf = 0.0 + if fixable and conf > 0.7: + return "auto_fix" + return "notify_only" + + +# --- Notifications -------------------------------------------------------------- + +def _telegram_configured() -> bool: + token = os.getenv("TELEGRAM_BOT_TOKEN", "").strip() + chat = os.getenv("TELEGRAM_CHAT_ID", "").strip() + enabled = os.getenv("TELEGRAM_ENABLED", "true").strip().lower() == "true" + return bool(token and chat and enabled) + + +def _clip(text: str, max_len: int) -> str: + t = text.strip() + if len(t) <= max_len: + return t + return t[: max_len - 1].rstrip() + "…" + + +def _github_notify_block(github_info: dict[str, Any] | None, decision: str) -> str: + """Human-readable PR outcome for Telegram/console.""" + if decision != "auto_fix": + return ( + "β€” Automated fix / PR β€”\n" + "Skipped (notify_only): confidence too low or not safely auto-fixable.\n\n" + ) + if not github_info: + return "β€” Automated fix / PR β€”\nNot attempted.\n\n" + if github_info.get("ok") and github_info.get("mode") == "github": + url = str(github_info.get("html_url", "")) + num = github_info.get("pull_number") + branch = github_info.get("branch", "") + return ( + "β€” Automated fix / PR β€”\n" + f"Opened PR #{num} on branch `{branch}`\n" + f"{url}\n" + "Review and merge manually (no auto-merge).\n\n" + ) + if github_info.get("mode") == "mock": + msg = github_info.get("message", "skipped") + return ( + "β€” Automated fix / PR β€”\n" + f"Not created: {msg}\n" + "Set GITHUB_TOKEN (repo scope) + target repo to open a real PR.\n\n" + ) + err = github_info.get("error", str(github_info)) + return f"β€” Automated fix / PR β€”\nFailed: {err}\n\n" + + +def build_notification_message( + repository: str, + decision: str, + 
analysis: dict[str, Any], + source: str, + log_excerpt: str, + github_info: dict[str, Any] | None = None, +) -> str: + """Structured alert: error β†’ diagnosis β†’ fix β†’ PR outcome β†’ meta.""" + src = "Groq LLM" if source == "groq" else "rule-based fallback (no GROQ_API_KEY or API error)" + route = ( + "auto_fix β€” patch proposed; PR opened when GitHub is configured." + if decision == "auto_fix" + else "notify_only β€” review diagnosis and fix manually." + ) + target = (analysis.get("file") or "").strip() + target_line = f"Likely file: {target}\n\n" if target else "" + + return ( + "PipelineMedic Β· CI failed after a push\n\n" + f"Repository: {repository}\n\n" + "β€” Error signal (from CI log) β€”\n" + f"{_clip(log_excerpt, 900)}\n\n" + "β€” Diagnosis β€”\n" + f"{_clip(str(analysis.get('root_cause', '')), 1200)}\n\n" + "β€” Suggested fix β€”\n" + f"{_clip(str(analysis.get('fix', '')), 1200)}\n\n" + f"{_github_notify_block(github_info, decision)}" + f"{target_line}" + "β€” Routing β€”\n" + f"{route}\n\n" + "β€” Meta β€”\n" + f"Decision: {decision}\n" + f"Confidence: {analysis.get('confidence')} Β· Risk: {analysis.get('risk')}\n" + f"Analysis source: {src}" + ) + + +def notify_console_mock_slack( + repository: str, + decision: str, + analysis: dict[str, Any], + source: str, + log_excerpt: str, + github_info: dict[str, Any] | None = None, +) -> None: + print("\n--- Mock Slack block ---") + print( + build_notification_message( + repository, decision, analysis, source, log_excerpt, github_info + ) + ) + print("--- End mock Slack ---\n") + + +def notify_telegram( + repository: str, + decision: str, + analysis: dict[str, Any], + source: str, + log_excerpt: str, + github_info: dict[str, Any] | None = None, +) -> None: + if not _telegram_configured(): + return + token = os.getenv("TELEGRAM_BOT_TOKEN", "").strip() + chat_id = os.getenv("TELEGRAM_CHAT_ID", "").strip() + text = build_notification_message( + repository, decision, analysis, source, log_excerpt, github_info + 
) + # Telegram hard limit 4096 characters for a single message + text = _clip(text, 4000) + url = f"https://api.telegram.org/bot{token}/sendMessage" + try: + r = requests.post( + url, + json={ + "chat_id": chat_id, + "text": text, + "disable_web_page_preview": True, + }, + timeout=15, + ) + r.raise_for_status() + except Exception as e: + print(f"[PipelineMedic] Telegram send failed: {e}") + + +def mock_pipeline_rerun(decision: str) -> None: + if decision == "auto_fix": + print( + "[PipelineMedic] Mock: triggering pipeline re-run (print only) β€” would POST to CI provider" + ) + + +# --- Memory --------------------------------------------------------------------- + +MAX_ENTRIES = 200 + + +def append_incident( + repository: str, + log_excerpt: str, + analysis: dict[str, Any], + decision_path: str, +) -> None: + record = { + "ts": datetime.now(timezone.utc).isoformat(), + "repository": repository, + "log_excerpt": log_excerpt, + "analysis": analysis, + "decision_path": decision_path, + } + for path in (Path("data") / "failures.json", Path("/tmp") / "pipelinemedic_failures.json"): + try: + path.parent.mkdir(parents=True, exist_ok=True) + entries: list[Any] = [] + if path.exists(): + try: + raw = path.read_text(encoding="utf-8") + entries = json.loads(raw) if raw.strip() else [] + except (json.JSONDecodeError, OSError): + entries = [] + if not isinstance(entries, list): + entries = [] + entries.append(record) + if len(entries) > MAX_ENTRIES: + entries = entries[-MAX_ENTRIES:] + path.write_text(json.dumps(entries, indent=2), encoding="utf-8") + return + except OSError: + continue + + +# --- GitHub --------------------------------------------------------------------- + +GITHUB_API = "https://api.github.com" + + +def _gh_headers(token: str) -> dict[str, str]: + return { + "Authorization": f"Bearer {token}", + "Accept": "application/vnd.github+json", + "X-GitHub-Api-Version": "2022-11-28", + } + + +def resolve_repo_slug( + repository: str, + repository_full_name: str | 
None, +) -> tuple[str | None, str | None]: + if repository_full_name and "/" in repository_full_name.strip(): + parts = repository_full_name.strip().split("/", 1) + return parts[0], parts[1] + r = repository.strip() + if "/" in r: + a, b = r.split("/", 1) + return a, b + owner = os.getenv("GITHUB_DEFAULT_OWNER", "").strip() + if owner: + return owner, r + return None, None + + +def _get_default_branch(token: str, owner: str, repo: str) -> str | None: + r = requests.get(f"{GITHUB_API}/repos/{owner}/{repo}", headers=_gh_headers(token), timeout=30) + if r.status_code != 200: + return None + return r.json().get("default_branch") + + +def _get_file_sha(token: str, owner: str, repo: str, path: str, ref: str) -> str | None: + r = requests.get( + f"{GITHUB_API}/repos/{owner}/{repo}/contents/{path}", + headers=_gh_headers(token), + params={"ref": ref}, + timeout=30, + ) + if r.status_code != 200: + return None + return r.json().get("sha") + + +def _create_branch( + token: str, owner: str, repo: str, base_branch: str, new_branch: str +) -> tuple[bool, str | None]: + ref_url = f"{GITHUB_API}/repos/{owner}/{repo}/git/ref/heads/{base_branch}" + r = requests.get(ref_url, headers=_gh_headers(token), timeout=30) + if r.status_code != 200: + return False, f"Could not read base ref {base_branch}: {r.status_code}" + sha = r.json().get("object", {}).get("sha") + if not sha: + return False, "Missing base commit SHA" + create = requests.post( + f"{GITHUB_API}/repos/{owner}/{repo}/git/refs", + headers=_gh_headers(token), + json={"ref": f"refs/heads/{new_branch}", "sha": sha}, + timeout=30, + ) + if create.status_code == 201: + return True, None + if create.status_code == 422 and "already exists" in (create.text or "").lower(): + return True, None + return False, f"Create branch failed: {create.status_code} {create.text}" + + +def _put_file( + token: str, + owner: str, + repo: str, + path: str, + branch: str, + content: str, + message: str, + file_sha: str | None, +) -> tuple[bool, str | 
None]: + body: dict[str, Any] = { + "message": message, + "content": base64.b64encode(content.encode("utf-8")).decode("ascii"), + "branch": branch, + } + if file_sha: + body["sha"] = file_sha + r = requests.put( + f"{GITHUB_API}/repos/{owner}/{repo}/contents/{path}", + headers=_gh_headers(token), + json=body, + timeout=30, + ) + if r.status_code in (200, 201): + return True, None + return False, f"Update file failed: {r.status_code} {r.text}" + + +def _open_pr( + token: str, + owner: str, + repo: str, + title: str, + body: str, + head: str, + base: str, +) -> tuple[int | None, str | None]: + r = requests.post( + f"{GITHUB_API}/repos/{owner}/{repo}/pulls", + headers=_gh_headers(token), + json={"title": title, "body": body, "head": head, "base": base}, + timeout=30, + ) + if r.status_code == 201: + num = r.json().get("number") + return int(num) if num is not None else None, None + return None, f"Open PR failed: {r.status_code} {r.text}" + + +def _request_reviewers( + token: str, owner: str, repo: str, pr_number: int, reviewers: list[str] +) -> None: + if not reviewers: + return + requests.post( + f"{GITHUB_API}/repos/{owner}/{repo}/pulls/{pr_number}/requested_reviewers", + headers=_gh_headers(token), + json={"reviewers": reviewers}, + timeout=30, + ) + + +def maybe_create_autofix_pr( + repository: str, + repository_full_name: str | None, + analysis: dict[str, Any], + decision: str, +) -> dict[str, Any]: + token = os.getenv("GITHUB_TOKEN", "").strip() + base_branch = os.getenv("GITHUB_BASE_BRANCH", "main").strip() or "main" + + if not token: + return { + "ok": False, + "mode": "mock", + "message": "GITHUB_TOKEN not set; skipping real PR", + } + + owner, repo = resolve_repo_slug(repository, repository_full_name) + if not owner or not repo: + return { + "ok": False, + "mode": "error", + "error": "Could not resolve owner/repo (set GITHUB_DEFAULT_OWNER for short repo names)", + } + + if decision != "auto_fix": + return {"ok": False, "mode": "mock", "message": "notify_only 
path β€” no PR"} + + target_file = (analysis.get("file") or "").strip() or "requirements.txt" + fix_text = str(analysis.get("fix", "")) + if target_file.endswith("requirements.txt") or target_file.endswith(".txt"): + pkg_line = "# pipelinemedic autofix β€” review before merge\n" + m = re.search(r"`([^`]+)`", fix_text) + extra = f"{m.group(1)}\n" if m else "" + new_content = pkg_line + extra + else: + new_content = f"# pipelinemedic autofix\n# {fix_text[:500]}\n" + + ts = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S") + branch_name = f"pipelinemedic/autofix-{ts}" + + def_branch = _get_default_branch(token, owner, repo) + if not def_branch: + return { + "ok": False, + "mode": "error", + "error": "Repository not accessible or empty (needs initial commit)", + } + use_base = base_branch if base_branch else def_branch + + ok_b, err_b = _create_branch(token, owner, repo, use_base, branch_name) + if not ok_b: + return {"ok": False, "mode": "error", "error": err_b or "branch error"} + + file_sha = _get_file_sha(token, owner, repo, target_file, branch_name) + if file_sha: + get_c = requests.get( + f"{GITHUB_API}/repos/{owner}/{repo}/contents/{target_file}", + headers=_gh_headers(token), + params={"ref": branch_name}, + timeout=30, + ) + if get_c.status_code == 200: + cur = base64.b64decode(get_c.json()["content"]).decode("utf-8", errors="replace") + new_content = cur.rstrip() + "\n" + new_content + + ok_f, err_f = _put_file( + token, + owner, + repo, + target_file, + branch_name, + new_content, + f"chore: PipelineMedic autofix ({target_file})", + file_sha, + ) + if not ok_f: + return {"ok": False, "mode": "error", "error": err_f or "file update error"} + + title = f"PipelineMedic autofix: {analysis.get('root_cause', 'CI fix')[:80]}" + pr_body = ( + f"Automated suggestion (review required).\n\n" + f"**Root cause:** {analysis.get('root_cause')}\n\n" + f"**Fix:** {analysis.get('fix')}\n" + ) + pr_num, err_p = _open_pr(token, owner, repo, title, pr_body, branch_name, 
use_base) + if pr_num is None: + return {"ok": False, "mode": "error", "error": err_p or "PR error"} + + reviewers_raw = os.getenv("GITHUB_PR_REVIEWERS", "").strip() + reviewers = [x.strip() for x in reviewers_raw.split(",") if x.strip()] + _request_reviewers(token, owner, repo, pr_num, reviewers) + + html_url = f"https://github.com/{owner}/{repo}/pull/{pr_num}" + return { + "ok": True, + "mode": "github", + "pull_number": pr_num, + "html_url": html_url, + "branch": branch_name, + "base": use_base, + } + + +# --- FastAPI -------------------------------------------------------------------- + +app = FastAPI(title="PipelineMedic", version="1.0.0") + +app.add_middleware( + CORSMiddleware, + allow_origins=["*"], + allow_credentials=True, + allow_methods=["*"], + allow_headers=["*"], +) + + +def health_payload() -> dict[str, Any]: + return {"status": "ok", "service": "pipelinemedic", "version": "1.0.0"} + + +@app.get("/") +async def root_get(): + return health_payload() + + +@app.get("/webhook") +@app.get("/api/webhook") +async def webhook_get(): + return health_payload() + + +async def _parse_body(request: Request) -> dict[str, Any]: + try: + return await request.json() + except Exception: + return {} + + +def _get_log_text(body: dict[str, Any]) -> str | None: + if "log" in body and body["log"] is not None: + return str(body["log"]) + if "log_text" in body and body["log_text"] is not None: + return str(body["log_text"]) + return None + + +async def process_webhook(request: Request) -> JSONResponse: + body = await _parse_body(request) + repository = body.get("repository") + log_text = _get_log_text(body) + repository_full_name = body.get("repository_full_name") + + if not repository: + return JSONResponse( + status_code=400, + content={"detail": "Missing required field: repository"}, + ) + if log_text is None or not str(log_text).strip(): + return JSONResponse( + status_code=400, + content={"detail": "Missing required field: log or log_text"}, + ) + + repo_str = 
str(repository).strip() + log_str = str(log_text) + + analysis, source = analyze_log(log_str) + decision = decide(analysis) + log_excerpt = extract_error_line(log_str) + + # Error β†’ (if auto_fix) apply patch + open PR on GitHub β†’ then notify with PR link + github_info: dict[str, Any] = {} + try: + github_info = maybe_create_autofix_pr( + repo_str, + str(repository_full_name).strip() if repository_full_name else None, + analysis, + decision, + ) + except Exception as e: + github_info = {"ok": False, "mode": "error", "error": str(e)} + + if decision == "auto_fix": + mock_pipeline_rerun(decision) + + notify_console_mock_slack( + repo_str, decision, analysis, source, log_excerpt, github_info + ) + notify_telegram(repo_str, decision, analysis, source, log_excerpt, github_info) + + try: + append_incident(repo_str, log_excerpt, analysis, decision) + except Exception: + pass + + out: dict[str, Any] = { + "status": "processed", + "repository": repo_str, + "decision": decision, + "analysis": analysis, + "analysis_source": source, + } + if github_info: + out["github"] = github_info + return JSONResponse(status_code=200, content=out) + + +@app.post("/webhook") +@app.post("/api/webhook") +async def webhook_post(request: Request): + return await process_webhook(request) + + +@app.post("/") +async def root_post(request: Request): + return await process_webhook(request) + + +if __name__ == "__main__": + import uvicorn + + uvicorn.run("main:app", host="127.0.0.1", port=8000) diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 000000000..fcdf6adee --- /dev/null +++ b/requirements.txt @@ -0,0 +1,4 @@ +fastapi==0.115.12 +uvicorn[standard]==0.34.0 +requests==2.32.3 +python-dotenv==1.1.0 diff --git a/vercel.json b/vercel.json new file mode 100644 index 000000000..ff797e004 --- /dev/null +++ b/vercel.json @@ -0,0 +1,4 @@ +{ + "builds": [{ "src": "app.py", "use": "@vercel/python" }], + "routes": [{ "src": "/(.*)", "dest": "app.py" }] +} From 
3e808bba03a189ee7de590e0aec07e9bec1b0e95 Mon Sep 17 00:00:00 2001 From: aqib053 Date: Thu, 16 Apr 2026 15:40:34 +0530 Subject: [PATCH 03/22] Finalize MVP: Groq retry/logging, multi-chat Telegram, Telegram template, demo CI pipefail, health flags Made-with: Cursor --- .env.example | 1 + examples/demo-repo/.github/workflows/ci.yml | 5 +- main.py | 131 ++++++++++++++------ start-presentation.sh | 108 ++++++++++++++++ 4 files changed, 209 insertions(+), 36 deletions(-) create mode 100755 start-presentation.sh diff --git a/.env.example b/.env.example index 375aa9343..e85baa032 100644 --- a/.env.example +++ b/.env.example @@ -4,6 +4,7 @@ GROQ_API_KEY= GROQ_MODEL=llama-3.3-70b-versatile TELEGRAM_BOT_TOKEN= +# Comma-separated: your DM id, group id (add bot to group first). Groups often use -100... TELEGRAM_CHAT_ID= TELEGRAM_ENABLED=true diff --git a/examples/demo-repo/.github/workflows/ci.yml b/examples/demo-repo/.github/workflows/ci.yml index 54adb4250..11eb38470 100644 --- a/examples/demo-repo/.github/workflows/ci.yml +++ b/examples/demo-repo/.github/workflows/ci.yml @@ -20,10 +20,13 @@ jobs: - name: Install deps run: pip install -r requirements.txt + # pipefail: without it, `python | tee` exits 0 (tee's code) so CI looks green even when app crashes - name: Run app (expected to fail until requests is added) id: runapp continue-on-error: true - run: python app.py 2>&1 | tee run.log + run: | + set -o pipefail + python app.py 2>&1 | tee run.log - name: Notify PipelineMedic on failure if: steps.runapp.outcome == 'failure' diff --git a/main.py b/main.py index 88bb5a2ce..70201c31b 100644 --- a/main.py +++ b/main.py @@ -120,32 +120,42 @@ def _parse_json_content(content: str) -> dict[str, Any]: def analyze_log(log_text: str) -> tuple[dict[str, Any], str]: key = os.getenv("GROQ_API_KEY", "").strip() if not key: + print("[PipelineMedic] GROQ_API_KEY is empty β€” using rule-based analysis", flush=True) return _normalize_analysis(_rule_based_analysis(log_text)), "rules" model = 
os.getenv("GROQ_MODEL", DEFAULT_GROQ_MODEL).strip() - payload = { + headers = {"Authorization": f"Bearer {key}", "Content-Type": "application/json"} + base_body: dict[str, Any] = { "model": model, "messages": [ {"role": "system", "content": SYSTEM_PROMPT}, {"role": "user", "content": f"Analyze this CI log excerpt:\n\n{log_text[:12000]}"}, ], "temperature": 0.2, - "response_format": {"type": "json_object"}, } - try: - r = requests.post( - GROQ_URL, - headers={"Authorization": f"Bearer {key}", "Content-Type": "application/json"}, - json=payload, - timeout=60, - ) + + def _call_groq(with_json_object: bool) -> tuple[dict[str, Any], str]: + body = {**base_body} + if with_json_object: + body["response_format"] = {"type": "json_object"} + r = requests.post(GROQ_URL, headers=headers, json=body, timeout=60) r.raise_for_status() data = r.json() msg = data["choices"][0]["message"]["content"] parsed = _parse_json_content(msg) return _normalize_analysis(parsed), "groq" - except Exception: - return _normalize_analysis(_rule_based_analysis(log_text)), "rules" + + try: + return _call_groq(True) + except Exception as e1: + try: + return _call_groq(False) + except Exception as e2: + print( + f"[PipelineMedic] Groq failed (json_object: {e1!r}; retry: {e2!r}) β€” using rules", + flush=True, + ) + return _normalize_analysis(_rule_based_analysis(log_text)), "rules" # --- Decision ------------------------------------------------------------------- @@ -192,11 +202,27 @@ def decide(analysis: dict[str, Any]) -> DecisionPath: # --- Notifications -------------------------------------------------------------- +def _parse_telegram_chat_id(raw: str) -> int | str: + """Telegram accepts int chat ids; groups are often negative (-100...).""" + s = raw.strip() + if s.lstrip("-").isdigit(): + return int(s) + return s + + +def _telegram_chat_ids() -> list[int | str]: + """Comma-separated TELEGRAM_CHAT_ID: DM + group, etc.""" + raw = os.getenv("TELEGRAM_CHAT_ID", "").strip() + if not raw: + return [] + 
return [_parse_telegram_chat_id(x) for x in raw.split(",") if x.strip()] + + def _telegram_configured() -> bool: token = os.getenv("TELEGRAM_BOT_TOKEN", "").strip() - chat = os.getenv("TELEGRAM_CHAT_ID", "").strip() + chats = _telegram_chat_ids() enabled = os.getenv("TELEGRAM_ENABLED", "true").strip().lower() == "true" - return bool(token and chat and enabled) + return bool(token and chats and enabled) def _clip(text: str, max_len: int) -> str: @@ -243,8 +269,9 @@ def build_notification_message( source: str, log_excerpt: str, github_info: dict[str, Any] | None = None, + for_telegram: bool = False, ) -> str: - """Structured alert: error β†’ diagnosis β†’ fix β†’ PR outcome β†’ meta.""" + """Telegram: full story + short Meta (decision + confidence only). Console: + risk + analysis source.""" src = "Groq LLM" if source == "groq" else "rule-based fallback (no GROQ_API_KEY or API error)" route = ( "auto_fix β€” patch proposed; PR opened when GitHub is configured." @@ -253,8 +280,9 @@ def build_notification_message( ) target = (analysis.get("file") or "").strip() target_line = f"Likely file: {target}\n\n" if target else "" + conf = analysis.get("confidence") - return ( + body = ( "PipelineMedic Β· CI failed after a push\n\n" f"Repository: {repository}\n\n" "β€” Error signal (from CI log) β€”\n" @@ -266,10 +294,20 @@ def build_notification_message( f"{_github_notify_block(github_info, decision)}" f"{target_line}" "β€” Routing β€”\n" - f"{route}\n\n" - "β€” Meta β€”\n" + f"{route}\n" + ) + if for_telegram: + return ( + body + + "\n\nβ€” Meta β€”\n" + f"Decision: {decision}\n" + f"Confidence: {conf}\n" + ) + return ( + body + + "\n\nβ€” Meta β€”\n" f"Decision: {decision}\n" - f"Confidence: {analysis.get('confidence')} Β· Risk: {analysis.get('risk')}\n" + f"Confidence: {conf} Β· Risk: {analysis.get('risk')}\n" f"Analysis source: {src}" ) @@ -285,7 +323,13 @@ def notify_console_mock_slack( print("\n--- Mock Slack block ---") print( build_notification_message( - repository, 
decision, analysis, source, log_excerpt, github_info + repository, + decision, + analysis, + source, + log_excerpt, + github_info, + for_telegram=False, ) ) print("--- End mock Slack ---\n") @@ -302,26 +346,33 @@ def notify_telegram( if not _telegram_configured(): return token = os.getenv("TELEGRAM_BOT_TOKEN", "").strip() - chat_id = os.getenv("TELEGRAM_CHAT_ID", "").strip() + chat_ids = _telegram_chat_ids() text = build_notification_message( - repository, decision, analysis, source, log_excerpt, github_info + repository, + decision, + analysis, + source, + log_excerpt, + github_info, + for_telegram=True, ) # Telegram hard limit 4096 characters for a single message text = _clip(text, 4000) url = f"https://api.telegram.org/bot{token}/sendMessage" - try: - r = requests.post( - url, - json={ - "chat_id": chat_id, - "text": text, - "disable_web_page_preview": True, - }, - timeout=15, - ) - r.raise_for_status() - except Exception as e: - print(f"[PipelineMedic] Telegram send failed: {e}") + for chat_id in chat_ids: + try: + r = requests.post( + url, + json={ + "chat_id": chat_id, + "text": text, + "disable_web_page_preview": True, + }, + timeout=15, + ) + r.raise_for_status() + except Exception as e: + print(f"[PipelineMedic] Telegram send failed (chat_id={chat_id}): {e}") def mock_pipeline_rerun(decision: str) -> None: @@ -621,7 +672,17 @@ def maybe_create_autofix_pr( def health_payload() -> dict[str, Any]: - return {"status": "ok", "service": "pipelinemedic", "version": "1.0.0"} + # GET / does not call Groq β€” these flags only show whether env vars are present (for Vercel debugging). 
+ return { + "status": "ok", + "service": "pipelinemedic", + "version": "1.0.0", + "groq_configured": bool(os.getenv("GROQ_API_KEY", "").strip()), + "telegram_configured": bool( + os.getenv("TELEGRAM_BOT_TOKEN", "").strip() and _telegram_chat_ids() + ), + "github_token_configured": bool(os.getenv("GITHUB_TOKEN", "").strip()), + } @app.get("/") diff --git a/start-presentation.sh b/start-presentation.sh new file mode 100755 index 000000000..b97a4e10e --- /dev/null +++ b/start-presentation.sh @@ -0,0 +1,108 @@ +#!/usr/bin/env bash +# Hackathon: start API + open docs (and optional slides), then run demo once. +# Usage: +# chmod +x start-presentation.sh && ./start-presentation.sh +# ./start-presentation.sh ~/Desktop/PipelineMedic-slides.pdf +# Env: PIPELINEMEDIC_PRESENTATION=/path/to/slides.pdf (if no first arg) + +set -euo pipefail + +ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +cd "$ROOT" + +HOST="${PIPELINEMEDIC_HOST:-127.0.0.1}" +PORT="${PIPELINEMEDIC_PORT:-8000}" +BASE="http://${HOST}:${PORT}" +SLIDES="${1:-${PIPELINEMEDIC_PRESENTATION:-}}" +NO_BROWSER="${PIPELINEMEDIC_NO_BROWSER:-0}" + +open_url() { + if [[ "${NO_BROWSER}" == "1" ]]; then + echo " (browser skipped) ${1}" + return 0 + fi + if command -v open >/dev/null 2>&1; then + open "$1" + elif command -v xdg-open >/dev/null 2>&1; then + xdg-open "$1" + else + echo " Open in browser: ${1}" + fi +} + +open_file() { + if [[ -z "${SLIDES}" ]]; then + return 0 + fi + if [[ ! -f "${SLIDES}" ]]; then + echo "Warning: presentation file not found: ${SLIDES}" >&2 + return 0 + fi + if [[ "${NO_BROWSER}" == "1" ]]; then + echo " Slides path: ${SLIDES}" + return 0 + fi + if command -v open >/dev/null 2>&1; then + open "${SLIDES}" + elif command -v xdg-open >/dev/null 2>&1; then + xdg-open "${SLIDES}" + else + echo " Open slides: ${SLIDES}" + fi +} + +if [[ -f "${ROOT}/.venv/bin/activate" ]]; then + # shellcheck source=/dev/null + source "${ROOT}/.venv/bin/activate" +fi + +if [[ ! 
-f "${ROOT}/.env" ]] && [[ -f "${ROOT}/.env.example" ]]; then + echo "Tip: copy .env.example to .env and set keys for full demo (Groq, Telegram)." >&2 +fi + +echo "Starting PipelineMedic at ${BASE} ..." +python "${ROOT}/main.py" & +SERVER_PID=$! + +cleanup() { + if kill -0 "${SERVER_PID}" 2>/dev/null; then + kill "${SERVER_PID}" 2>/dev/null || true + wait "${SERVER_PID}" 2>/dev/null || true + fi +} +trap cleanup EXIT INT TERM + +for _ in $(seq 1 60); do + if curl -sf "${BASE}/" >/dev/null 2>&1; then + break + fi + sleep 0.25 +done + +if ! curl -sf "${BASE}/" >/dev/null 2>&1; then + echo "Error: server did not become ready at ${BASE}/" >&2 + exit 1 +fi + +echo "" +echo "PipelineMedic β€” presentation mode" +echo " β€’ Health: ${BASE}/" +echo " β€’ API docs: ${BASE}/docs" +echo " β€’ Webhook: POST ${BASE}/webhook" +echo "" + +open_url "${BASE}/docs" +open_file + +if [[ -x "${ROOT}/demo.sh" ]]; then + echo "Running demo webhook POST (sample failing log) ..." + echo "" + PIPELINEMEDIC_URL="${BASE}" "${ROOT}/demo.sh" || true + echo "" +else + echo "demo.sh not executable; run: chmod +x demo.sh && PIPELINEMEDIC_URL=${BASE} ./demo.sh" + echo "" +fi + +echo "Server running (PID ${SERVER_PID}). Press Enter to stop." 
+read -r _ From d243bf1cb52d547aba81c5d57c01841724af5fd1 Mon Sep 17 00:00:00 2001 From: aqib053 Date: Thu, 16 Apr 2026 16:00:15 +0530 Subject: [PATCH 04/22] Add Next.js frontend: landing, live health, webhook playground Made-with: Cursor --- .gitignore | 5 + web/.env.example | 2 + web/.gitignore | 42 + web/README.md | 36 + web/app/components/health-panel.tsx | 72 + web/app/components/webhook-playground.tsx | 108 + web/app/favicon.ico | Bin 0 -> 25931 bytes web/app/globals.css | 31 + web/app/layout.tsx | 40 + web/app/page.tsx | 133 + web/app/providers.tsx | 17 + web/eslint.config.mjs | 25 + web/lib/api-base.ts | 7 + web/next.config.ts | 10 + web/package-lock.json | 6124 +++++++++++++++++++++ web/package.json | 28 + web/postcss.config.mjs | 5 + web/public/file.svg | 1 + web/public/globe.svg | 1 + web/public/next.svg | 1 + web/public/vercel.svg | 1 + web/public/window.svg | 1 + web/tsconfig.json | 27 + 23 files changed, 6717 insertions(+) create mode 100644 web/.env.example create mode 100644 web/.gitignore create mode 100644 web/README.md create mode 100644 web/app/components/health-panel.tsx create mode 100644 web/app/components/webhook-playground.tsx create mode 100644 web/app/favicon.ico create mode 100644 web/app/globals.css create mode 100644 web/app/layout.tsx create mode 100644 web/app/page.tsx create mode 100644 web/app/providers.tsx create mode 100644 web/eslint.config.mjs create mode 100644 web/lib/api-base.ts create mode 100644 web/next.config.ts create mode 100644 web/package-lock.json create mode 100644 web/package.json create mode 100644 web/postcss.config.mjs create mode 100644 web/public/file.svg create mode 100644 web/public/globe.svg create mode 100644 web/public/next.svg create mode 100644 web/public/vercel.svg create mode 100644 web/public/window.svg create mode 100644 web/tsconfig.json diff --git a/.gitignore b/.gitignore index b29203098..8a72667d2 100644 --- a/.gitignore +++ b/.gitignore @@ -9,3 +9,8 @@ venv/ data/ *.egg-info/ .DS_Store + +# 
Next.js frontend (web/) +web/node_modules/ +web/.next/ +web/out/ diff --git a/web/.env.example b/web/.env.example new file mode 100644 index 000000000..2423ad1dc --- /dev/null +++ b/web/.env.example @@ -0,0 +1,2 @@ +# Optional: override production API (e.g. http://127.0.0.1:8000 for local FastAPI) +NEXT_PUBLIC_API_BASE=https://hacktofuture4.vercel.app diff --git a/web/.gitignore b/web/.gitignore new file mode 100644 index 000000000..7b8da95f5 --- /dev/null +++ b/web/.gitignore @@ -0,0 +1,42 @@ +# See https://help.github.com/articles/ignoring-files/ for more about ignoring files. + +# dependencies +/node_modules +/.pnp +.pnp.* +.yarn/* +!.yarn/patches +!.yarn/plugins +!.yarn/releases +!.yarn/versions + +# testing +/coverage + +# next.js +/.next/ +/out/ + +# production +/build + +# misc +.DS_Store +*.pem + +# debug +npm-debug.log* +yarn-debug.log* +yarn-error.log* +.pnpm-debug.log* + +# env files (can opt-in for committing if needed) +.env* +!.env.example + +# vercel +.vercel + +# typescript +*.tsbuildinfo +next-env.d.ts diff --git a/web/README.md b/web/README.md new file mode 100644 index 000000000..e215bc4cc --- /dev/null +++ b/web/README.md @@ -0,0 +1,36 @@ +This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app). + +## Getting Started + +First, run the development server: + +```bash +npm run dev +# or +yarn dev +# or +pnpm dev +# or +bun dev +``` + +Open [http://localhost:3000](http://localhost:3000) with your browser to see the result. + +You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file. + +This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel. 
+ +## Learn More + +To learn more about Next.js, take a look at the following resources: + +- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API. +- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial. + +You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome! + +## Deploy on Vercel + +The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js. + +Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details. diff --git a/web/app/components/health-panel.tsx b/web/app/components/health-panel.tsx new file mode 100644 index 000000000..4a78e7052 --- /dev/null +++ b/web/app/components/health-panel.tsx @@ -0,0 +1,72 @@ +import { getApiBase } from "@/lib/api-base"; + +export type HealthPayload = { + status: string; + service: string; + version: string; + groq_configured: boolean; + telegram_configured: boolean; + github_token_configured: boolean; +}; + +function Flag({ ok, label }: { ok: boolean; label: string }) { + return ( +
+    <div>
+      <span>{label}</span>
+      <span>{ok ? "Ready" : "Not set"}</span>
+    </div>
+ ); +} + +export async function HealthPanel() { + const base = getApiBase(); + let data: HealthPayload | null = null; + let error: string | null = null; + + try { + const res = await fetch(`${base}/`, { next: { revalidate: 30 } }); + if (!res.ok) { + error = `HTTP ${res.status}`; + } else { + data = (await res.json()) as HealthPayload; + } + } catch (e) { + error = e instanceof Error ? e.message : "Request failed"; + } + + return ( +
+    <section>
+      <div>
+        <h2>Live API status</h2>
+        <p>Server-rendered check of {base}</p>
+      </div>
+
+      {error ? (
+        <p>Could not reach the API: {error}</p>
+      ) : data ? (
+        <div>
+          <Flag ok={data.groq_configured} label="Groq" />
+          <Flag ok={data.telegram_configured} label="Telegram" />
+          <Flag ok={data.github_token_configured} label="GitHub token" />
+        </div>
+      ) : null}
+    </section>
+ ); +} diff --git a/web/app/components/webhook-playground.tsx b/web/app/components/webhook-playground.tsx new file mode 100644 index 000000000..ca0844e9a --- /dev/null +++ b/web/app/components/webhook-playground.tsx @@ -0,0 +1,108 @@ +"use client"; + +import { useMutation } from "@tanstack/react-query"; +import { useState } from "react"; +import { getApiBase } from "@/lib/api-base"; + +const SAMPLE_LOG = `Traceback (most recent call last): + File "tests/test_app.py", line 4, in + import foobar +ModuleNotFoundError: No module named 'foobar' +`; + +export function WebhookPlayground() { + const base = getApiBase(); + const [repository, setRepository] = useState("demo-repo"); + const [logText, setLogText] = useState(SAMPLE_LOG); + + const mutation = useMutation({ + mutationFn: async () => { + const res = await fetch(`${base}/webhook`, { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + repository, + log_text: logText, + }), + }); + const text = await res.text(); + let json: unknown; + try { + json = JSON.parse(text); + } catch { + json = text; + } + if (!res.ok) { + throw new Error( + typeof json === "object" && json !== null && "detail" in json + ? String((json as { detail: unknown }).detail) + : `HTTP ${res.status}`, + ); + } + return json; + }, + }); + + return ( +
+    <section>
+      <h2>Try the webhook</h2>
+      <p>
+        POSTs sample JSON to /webhook. No API keys in the browser β€” only what
+        you already deployed on the server is used.
+      </p>
+
+      <form>
+        <input
+          value={repository}
+          onChange={(e) => setRepository(e.target.value)}
+          autoComplete="off"
+        />