🦙 LLaMa Audit

Self-hosted AI text detection — analyze written work for AI-generated content using local or cloud LLMs.

License: MIT

Overview

LLaMa Audit is an open-source tool that lets anyone host and run their own LLM-based text auditing system. It analyzes written text (essays, articles, reports) to detect sections likely generated by AI, providing per-paragraph probability scores, rationale, and highlighted results.

Under the hood, a prompt with a structured output schema asks one or more LLMs to examine the provided text and answer:

  • Which sections are likely to have been AI-generated?
  • What is the probability of each section being AI-generated?
  • What linguistic markers indicate AI authorship?
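The structured output contract can be sketched in TypeScript. This is a minimal illustration, not the project's actual schema — the field names (`probability`, `rationale`, `markers`, `overallScore`) are assumptions; the real definitions live in the backend services.

```typescript
// Hypothetical shape of one model's structured response.
// Field names are illustrative, not taken from the project's code.
interface SectionResult {
  text: string;         // the paragraph or section analyzed
  probability: number;  // 0..1 likelihood the section is AI-generated
  rationale: string;    // why the model flagged (or cleared) it
  markers: string[];    // linguistic markers cited by the model
}

interface AnalysisResult {
  sections: SectionResult[];
  overallScore: number; // aggregate 0..1 score for the whole text
}

// Example instance a model might return:
const example: AnalysisResult = {
  sections: [
    {
      text: "In conclusion, it is important to note...",
      probability: 0.82,
      rationale: "Formulaic transitions and hedged, generic phrasing",
      markers: ["stock transition phrases", "low lexical variety"],
    },
  ],
  overallScore: 0.82,
};
```

Constraining the model to a schema like this is what makes per-paragraph scores and highlights possible, since free-form prose answers could not be rendered reliably.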

Results from multiple models are aggregated into an overall score with highlighted areas, helping students, educators, and writers understand how their writing may be fingerprinted by AI detection systems.
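One simple way that multi-model aggregation could work is averaging each section's probability across models. This is a sketch under that assumption — the project's actual weighting strategy may differ.

```typescript
// Aggregate per-section probabilities from several models by averaging.
// perModel[m][s] is model m's probability for section s; all models are
// assumed to have scored the same number of sections.
function aggregateScores(perModel: number[][]): number[] {
  if (perModel.length === 0) return [];
  const sectionCount = perModel[0].length;
  const out: number[] = [];
  for (let s = 0; s < sectionCount; s++) {
    let sum = 0;
    for (const scores of perModel) sum += scores[s];
    out.push(sum / perModel.length);
  }
  return out;
}

// Two models scoring the same three sections:
const merged = aggregateScores([
  [0.9, 0.2, 0.5], // model A
  [0.7, 0.4, 0.5], // model B
]);
// merged is approximately [0.8, 0.3, 0.5]
```

Averaging smooths out any single model's bias, which is the point of running the analysis across multiple models before rendering the overall gauge.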

Features

  • 📝 Rich text editor — paste text or import Word (.docx) and OpenDocument (.odt) files
  • 🔍 AI detection analysis — per-paragraph scoring with rationale and linguistic markers
  • 📊 Visual results — overall score gauge, highlighted text, and section-by-section breakdown
  • 🤖 Multi-model support — run analysis across multiple models and aggregate results
  • 🌐 OpenRouter integration — use any model available on OpenRouter
  • 🦙 Ollama support — run analysis using local models
  • ⚙️ Settings panel — configure API keys, endpoints, and model selection in the UI
  • 🔌 Connection testing — verify provider connectivity before running analysis
  • 📋 Analysis history — review past analyses stored in PostgreSQL
  • 🐳 Docker-first — everything runs in containers with hot reload for development

Tech Stack

| Layer | Technology |
| --- | --- |
| Frontend | Next.js 14, React 18, Tailwind CSS, TypeScript |
| Backend | Express, Sequelize, PostgreSQL, TypeScript |
| AI Providers | OpenRouter API, Ollama |
| Infrastructure | Docker Compose, Node 20 Alpine |

Quick Start

Prerequisites

  • Docker with the Docker Compose plugin (the whole stack runs in containers)
  • Git

1. Clone and configure

git clone https://github.com/your-org/llamaudit.git
cd llamaudit
cp example.env .env  # Edit with your API keys

2. Start development

docker compose --profile dev up -d

The frontend will be available at http://localhost:52000.

3. Configure your provider

Open http://localhost:52000/settings and:

  1. Select your AI provider (OpenRouter or Ollama)
  2. Enter your API key or endpoint
  3. Click "Test Connection" to verify
  4. Select your preferred models
  5. Save settings

4. Analyze text

Go to http://localhost:52000, paste or import your text, and click Run Analysis.

Development

Architecture

llamaudit/
├── backend/           # Express + Sequelize + TypeScript API
│   ├── src/
│   │   ├── config/    # Environment & database configuration
│   │   ├── models/    # Sequelize models (Analysis, Settings)
│   │   ├── routes/    # REST API endpoints
│   │   └── services/  # AI provider integrations (OpenRouter, Ollama)
│   └── tests/         # Jest unit & integration tests
├── frontend/          # Next.js 14 + Tailwind CSS
│   ├── src/
│   │   ├── app/       # Next.js app router pages
│   │   ├── components/# React components
│   │   └── types/     # TypeScript type definitions
│   └── tests/         # Jest component tests
├── docker-compose.yml # Docker orchestration (dev/test/prod profiles)
├── run-tests.sh       # Dockerized test runner
└── example.env        # Example environment variables (copy to .env)

API Endpoints

| Method | Endpoint | Description |
| --- | --- | --- |
| GET | /api/health | Health check |
| POST | /api/analysis | Run AI detection on text |
| GET | /api/analysis | List analysis history |
| GET | /api/analysis/:id | Get single analysis |
| DELETE | /api/analysis/:id | Delete an analysis |
| GET | /api/settings | Get current settings |
| PUT | /api/settings | Update a setting |
| PUT | /api/settings/batch | Batch update settings |
| POST | /api/settings/test-connection | Test provider connectivity |
| GET | /api/settings/models | List available models |
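As a sketch, a client call to the analysis endpoint might be assembled like this. Only the endpoint path comes from the table above; the request-body shape (`{ text }`) is an assumption — check the backend routes for the real contract.

```typescript
// Hypothetical helper that builds a request to POST /api/analysis.
// The body field name `text` is assumed, not confirmed from the project.
function buildAnalysisRequest(baseUrl: string, text: string) {
  return {
    url: `${baseUrl}/api/analysis`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text }),
    },
  };
}

const req = buildAnalysisRequest("http://localhost:52000", "Sample essay text.");
// fetch(req.url, req.init) would then submit the text for analysis.
```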

Running Tests

# Run all tests (ephemeral Docker containers)
./run-tests.sh

# View last test output
./run-tests.sh --last

# Run only backend tests
./run-tests.sh --backend

# Run only frontend tests
./run-tests.sh --frontend

Container Commands

# Backend shell
docker compose --profile dev exec backend-dev sh

# Frontend shell
docker compose --profile dev exec frontend-dev sh

# Database CLI
docker compose --profile dev exec db psql -U llamaudit -d llamaudit

# View logs
docker compose --profile dev logs -f

Environment Variables

| Variable | Description | Default |
| --- | --- | --- |
| OPENROUTER_API_KEY | OpenRouter API key | — |
| OPENROUTER_API_URL | OpenRouter API base URL | https://openrouter.ai/api/v1 |
| OPENROUTER_CHAT_MODELS | Comma-separated model list | qwen/qwen3-8b |
| OLLAMA_ENDPOINT | Ollama server URL | http://host.docker.internal:11434 |
| OLLAMA_MODELS | Comma-separated Ollama model list | — |
| DATABASE_* | PostgreSQL connection settings | See example.env |
| FRONTEND_URL | Frontend URL for CORS | http://localhost:52000 |
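Comma-separated model lists such as OPENROUTER_CHAT_MODELS can be split defensively. A minimal sketch, assuming whitespace around commas and empty entries should be tolerated (the second model id below is a placeholder, not a recommendation):

```typescript
// Parse a comma-separated model list (e.g. OPENROUTER_CHAT_MODELS),
// trimming whitespace and dropping empty entries.
function parseModelList(raw: string | undefined): string[] {
  if (!raw) return [];
  return raw
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}

const models = parseModelList("qwen/qwen3-8b, example/model-b");
// models: ["qwen/qwen3-8b", "example/model-b"]
```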

Roadmap

  • Core text analysis with structured output
  • OpenRouter integration with retry logic
  • Ollama integration for local models
  • Multi-model aggregation
  • Settings panel with connection testing
  • Document import (Word, ODT, Markdown)
  • Analysis history with PostgreSQL storage
  • Docker Compose dev/test infrastructure
  • User authentication
  • Batch analysis (multiple documents)
  • PDF import support
  • Export reports (PDF, HTML)
  • Model comparison dashboard
  • Webhook notifications
  • Rate limiting and usage quotas

Contributing

Contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes and add tests
  4. Run ./run-tests.sh to verify
  5. Submit a pull request

License

This project is licensed under the MIT License. See LICENSE for details.
