LLM prompt injection detection for Go applications
-
Updated
Jan 24, 2026 - Go
LLM prompt injection detection for Go applications
IDVoice + ChatGPT iOS demo app
Pack de Scripts Para Pentesting Inicial a Chatbots
IDVoice + ChatGPT Android demo app
A comprehensive security and governance checklist for developers integrating AI and LLMs into production applications.
Prompt Injection Detection in LLaMA-based Chatbots using LLM Guard
Open-source LLM red teaming framework. Security-test any model (Claude, GPT, Llama) for prompt injection, data leakage, etc. 15 probes, 29 prompt converters, LLM-as-judge grading, adaptive red teaming, static code audit. SARIF + JUnit for CI/CD.
Chat-only and whitelist safety plugin for OpenClaw public channels and group chats
Detect and prevent prompt injection attacks on LLM applications
🔒 Safeguard LLM behavior with PromptGuard to detect unseen regressions and ensure reliable outputs amid evolving model updates.
Add a description, image, and links to the chatbot-security topic page so that developers can more easily learn about it.
To associate your repository with the chatbot-security topic, visit your repo's landing page and select "manage topics."