Vex Protocol The trust layer for AI agents — adversarial verification, cryptographic audit trails, and tamper-proof execution
-
Updated
Jan 4, 2026 - Rust
Vex Protocol The trust layer for AI agents — adversarial verification, cryptographic audit trails, and tamper-proof execution
Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.
Implementation of Vocabulary-Based Adversarial Fuzzing (VB-AF) to systematically probe vulnerabilities in Large Language Models (LLMs).
A research framework for simulating, detecting, and defending against backdoor loop attacks in LLM-based multi-agent systems.
🛡️ Enterprise-grade AI security framework protecting LLMs from prompt injection attacks using ML-powered detection
Código y demos para generar exploits de kernel vulnerables y defensas en tiempo real con IA.
Ethically-bounded red team framework for AI-driven social engineering simulation with consent enforcement and identity graph mapping
Formal research on Cognitive Side-Channel Extraction (CSCE) and AI semantic leakage vulnerabilities.
A complete self-hosted AI research platform running on Docker with GPU acceleration. Combines LLM inference, vector search, web search, code execution. and fully searchable logging with Splunk - all running locally.
AI Security Research: Gemini 3.0 Pro S2-Class Exfiltration & Adversarial Robustness. Hardening frontier models against autonomous mutation vectors. NIST VDP / AI Safety Institute compliant.
Add a description, image, and links to the adversarial-ai topic page so that developers can more easily learn about it.
To associate your repository with the adversarial-ai topic, visit your repo's landing page and select "manage topics."