An adversarial LLM that stress-tests AI systems with exploit prompts, uncovering vulnerabilities such as bias, data leaks, and jailbreaks. Designed for ethical AI security research: open-source, developer-focused, and ethics-driven.
Stars: 3 · Forks: 1 · Watchers: 3 · Open Issues: 0
Repository health assessment: no package.json found, so this may not be a Node.js project.
Commits: 24