Back to search
PromptFuzz systematically discovers vulnerabilities in LLM-powered applications through automated adversarial testing. It generates, mutates, and executes attack prompts against any LLM endpoint, then reports findings with severity ratings, reproduction steps, and CI integration.
Stars
0
Forks
0
Watchers
0
Open Issues
0
Overall repository health assessment
No package.json found
This might not be a Node.js project
3
commits
feat: initialize prompt-fuzz project with core engine, attack modules, detectors, and React-based dashboard UI
9f1dd0fView on GitHub