Test your LLM system prompts against real-world attack vectors including prompt injection, jailbreaks, and data leaks.
Stars: 11
Forks: 1
Watchers
Open Issues
Overall repository health assessment: 1.58.2
User: 34 commits
12157a8 Add provided benchmark PDF to assets
c56295b Replace demo PDF with provided benchmark report
886765a Add tracked demo benchmark PDF asset
72be142 Add demo benchmark PDF report
3f128ab Fix provider model loading and add demo output sample
851d0e4 Compress README demo gif
ffbf868 Add demo video planning docs
5ae818c Fix duplicate review queue widget keys
4d1c2de Add README demo gif
d4403ea Update motivation section for clarity and detail
0c2333b Refine project description in README
f61805a docs: update README hero wording
ae9141a docs: center README hero summary
716d407 docs: refine README positioning
25bcd95 feat: prepare next benchmark release