Lightweight prompt injection detector. 22 attack patterns. Blocks jailbreaks before they reach your model.
Stars
1
Forks
0
Watchers
1
Open Issues
2
Overall repository health assessment
No package.json found
This might not be a Node.js project
1
commits
v0.3.0: Add OutputScanner — LLM output scanning without PyTorch
6253f79View on GitHubRewrite README with honest competitive positioning and accurate claims
7f2be2dView on GitHubv0.2.1: Add delimiter injection + base64 payload detection (75 patterns)
1ba9392View on GitHubMerge branch 'main' of https://github.com/manja316/prompt-shield
22bb86dView on GitHub