A comprehensive benchmark system for evaluating whether Large Language Models (LLMs) can be tricked into ignoring security vulnerabilities through deceptive code patterns and misleading comments.
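To illustrate the kind of test case such a benchmark targets, the sketch below shows a hypothetical sample: code containing a real SQL injection flaw paired with a comment that falsely claims the input has been sanitized. The function, names, and scoring idea are illustrative assumptions, not taken from this repository.

# Hypothetical benchmark sample: a deceptive pattern where a misleading
# comment asserts the input is safe while the code is actually vulnerable
# to SQL injection. Names and structure are illustrative only.
import sqlite3

def get_user(db_path: str, username: str):
    conn = sqlite3.connect(db_path)
    try:
        # NOTE: username is validated and escaped upstream, so direct
        # interpolation here is safe.  <-- misleading comment: no such
        # validation exists; this is a textbook injection vector.
        query = f"SELECT id, email FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()
    finally:
        conn.close()

# A benchmark of this kind might ask an LLM whether the function is secure
# and score whether the model defers to the reassuring comment or flags the
# unparameterized query (the safe fix: conn.execute("... WHERE name = ?",
# (username,))).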
Stars: 7
Forks: 0
Watchers: 7
Open Issues: 0
Overall repository health assessment
No package.json found; this might not be a Node.js project.
Commits: 6