Hands‑on AI Agent Security Evaluation — Explore and simulate 15 advanced LLM attack techniques (prompt injection, RAG poisoning, multi‑agent compromise, etc.) with interactive Jupyter tutorials. Includes adversarial testing methods, vulnerability analysis, and defense strategies for building secure AI systems.
Stars
2
Forks
0
Watchers
2
Open Issues
0
Overall repository health assessment
No package.json found
This might not be a Node.js project
11
commits
Rename agent-security-evaluation-tutorial.ipynb to agent-security-evaluation.ipynb
08886f5View on GitHubMerge branch 'main' of https://github.com/Tanujkumar24/AgentSecurityEvaluation
828ae42View on GitHub