A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
Stars
60
Forks
8
Watchers
60
Open Issues
1
Overall repository health assessment
No language data available
No package.json found
This might not be a Node.js project
12
commits
Integrate original content from commit 34e22c38cda06aa49e1ea4d378418abcbeee1c71 with updates from commit 738a94a6c025ac72e9a493268f15c927f8dd7b01
bb56a33View on GitHubUpdate Readme.md to include comprehensive LLM Security Guide as of 2025.
738a94aView on GitHub