Found 25 repositories(showing 25)
requie
A comprehensive reference for securing Large Language Models (LLMs). Covers OWASP GenAI Top-10 risks, prompt injection, adversarial attacks, real-world incidents, and practical defenses. Includes catalogs of red-teaming tools, guardrails, and mitigation strategies to help developers, researchers, and security teams deploy AI responsibly.
Alexanderdunlop
AI prompts that teach Claude, ChatGPT, and Cursor to identify and fix OWASP Top 10 vulnerabilities in code. Transform any AI assistant into a security focused code reviewer.
perplext
Enterprise-grade LLM security testing framework implementing OWASP LLM Top 10 with advanced prompt injection, jailbreak techniques, and automated vulnerability discovery for AI safety research.
regaan
Basilisk — Open-source AI red teaming framework with genetic prompt evolution. Automated LLM security testing for GPT-4, Claude, Grok, Gemini. OWASP LLM Top 10 coverage. 32 attack modules.
cmaenner
Open-source security playbook for AI agents — OWASP-grounded procedures for prompt injection testing, agent audits, LLM risk assessment, and more.
Addy-shetty
PITT is an open‑source, OWASP‑aligned LLM security scanner that detects prompt injection, data leakage, plugin abuse, and other AI‑specific vulnerabilities. Supports 90+ attack techniques, multiple LLM providers, YAML‑based rules, and generates detailed HTML/JSON reports for developers and security teams.
tessera-ops
A curated list of awesome AI security tools, frameworks, and resources. OWASP AI Testing Guide, Agentic AI Top 10, EU AI Act, adversarial ML, LLM red-teaming, prompt injection.
InitiumBuilders
Claris AI — Federated Cortex security engine for OpenClaw agents. Prompt injection defense, zero-day hunting, OWASP LLM coverage. MIT open source.
Wddptesting
Production-ready email AI agent with 11 OWASP-aligned security tests for Groq API. Protects against prompt injection, phishing, data leakage & more.
CloudSec-Jay
Cloud ML/AI security portfolio. Agent security, prompt injection, model supply chain, MLOps integrity. Foundation: Zero Trust IAM, Wazuh XDR, DevSecOps CI pipeline, hardened containers. Every artifact maps to OWASP LLM Top 10, MITRE ATLAS, or ATT&CK — not as an afterthought, but as the design constraint.
Ak-cybe
Comprehensive red team methodology for Web LLM attacks, topics: llm-security, prompt-injection, web-security, red-teaming, owasp, agentic-ai
sunny6300
Senior AppSec engineer embedded in your AI coding workflow — OWASP, LLM security, prompt injection, supply chain risks
empowered-humanity
GitHub Action to scan for AI agent security vulnerabilities — 190+ detection patterns for OWASP ASI Top 10, prompt injection, MCP security, and credential exposure
osayande-infosec
Enterprise AI security risk assessment toolkit - OWASP LLM Top 10 2025, NIST AI RMF, EU AI Act compliance with automated risk scoring and prompt injection detection
toluowo
Adversarial testing framework for evaluating LLM safety, prompt injection vulnerabilities, and jailbreak resilience using structured AI security methodologies. This AI security research focused on LLM adversarial testing, prompt injection defense, and OWASP Top 10 for LLMs evaluation.
Lukog10
A zero-trust AI security framework combining API gateway protection, prompt injection defense, OWASP API scanning, and real-time PII sanitization for secure LLM deployments.
Open-source checklists & templates for EU AI Act compliance, OWASP LLM Top 10 vulnerabilities 2025, NIST AI RMF mapping, generative AI security audits, red-teaming, prompt injection prevention, bias & fairness verification. Free AI governance resources
StateraSolutions
Comprehensive Python library for AI red team testing and LLM security assessment. Features adversarial prompt library, modern web UI, CLI tools, and OWASP/MITRE ATLAS integration.
fboiero
Deep security analysis framework for autonomous AI agent implementations. Analyzes prompt injection, excessive agency, data privacy compliance (GDPR, CCPA, Habeas Data), and more against OWASP LLM Top 10 and NIST AI RMF.
k-celal
An educational repository on designing secure agentic AI systems with prompt injection defenses, tool access controls, data protection, audit logging, human approval flows, and OWASP-aligned security patterns.
Production-ready QA framework for AI/LLM prompt engineering with 8 comprehensive edge case tests and OWASP Top 10 security validation. Includes Flask API, testing suites, and deployment configurations.
agentnode-dev
Security audit for AI agent skills. Detect malicious skills, prompt injection, data exfiltration, supply chain poisoning, two-stage payloads. 61 patterns aligned with OWASP Agentic AI Top 10. Works on Claude, ChatGPT, OpenAI, Gemini, Cursor, OpenClaw, ClawHub.
I-am-Bradley
AI Governance & Security Architecture Proposal (OWASP LLM Top 10): Threat modeling and establishing GRC (Governance, Risk, and Compliance) controls for a Generative AI application. Demonstrates ability to translate adversarial AI risk (Prompt Injection, Data Poisoning) into financial risk metrics for executive stakeholders.
kbajish
Middleware security gateway for LLM applications. 3-layer hybrid detection (rules + ML + LLM) for prompt injection, PII leakage, and jailbreaks. Aligned with OWASP LLM Top 10, EU AI Act, MITRE ATLAS, and BSI IT-Grundschutz.
hariram32
Senior-level AI networking agent with knowledge-first architecture. Ingests protocol docs into a hybrid RAG pipeline (vector + BM25), reasons through an OODA loop with confidence scoring, and executes against live network devices via Scrapli. Security-hardened: SNA trust tiers, GAIT audit trail, prompt injection defense, OWASP agentic threat model.
All 25 repositories loaded