Found 15 repositories (showing 15)
karpie28
SCALE 23x talk: Practical Open Source Security for LLMs — Garak, PyRIT, guardrails, and live demos
Avaly-ai-Corp
Agentic Garak: the first agentic LLM vulnerability scanner, developed by Avaly.ai. Open source and driven by intelligent agents, it is designed to be adapted and extended for broader security scanning.
tyrianinstitute
Unified AI/LLM/Agentic security testing CLI. Orchestrates Promptfoo, Garak, PyRIT under one harness with OWASP Agentic Top 10 coverage and compliance-ready evidence packs.
tmpoulionis
Security benchmarking of low-parameter (< 3B) LLMs using NVIDIA's garak tool.
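A run like the one this repo describes can be scripted directly against garak's command line. The sketch below assumes garak's documented flags (--model_type, --model_name, --probes, --report_prefix), which can vary between versions, and uses example model IDs and probe families that are not taken from the repository.

```python
# Sketch: benchmark a few small (<3B) Hugging Face models with garak.
# Model IDs and probe families here are illustrative examples only.
import subprocess

SMALL_MODELS = ["gpt2", "EleutherAI/pythia-1.4b"]  # example model IDs
PROBES = "dan,encoding"                            # example garak probe families

for model in SMALL_MODELS:
    subprocess.run(
        [
            "python", "-m", "garak",
            "--model_type", "huggingface",
            "--model_name", model,
            "--probes", PROBES,
            "--report_prefix", model.replace("/", "_"),
        ],
        check=True,
    )
```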
280Zo
Experiments with the Garak tool for security testing of LLMs.
rubinfletcher84-commits
Built an AI security lab using Ubuntu, VirtualBox, Ollama, and Garak to scan local LLMs.
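For the scanning step in a lab like this, a minimal sketch follows, assuming garak exposes its Ollama support as --model_type ollama (generator names and flags vary by garak version) and that an Ollama server with the example model tag is already running locally.

```python
# Sketch: point garak at a local model served by Ollama.
# The model tag and probe family are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "python", "-m", "garak",
        "--model_type", "ollama",     # assumed Ollama generator selection
        "--model_name", "llama3",     # example model tag pulled into Ollama
        "--probes", "promptinject",   # example probe family
    ],
    check=True,
)
```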
gengirish
Bayesian attack planning engine for LLM security — turns garak scan results into adaptive red team campaigns
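The core idea of adaptive planning over scan results can be illustrated with a Beta-Bernoulli posterior per probe and Thompson sampling. The sketch below is a generic illustration of that idea, not this repository's engine; the probe names and counts are made up.

```python
# Illustrative sketch: track each garak probe's hit rate as a Beta posterior
# and pick the next probe by Thompson sampling, so probes that keep finding
# failures get prioritized in the next red-team round.
import random

class ProbePosterior:
    def __init__(self):
        self.hits = 1     # Beta(1, 1) prior
        self.misses = 1

    def update(self, hits: int, attempts: int) -> None:
        """Fold in one scan: `hits` failed attempts out of `attempts`."""
        self.hits += hits
        self.misses += attempts - hits

    def sample(self) -> float:
        return random.betavariate(self.hits, self.misses)

def next_probe(posteriors: dict[str, ProbePosterior]) -> str:
    """Thompson sampling: run the probe whose sampled hit rate is highest."""
    return max(posteriors, key=lambda name: posteriors[name].sample())

# Example: seed two probe families with results from a previous scan.
posteriors = {"dan": ProbePosterior(), "encoding": ProbePosterior()}
posteriors["dan"].update(hits=3, attempts=10)
posteriors["encoding"].update(hits=0, attempts=10)
print(next_probe(posteriors))  # usually "dan"
```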
dwain-barnes
Web-based GUI for Garak LLM security scanner. Test local Ollama models with an intuitive interface.
Faishun
A combination of AgentDojo, Garak, Augustus, and a local LLM-as-a-judge (Inspect AI) to thoroughly assess the security of LLMs.
picassoendless
A prototype self-healing LLM security pipeline integrating Garak (https://github.com/NVIDIA/garak) for automated vulnerability discovery and Vulnerability-Driven Patch Synthesis (VDPS) for automated mitigation.
Hossein1998
Automated LLM Security Testing with Garak. This repository contains a suite of tests for evaluating the security and robustness of large language models (LLMs) using Garak, including automated adversarial testing for models such as Falcon 7B, Llama2-7B, GPT-2, and Mistral 7B.
vivashu27
A Python tool that converts Garak JSONL report files into interactive, self-contained HTML dashboards for visualizing LLM security scan results.
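The JSONL-to-summary step could look roughly like the sketch below, which assumes garak report lines are JSON objects with an entry_type field and that eval entries carry probe, detector, passed, and total fields; verify these names against your garak version's reports before relying on them.

```python
# Sketch: reduce a garak JSONL report to per-probe pass rates.
# Field names ("entry_type", "probe", "detector", "passed", "total") are
# assumptions based on recent garak reports and may differ by version.
import json
import sys

def summarize(report_path: str) -> list[dict]:
    rows = []
    with open(report_path, encoding="utf-8") as fh:
        for line in fh:
            entry = json.loads(line)
            if entry.get("entry_type") != "eval":
                continue
            total = entry.get("total", 0) or 1  # avoid division by zero
            rows.append({
                "probe": entry.get("probe"),
                "detector": entry.get("detector"),
                "pass_rate": entry.get("passed", 0) / total,
            })
    return rows

if __name__ == "__main__":
    for row in summarize(sys.argv[1]):
        print(f'{row["probe"]:40s} {row["detector"]:40s} {row["pass_rate"]:.0%}')
```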
Dimitrios79
Adversarial AI Security Lab: LLM red-teaming, RAG poisoning attacks, automated evaluations, and defensive mitigations. Includes Garak-based vulnerability scanning, retrieval hijacking demos, and security-focused agent simulations.
thaaaru
🔴 LLM Red Team Lab: Complete environment for testing LLM security vulnerabilities including prompt injection, jailbreaking, and data exfiltration attacks. Features Ollama, PyRIT, Garak, vulnerable RAG app, and Tor integration for educational security research.
sudosuraj
An automated backend system that exposes an API to perform security testing on REST-based LLMs using Garak, a powerful prompt injection and vulnerability testing tool.
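A hypothetical backend of this shape might wrap garak behind a small HTTP service. The /scan route, request fields, and generator-options schema in the sketch below are illustrative assumptions, not this repository's actual API; garak's REST generator configuration in particular should be checked against its documentation for your version.

```python
# Hypothetical sketch: a small FastAPI service that accepts a target LLM
# endpoint and launches a garak run against it in the background.
import json
import subprocess
import tempfile

from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()

class ScanRequest(BaseModel):
    target_uri: str       # REST endpoint of the LLM under test
    probes: str = "dan"   # example garak probe selection

def run_garak(req: ScanRequest) -> None:
    # Write a generator options file for garak's REST generator; the exact
    # schema depends on the garak version, so this nesting is an assumption.
    options = {"rest": {"RestGenerator": {"uri": req.target_uri, "method": "post"}}}
    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as fh:
        json.dump(options, fh)
        options_path = fh.name
    subprocess.run(
        ["python", "-m", "garak",
         "--model_type", "rest",
         "--generator_option_file", options_path,
         "--probes", req.probes],
        check=False,
    )

@app.post("/scan")
def scan(req: ScanRequest, tasks: BackgroundTasks) -> dict:
    # Kick off the scan asynchronously so the API call returns immediately.
    tasks.add_task(run_garak, req)
    return {"status": "scan started"}
```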