Found 12 repositories (showing 12)
NVIDIA-NeMo
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
rustyorb
AI/LLM Red Team Suite — Automated security testing toolkit for probing language models against prompt injection, jailbreaks, data extraction, and guardrail bypasses
Arnav9923386924
PromptLab is a BSP-driven LLM validation and hardening toolkit that helps teams test, score, and improve model behavior using multi-model council evaluation, adversarial guardrail testing, and iterative BSP linting/optimization through a simple CLI workflow.
inference-stack-llc
Production-grade Python toolkit for AI product engineering — LLM gateway, policy guardrails, RAG eval, agent collaboration, telemetry, and more.
danielmaddaleno
Pluggable guardrails pipeline for LLM apps – PII redaction, prompt injection, toxicity & token budget
PianoKeyDreamer
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
ankur28121982
Production-ready LLM evaluation & guardrails toolkit (provider-agnostic). Generate explainable metrics and ALLOW/WARN/BLOCK recommendations.
AdamsCode1
🛡️ Production-ready LLM evaluation toolkit with CLI tool and real-time guardrails dashboard. Local-first, policy-driven safety monitoring.
sharkfabri
Security testing toolkit for AI applications built on LLMs. Verify that your system prompts, guardrails, and pipelines hold up against known prompt injection techniques.
alex-vbcoding
AI design guardrail toolkit — Make LLMs better at generating consistent, accessible UI code with any design system
saifsysim
Open-source LLM safety audit toolkit. Test any model for hallucinations, citation fabrication, and domain risks. Powered by Guardrail AI.
Adversarial Testing Orchestrator for LM Studio & DeepTeam — A Python toolkit that connects two local LLMs (uncensored attacker + censored defender) to automate jailbreak and guardrail-bypass testing. Uses Hugging Face adversarial prompt datasets, mutation strategies, and DeepTeam’s red teaming framework to generate, execute, and log attacks.