Found 139 repositories (showing 30)
Bit-Pulse-AI
OpenClaw Prompt Shield: Security framework that protects OpenClaw AI agents from prompt injections, data leaks, and dangerous commands using Azure AI Content Safety and Microsoft Purview DLP.
promptshield-io
The security layer for AI prompts. A unified monorepo for detecting and neutralizing adversarial Unicode, invisible character poisoning, and homoglyph attacks in LLM workflows. Includes the GhostBuster engine, VS Code extension, and CLI.
promptshieldhq
Detection and anonymization microservice for the PromptShield stack.
promptshieldhq
A free, open-source LLM security proxy. Drop it between your app and any LLM provider to get rate limiting, audit logging, token tracking, and Prometheus metrics with no code changes to your app.
wagner-group
No description available
Aegis-Logic-Systems
PromptShield: Multi-Layer Prompt Injection Detection for .NET
gmuskan95
Browser extension that detects PII in AI chat inputs and lets you redact before sending.
Pradeep-2901
A 4-class classification system for securing technical LLMs
promptshieldhq
Open source LLM gateway with PII and secret detection built in. Runs on your infrastructure.
varunmahajan1
Prompt injection defense for LLM agents — zero dependencies, pattern-based detection
sidpreneur
No description available
Zero-Harm-AI-LLC
Detect prompt injection, PII leaks, secrets exposure, and unsafe LLM usage in pull requests.
yksanjo
🛡️ AI prompt security and validation tool to protect against prompt injection attacks
Abdulbasith0512
No description available
mergenhan
A simple but functional Chrome extension that monitors input fields, textareas, and editable elements on websites. When forbidden words are detected, it warns the user or blocks submission. Users can manage the blocked-word list from the popup or options page.
10486-JosephMutua
No description available
L0uisHu
No description available
pravin9033
Runtime security firewall for LLM applications.
prabujayant
No description available
Aadhithya-T
PromptShield is a prompt security middleware for LLMs. It uses a fine-tuned DistilBERT classifier to label incoming prompts as safe, unsafe, suspicious, or jailbreak before forwarding them to Google Gemini 2.5 Flash. Built with Python, Transformers, and a vanilla HTML/CSS/JS frontend.
Dhwanit2501
A Context-Aware Prompt Injection Defense System for LLM Chatbots that detects and neutralizes prompt injection attacks before they reach your LLM.
MdAmineTrabelssi
PromptShield
ydmw74
Promptshield for AI Agents
Ashu-pixel08
No description available
Elvis-NChalant
No description available
miozilla
promptshield 🛡️ : Tech & Social Media #Content-Safety
iwaseemrahmani
No description available
Harini2809
AI-powered middleware for detecting prompt injection attacks in LLMs
Bluwii
No description available
bojin-clawflow
PromptShield - AI Agent Runtime Security as a Service