Real-time AI safety guardrails for LLM apps. 10 scanners: prompt injection, PII, harmful content, code vulnerabilities, obfuscation detection. Sub-ms latency. Python + TypeScript SDKs. MCP proxy. Claude Code hooks.
Stars
9
Forks
2
Watchers
9
Open Issues
6
Overall repository health assessment
No package.json found
This might not be a Node.js project
268
commits
Add ResponseSafety and PromptHistory modules (v0.86.0)
876aff3View on GitHubAdd InstructionParser and DataFlowGuard modules (v0.85.0)
a48060dView on GitHubAdd ResponseQuality and SafetyConfig modules (v0.83.0)
12fb9aaView on GitHubAdd OutputFilter and PromptAugmentor modules (v0.82.0)
298d793View on GitHubAdd SemanticValidator and ThreatIntelligence modules (v0.81.0)
c5376a7View on GitHubAdd ToolSafetyGuard and SafetyPipeline modules (v0.80.0)
b886dc5View on GitHubAdd ContextBoundary and SafetyReport modules (v0.79.0)
0e56a67View on GitHubAdd PromptInjectionV2 and SafetyCache modules (v0.78.0)
b8c0bfaView on GitHubAdd PayloadAnalyzer and ComplianceChecker modules (v0.77.0)
807d6adView on GitHubAdd ConversationSafety and ModelGuard modules (v0.76.0)
8032229View on GitHubAdd InputNormalizer and OutputRanker modules (v0.75.0)
c16c2bcView on GitHubAdd TokenEstimator and SafetyLogger modules (v0.74.0)
a6177e6View on GitHubAdd EmbeddingValidator and SafetyEnsemble modules (v0.73.0)
b6ec5b8View on GitHubAdd ContentClassifierV2 and ResponseCoherence modules (v0.72.0)
cede21dView on GitHub