Found 9 repositories (showing 9)
sattyamjjain
Open-source security firewall for AI agents — validates tool calls, strips ghost arguments, enforces type safety, PII masking, RBAC, cost tracking & sandbox isolation. Works with LangChain, OpenAI Agents SDK, PydanticAI & CrewAI.
foersben
A secure, rootless Podman sandbox for running AI coding agents (Antigravity, Cursor) with SSH forwarding and full host isolation.
h30s
A secure “airlock” for AI agents to interact with APIs using risk-based decisions, Auth0 Token Vault, and human-in-the-loop approvals.
es617
Credential-isolating reverse proxy for AI agents. Lets agents call APIs without seeing the keys.
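The credential-isolation pattern this entry describes can be sketched in a few lines: the proxy holds the real API keys server-side, strips whatever credentials the agent supplied, and injects the vaulted key before forwarding. Everything below (the store contents, the function name) is illustrative, not taken from the repo:

```python
# Illustrative sketch of a credential-isolating proxy's core step.
# The agent process never sees SECRET_STORE; it only talks to the proxy.

SECRET_STORE = {  # in practice: a vault or encrypted keystore, not a dict
    "api.example.com": "sk-real-key-123",
}

def prepare_upstream_headers(host: str, agent_headers: dict) -> dict:
    """Strip any credentials the agent supplied and inject the real key."""
    clean = {k: v for k, v in agent_headers.items()
             if k.lower() != "authorization"}
    real_key = SECRET_STORE.get(host)
    if real_key is None:
        # Unknown host: refuse rather than forward without a policy match.
        raise PermissionError(f"no credential configured for {host}")
    clean["Authorization"] = f"Bearer {real_key}"
    return clean
```

A real proxy would do this rewrite on every outbound request, so a leaked agent transcript contains at most the placeholder credentials.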
popivanova
airlock is a cryptographic handshake protocol for verifying AI model identity at runtime. It enables real-time attestation of model provenance, environment integrity, and agent authenticity, without relying on vendor trust or static manifests.
tushar5623
OpenClaw Airlock: A secure authorization gateway for AI agents, enforcing risk-tiered policies, human-in-the-loop approvals, delegated token exchange, and full audit logging. Enables safe automation across GitHub, Gmail, Slack, and other APIs while maintaining governance and trust.
brianmulder
a good fence between you and your coding agents
ComputClaw
A trust boundary between AI agents and infrastructure. Agents write code, Airlock runs it with injected secrets.
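The "agents write code, the boundary runs it with injected secrets" split this entry describes can be sketched with a subprocess whose environment contains only the secrets the runner chooses to inject; the helper name and variables below are hypothetical, not the repo's API:

```python
import os
import subprocess
import sys

def run_agent_code(code: str, secrets: dict) -> subprocess.CompletedProcess:
    """Run untrusted agent code in a child process with a minimal
    environment: PATH plus explicitly injected secrets, nothing else.
    (Illustrative only; a real boundary would add sandboxing, limits,
    and network policy on top of this.)"""
    env = {"PATH": os.environ.get("PATH", ""), **secrets}
    return subprocess.run(
        [sys.executable, "-c", code],
        env=env,
        capture_output=True,
        text=True,
        timeout=10,
    )
```

The agent's source never contains the secret; the runner decides at execution time which credentials each job may see.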
ArtigasChristopher
A robust compliance layer that acts as a secure airlock for AI Agents, preventing sensitive data leaks via reversible tokenization. Built with strict typing and a custom Python backend to bridge the gap between innovation and data privacy.
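Reversible tokenization, as this entry describes it, means swapping sensitive values for opaque tokens before text crosses the boundary and restoring them on the way back. A minimal sketch of the general technique (class and token format are invented here, not the repo's actual code):

```python
import secrets

class TokenVault:
    """Illustrative reversible tokenization: replace sensitive values
    with opaque tokens before text reaches a model, restore afterwards."""

    def __init__(self) -> None:
        self._forward: dict[str, str] = {}  # value -> token
        self._reverse: dict[str, str] = {}  # token -> value

    def tokenize(self, value: str) -> str:
        # Reuse the same token for a repeated value so references line up.
        if value not in self._forward:
            token = f"<PII_{secrets.token_hex(4)}>"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def detokenize(self, text: str) -> str:
        # Restore every known token back to its original value.
        for token, value in self._reverse.items():
            text = text.replace(token, value)
        return text
```

Because the mapping stays inside the compliance layer, downstream systems only ever see tokens, yet authorized callers can recover the originals.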
All 9 repositories loaded