Found 241 repositories (showing 30)
sgasser
AI gets the context. Not your secrets. Open-source privacy proxy for LLMs.
saofund
MarryWise-LLM: AI-Powered Suitor Analysis. "Can you marry this man?" Let AI uncover the secrets of dating.
cxumol
Never give AI companies your secrets! A local LLM-based privacy filter for LLM users. Seamless integration with your existing AI tools as a Python library / OpenAI SDK replacement / API Gateway / Web Server.
OSU-NLP-Group
[TMLR'25] "Is Your LLM Secretly a World Model of the Internet? Model-Based Planning for Web Agents"
tianyi-lab
[ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free"
lechmazur
Multi-Agent Step Race Benchmark: Assessing LLM Collaboration and Deception Under Pressure. A multi-player “step-race” that challenges LLMs to engage in public conversation before secretly picking a move (1, 3, or 5 steps). Whenever two or more players choose the same number, all colliding players fail to advance.
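The collision rule described above is simple to state precisely. A minimal sketch (the function name and dict representation are my own, not from the benchmark's code):

```python
def resolve_round(choices: dict[str, int]) -> dict[str, int]:
    """Apply the step-race collision rule: a player who picks a step
    count (1, 3, or 5) that no one else picked advances by that amount;
    all players who collide on the same number advance by 0."""
    counts: dict[int, int] = {}
    for move in choices.values():
        counts[move] = counts.get(move, 0) + 1
    return {player: (move if counts[move] == 1 else 0)
            for player, move in choices.items()}

# A and B both secretly pick 3 and collide; C advances alone with 5.
print(resolve_round({"A": 3, "B": 3, "C": 5}))
```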
luuyin
Official Pytorch Implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity"
skywalker023
🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory"
CharlieDigital
The only MCP server you need: let your LLM generate and safely execute JavaScript -- including fetch API calls, JSONPath ETL, built-in resiliency, and secrets management
ZhiningLiu1998
[ACL'25 Main] SelfElicit: Your Language Model Secretly Knows Where is the Relevant Evidence! | Help your LLM make better use of context documents: a simple attention-based approach
sail-sg
The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation"
opena2a-org
One command to keep secrets out of AI (LLMs). Works with Claude Code, Cursor, Copilot, Windsurf, and any AI coding tool.
Arthurizijar
The official code implementation of the ACL 2025 paper “A Text is Worth Several Tokens: Text Embedding from LLMs Secretly Aligns Well with The Key Tokens”.
foresturquhart
A lightweight tool that converts directory contents into structured output optimized for LLM interpretation, featuring Git-aware file ordering, secret detection/redaction, token counting, and customizable filtering.
llmsecrets
Protect .env secrets from AI coding assistants. Windows Hello encryption for Claude Code.
huseynovvusal
🤖 AI-powered Git CLI assistant built with Go. Automate commit messages, enforce pre-commit policies, detect secrets, and improve code quality with LLM-based suggestions.
pandalla
Fine-Tune LLM Synthetic-Data application and "From Data to AGI: Unlocking the Secrets of Large Language Model"
lenaxia
A K8s controller that watches your cluster for failures and opens pull requests on your GitOps repository with fixes. Security is a first-class citizen: it runs in-cluster with read-only RBAC, redacts secrets before they reach the LLM, and requires a human in the loop. Formerly known as k8s-mendabot
hexxt-git
Hide secrets in normal-looking text using an LLM
nhtlongcs
This is a proof-of-concept application that utilizes the OpenAI API to embed the secrets of an LLM's knowledge.
azerozero
LLM proxy with built-in DLP and regulatory compliance. Redacts secrets before they reach the API. EU AI Act, GDPR, HDS/PCI DSS ready. Multi-provider failover, live TUI, virtual keys, fan-out. 6 MB, zero deps. Rust.
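The redaction step such a proxy performs can be sketched with a few regex rules. The patterns and placeholders below are illustrative assumptions, not the repo's actual rule set (a real DLP engine would add entropy checks and many provider-specific key formats):

```python
import re

# Illustrative secret patterns only (assumed, not from the repo).
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "[REDACTED_API_KEY]"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_PRIVATE_KEY]"),
]

def redact(prompt: str) -> str:
    """Replace anything matching a secret pattern with a placeholder
    before the prompt is forwarded to the upstream LLM API."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Use key sk-abcdefghijklmnopqrstuvwx to call the API"))
# → Use key [REDACTED_API_KEY] to call the API
```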
jaimemorales52
Spring Boot backend for evaluating Large Language Models on the detection of Indicators of Compromise (IoCs) embedded as secrets in obfuscated JavaScript code. In this implementation, the IoC is an IP address hidden inside transformed JS files. The service exposes REST APIs to query multiple LLM providers and normalize their IoC detection responses.
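The normalization step for the IP-address IoC above can be sketched as follows. This is my own minimal version (the repo is a Spring Boot service; function and variable names here are assumptions): extract IPv4-shaped strings from the de-obfuscated source, then keep only candidates that parse as valid, non-local addresses.

```python
import re
import ipaddress

def extract_ip_iocs(js_source: str) -> list[str]:
    """Pull candidate IPv4 addresses out of (de-obfuscated) JS and
    keep only those that parse as valid, non-private, non-loopback."""
    candidates = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", js_source)
    iocs = []
    for c in candidates:
        try:
            ip = ipaddress.ip_address(c)
        except ValueError:
            continue  # e.g. "999.1.1.1" matches the regex but is invalid
        if not (ip.is_private or ip.is_loopback):
            iocs.append(c)
    return iocs

sample = 'fetch("http://8.8.8.8/beacon"); var local = "127.0.0.1";'
print(extract_ip_iocs(sample))  # → ['8.8.8.8']
```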
andrasfe
Python vulnerability scanner & MCP security toolkit. Real-time dependency checking via OSV/NVD/GitHub Advisory DBs, Docker analysis, secrets detection, MCP config validation, LLM-powered risk assessment & interactive security audits. Full CVE details, CVSS scores & remediation guidance.
fpytloun
Guardrails service for AI agents. Default-deny tool call evaluation with LLM safety analysis, priority-ordered decision matrix, and human-in-the-loop escalations. Session recording, behavioral analysis, MCP proxy, secret redaction, and real-time audit.
yayashuxue
A curated list of projects, tools, and resources for securing AI agent authentication, protecting credentials, and managing secrets in LLM-powered systems.
user1342
A security testing tool designed to evaluate the effectiveness of large language models (LLMs) in protecting secrets and preventing security breaches. With customisable LLM options, the tool allows you to simulate attacks on LLMs using various techniques and observe their defence capabilities.
TheJamesLoy
ScrubDuck is a local-first security tool that strips sensitive data (API keys, PII, passwords) from your source code and replaces them with context-aware placeholders. It allows you to use LLMs for debugging without leaking proprietary secrets.
Shayanthn
Compress LLMs to mobile size without losing accuracy — the industry's best-kept secret is now open-source!
jordan-gibbs
An LLM benchmark based on the popular social deception game, Secret Hitler. Test intelligence, long context planning, logic, and duplicitous capabilities of popular AI models.
deeplearning-wisc
Official repo for ICLR 2025: Your Weak LLM is Secretly a Strong Teacher for Alignment