Found 322 repositories (showing 30)
CyberAlbSecOP
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hacks, AI Prompt Engineering, Adversarial Machine Learning.
Tencent
A full-stack AI Red Teaming platform securing AI ecosystems via OpenClaw Security Scan, Agent Scan, Skills Scan, MCP Scan, AI Infra Scan, and LLM jailbreak evaluation.
iOS17
iOS 17 - 17.7.5, iOS 18 - 18.7.3 & iOS 26 - 26.4 Jailbreak Tools, Cydia/Sileo/Zebra Tweaks & Jailbreak News Updates || AI Jailbreak Finder 👇
General-Analysis
An encyclopedia of jailbreaking techniques to make AI models safer.
PromptLabs
A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection
SlowLow999
Sharing new, strong AI jailbreaks for LLMs from multiple vendors.
BlackTechX011
HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like ChatGPT, LLaMA, and more with the world's most advanced Jailbreak prompts 🔓.
iOS17
The Definitive Guide to the Palera1n Jailbreak Tool: iOS 17 - iOS 26/18 Version Compatibility, Installation Guide, Device Compatibility, Achievements, Research Data, Alternatives & Working Tweak List. AI Jailbreak Finder 👇👇
TheRook
Albert is a general-purpose AI jailbreak for Llama 2 and ChatGPT. Similar to DAN, but better.
AI hacking snippets for prompt injection, jailbreaking LLMs, and bypassing AI filters. Ideal for ethical hackers and security researchers testing AI security vulnerabilities. One README.md with practical AI prompt engineering tips.
successfulstudy
A compiled list of AI jailbreak scenarios for enthusiasts to explore and test.
dobriban
Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety and robustness (jailbreaking, oversight, uncertainty), representations, interpretability (circuits), etc.
UndercodeUtilities
"ACCESS LIST" Bypass collections used during pentesting, gathered in one place. The list types include tools, usernames, passwords, combos, wordlists, Ai Jailbreaks, Dorks and many more.
Th3-C0der
A Th3-GPT prompt/script that will jailbreak ChatGPT and other AI models.
BestAIApps
Bootstra AI Jailbreak for iOS: The World’s First AI-Powered Jailbreaking Tool
0din-ai
0DIN Sidekick is a Firefox/Chromium Add-on/Extension for AI security researchers that streamlines LLM jailbreak testing and vulnerability discovery across multiple providers.
whosdread
Who, me? Yes, I am a Gemini. And so is Google's AI, so use my jailbreaks because why not?
HOLYKEYZ
AI red teaming, jailbreaking, and all forms of adversarial attacks for security purposes
Playing around with various jailbreaking techniques ahead of the Gray Swan AI Ultimate Jailbreaking Competition
kokatesaurabh
Cyber-Jarvis is a versatile AI assistant for automation and cybersecurity. It handles tasks like playing videos, detecting objects, performing OSINT, scanning for vulnerabilities, cracking hashes, steganography, and AI jailbreaking. It integrates various tools for enhanced digital management and security.
TrustAI-laboratory
Research on "Many-Shot Jailbreaking" in Large Language Models (LLMs). It unveils a novel technique capable of bypassing the safety mechanisms of LLMs, including those developed by Anthropic and other leading AI organizations.
B1gN0Se
No description available
SecNode
AISecLists - Your AI Red Teaming Arsenal. Discover a curated collection of prompt lists for diverse AI security assessments, including LLM jailbreaks, prompt injection, information disclosure, and more.
Alibaba-AAIG
Shark Family is an AI safety red-teaming and jailbreak attack module. It harnesses powerful optimization and automated strategies to generate highly effective jailbreak prompts that penetrate diverse model defenses for extreme stress testing.
perplext
Enterprise-grade LLM security testing framework implementing OWASP LLM Top 10 with advanced prompt injection, jailbreak techniques, and automated vulnerability discovery for AI safety research.
Kim-Minseon
Automatic Jailbreaking of the Text-to-Image Generative AI Systems
jeb1399
Jailbreak AI without jailbreaking AI
ECTO-1A
Advanced AI Jailbreak. Uses steganography and Fernet encryption to pass data that is hidden in images to AI models undetected.
randalltr
AI Hacking for Beginners: Learn Prompt Injection, Jailbreaking & Red Teaming Techniques
openclay-ai
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.