Collected molecules will appear here. Add from search or explore.
A curated repository of prompt injection and jailbreaking snippets designed for security researchers and red-teaming exercises.
Defensibility
stars
51
forks
10
The project is a static collection of text-based prompt injection techniques. While useful as a quick reference for security researchers, it lacks any functional moat. Its defensibility is minimal (score 2) as it is essentially a Markdown file that can be trivially cloned or superseded by more comprehensive, frequently updated lists. Frontier labs (OpenAI, Anthropic) pose a 'high' risk because their core business involves patching the very vulnerabilities this cheatsheet documents; as models are updated via RLHF, these specific 'jailbreaks' (like DAN or STAN) often stop working. Competitively, it trails behind larger community efforts like 'JailbreakChat' or professional red-teaming tools like Microsoft's 'PyRIT' or 'Giskard', which provide automated testing frameworks rather than just snippets. With a velocity of 0.0, the project appears stagnant, making its displacement horizon very short as the underlying LLMs evolve.
TECH STACK
INTEGRATION
reference_implementation
READINESS