
Harness ChatGPT's Power: Earn $50,000 with These Proven Strategies
The internet's notorious ‘Pliny the Prompter’ has partnered with HackAPrompt 2.0 to turn AI jailbreaking into a competitive sport.[...]
Pliny the Prompter: The Face Behind AI Jailbreaking
Pliny the Prompter, the internet's most notorious AI jailbreaker, operates in plain sight. He teaches thousands how to bypass ChatGPT's guardrails, convincing AI models to ignore their core purpose of being helpful, honest, and non-harmful.
Now, Pliny is pushing to mainstream digital lockpicking through a collaboration with HackAPrompt 2.0—a jailbreaking competition hosted by Learn Prompting, an educational and research organization focused on prompt engineering. With $500,000 in prize money up for grabs, Pliny is offering winners a spot on his elite "strike team."
How Jailbreaking Works
Jailbreaking large language models is essentially social engineering. Crafty prompts exploit the inherent tension in how these models function—they're trained to be helpful and follow instructions, yet also programmed to refuse certain requests. By finding the right phrasing, jailbreakers can manipulate AI into revealing forbidden information instead of defaulting to safety protocols.
Pliny has honed this craft since at least 2023, cultivating a community dedicated to bypassing AI restrictions. His GitHub repositories, "L1B3RT4S" and "CL4R1T4S," contain an arsenal of jailbreaks and system prompts designed for today's most popular large language models.
The HackAPrompt 2.0 Competition
Structured like a video game season, HackAPrompt 2.0 features multiple tracks targeting different vulnerability categories:
- CBRNE Track: Tests whether models can be tricked into providing false or misleading information about weapons or hazardous materials.
- Agents Track: Focuses on AI agent systems capable of taking real-world actions.
Pliny amplifies the competition's impact through his Discord server "BASI PROMPT1NG," where he regularly demonstrates jailbreaking techniques. His educational approach highlights a critical truth: true AI robustness comes from understanding every possible attack vector—an urgent priority given growing concerns about super-intelligent AI risks.