Echo Chamber, Storytelling Prompts Used to Jailbreak GPT-5 in 24 Hours
Researchers have demonstrated a jailbreaking technique that manipulates a large language model (LLM), in this case GPT-5, into producing harmful content, specifically instructions for creating a Molotov cocktail, without ever issuing an overtly malicious prompt. The finding highlights vulnerabilities in AI systems that can be exploited through creative prompting and storytelling.