Researchers Uncover GPT-5 Jailbreak and Zero-Click AI Agent Attacks Exposing Cloud and IoT Systems
Cybersecurity researchers have uncovered a jailbreak technique that bypasses the ethical guardrails of OpenAI's GPT-5, tricking the model into producing illicit instructions. The method combines the known Echo Chamber attack with narrative-driven steering, gradually guiding the model toward prohibited output through storytelling rather than direct requests.