Red Teams Jailbreak GPT-5 With Ease, Warn It’s ‘Nearly Unusable’ for Enterprise
Red-team researchers have demonstrated that multi-turn 'storytelling' attacks can steer GPT-5 past its prompt-level filters, exposing systemic weaknesses in its safety controls. On the basis of these findings, they warn that the model is 'nearly unusable' for enterprise applications.