ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
Summary
Malicious actors can exploit vulnerabilities in ServiceNow's Now Assist AI platform through second-order prompt injection attacks, allowing unauthorized actions and potential data exfiltration. This issue highlights significant security risks associated with default configurations in generative AI systems.
Original Article Summary
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive
Impact
ServiceNow's Now Assist generative artificial intelligence platform
In the Wild
Unknown
Timeline
Newly disclosed
Remediation
Review and adjust default configurations in ServiceNow's Now Assist to prevent prompt injection attacks. Implement security best practices for generative AI systems.