ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
Overview
Malicious actors can exploit vulnerabilities in ServiceNow's Now Assist AI platform through second-order prompt injection attacks, allowing unauthorized actions and potential data exfiltration. This issue highlights significant security risks associated with default configurations in generative AI systems.
Key Takeaways
- Affected Systems: ServiceNow's Now Assist generative artificial intelligence platform
- Action Required: Review and adjust default configurations in ServiceNow's Now Assist to prevent prompt injection attacks.
- Timeline: Newly disclosed
Original Article Summary
Malicious actors can exploit default configurations in ServiceNow's Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks. The second-order prompt injection, according to AppOmni, makes use of Now Assist's agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive
Impact
ServiceNow's Now Assist generative artificial intelligence platform
Exploitation Status
The exploitation status is currently unknown. Monitor vendor advisories and security bulletins for updates.
Timeline
Newly disclosed
Remediation
Review and adjust default configurations in ServiceNow's Now Assist to prevent prompt injection attacks. Implement security best practices for generative AI systems.
Additional Information
This threat intelligence is aggregated from trusted cybersecurity sources. For the most up-to-date information, technical details, and official vendor guidance, please refer to the original article linked below.
Related Topics: This incident relates to Exploit.