The default configurations within ServiceNow’s Now Assist generative artificial intelligence (AI) platform are vulnerable to exploitation by malicious actors who can leverage its inherent agentic capabilities to launch prompt injection attacks. Specifically, a technique termed “second-order prompt injection,” identified by AppOmni, exploits the mechanism where Now Assist agents can discover and recruit one another. This allows the attack to unfold behind the scenes, enabling unauthorized actions like copying and stealing sensitive corporate data, altering records, and escalating privileges—all without the victim organization’s immediate knowledge.
According to AppOmni’s Chief of SaaS Security Research, Aaron Costello, this alarming discovery is not due to a flaw or bug in the AI itself, but rather a direct result of the system’s expected behavior defined by its default configuration settings. The problem is rooted in the platform’s core architecture, which, by default, groups agents into the same team, marks them as discoverable when published, and uses a large language model (LLM) that supports cross-agent communication. These settings, while intended to facilitate helpful agent-to-agent collaboration and automate functions like help-desk operations, inadvertently create an avenue for serious security risks.
The attack scenario works when a benign, low-privilege agent parses a specially crafted malicious prompt that has been embedded into content it is allowed to access. Once parsed, this prompt quietly compels the innocuous agent to recruit a more potent, privileged agent on its team. The recruited, powerful agent is then used to perform harmful actions such as reading or changing sensitive records, exfiltrating data, or sending unauthorized emails, even if standard built-in prompt injection protections are active. Crucially, the agents run with the privileges of the user who initiated the original interaction, not the user who created and planted the malicious prompt, making the resulting actions highly impactful.
ServiceNow has acknowledged the behavior, stating it is intended, but has since updated its documentation to provide greater clarity regarding the settings. This situation underscores a critical need for strengthening AI agent protection as enterprises rapidly integrate these capabilities into their workflows. To effectively mitigate the risk of such prompt injection threats, organizations utilizing Now Assist are strongly advised to take corrective actions.
These mitigation strategies include configuring a supervised execution mode for any agents handling privileged tasks, disabling the autonomous override property (“sn_aia.enable_usecase_tool_execution_mode_override”), segmenting agents by team to limit cross-communication, and implementing stringent monitoring of AI agents for any suspicious or unexpected behavior. Security experts warn that organizations using Now Assist’s AI agents that fail to closely scrutinize and adjust these default configurations are likely already exposed to unnecessary risk.
Reference:






