In early 2024, Microsoft was notified of a vulnerability that could allow the theft of sensitive user information. The flaw, which has since been patched, affected Microsoft 365 Copilot and opened the door to ASCII smuggling.
By exploiting the vulnerability, an attacker could fashion a reliable exploit chain by stringing together several attack methods:
- Induce prompt injection via malicious, concealed content in a document shared through chat,
- Leverage a prompt injection payload to command Microsoft 365 Copilot to find more emails and documents,
- And deploy ASCII smuggling to lure the target into clicking a link that exfiltrates sensitive data to a third-party server.
As a result of such an attack, sensitive information in emails (including multi-factor authentication codes) could be sent to a server controlled by the malicious actor.
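The smuggling step relies on the Unicode Tags block (U+E0000–U+E007F), whose code points mirror printable ASCII but render as nothing in most user interfaces. The following Python sketch is illustrative only, not the actual exploit code; the names and the example link are invented for demonstration:

```python
# Unicode Tags block: adding 0xE0000 to a printable ASCII code point
# yields a character that is invisible in most UIs but survives
# copy/paste and can be embedded in hyperlink text.
TAG_OFFSET = 0xE0000

def smuggle(payload: str) -> str:
    """Encode printable ASCII as invisible Unicode Tags characters."""
    return "".join(chr(TAG_OFFSET + ord(c)) for c in payload)

def reveal(text: str) -> str:
    """Recover any smuggled payload hidden inside a string."""
    return "".join(
        chr(ord(c) - TAG_OFFSET)
        for c in text
        if TAG_OFFSET <= ord(c) <= TAG_OFFSET + 0x7F
    )

# A link that looks clean to the user but silently carries hidden data
# (example.com and the parameter name are placeholder values):
link = "https://example.com/track" + smuggle("mfa_code=123456")
print(reveal(link))  # mfa_code=123456
```

In the reported attack chain, the prompt-injection payload instructed Copilot to build such a link, so the data the user appeared to click through was invisible to them but fully readable by the attacker's server.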
Security leaders weigh in
Stephen Kowski, Field CTO at SlashNext Email Security+:
“This ASCII smuggling technique highlights the evolving sophistication of AI-enabled attacks, where seemingly innocuous content can conceal malicious payloads capable of exfiltrating sensitive data. To protect against such threats, organizations should implement advanced threat detection systems that can analyze content across multiple communication channels, including email, chat and collaboration platforms. These solutions should leverage AI and machine learning to identify subtle anomalies and hidden malicious patterns that traditional security measures might miss. Additionally, continuous employee education on emerging threats and the implementation of strict access controls and data loss prevention measures are crucial in mitigating the risks posed by these innovative attack vectors.”
Jason Soroko, Senior Fellow at Sectigo:
“The ASCII smuggling flaw in Microsoft 365 Copilot is a novel vulnerability that allows attackers to hide malicious code within seemingly harmless text using special Unicode characters. These characters resemble ASCII but are invisible in the user interface, allowing the attacker to embed hidden data within clickable hyperlinks. When a user interacts with these links, the hidden data can be exfiltrated to a third-party server, potentially compromising sensitive information, such as MFA one-time password codes.
“The attack works by stringing together multiple methods: First, a prompt injection is triggered by sharing a malicious document in a chat. Then, Copilot is manipulated to search for more sensitive data, and finally, ASCII smuggling is used to trick the user into clicking on an exfiltration link.
“To mitigate this risk, users should ensure their Microsoft 365 software is updated, as Microsoft has patched the vulnerability. Additionally, they should exercise caution when interacting with links in documents and emails, especially those received from unknown or untrusted sources. Regular monitoring of AI tools like Copilot for unusual behavior is also essential to catch and respond to any suspicious activity quickly.
“What needs to be reported on more often is the tactic of prompt injections. A prompt injection is a type of attack where an attacker manipulates an AI system, such as a large language model, by crafting specific inputs (or “prompts”) that cause the AI to perform unintended actions. In the context of AI-driven tools like Microsoft 365 Copilot, a prompt injection can involve embedding malicious instructions within a document or message. When the AI processes these inputs, it mistakenly interprets them as legitimate commands, leading to actions like retrieving sensitive information, altering responses, or even exfiltrating data.
“The essence of a prompt injection attack is that it takes advantage of the AI’s ability to interpret and act on natural language inputs, causing it to carry out operations that the user or system owner did not intend. This can be particularly dangerous when the AI has access to sensitive data or controls within a system.”
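One concrete check that the detection and monitoring measures described above could include is scanning inbound content for the invisible Tags-block characters the technique abuses. A minimal sketch of such a pre-processing filter (the function names are illustrative, not any vendor's product):

```python
# Detect and strip invisible Unicode Tags characters (U+E0000-U+E007F)
# before content reaches an AI assistant or is rendered as a link.
TAG_RANGE = range(0xE0000, 0xE0080)

def contains_smuggled_text(text: str) -> bool:
    """Flag content that carries characters from the Tags block."""
    return any(ord(c) in TAG_RANGE for c in text)

def strip_smuggled_text(text: str) -> str:
    """Remove Tags-block characters, leaving visible content intact."""
    return "".join(c for c in text if ord(c) not in TAG_RANGE)
```

Because legitimate documents almost never contain Tags-block characters, flagging their mere presence carries little false-positive risk and makes an inexpensive addition to data loss prevention pipelines.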