Copilot's No-Code AI Agents Liable to Leak Company Data


Artificial intelligence (AI) agents are a breeze to create using Microsoft Copilot Studio, and nearly as easy to manipulate into divulging sensitive corporate data.

Despite broad security concerns about AI agents, Microsoft decided last year to let even entirely nontechnical users deploy their own autonomous bots. No coding knowledge is required: using a simple graphical interface, employees can spin up agents that automate business processes, integrate with other business platforms, and perform customer-facing functions.

There's a certain lack of shock factor, then, in a new Tenable report detailing just how insecure these agents can be. In a simple experiment, researchers created a basic agent, then demonstrated how easily it could be coaxed into spilling private data and handing attackers other dangerous capabilities.

"These tools can naively become a massive risk due to their level of access, ability to perform actions, and ability to be easily manipulated," warns Keren Katz, senior group manager of AI security product and research at Tenable. "As a result, agentic tools, combined with inherent LLM vulnerabilities, can quickly lead to data exposure and workflow hijacking."

Often, the risk with new technology is complexity: novice users install it incorrectly or miss an important security setting. Here, the problem is the opposite. Copilot Studio agents are so simple to deploy that users without basic cybersecurity training are likely to forget they need any kind of layered protection at all.

For example, consider a mundane use for an AI agent: a travel agency that wants to save time on calls by having customers book their trips with a chatbot. Tenable created exactly such an agent, allowing hypothetical customers to book and edit reservations, check pricing and availability, and receive summaries of their itineraries. The researchers then connected the bot to a SharePoint file containing fake customers' names and credit card details.

To account for nosy customers, the researchers explicitly instructed the bot never to allow one customer to access data pertaining to any other. "This is a critical security mandate," they wrote in bold text in its system prompt.
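
To make the shape of that setup concrete, here is a minimal sketch of the pattern in Python. It is an illustration only, not Tenable's agent or Copilot Studio's internals, and the names in it (lookup_booking, the sample records) are hypothetical. The point it makes: the only thing separating one customer from another's records is an instruction in the prompt, while the data-access tool itself can read every row.

    # Illustrative sketch of the pattern, not the actual Copilot Studio agent.
    # The guardrail lives entirely in the prompt; the tool enforces nothing.

    SYSTEM_PROMPT = (
        "You are a travel booking assistant. "
        "Never allow one customer to access data pertaining to any other customer. "
        "This is a critical security mandate."
    )

    # Hypothetical stand-in for the connected SharePoint file of fake customers.
    BOOKINGS = {
        "Alice": {"card": "4111 1111 1111 1111", "trip": "Lisbon", "price": 1450},
        "Bob": {"card": "5500 0000 0000 0004", "trip": "Osaka", "price": 2300},
    }

    def lookup_booking(customer_name: str) -> dict:
        # Returns whatever record the model asks for; the tool has no idea
        # which customer is actually on the other end of the chat.
        return BOOKINGS.get(customer_name, {})

Because the model decides which arguments to pass to that tool, a sufficiently persuasive message is all it takes for it to call lookup_booking with someone else's name. The instruction in the prompt is the only line of defense, and prompt injection exists precisely to talk models out of such instructions.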

Once the bot was set up, it took no time to tear it back down. The researchers first tried a rudimentary prompt injection message to get the agent to reveal what actions it could perform, and it dutifully complied. Then, in spite of the "critical security mandate," it did not hesitate to hand over information about other customers when asked. Playing the role of an ordinary, unprivileged customer, the researchers immediately obtained other customers' names, credit card details, and related booking information.

Next, the researchers highlighted how dangerous it is to give customers even indirect power to edit data through an AI agent. Again playing the role of a customer, they prompted the travel bot to update their vacation booking to cost $0. A single-sentence prompt did the trick.
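
The same pattern explains the $0 booking. Below is a hedged sketch, again with hypothetical names and no relation to the real Copilot Studio plumbing: when an agent is given an update tool, that tool typically trusts whatever values the model extracts from the conversation, so a price the customer asserts becomes a price the system writes.

    # Illustrative only: an "edit" tool that trusts model-supplied values.
    BOOKINGS = {"Alice": {"trip": "Lisbon", "price": 1450}}

    def update_booking(customer_name: str, new_price: float) -> None:
        # No authorization check and no validation: if a one-sentence prompt
        # convinces the model to call this with new_price=0, the record changes.
        BOOKINGS[customer_name]["price"] = new_price

    update_booking("Alice", 0)  # the "my vacation now costs $0" outcome

The fix belongs at that layer, not in the prompt: validate the value, check who is asking, and require the same controls a human agent would face before changing a record.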

Asked whether the bot's security instructions could have been improved, Katz offers a definitive no. "This is a built-in implementation issue, not a configuration issue," she says.

She also warns that although this experiment was specific to Microsoft, "we believe the underlying problem is endemic to other AI platforms. Any platform that provides an easy way to build an AI agent, especially one that connects the LLM to external tools to perform real-world tasks, is operating with the same fundamental vulnerability risk."

What makes matters worse is the reality of "shadow AI" today. Because autonomous agents are so quick and easy to create, employees of midsize and large organizations are deploying them in large numbers at their own discretion, usually outside the visibility of security teams. Katz claims that most enterprises are unaware that dozens or even hundreds of agents are currently active in their environments. For example, she recalls that "we had a recent scenario where a customer switched from one enterprise AI company to another and when [we looked into it], we found several dozen instances of the old AI vendor."

"Organizations need a centralized view that automatically maps exactly which systems and data stores an agent can interact with before and during deployment. This visibility enables security teams to identify misconfigurations, like an agent having broad 'edit' permissions, and take action before sensitive data and systems are compromised," she says. Organizations can also monitor requests made to AI agents, and the actions those agents then perform, on an ongoing basis.

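What that centralized view looks like is product-specific, but the underlying check is simple to express. The sketch below is hypothetical, assumes an agent inventory the organization has already gathered, and is not a Tenable or Microsoft API: each known agent is listed with the data store it can reach and the permission it holds, and anything with write access to a sensitive store gets flagged for review.

    # Hypothetical inventory format; an illustration of the audit, not a real product API.
    AGENTS = [
        {"name": "travel-bot", "store": "Customer bookings (SharePoint)", "sensitive": True, "permission": "edit"},
        {"name": "faq-bot", "store": "Public FAQ pages", "sensitive": False, "permission": "read"},
        {"name": "expense-bot", "store": "Finance records", "sensitive": True, "permission": "read"},
    ]

    def flag_risky_agents(agents):
        # Flag any agent that can modify a sensitive data store.
        return [a["name"] for a in agents if a["sensitive"] and a["permission"] == "edit"]

    print(flag_risky_agents(AGENTS))  # ['travel-bot']
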
Katz encourages companies to make use of tools like Copilot, "provided they are aware of the risks and proactively manage the security of agents within their enterprise environment. The goal is to innovate safely by treating these systems with the same rigor as other critical technologies."

Microsoft declined to comment on this story.


Reference: https://www.darkreading.com/
