A year ago, “shadow AI” mostly meant an analyst pasting a draft contract into ChatGPT. It was a data-leakage problem dressed up as a productivity one, and the answer was a stricter acceptable-use policy plus a corporate licence for an approved chatbot. The category has changed shape since then. The Lenovo Work Reborn Research Series 2026, published on 1 May, found that between one-fifth and one-third of employees now use AI outside any IT oversight, and that 31% have received no employer-provided training at all. More importantly, those workers are no longer typing into a chat box. They are spinning up agents that read calendars, query CRM records, fire off emails, and call APIs on systems the IT team did not register and cannot see. Shadow AI is autonomous now. The question is whether your governance is.
That shift explains the wave of “AI control plane” launches in early May. ServiceNow used Knowledge 2026 on 6 May to position itself as “the governance layer for the enterprise, regardless of where AI agents are built, deployed, or operating.” Microsoft Agent 365 went generally available for commercial customers on 1 May, billed as the way to discover, govern, and secure AI agents across Microsoft, third-party SaaS, cloud, and local environments. Google Cloud Next 2026 made the same pitch under a different name: the Gemini Enterprise Agent Platform, with a governance stack on top. Three vendors, one week, one message: there is now a software product called “AI agent governance,” and your CIO should buy it.
The product is real, the engineering is impressive, and most large organisations probably do need a control plane of some kind. But buying one is not the same thing as solving the underlying problem, and the rush to procure governance can paper over a more uncomfortable truth. Almost every shadow AI risk in your business is a policy and operating-model failure that a control plane will surface, not fix.
What is shadow AI in 2026?
Shadow AI is the use of artificial intelligence tools, agents, and integrations inside an organisation without the knowledge, approval, or oversight of the IT and security functions. In its 2024 form, it usually meant employees pasting confidential text into consumer chatbots. In 2026, it more commonly means autonomous agents built on Microsoft Copilot, Salesforce Agentforce, custom GPTs, or vibe-coded scripts wired into corporate APIs, running on schedules nobody approved, reading data nobody mapped, and writing into systems of record without an audit trail. Lenovo’s data puts the practical scope at roughly one in three workers. IBM’s most recent Cost of a Data Breach research found shadow AI added an average of $670,000 to incident costs and that only 37% of organisations had detection or governance policies in place. The 2024 framing still applies as a baseline, but the risk surface is now larger, faster, and harder to inventory.
The most useful working definition for a board paper today distinguishes three layers: human-in-the-loop AI (a person reviewing every action), supervised agent AI (a person reviewing exceptions), and autonomous agent AI (no human in the loop on routine actions). Shadow AI in 2026 increasingly sits in that third layer, which is also the layer your existing acceptable-use policy almost certainly does not cover.
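To make that three-layer distinction concrete enough to draft against, here is a minimal policy-as-code sketch. The tier names and the rule binding autonomy to data sensitivity are illustrative assumptions of ours, not any vendor’s schema or any regulator’s taxonomy.

```python
from enum import Enum


class AutonomyTier(Enum):
    HUMAN_IN_THE_LOOP = 1   # a person reviews every action
    SUPERVISED_AGENT = 2    # a person reviews exceptions only
    AUTONOMOUS_AGENT = 3    # no human review of routine actions


class DataTier(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    REGULATED = 4           # personal, financial, or contractual data


# Illustrative board-approved rule: the more autonomous the agent,
# the less sensitive the data it may touch.
MAX_DATA_TIER = {
    AutonomyTier.HUMAN_IN_THE_LOOP: DataTier.REGULATED,
    AutonomyTier.SUPERVISED_AGENT: DataTier.CONFIDENTIAL,
    AutonomyTier.AUTONOMOUS_AGENT: DataTier.INTERNAL,
}


def is_permitted(autonomy: AutonomyTier, data: DataTier) -> bool:
    """True if an agent at this autonomy tier may touch this data tier."""
    return data.value <= MAX_DATA_TIER[autonomy].value
```

The point of writing the policy down this way is that it becomes testable: an autonomous agent touching regulated data fails the check, whoever built it and wherever it runs.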
Why are vendors racing to ship a control plane?
Because the buyers asked for one, and because the platform vendors saw a category they did not own. ServiceNow’s Autonomous Security and Risk launch on 6 May folded the Veza Access Graph and Armis asset intelligence into a single product that governs agent identities, permissions, and connected assets across the enterprise. The same announcement extended ServiceNow’s AI Control Tower into Microsoft Agent 365 and into NVIDIA’s Enterprise AI Factory, so that agents built in Microsoft’s ecosystem and agents running in the data centre both report into one pane of glass.
Microsoft, for its part, used the 1 May general availability of Agent 365 to make the same claim from inside its own ecosystem. Google answered with the Gemini Enterprise Agent Platform at Cloud Next. The market signal is unmistakable. Boards have started asking executives where their AI agents are, what those agents can access, and who is liable when one misfires, and no executive wants to answer that with a spreadsheet. The Lenovo data puts numbers on that boardroom anxiety: 61% of IT leaders report that AI is increasing cybersecurity risk, while only 31% feel confident addressing it. A “control plane” answers a procurement question that did not exist eighteen months ago.
What does an AI agent control plane actually do?
An AI agent control plane is a software layer that catalogues every AI agent operating in or against an organisation’s systems, assigns each agent an identity and a scope of permissions, monitors what it does, and enforces policy when it strays. In practice, a working control plane needs four capabilities. The first is discovery: finding agents that the IT team did not deploy, including the ones embedded inside SaaS products. The second is identity: assigning each agent a non-human identity with credentials that can be rotated and revoked. The third is authorisation: binding that identity to a least-privilege scope of data and actions, ideally based on the same access model used for human users. The fourth is observability: logging every action with enough context for an auditor or an incident responder to reconstruct it.
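As a sketch of what those four capabilities imply for the underlying data model, a single registry entry might look like the record below. Every field name here is ours, assumed for illustration rather than taken from ServiceNow, Microsoft, or Google.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class AgentRecord:
    # Discovery: where the agent was found and whether IT deployed it
    agent_id: str                 # non-human identity with rotatable credentials
    discovered_in: str            # e.g. "Salesforce", "SaaS embed", "custom script"
    deployed_by_it: bool          # False means shadow agent until triaged

    # Identity and ownership: a business sponsor, not just a platform team
    owner: str | None             # None is itself a policy failure

    # Authorisation: least-privilege scope, mirroring the human access model
    allowed_data: list[str] = field(default_factory=list)     # e.g. "crm.opportunity:read"
    allowed_actions: list[str] = field(default_factory=list)  # e.g. "email.send"

    # Observability: enough context to reconstruct any action after the fact
    last_action_at: datetime | None = None
    audit_log_ref: str | None = None  # pointer into the central log store
```

Nothing in that record is exotic. The hard part is keeping the owner and scope fields truthful over time, which is an organisational task, not a technical one.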
ServiceNow, Microsoft, and Google all advertise versions of those four capabilities. The category itself is genuine, the technology is converging fast, and most enterprises will need at least one of these platforms by the end of 2027. The interesting questions for the next twelve months are not whether to buy, but how the pieces fit together, whether the control plane sits inside one vendor’s ecosystem or across all of them, and how it integrates with the access governance you already run for human identities.
Why does buying a control plane not fix your shadow AI problem?
Because a control plane is software, and shadow AI is mostly an organisational failure. Three patterns keep showing up in our client work, and together they make the point. The first is the policy void. Most companies still write AI acceptable-use policies that assume a human is at the keyboard. They prohibit pasting customer data into ChatGPT but say nothing about an agent doing the same thing on a schedule, and a control plane cannot enforce a rule that has not been written.
The second is the ownership gap. When discovery turns up 400 unsanctioned agents, somebody has to decide which to keep, which to retire, and who owns each one going forward. That decision sits with a business sponsor, not with the platform team, and most organisations have no map of who that sponsor is for an agent that crosses sales, marketing, and finance.
The third is the change-management debt. Most of the agents your control plane will surface were created because an approved tool was too slow, too restrictive, or too unfamiliar. Unless you address the original demand, employees will route around the control plane the same way they routed around procurement five years ago. None of these problems are solved by a purchase order. They are solved by a written policy, an owner per agent, and an honest conversation about why people built the agents in the first place.
How should governance and tooling evolve together?
The right sequence is policy first, discovery second, controls third, enforcement fourth, and then iterate. Policy clarity is the cheap part and the part most organisations skip. A written, board-approved statement that distinguishes between human-in-the-loop AI, supervised agent AI, and autonomous agent AI, with clear data-handling tiers for each, is achievable in a fortnight if the right people are in the room. Discovery is what the new control planes do well, and most companies will be surprised by what they find.
The controls stage is where the platform and the policy meet. Each surfaced agent is assigned an identity, a scope, and an owner, and the ones that fail the policy test are retired or rebuilt under sanctioned constraints. Enforcement closes the loop, with logging, alerting, and periodic audits that sit inside the same risk register as access reviews and privileged-account monitoring. The temptation in May 2026 will be to reverse this sequence: buy the platform, run discovery, and then write the policy backwards from the data, which produces a policy that codifies whatever was already happening rather than what should be allowed. That is a worse outcome than no policy at all, because it gives the board a written document that legitimises an unsafe operating model.
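Tying the sketches above together, the enforce step reduces to a check like the one below. It assumes the hypothetical AgentRecord, AutonomyTier, and DataTier from earlier, and it assumes a human reviewer has already classified each agent into a tier, because that judgement is precisely the work no platform does for you.

```python
def enforcement_check(agent: AgentRecord,
                      autonomy: AutonomyTier,
                      touches: DataTier) -> str:
    """One pass of the enforce step: keep, remediate, or retire."""
    if agent.owner is None:
        return "retire"        # no business sponsor, no agent
    if not is_permitted(autonomy, touches):
        return "remediate"     # rebuild under sanctioned constraints
    if agent.audit_log_ref is None:
        return "remediate"     # an agent running without logs fails the audit test
    return "keep"
```

Run in the policy-first order, that check encodes what should be allowed. Run backwards from discovery data, it merely codifies what was already happening.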
What does this mean for CRM AI agents specifically?
CRM is where shadow AI becomes shadow revenue. Salesforce Agentforce, HubSpot’s Breeze agents, and Dynamics 365 Copilot are all designed to be configured by a business user, often a sales-operations or marketing-operations lead, not by IT. That is by design, and it is one of the better aspects of the current platform generation. It is also exactly how a fleet of unsanctioned agents enters a customer-facing system without ever crossing an IT review board.
The risk is concrete. An agent that updates opportunity stages on rules nobody documented can corrupt a forecast for an entire quarter before anyone notices. A Breeze workflow that emails prospects without a sender review can damage deliverability across the marketing domain. A Copilot action that writes back to Dynamics with permissions inherited from a power user who left the company can survive long after the relevant access review should have caught it. A control plane will eventually surface those agents, but it will only know what to flag if the CRM’s own administrative policy has been updated to define what an “agent” means in the context of the pipeline, the customer record, and the outbound channel. Most CRM administration handbooks were written before that question existed; updating them is the right place to start the work, not the place to finish it.
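As one concrete illustration of that write-back risk, the sketch below flags agents still operating on credentials inherited from someone who has left. The two inputs are stand-ins for whatever identity directory and credential inventory you actually run, not any CRM platform’s API.

```python
def orphaned_crm_agents(agents: list[AgentRecord],
                        active_users: set[str],
                        credential_owner: dict[str, str]) -> list[AgentRecord]:
    """Flag CRM agents acting on a departed employee's permissions.

    credential_owner maps agent_id to the human whose access the agent
    inherited; active_users is the current employee directory. Both are
    illustrative inputs, not a vendor interface.
    """
    return [a for a in agents
            if credential_owner.get(a.agent_id) not in active_users]
```

A quarterly run of a check like that, fed by the HR leaver list, would catch the Dynamics example above long before an annual access review does.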
The Sirocco perspective
We spend a lot of our time inside Salesforce, HubSpot, and Dynamics 365 orgs that have already turned on agent capabilities, and the pattern is consistent. The platform team has done its job, the agents are running, and somebody senior has just realised they cannot answer two questions: how many of these are there, and what can they do. A control plane will help with the first question, eventually. The second question is governance work that no software can do for you.
Our usual recommendation is to draft the AI agent policy and the data-handling tiers before the procurement decision, not after, even if the first draft of the policy is rough. It costs less, it surfaces the political disagreements early, and it gives you a yardstick against which to evaluate ServiceNow, Microsoft, Google, and the inevitable next entrant. If you need a partner who has no vendor allegiance and who will say “the platform is the easy part” before they say anything else, that is the conversation we are set up to have. Schedule a consultation if you would like a 60-minute discovery on where your CRM agent footprint actually sits today.
Get in Touch
If you have already turned on AI agents inside Salesforce, HubSpot, or Dynamics 365 and cannot easily say who owns each one or what data they can touch, the conversation below is the right place to start. Tell us what you have running and what you wish you knew, and we will come back with a short, honest read.
