Microsoft’s 2026 Release Wave 1 for Dynamics 365 went live on 1 April, and if you have been following the Copilot roadmap, you might be tempted to file it alongside every previous update: more AI suggestions, a few new dashboards, another incremental step toward the intelligent enterprise Microsoft has been promising since 2023. That reading would be a mistake.
Wave 1 is the release where Microsoft’s AI agents move from advisory to operational. The agents introduced in this wave can close loops without human confirmation. Across Sales, Finance, Supply Chain Management, Field Service, Business Central, and Contact Center, agents are now built to research, score, disqualify, communicate, schedule, reconcile, and act without waiting for a user to click confirm. The span of that list is what makes this release categorically different from anything that preceded it under the Copilot banner.
That shift carries real implications for organisations running Dynamics 365, not because the capabilities are unwelcome, but because D365 sits across both the CRM and ERP layers simultaneously. The governance challenge this release creates is of a different order than anything the Copilot era prepared organisations for, and the industry conversation has not yet caught up with that reality.
What Microsoft Shipped in Dynamics 365 Wave 1
The scope of Wave 1 is broader than typical release communications suggest, and it is worth mapping out in concrete terms before getting to the governance argument. Across the Dynamics 365 portfolio, Microsoft has introduced or significantly matured autonomous agents in nearly every major module, with each agent designed to act rather than advise.
In Dynamics 365 Sales, agents now research and score leads by drawing on CRM signals alongside Microsoft Graph data: email threads, calendar activity, and meeting history factor into qualification decisions without a sales representative manually updating records. Low-intent leads can be disqualified and personalised outreach proposed, all without human instruction triggering the process. In Customer Service, agents handle case management, email triage, customer intent classification, and knowledge management with a degree of autonomy that goes well beyond previous Copilot capabilities.
In Finance, the Finance Agent performs reconciliation, variance analysis, and Excel data preparation. It also communicates with customers about outstanding invoices directly through Outlook, without a finance team member composing or approving the message in advance. In Field Service, the Scheduling Operations Agent makes dispatch decisions autonomously in response to changing conditions on the ground. In Business Central, the platform itself has been explicitly redesigned around agentic ERP as its organising principle, with agents automating sales order and purchase order workflows end to end.
In warehouse operations, AI-driven picking, stock rebalancing, and hands-free scanning operate without manual triggers. In the Contact Center, agents handle containment and assisted service across every channel, including new and emerging ones. Copilot Studio, the platform layer for building and deploying custom agents, has received investment in multi-agent orchestration, advanced governance controls, and real-time risk assessment in this same wave.
The pattern across all of these is consistent: the agent acts, and the human reviews the outcome rather than authorising the action in advance. That is the shift Wave 1 makes real at scale.
Why This Is Different from Every Previous Copilot Update
Every Copilot update since the original 2023 launch has followed the same model: the AI surfaces a recommendation and the human decides whether to act on it. Draft this email. Summarise this meeting. This lead looks promising based on recent activity. The agent was, in functional terms, an intelligent autocomplete sitting inside a human workflow. The human remained the decision-making unit. The AI removed friction from a step the human was already taking.
Wave 1 breaks that model in a specific and consequential way. When a sales agent disqualifies a lead in the new D365 Sales configuration, it does not flag the lead for human review. It updates the record and moves on. When the Finance Agent completes a reconciliation run, it acts on the variance data rather than presenting it for sign-off. When the Scheduling Operations Agent re-routes a field engineer’s day, it does so based on its own assessment of the relevant conditions, not on a dispatcher’s explicit instruction.
None of this is a criticism of the design. Closing loops autonomously is precisely the productivity gain these agents are built to deliver. But it means the mental model organisations developed around Copilot as a co-pilot is no longer adequate for what Wave 1 makes possible. When a system closes a loop without human confirmation, the question of accountability changes shape. The agent acted: on whose authority, within what constraints, and against what standard of correctness? Those questions need answers before the agent goes live, not after it has been operating for three months.
The ERP-CRM Governance Problem Nobody Is Talking About
Here is the point that most Wave 1 commentary has missed. Salesforce Agentforce operates inside the revenue layer. When an Agentforce agent makes an error, the blast radius is bounded by the CRM: a lead misqualified, a case misrouted, a quote generated with incorrect pricing rules. These are consequential failures, but they are contained within the sales and service domain. The rest of the business continues operating independently.
Dynamics 365 does not have that boundary. An autonomous agent in D365 Sales is sitting one integration layer away from Finance, Supply Chain Management, and Business Central. In mature D365 deployments, the Power Platform connects these modules deliberately, and that connectivity is a feature, not a flaw. But it means that an agent operating in the CRM layer can, through the normal flow of data and business logic, cascade decisions into procurement, cash flow forecasting, inventory allocation, and financial reporting without the cascade ever being flagged as an autonomous action requiring review.
Consider a plausible scenario. A sales agent disqualifies a set of leads based on engagement scoring signals. Some of those leads were associated with pending supply orders. The supply chain agent, reading updated demand signals from the CRM, reduces the corresponding procurement order. The Finance Agent, processing the revised purchasing data, updates cash flow projections and communicates the adjustment to the relevant supplier via Outlook. Each individual decision is defensible in isolation. Together, they have altered the organisation’s operational and financial position without a single human authorisation along the chain.
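The shape of that chain is easier to see as a sequence of event-driven decisions, each recorded against the agent that made it. The sketch below is illustrative only: the agent names, actions, and audit structure are hypothetical, not Dynamics 365 APIs. What it makes concrete is the property that matters: every entry in the trail is authorised by agent policy, and none by a person.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str         # which agent made the decision
    action: str         # what it did
    authorised_by: str  # "agent-policy" or a human identity

@dataclass
class AuditTrail:
    events: list = field(default_factory=list)

    def record(self, event: Event) -> None:
        self.events.append(event)

trail = AuditTrail()

# 1. The sales agent disqualifies low-engagement leads under its own policy.
trail.record(Event("sales-agent", "disqualify 12 leads (low engagement)", "agent-policy"))
# 2. The supply chain agent reads the reduced demand signal and cuts the PO.
trail.record(Event("supply-agent", "reduce purchase order PO-1042", "agent-policy"))
# 3. The finance agent revises the cash flow forecast and notifies the supplier.
trail.record(Event("finance-agent", "revise cash forecast; notify supplier", "agent-policy"))

# Each step is defensible in isolation; the chain as a whole was never signed off.
human_touchpoints = [e for e in trail.events if e.authorised_by != "agent-policy"]
print(f"decisions: {len(trail.events)}, human authorisations: {len(human_touchpoints)}")
# prints: decisions: 3, human authorisations: 0
```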
This is the governance challenge Dynamics 365 Wave 1 creates that no previous Microsoft AI release has. It is the natural consequence of genuine cross-domain automation across a platform specifically designed to span both CRM and ERP. It demands a different category of organisational response, and CRM technical debt accumulated in the data and configuration layers makes the problem significantly harder to address once agents are live.
How to Integrate CRM with ERP When Agents Make the Decisions
Integrating CRM with ERP has historically been a data synchronisation challenge: keep customer records, order history, invoices, and inventory data consistent between a sales system and an operational one. Standard integration approaches (APIs, middleware, Power Automate flows) solve for data consistency. They ensure that a closed deal in CRM creates a corresponding order in ERP, and that an invoice raised in Finance is visible to the sales representative managing the account.
The introduction of autonomous agents changes the integration design challenge in a fundamental way. When an agent in CRM can trigger actions in ERP, the integration layer is no longer just a data pipeline. It becomes a governance boundary that defines what an agent is authorised to do across systems. That is a very different design problem, and it requires a very different approach to integration architecture than the synchronisation layer most organisations have built.
Organisations deploying Dynamics 365 Wave 1 need to think about CRM-ERP integration across three layers simultaneously. The data layer has not changed: clean, deduplicated, well-governed master data remains the foundation without which agent decisions are unreliable. The permission layer is new: what actions can each agent type take in each connected module, and does that configuration reflect a deliberate architectural decision or simply the default Microsoft shipped in the standard activation flow? The audit trail layer is the most novel: when an agent acts autonomously across CRM and ERP, the organisation must be able to reconstruct the decision chain after the fact, assign accountability to a specific agent configuration, and intervene at the right point when needed.
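One way to make the permission layer a deliberate decision rather than an inherited default is to express it as explicit configuration: which actions each agent type may take in each connected module, with anything unlisted denied. The policy shape below is a sketch under assumed names; none of these identifiers correspond to actual Copilot Studio or Power Platform settings.

```python
# Hypothetical per-agent permission policy. Every (agent, module, action)
# combination resolves to one of: autonomous, require_approval, deny.
AGENT_POLICY = {
    "sales-agent": {
        "crm":     {"disqualify_lead": "autonomous", "delete_account": "deny"},
        "finance": {"update_forecast": "require_approval"},
    },
    "finance-agent": {
        "finance": {"reconcile": "autonomous", "email_supplier": "require_approval"},
    },
}

def authorise(agent: str, module: str, action: str) -> str:
    """Resolve a proposed cross-module action against the policy.
    Anything not explicitly granted is denied: a default-deny posture,
    rather than whatever the standard activation flow happened to ship."""
    return AGENT_POLICY.get(agent, {}).get(module, {}).get(action, "deny")

# A sales agent disqualifying a lead inside CRM: allowed without a human.
assert authorise("sales-agent", "crm", "disqualify_lead") == "autonomous"
# The same agent reaching across into Finance: routed to a human queue.
assert authorise("sales-agent", "finance", "update_forecast") == "require_approval"
# An action nobody configured: denied by default, never silently permitted.
assert authorise("sales-agent", "erp", "adjust_inventory") == "deny"
```

The design choice worth noting is the default-deny fallback: the governance boundary is defined by what the organisation wrote down, not by what the platform enabled.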
Most organisations deploying Wave 1 today are focused on the data layer, which is a reasonable starting point. Very few have given serious architectural thought to layers two and three. That gap is where exposure accumulates, quietly, until a cross-module cascade makes it visible.
Why CRM Data Quality Becomes Existential in an Agentic World
CRM data quality matters because every downstream process that touches customer data depends on records being accurate, complete, and current. Sales forecasting, service routing, marketing segmentation, financial reporting: all of them produce unreliable outputs when the underlying CRM data is inconsistent. This is an argument every CRM team has heard, accepted in principle, and then deprioritised against more urgent configuration work.
The reason data quality tends to stay at the bottom of the backlog is that, in a manually operated CRM, poor data quality is primarily an efficiency problem. Humans compensate by applying judgement. A salesperson who notices that a lead’s last activity was logged incorrectly updates the record. A service manager who spots a misrouted case reassigns it. The human in the loop is the error correction mechanism, and organisations have tacitly accepted that this is how it works.
In an agentic CRM, that compensating mechanism disappears. A sales agent trained to disqualify leads based on engagement scoring will disqualify genuinely interested prospects if their interaction history was never logged correctly. The Finance Agent will produce inaccurate reconciliations if the underlying transaction data has been inconsistently maintained across modules. The Scheduling Operations Agent will make suboptimal dispatch decisions if asset and technician records are incomplete or out of date.
What changes in an agentic context is not the argument for data quality but the consequence of ignoring it. Poor CRM data quality in a human-operated system costs time and creates noise in reporting. Poor CRM data quality in an agent-operated system creates operational risk, because the agent acts on what it finds without the benefit of human interpretation at the point of decision. The organisation that has tolerated data quality as a chronic, manageable problem is about to discover that “manageable” assumed a person in the loop at every critical juncture.
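One practical mitigation is to gate agent autonomy on the quality of the record the agent is about to act on: complete, recently maintained records may be processed autonomously, while incomplete or stale ones fall back to human review. The field names and thresholds below are illustrative assumptions, not Dynamics 365 schema.

```python
from datetime import date, timedelta

REQUIRED_FIELDS = ("email", "last_activity", "source")  # illustrative fields
STALENESS_LIMIT = timedelta(days=90)                    # illustrative threshold

def agent_may_act(lead: dict, today: date) -> bool:
    """Allow autonomous disqualification only when the record is complete
    and its activity history is current; otherwise route to a human, who
    can apply the judgement the agent cannot."""
    if any(not lead.get(f) for f in REQUIRED_FIELDS):
        return False  # incomplete record: the agent would be guessing
    if today - lead["last_activity"] > STALENESS_LIMIT:
        return False  # stale history: "low engagement" may just be bad data
    return True

today = date(2026, 4, 15)
fresh = {"email": "a@example.com", "last_activity": date(2026, 4, 1), "source": "web"}
stale = {"email": "b@example.com", "last_activity": date(2025, 11, 1), "source": "web"}

assert agent_may_act(fresh, today) is True
assert agent_may_act(stale, today) is False  # falls back to human review
```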
What Shadow AI Looks Like When It Wears a Microsoft Badge
Shadow AI refers to the use of artificial intelligence tools by employees or business units without the knowledge, approval, or governance oversight of IT and security teams. It typically emerges because commercial AI tools offer faster or more accessible capability than sanctioned alternatives, and employees adopt them informally. The governance risks are well documented: data processed through unauthorised tools falls outside the organisation’s data handling policies, there is no audit trail, and accountability for AI-generated errors is impossible to establish clearly.
The conventional shadow AI narrative in enterprise settings is about employees using external tools outside the visibility of the IT function. Wave 1 introduces a subtler variant of this problem that existing shadow AI detection approaches are not designed to catch. Microsoft 365 Copilot role-based agents for Dynamics 365 Sales and Customer Service entered public preview in April 2026. Copilot Studio allows citizen developers to build and deploy custom agents without writing code, using credentials and licences the organisation has already activated.
A business unit leader who discovers that Copilot Studio can deliver a working sales qualification agent in an afternoon may do exactly that, without involving the IT governance function, the CRM team, or any architectural review process. The resulting agent is not shadow AI in the traditional sense. It was built on sanctioned Microsoft infrastructure, using credentials the user legitimately holds, within the licence the organisation has already paid for. The standard detection approach, looking for unauthorised external tools accessing corporate data, will not identify it.
But the governance failure is equivalent: an autonomous agent operating in the production CRM and ERP environment without the permission model, data access boundaries, or audit trail the organisation intended for that category of automation. This is the form shadow AI takes when the infrastructure is the official stack, and it is the harder problem to govern because the usual signals that trigger IT intervention simply do not fire.
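Detecting this variant is an inventory problem rather than a network problem: enumerate the agents that exist in the tenant and flag any with production write access that never passed an architectural review. The sketch below assumes a hypothetical inventory export; the field names are illustrative, not a real Power Platform admin schema.

```python
# Hypothetical tenant agent inventory, e.g. compiled during a governance
# review. Field names are assumptions for illustration only.
agent_inventory = [
    {"name": "lead-qualifier", "env": "production", "writes": True,  "reviewed": True},
    {"name": "po-autodrafter", "env": "production", "writes": True,  "reviewed": False},
    {"name": "faq-responder",  "env": "sandbox",    "writes": False, "reviewed": False},
]

def ungoverned_agents(inventory: list) -> list:
    """Flag agents that write to production without having passed review.
    Sandbox and read-only agents carry lower risk and are not flagged here."""
    return [a["name"] for a in inventory
            if a["env"] == "production" and a["writes"] and not a["reviewed"]]

assert ungoverned_agents(agent_inventory) == ["po-autodrafter"]
```

The point of the exercise is that the flagged agent was built entirely on sanctioned infrastructure; only an inventory-based review, not perimeter monitoring, surfaces it.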
The Sirocco Perspective
We work with organisations running Dynamics 365 across manufacturing, energy, professional services, and construction, and the Wave 1 conversations we are having in April 2026 fall broadly into two categories. One group has identified specific pilot use cases, is designing their agent permission model deliberately, and is treating the rollout as an architecture decision. The other group has not yet registered that Wave 1 is live, and will encounter it through a deployment that arrived without a governance framework in place.
Our view is straightforward. The agentic capabilities Microsoft has shipped in Wave 1 are real and valuable. The organisations that will extract the most from them are those that treat deployment as a governance and architecture question first, and a feature adoption question second. The platform is ready. The harder question is whether your organisation’s data quality, permission model, and CRM-ERP integration design are ready to support autonomous action at the scale this release makes possible.
If you are planning your Dynamics 365 Wave 1 rollout, assessing whether your current configuration is agent-ready, or trying to establish the governance structures you need before going live with autonomous workflows, schedule a consultation with our team. We can help you work out where you are and what a sensible next step looks like.
Get in Touch
If your organisation is navigating the Dynamics 365 Wave 1 rollout and needs a clear-eyed view of agent governance, data readiness, or CRM-ERP integration design, our team can help you build a framework before autonomous workflows go live.
