Navigating Copilot Studio Agent Access: Data Grounding and Licensing for Unlicensed Users

Executive Summary

A user without a Microsoft 365 Copilot license can interact with a custom Agent built in Copilot Studio that uses both public and company data sources. However, their access to company data will be strictly governed by their existing Microsoft Entra ID permissions to those specific data sources (e.g., SharePoint, Dataverse, uploaded PDFs). If they lack the necessary permissions or if the agent’s authentication is not configured to allow their access to internal resources, the agent will effectively be “blocked” from retrieving or generating responses from that internal company data for them. Their results will then be limited to what can be generated from public data sources, provided the agent is designed to handle such limitations gracefully and the query can be fulfilled solely from public information. It is crucial to note that interactions by unlicensed users with Copilot Studio agents, especially those using generative answers or internal data, will incur costs against the organization’s Copilot Studio message capacity.

Introduction

The rapid evolution of AI capabilities within Microsoft’s ecosystem, particularly with Microsoft 365 Copilot and Copilot Studio, has introduced powerful new ways to interact with information. However, this advancement often brings complexities, especially concerning licensing and data access. A common point of confusion arises when organizations deploy custom AI Agents built using Microsoft Copilot Studio, which can leverage a mix of public internet data and sensitive internal company information (such as PDFs, SharePoint documents, or Dataverse records). The central question for IT professionals is whether users who do not possess a Microsoft 365 Copilot license will be able to utilize these Agents, and if so, what limitations apply, particularly regarding access to proprietary company data. This report aims to demystify these interactions, providing a clear, definitive guide to the interplay of licensing, data grounding, and authentication for Copilot Studio Agents.

Understanding the Copilot Ecosystem

The Microsoft Copilot ecosystem comprises several distinct but interconnected components, each with its own purpose and licensing model. Understanding these distinctions is fundamental to clarifying access rights.

Microsoft 365 Copilot: The Enterprise Productivity AI

Microsoft 365 Copilot represents an advanced AI-powered productivity tool deeply integrated across the suite of Microsoft 365 applications, including Word, Excel, PowerPoint, Outlook, and Teams. Its primary function is to enhance user productivity by orchestrating Large Language Models (LLMs) with a user’s organizational data residing within Microsoft Graph and the content generated and managed within Microsoft 365 applications.1 This powerful synergy enables it to generate responses, summarize extensive content, draft emails, create presentations, and analyze data, all within the rich context of a user’s specific work environment.

To fully leverage Microsoft 365 Copilot, users must satisfy specific licensing prerequisites. This includes possession of an eligible Microsoft 365 or Office 365 base license, such as Microsoft 365 E3, E5, A3, A5, Business Standard, Business Premium, or Office 365 E3, E5, A3, A5, F1, F3, E1, Business Basic.2 Beyond this foundational license, a separate Microsoft 365 Copilot add-on license is required, typically priced at $30 per user per month.3 This add-on license is not merely an optional feature; it is essential for unlocking the full spectrum of capabilities, particularly its seamless ability to access and ground responses in a user’s shared enterprise data and individual data that is indexed via Microsoft Graph.1

A cornerstone of Microsoft 365 Copilot’s design is its robust Enterprise Data Protection (EDP) framework. It operates strictly within the Microsoft 365 service boundary, ensuring that all user prompts, retrieved data, and generated responses remain securely within the organization’s tenant.1 Critically, this data is not utilized to train the foundational LLMs that power Microsoft 365 Copilot. Furthermore, the system rigorously adheres to existing Microsoft 365 permission models. This means that Microsoft 365 Copilot will “only surface organizational data to which individual users have at least view permissions”.1 This semantic index inherently respects user identity-based access boundaries, preventing unauthorized data exposure. This design implies a fundamental level of trust where Microsoft 365 Copilot acts as an intelligent extension of the user’s existing access rights within the Microsoft 365 ecosystem. This broad, personalized access to all relevant Microsoft 365 data, coupled with built-in security and privacy controls that mirror existing access permissions, represents a core differentiation from a more basic custom agent built in Copilot Studio. Consequently, organizations planning to deploy Microsoft 365 Copilot must first ensure their Microsoft 365 permission structures are meticulously managed and robust. Without proper governance, Copilot could inadvertently expose data based on over-permissioned content or previously “dark data,” underscoring the necessity for a well-defined data access strategy.

Microsoft Copilot Studio: The Custom Agent Builder

In contrast to the integrated nature of Microsoft 365 Copilot, Microsoft Copilot Studio serves as a low-code platform specifically engineered for the creation of custom conversational AI agents. These agents, often referred to as “copilots” or “bots,” are designed to answer specific questions, perform defined tasks, and integrate with a diverse array of data sources and external systems.5 A key strength of Copilot Studio is its versatility in deployment; agents can be published across multiple channels, including public websites, Microsoft Teams, and can even be configured to extend the capabilities of Microsoft 365 Copilot itself.5 The platform empowers makers to define explicit instructions, curate specific knowledge sources, and program actions for their agents.6

The agents developed within Copilot Studio possess a standalone operational nature. They can function independently, establishing connections to various data repositories. These include public websites, documents uploaded directly (such as PDFs), content residing in SharePoint, data within Dataverse, and other enterprise systems accessible via a wide range of connectors.5 This independent operation distinguishes them from the deeply embedded functionality of Microsoft 365 Copilot.

Despite their standalone capability, Copilot Studio agents are also designed for seamless integration with Microsoft 365 Copilot. They can be purpose-built to “extend Microsoft 365 Copilot” 6, allowing organizations to develop highly specialized agents. These custom agents can then leverage the sophisticated orchestration engine of Microsoft 365 Copilot, incorporating bespoke domain knowledge or executing specific actions directly within the broader Microsoft 365 Copilot experience. This positions Copilot Studio as a controlled gateway for data access. Unlike Microsoft 365 Copilot, which provides broad access to a user’s Microsoft Graph data based on their existing permissions, Copilot Studio explicitly enables makers to select and configure precise knowledge sources.7 This granular control over what information an agent can draw upon is critical for effective governance. It makes Copilot Studio particularly suitable for scenarios where only specific, curated datasets should be exposed via an AI agent, even if the user might possess broader permissions elsewhere within the Microsoft 365 environment. This capability allows organizations to create agents that offer targeted access to internal knowledge bases for a wider audience, potentially including users who do not possess Microsoft 365 Copilot licenses, without inadvertently exposing the full breadth of their Microsoft Graph data. However, achieving this requires meticulous configuration of the agent’s knowledge sources and authentication mechanisms.

Licensing Models for Copilot Studio Agents

A frequent area of misunderstanding pertains to the distinct licensing model governing Copilot Studio Agents, which operates separately from the Microsoft 365 Copilot user license. This fundamental distinction means that the agent itself incurs costs based on its usage, regardless of whether the individual interacting with it holds a Microsoft 365 Copilot license.

Copilot Studio’s Independent Licensing

Copilot Studio offers flexible licensing models tailored to organizational needs. The Pay-as-you-go model allows organizations to pay solely for the actual number of messages consumed by their agents each month, eliminating the need for upfront commitment. This model is priced at $0.01 per message 8, providing inherent flexibility and scalability to accommodate fluctuating usage patterns.9 Alternatively, organizations can opt for Message Packs, which are subscription-based, with one message pack equating to 25,000 messages per month for a cost of $200 per tenant.8 Should an agent’s usage exceed the purchased message pack capacity, the pay-as-you-go meter automatically activates to cover the excess.8 It is important to note that unused messages from a message pack do not roll over to the subsequent month.8

A critical understanding for any deployment is that all interactions with a Copilot Studio agent that result in a generated response—defined as a “message”—contribute to these billing models. This applies unless specific exceptions are met, such as interactions by users holding a Microsoft 365 Copilot license within Microsoft 365 services, as detailed below. Consequently, interactions initiated by users who do not possess a Microsoft 365 Copilot license will directly consume from the organization’s Copilot Studio message capacity and, therefore, incur costs.8 This represents a significant operational cost consideration that is often overlooked. Even when an unlicensed user interacts with a Copilot Studio agent to query seemingly “free” public data, the organization still bears a per-message cost for the Copilot Studio service itself. This necessitates a careful evaluation of the anticipated usage by unlicensed users and the integration of these Copilot Studio message costs into the overall budget. Such financial implications can significantly influence decisions regarding the broad exposure of certain agents versus prioritizing Microsoft 365 Copilot licensing for frequent users who require access to internal data, thereby leveraging the benefits of zero-rated usage.
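To make the billing model concrete, the rates above ($0.01 per message pay-as-you-go, $200 per 25,000-message pack, no rollover) can be sketched as a small cost estimator. This is an illustrative calculation only; the monthly message volumes in the examples are hypothetical:

```python
def copilot_studio_monthly_cost(messages: int, message_packs: int = 0,
                                pack_size: int = 25_000,
                                pack_price: float = 200.0,
                                payg_rate: float = 0.01) -> float:
    """Estimate one month's Copilot Studio message cost.

    Each message pack covers pack_size messages; any overage falls back
    to the pay-as-you-go meter. Unused pack capacity does not roll over,
    so packs are billed in full regardless of consumption.
    """
    covered = message_packs * pack_size
    overage = max(0, messages - covered)
    return message_packs * pack_price + overage * payg_rate

# Pure pay-as-you-go: 10,000 messages at $0.01 each
print(copilot_studio_monthly_cost(10_000))                    # 100.0
# One message pack ($200) plus 5,000 overage messages ($50)
print(copilot_studio_monthly_cost(30_000, message_packs=1))   # 250.0
```

Note that under-using a pack still costs the full $200, which is why estimating expected monthly volume, including traffic from unlicensed users, matters before committing to packs.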

Copilot Studio Use Rights with Microsoft 365 Copilot

For users who are provisioned with a Microsoft 365 Copilot license, a distinct advantage emerges in their interactions with Copilot Studio agents. Their usage of agents specifically built in Copilot Studio for deployment within Microsoft Teams, SharePoint, and Microsoft 365 Copilot (such as Copilot Chat) is designated as “zero-rated”.8 This means that interactions by these licensed users, when occurring within the context of Microsoft 365 products, do not count against the organization’s Copilot Studio message pack or pay-as-you-go meter. This zero-rating applies to classic answers, generative answers, and tenant Microsoft Graph grounding.8

Beyond cost benefits, the Microsoft 365 Copilot license also confers specific use rights within Copilot Studio. These rights include the ability to “Create and publish your own agents and plugins to extend Microsoft 365 Copilot”.8 This capability underscores a symbiotic relationship: users with Microsoft 365 Copilot licenses gain enhanced functionality and significant cost efficiencies when interacting with custom agents that are integrated within the Microsoft 365 ecosystem. This contrast in billing models highlights a clear financial incentive. If a substantial volume of agent usage involves internal data or generative answers, and the users engaged in these interactions already possess Microsoft 365 Copilot licenses, the organization benefits from the zero-rated usage, potentially leading to considerable cost savings. Conversely, if a large proportion of users are unlicensed, every message generated by the Copilot Studio agent will incur a direct cost. This situation presents a strategic licensing decision point for organizations. A thorough analysis of the user base and agent usage patterns is advisable. If widespread access to internal data via AI agents is a strategic priority, investing in Microsoft 365 Copilot licenses for relevant users can substantially reduce or eliminate the Copilot Studio message costs for those specific interactions within Microsoft 365 applications. This tiered access and cost model is crucial for informing the overall AI strategy and budget allocation, distinguishing between basic, publicly-grounded agents (which still incur Copilot Studio message costs for unlicensed users) and agents providing deep internal data insights (which are more cost-effective when accessed by Microsoft 365 Copilot licensed users within the Microsoft 365 environment).
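One way to frame the licensing decision above is a rough break-even calculation: at the $30 per user per month add-on price and the $0.01 per-message rate cited earlier, how many messages per user per month would otherwise be billed before the license pays for itself through zero-rating? This is a deliberate simplification that ignores the license's other benefits and assumes all of the user's agent traffic occurs within Microsoft 365 products (where zero-rating applies):

```python
def breakeven_messages(license_price: float = 30.0,
                       payg_rate: float = 0.01) -> float:
    """Monthly messages per user at which zero-rated usage under a
    Microsoft 365 Copilot license matches pure pay-as-you-go spend."""
    return license_price / payg_rate

print(round(breakeven_messages()))  # 3000
```

On message cost alone, a user would need roughly 3,000 billable messages a month to justify the license; in practice the broader Microsoft 365 Copilot capabilities, not message savings, usually drive the decision.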

Data Grounding and Knowledge Sources in Copilot Studio

Copilot Studio agents derive their intelligence and ability to provide relevant information from “knowledge sources,” which are meticulously configured to provide the data necessary for generative answers. The specific type of knowledge source selected directly dictates its authentication requirements and, consequently, determines who can access the information presented by the agent.

Supported Knowledge Sources

Copilot Studio agents offer robust capabilities for grounding in a diverse array of data sources, enabling them to provide rich, relevant information and insights.7 These supported knowledge sources include:

  • Public Websites: Agents can be configured to search and return results from specific, predefined public URLs. Additionally, they can perform a broader general web search, drawing information from public websites indexed by Bing.7 Crucially, no authentication is required for public websites to serve as a knowledge source.7
  • Documents (Uploaded Files/PDFs): Agents can search the content of documents, including PDFs, that have been uploaded to Dataverse. The agent then generates responses based on the information contained within these document contents.7 These are considered internal organizational sources.
  • SharePoint: Agents can establish connections to specified SharePoint URLs, utilizing Microsoft Graph Search capabilities to retrieve and return relevant results from the SharePoint environment.7 This is a common internal data source for many organizations.
  • Dataverse: The agent can connect directly to the configured Dataverse environment, employing retrieval-augmented generative techniques within Dataverse to synthesize and return results from structured data.7 This is a powerful internal data source for business applications.
  • Enterprise Data using Connectors: Copilot Studio agents can connect to a wide array of connectors that facilitate access to organizational data indexed by Microsoft Search or other external systems.5 The platform supports over 1,400 Power Platform connectors, enabling integration with a vast ecosystem of internal and third-party services.5 These are fundamental internal data sources.
  • Real-time Connectors: For specific enterprise systems like ServiceNow or Zendesk, real-time connectors can be added.10 In these configurations, Microsoft primarily indexes metadata, such as table and column names, rather than the raw data itself. Access to the actual enterprise data remains strictly controlled by the user’s existing access permissions within that specific enterprise system.10

The Role of Authentication

Authentication plays an indispensable role in controlling access for agents that interact with restricted resources or sensitive information.11 Copilot Studio provides several authentication options to meet varying security requirements, including “No authentication,” “Authenticate with Microsoft” (leveraging Microsoft Entra ID), or “Authenticate manually” with various OAuth2 identity providers such as Google or Facebook.11

The choice of authentication directly impacts data accessibility:

  • Public Data Access: If an agent is configured with the “No authentication” option, it is inherently limited to accessing only public information and resources.11 This configuration allows public website grounding to function without requiring any user sign-in.
  • Internal Data Access: For knowledge sources containing sensitive internal data, such as SharePoint, Dataverse, or enterprise data accessed via connectors, authentication is explicitly required.7 These internal sources typically rely on the “Agent user’s Microsoft Entra ID authentication”.7 This means that the user interacting with the agent must successfully sign in with their Microsoft Entra ID account. Once authenticated, their existing Microsoft Entra ID permissions to the underlying data source are meticulously honored.1

This principle of permission inheritance is foundational. Microsoft 365 Copilot, and by extension, Copilot Studio agents configured to access Microsoft 365 data, will “only surface organizational data to which individual users have at least view permissions”.1 This fundamental security control ensures that the AI agent cannot inadvertently or intentionally provide information to a user that they would not otherwise be authorized to access directly. This establishes the user’s existing permission boundary as the ultimate gatekeeper for data access. The most significant factor in “blocking” access to internal company data is not the Copilot Studio agent’s configuration itself, but rather the underlying permission structure within Microsoft 365, encompassing SharePoint permissions, Dataverse security roles, and granular file-level permissions. If a user lacks the requisite permissions to a specific document, SharePoint site, or Dataverse record, the agent is inherently unable to retrieve or generate information from that source for them, irrespective of the agent’s own capabilities. This reinforces the paramount importance of robust data governance and diligent permission management within an organization. The deployment of AI agents amplifies the necessity for a “least privilege” approach to data access, ensuring that any potential data exposure via an agent is a symptom of pre-existing permission vulnerabilities, rather than a flaw in the agent’s inherent security model.

The Core Question: Unlicensed Users and Mixed Data Agents

Addressing the user’s central query directly, the behavior of a Copilot Studio Agent configured with mixed data sources (combining company data and public websites) when accessed by users without a Microsoft 365 Copilot license is nuanced but can be clearly defined.

Access to Public Data

Users who do not possess a Microsoft 365 Copilot license can indeed access information derived from public websites through a Copilot Studio Agent. This functionality is feasible under specific conditions: the agent must be explicitly configured to utilize public website knowledge sources.7 Furthermore, the agent’s authentication setting can be configured as “No authentication” 11, allowing access without requiring user sign-in, provided the query can be resolved solely from public sources. It is important to remember that even in this scenario, the organization will incur Copilot Studio message costs for these interactions.8 The mechanism for this access involves the agent searching public websites indexed by Bing when Web Search is enabled, a process that can occur in parallel with searches of specific public website knowledge sources configured for the agent.7

Access to Company Data (PDFs, SharePoint, etc.)

Access to internal company data, such as PDFs uploaded to Dataverse, SharePoint content, or data retrieved from enterprise connectors, is fundamentally governed by the individual user’s existing permissions to those specific data sources.1 This is the primary blocking mechanism. If an unlicensed user—or, for that matter, any user regardless of their licensing status—lacks the necessary view permissions to the underlying document, SharePoint site, or Dataverse record, the agent will unequivocally not be able to retrieve or generate responses from that data for them. The agent operates strictly within the security context and permission boundaries of the interacting user.

For agents configured to access internal data sources like SharePoint, Dataverse, or enterprise connectors, authentication is an explicit requirement.7 This typically involves setting the agent’s authentication to “Authenticate with Microsoft,” which mandates that the user interacting with the agent must sign in with their Microsoft Entra ID account. Upon successful authentication, the user’s existing Microsoft Entra ID permissions are rigorously checked against the internal data source.7

Therefore, for unlicensed users, the outcome is clear: they will be effectively “blocked” from accessing company data via the agent if they lack the necessary permissions to the underlying data source. Blocking also occurs if the agent’s authentication configuration restricts access (for example, “Authenticate with Microsoft” when the user has not signed in) or if organizational policies prevent access to internal resources. In scenarios where an agent is configured with mixed data sources and a query requires internal data for which the user is unauthorized, the agent’s response will be limited to what can be generated from the public data sources. This is only possible if the agent has been specifically designed to handle such access denials gracefully and the query can be adequately fulfilled using only public information. This graceful degradation is a critical aspect of agent design.

It is also important to understand the nuance regarding Microsoft Graph grounding and Enterprise Data Protection (EDP). While Copilot Studio agents can be configured to access specific SharePoint sites or Graph connectors 4, the broader, seamless access to a user’s shared enterprise data, individual data, or external data indexed via Microsoft Graph connectors is a core capability fundamentally tied to the Microsoft 365 Copilot license.1 For users who do not possess a Microsoft 365 Copilot license, “Copilot Chat can’t access the user’s shared enterprise data, individual data, or external data indexed via Microsoft Graph connectors”.4 This means that while a Copilot Studio agent could be configured to provide access to a specific, shared SharePoint site for an unlicensed user (provided authentication and permissions are met, and Copilot Studio metering is enabled), the unlicensed user will not experience the personalized, broad Graph-grounded capabilities that a Microsoft 365 Copilot licensed user would. Furthermore, the “Enhanced search results” feature within Copilot Studio, which leverages semantic search to improve the quality of results from SharePoint and connectors, also necessitates a Microsoft 365 Copilot license within the same tenant and requires the agent’s authentication to be set to “Authenticate with Microsoft”.7

The distinction between results being “limited” or “blocked” is crucial. While access to company data is generally “blocked” if permissions are not met, the results can be “limited” to public data if the agent is intelligently programmed to fall back to publicly available information. This highlights a critical imperative for agent design: developers building agents with mixed data sources must explicitly consider and implement how the agent behaves when internal data access is denied for a given user. This requires robust error handling and conditional logic within the agent’s design. If an agent attempts to access internal data for a user and is denied due to insufficient permissions, it should be programmed either to inform the user clearly about the access restriction or to attempt to answer the query using only the public data sources, if applicable and relevant. This proactive approach ensures a significantly better user experience, preventing hard failures or ambiguous responses. This extends beyond mere technical feasibility to encompass user experience design and effective governance, as an agent that silently fails or provides incomplete answers without explanation can lead to user frustration and erode trust in the system. Clear communication regarding data access limitations is therefore essential.
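The fallback behaviour described above can be sketched as conditional logic. The following is an illustrative Python sketch, not Copilot Studio code: the function and the `search_internal`/`search_public` callables are hypothetical stand-ins for the grounding and conditional topics a maker would configure in the agent itself:

```python
def ground_query(query, user_authenticated, has_internal_permission,
                 search_internal, search_public):
    """Sketch of graceful degradation for a mixed-source agent.

    search_internal / search_public are callables returning a list of
    grounding passages (empty if nothing relevant was found).
    Returns the usable grounding plus notices explaining any gaps,
    so the agent can tell the user why results may be limited.
    """
    sources, notices = [], []
    if user_authenticated and has_internal_permission(query):
        sources.extend(search_internal(query))
    else:
        # Fall through to public data instead of failing hard.
        notices.append("Internal sources were skipped: sign-in or "
                       "permission to the underlying data is missing.")
    sources.extend(search_public(query))
    if not sources:
        notices.append("No sources were available; the agent should "
                       "say so rather than fail silently.")
    return {"grounding": sources, "notices": notices}
```

The key design point is that the notices travel with the answer: an unauthorized user still gets public-data results where possible, together with an explicit explanation of the limitation, rather than a silent or ambiguous failure.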

The following table summarizes user access capabilities to mixed-data agents based on their Microsoft 365 Copilot license status:

| Feature/Scenario | User has Microsoft 365 Copilot license | User does NOT have Microsoft 365 Copilot license |
| --- | --- | --- |
| Access to public website data (via Copilot Studio agent) | Yes (agent configured for public sources) | Yes (agent configured for public sources; “No authentication” option possible) |
| Access to company data (PDFs, SharePoint, Dataverse, etc. via Copilot Studio agent) | Yes (subject to user’s existing Microsoft Entra ID permissions to data sources; agent authentication required) | Yes (subject to user’s existing Microsoft Entra ID permissions; agent authentication required; access is specific to configured Copilot Studio sources, not broad Graph access) |
| Seamless access to user’s shared enterprise/individual data (via Microsoft Graph) | Yes (core capability of M365 Copilot)1 | No (Copilot Chat cannot access user’s shared enterprise/individual data indexed via Microsoft Graph connectors)4 |
| “Enhanced search results” (semantic search for SharePoint/connectors in Copilot Studio) | Yes (requires M365 Copilot license in tenant and “Authenticate with Microsoft” for agent)7 | No (feature requires M365 Copilot license in tenant)7 |
| Copilot Studio message billing for agent interactions | Zero-rated when used within Microsoft 365 products (Teams, SharePoint, Copilot Chat)8 | Incurs Copilot Studio message costs (pay-as-you-go or message packs)8 |
| Ability to extend Microsoft 365 Copilot with custom agents/plugins | Included use rights with M365 Copilot license8 | Not included8 |

Conclusion and Recommendations

The analysis demonstrates that users without a Microsoft 365 Copilot license are not entirely excluded from interacting with custom Agents built in Copilot Studio that leverage both public and company data sources. However, their access is critically contingent upon several factors, primarily their existing permissions to internal data and the authentication configuration of the Copilot Studio agent. While public data can generally be accessed without explicit user authentication (though still incurring Copilot Studio message costs for the organization), access to internal company data is strictly governed by the user’s Microsoft Entra ID permissions. If these permissions are insufficient, the agent will effectively be prevented from retrieving that sensitive information for the user.

Organizations deploying Copilot Studio Agents with mixed data sources should consider the following recommendations to ensure optimal functionality, security, and cost management:

  • Prioritize Robust Data Governance: The foundational security principle is that Copilot Studio Agents, like Microsoft 365 Copilot, honor existing user permissions. Therefore, a meticulous review and ongoing management of permissions on SharePoint sites, Dataverse environments, and other internal data sources are paramount. This proactive approach prevents unintended data exposure and ensures that agents only surface information to authorized individuals.1
  • Implement Strategic Authentication: Configure Copilot Studio agent authentication settings carefully based on the data sources employed. For agents accessing internal company data, “Authenticate with Microsoft” should be enabled to leverage Microsoft Entra ID and enforce user-specific permissions. For agents relying solely on public information, “No authentication” can be used, but with an understanding of the associated Copilot Studio message costs.7
  • Design Agents for Graceful Degradation: When developing agents that combine public and internal data sources, incorporate robust error handling and conditional logic. If an agent attempts to access internal data for an unauthorized user, it should be programmed to either clearly inform the user of the access restriction or intelligently pivot to providing information solely from public sources, if the query allows. This approach enhances the user experience and maintains trust in the agent’s capabilities.
  • Manage Copilot Studio Costs Proactively: All interactions with a Copilot Studio agent, regardless of the user’s Microsoft 365 Copilot license status, consume messages that are billed against the organization’s Copilot Studio capacity (Pay-as-you-go or Message Packs).8 Organizations should closely monitor message consumption and factor these costs into their budget.
  • Leverage Microsoft 365 Copilot Licenses Strategically: For scenarios requiring extensive, personalized access to Microsoft Graph-grounded enterprise data via agents within Microsoft 365 applications (Teams, SharePoint, Copilot Chat), licensing users for Microsoft 365 Copilot offers significant benefits, including zero-rated Copilot Studio message usage for those interactions.8 This can lead to substantial cost optimization for high-usage internal data scenarios.
  • Manage User Expectations: Clearly communicate to users what an agent can and cannot provide based on their licensing and permissions. Transparency helps manage expectations and reduces frustration when access to certain internal data is restricted.

By adhering to these recommendations, organizations can effectively deploy Copilot Studio Agents, maximizing their utility across diverse user groups while maintaining stringent control over data access and managing operational costs efficiently.

Works cited
  1. Data, Privacy, and Security for Microsoft 365 Copilot, accessed on July 3, 2025, https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-privacy
  2. App and network requirements for Microsoft 365 Copilot admins, accessed on July 3, 2025, https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-requirements
  3. Microsoft 365 Copilot: Licensing, Pricing, ROI – SAMexpert, accessed on July 3, 2025, https://samexpert.com/microsoft-365-copilot-licensing/
  4. Frequently asked questions about Microsoft 365 Copilot Chat, accessed on July 3, 2025, https://learn.microsoft.com/en-us/copilot/faq
  5. Customize Copilot and Create Agents | Microsoft Copilot Studio, accessed on July 3, 2025, https://www.microsoft.com/en-us/microsoft-copilot/microsoft-copilot-studio
  6. Copilot Studio overview – Learn Microsoft, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-what-is-copilot-studio
  7. Knowledge sources overview – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/knowledge-copilot-studio
  8. Copilot Studio licensing – Learn Microsoft, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/billing-licensing
  9. Billing rates and management – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-messages-management
  10. Add real-time knowledge with connectors – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/knowledge-real-time-connectors
  11. Configure user authentication in Copilot Studio – Learn Microsoft, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/configuration-end-user-authentication
  12. FAQ for Copilot data security and privacy for Dynamics 365 and Power Platform, accessed on July 3, 2025, https://learn.microsoft.com/en-us/power-platform/faqs-copilot-data-security-privacy
  13. Copilot Studio security and governance – Learn Microsoft, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/security-and-governance

MVP 2025-26

image

Excited and proud to share that I’ve been awarded Microsoft MVP for 2025–26! This is now 14 years as a Microsoft MVP.

Huge thanks to Microsoft and the Microsoft MVP team for the continued recognition and support. It’s a privilege to be part of such a passionate and innovative community and I look forward to another year of helping others work with the Microsoft Cloud.

Of course, thanks also to everyone who reads, listens to or otherwise consumes the things that I create. It is always great to hear how this content has helped, so don’t be shy about reaching out if I have been able to help in any way. Your continued support of my endeavours is what drives me every day to create more.

This past year, I’ve been all-in on Microsoft 365—especially Copilot. From building agents and using notebooks with podcasts to exploring automations and more, it’s been incredible to see how AI is transforming the way I work and exciting to see what the future brings with AI.

Grateful for the opportunities to learn, share, and collaborate—and looking forward to another year of building, breaking (in the lab), and helping others get the most out of Microsoft 365 + Copilot and everything in the Microsoft Cloud.

Let’s keep pushing what’s possible.

Thank you.

Exchange Online PowerShell configuration rules and policy relationship

bp1

In the context of configuring anti-spam settings in Exchange (particularly Exchange Online, which uses Exchange Online Protection or EOP), “rules” and “policies” work together to define how email is processed and protected. PowerShell is the primary tool for granular control over these settings.

Here’s a breakdown of their relationship:

1. Policies (Anti-Spam Policies):

  • What they are: Policies are the core configuration containers that define the overall anti-spam settings. They specify what actions to take when a message is identified with a certain spam confidence level (SCL) or other anti-spam verdict (e.g., spam, high-confidence spam, phishing, bulk email).

  • Key settings within policies:

    • Spam Actions: What to do with messages identified as spam (e.g., move to Junk Email folder, quarantine, add X-header, redirect).

    • High-Confidence Spam Actions: Similar to spam actions, but for messages with a very high probability of being spam.

    • Phishing Actions: Actions for phishing attempts.

    • Bulk Email Thresholds (BCL – Bulk Complaint Level): How to treat bulk mail (e.g., newsletters, marketing emails) that isn’t necessarily spam but users might not want.

    • Allowed/Blocked Senders and Domains: Lists of specific senders or domains that should always be allowed or blocked, bypassing some or all spam filtering.

    • Advanced Spam Filter (ASF) settings: More granular options like increasing spam score for specific characteristics (e.g., certain languages, countries, or specific URLs/patterns).

  • Default Policies: Exchange/EOP comes with built-in default policies (e.g., “Default,” “Standard Preset Security,” “Strict Preset Security”) that provide a baseline level of protection.

  • Custom Policies: You can create custom anti-spam policies to apply different settings to specific users, groups, or domains within your organization.

  • PowerShell Cmdlets:

    • Get-HostedContentFilterPolicy: Views existing anti-spam policies.

    • New-HostedContentFilterPolicy: Creates a new custom anti-spam policy.

    • Set-HostedContentFilterPolicy: Modifies an existing anti-spam policy.

    • Get-HostedOutboundSpamFilterPolicy, Set-HostedOutboundSpamFilterPolicy, New-HostedOutboundSpamFilterPolicy: Manage outbound spam policies.
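As a quick illustration of the view cmdlets above, the following sketch inspects the key verdict settings of the built-in Default policy (it assumes an active Exchange Online PowerShell session established with Connect-ExchangeOnline):

```powershell
# Sketch: assumes Connect-ExchangeOnline has already been run.
# Show the key verdict actions and bulk threshold of the built-in Default policy.
Get-HostedContentFilterPolicy -Identity "Default" |
    Format-List SpamAction, HighConfidenceSpamAction, PhishSpamAction, BulkThreshold
```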

2. Rules (Anti-Spam Rules / Mail Flow Rules / Transport Rules):

  • What they are: Rules are used to apply policies to specific recipients or groups of recipients, or to implement more dynamic and conditional anti-spam actions. While “anti-spam rules” are directly linked to anti-spam policies, “mail flow rules” (also known as “transport rules”) offer a broader range of conditions and actions, including those that can influence spam filtering.

  • Relationship to Policies:

    • Anti-Spam Rules (specifically): An anti-spam rule (e.g., created with New-HostedContentFilterRule) links an anti-spam policy to specific conditions (e.g., applying the policy to members of a certain distribution group). A single anti-spam policy can be associated with multiple rules, but a rule can only be associated with one policy. This allows you to apply different policies to different sets of users.
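To see this policy-to-rule mapping in your own tenant, a quick sketch (again assuming an active Exchange Online session):

```powershell
# Sketch: list each anti-spam rule alongside the policy it applies,
# who it targets, and its priority (a lower number means higher priority).
Get-HostedContentFilterRule |
    Sort-Object Priority |
    Format-Table Name, HostedContentFilterPolicy, SentToMemberOf, Priority
```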

    • Mail Flow Rules (broader impact): Mail flow rules can also be used to influence anti-spam behavior, even if they aren’t strictly “anti-spam rules.” For example:

      • Bypassing spam filtering: You can create a mail flow rule to set the Spam Confidence Level (SCL) of a message to -1 (Bypass spam filtering) if it meets certain conditions (e.g., from a trusted internal system, or specific external partners).

      • Increasing SCL: You can increase the SCL of messages that contain specific keywords or come from particular sources, forcing them to be treated more aggressively by anti-spam policies.

      • Redirecting/Quarantining: Mail flow rules can directly redirect suspicious messages to a quarantine mailbox or add specific headers for further processing, often based on content or sender characteristics that might indicate spam or phishing.

  • PowerShell Cmdlets:

    • Get-HostedContentFilterRule: Views existing anti-spam rules.

    • New-HostedContentFilterRule: Creates a new anti-spam rule and links it to an anti-spam policy.

    • Set-HostedContentFilterRule: Modifies an existing anti-spam rule.

    • Get-TransportRule, New-TransportRule, Set-TransportRule: Manage general mail flow (transport) rules, which can include anti-spam related actions.

How they work together (with PowerShell in mind):

  1. Define the “What”: You use New-HostedContentFilterPolicy or Set-HostedContentFilterPolicy to define the core anti-spam behavior (e.g., “quarantine spam, move high-confidence spam to junk, block these specific senders”).

  2. Define the “Who/When”: You then use New-HostedContentFilterRule to create a rule that applies that specific policy to certain users or under specific conditions. You can prioritize these rules using the -Priority parameter on the Set-HostedContentFilterRule cmdlet, where a lower number means higher priority.

  3. Advanced Scenarios: For more nuanced control, or to handle edge cases not covered directly by anti-spam policies, you leverage New-TransportRule or Set-TransportRule. These allow you to:

    • Exempt certain senders/domains from all spam filtering (SCL -1).

    • Apply custom actions based on message headers (e.g., from a third-party spam filter).

    • Implement more sophisticated content-based filtering using keywords or regular expressions before the message hits the main anti-spam policies.

Example Scenario and PowerShell:

Let’s say you want to:

  • Apply a strict anti-spam policy to your “Executives” group.

  • Allow a specific partner domain to bypass most spam filtering.

Using PowerShell, you might:

  1. Create a custom anti-spam policy for executives:

    PowerShell

    New-HostedContentFilterPolicy -Name "ExecutiveSpamPolicy" -HighConfidenceSpamAction Quarantine -SpamAction Quarantine -BulkThreshold 4 -MarkAsSpamBulkMail On
    
  2. Create an anti-spam rule to apply this policy to the “Executives” group:

    PowerShell

    New-HostedContentFilterRule -Name "ApplyExecutiveSpamPolicy" -HostedContentFilterPolicy "ExecutiveSpamPolicy" -SentToMemberOf "ExecutivesGroup" -Priority 1
    
  3. Create a mail flow rule to bypass spam filtering for the partner domain:

    PowerShell

    New-TransportRule -Name "BypassSpamForPartner" -FromScope NotInOrganization -SenderDomainIs "partnerdomain.com" -SetSCL "-1" -Priority 0 # Priority 0 (highest) ensures this rule is processed first
    

In summary:

  • Policies define the actions for different spam verdicts and general anti-spam behavior.

  • Rules (both anti-spam rules and broader mail flow/transport rules) define the conditions under which those policies or other anti-spam actions are applied.

PowerShell gives administrators the power to create, modify, and manage these policies and rules with a high degree of precision and automation, which is crucial for effective anti-spam protection in Exchange environments.
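Once policies and rules are in place, a periodic audit helps keep them honest. A sketch (assuming an active Exchange Online session) that dumps the effective anti-spam configuration for review:

```powershell
# Sketch: export the current anti-spam policies for review.
Get-HostedContentFilterPolicy |
    Format-Table Name, SpamAction, HighConfidenceSpamAction, BulkThreshold

# Transport rules that influence spam filtering are easy to spot
# by their SetSCL action being populated.
Get-TransportRule |
    Where-Object { $null -ne $_.SetSCL } |
    Format-Table Name, Priority, SetSCL
```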

M365 Copilot reasoning agents limits

bp1

Yes, there is a usage limit for Researcher and Analyst agent prompts in Microsoft 365 Copilot. These agents are included with a Microsoft 365 Copilot license but not with the free Copilot Chat.

According to Microsoft’s official documentation and recent updates, each user with a Microsoft 365 Copilot license is allowed to run up to 25 combined queries per calendar month using the Researcher and Analyst agents.

Researcher and Analyst Usage Limits | Microsoft Community Hub

Researcher and Analyst are now generally available | Microsoft 365 Blog

This limit resets on the 1st of each month, not on a rolling 30-day basis.

This cap is in place because the Researcher agent performs deep, multi-step reasoning and consumes more compute resources than standard Copilot Chat. It’s designed for complex, structured tasks—like generating detailed reports with citations—rather than quick, conversational queries.

If your organization anticipates higher usage, Microsoft offers message packs as an add-on. For example, a couple of packs covering ~50,000 queries might cost around $400/month, while licensing 100 users directly would be about $3,000/month. Microsoft recommends starting with minimal licenses, monitoring usage, and scaling based on actual demand.

The next question is how the 25-prompt monthly limit for the Researcher agent in Microsoft 365 Copilot applies when you create a custom agent in Copilot Studio that uses “reason” in its instructions.

Key Clarification

The 25-prompt limit applies specifically to the Researcher agent—a specialized, high-computation mode within Microsoft 365 Copilot designed for deep, multi-step reasoning across enterprise and web data. It’s distinct from standard Copilot chat and is licensed separately.

What Happens in Copilot Studio?

When you create an agent in Copilot Studio and include the word “reason” in the instructions (e.g., “use reasoning to evaluate…”), that does not automatically invoke the Researcher agent or count against your 25 monthly Researcher prompts.

Instead, Copilot Studio agents operate based on:

  • The generative answer node and its custom instructions.

  • The data sources you’ve connected (e.g., SharePoint, Dataverse, websites).

  • The reasoning complexity you’ve built into the flow (e.g., branching logic, memory, variables).

Unless your agent explicitly calls the Researcher agent (via a connector or API that invokes it), or unless Microsoft has integrated Researcher-like capabilities into your agent’s backend (which would be documented), your usage won’t count against the Researcher quota.

Optimize prompts with custom instructions – Microsoft Copilot Studio | Microsoft Learn

Researcher agent in Microsoft 365 Copilot | Microsoft Community Hub


What “Reason” Means in Instructions

Using “reason” in your instructions—such as “reason through the options before answering”—is interpreted as a style or tone directive for the generative AI. It helps shape how the agent responds but doesn’t trigger the Researcher agent’s quota-based behaviour.

✅ Summary
  • No, using “reason” in Copilot Studio instructions does not count against your 25 Researcher prompts.

  • The Researcher agent quota applies only when that specific agent is invoked.

  • You can safely build reasoning-style flows in Copilot Studio without worrying about prompt limits—unless you explicitly integrate the Researcher agent.

Would you like help designing a reasoning-style Copilot Studio agent that stays within standard usage?

References

Optimize prompts with custom instructions – Microsoft Copilot Studio

Researcher agent in Microsoft 365 Copilot

Robert.agent in action

Here’s an example of how clever AI is getting.

Someone sent the following screenshot of PowerShell code to robert.agent@ciaops365.com which, if you haven’t seen it, is an agent I built using Copilot Studio to respond automatically to emails.

Screenshot 2025-07-10 130705

My Copilot Agent was able to read the PowerShell inside the screenshot and return the following 103 lines of PowerShell for that person!

Screenshot 2025-07-10 130823

Why don’t you give robert.agent@ciaops365.com a try to get your Microsoft Cloud questions answered?

Exchange Online Mail Flow rules basics

bp1

In Exchange Online, mail flow rules (formerly known as transport rules) are a powerful tool that IT administrators can use to fine-tune how emails are handled, and they are intricately tied to an organization’s overall spam policies within Microsoft 365.

Here’s how they are connected in non-technical terms:

1. Exchange Online Protection (EOP) as the Foundation:

  • EOP is your first line of defense: Think of Exchange Online Protection (EOP) as the core spam filtering engine built into Microsoft 365. It automatically scans all incoming and outgoing emails for known spam, malware, phishing attempts, and other threats. EOP uses a variety of technologies, including:

    • Connection Filtering: Checks the sender’s IP address reputation.
    • Spam (Content) Filtering: Analyzes the message content for characteristics of spam. This assigns a Spam Confidence Level (SCL), a numeric score (0-9, higher means more likely spam).
    • Anti-Malware and Anti-Phishing: Detects malicious attachments, links, and spoofing attempts.
  • Anti-Spam Policies: Within EOP, you have “Anti-spam policies” (also called spam filter policies). These policies define what actions EOP should take based on the spam verdict (e.g., if an email is “Spam,” “High Confidence Spam,” or “Bulk Email”). Actions can include:

    • Moving the message to the Junk Email folder.
    • Quarantining the message (holding it in a safe place for review).
    • Rejecting the message.
    • Redirecting the message to an administrator.
    • Adding an X-header to the message for further processing.
  • Default Policy: There’s a default anti-spam policy that applies to everyone in your organization, but you can create custom policies for specific users, groups, or domains.

2. Mail Flow Rules (Transport Rules) as the Customization Layer:

  • Mail flow rules work with EOP policies: While EOP and its anti-spam policies provide a robust baseline, mail flow rules allow you to create custom, highly specific conditions and actions that can interact with, bypass, or enhance the default spam filtering behavior.
  • How they’re tied to spam policies:
    • Setting the SCL: A primary way mail flow rules tie into spam policies is by allowing you to set the Spam Confidence Level (SCL) for messages that meet certain criteria. For example:

      • If you receive legitimate newsletters that are frequently marked as “Bulk,” you can create a rule that says: “If an email is from newsletter@example.com, set its SCL to -1 (Bypass Spam Filtering).” This tells EOP to treat that specific sender’s emails as non-spam, effectively allowing them to bypass the regular spam filters and directly reach the inbox.
      • Conversely, if you notice a new type of spam getting through that contains specific keywords or phrases, you can create a rule that says: “If the subject or body contains ‘Urgent crypto investment opportunity,’ set the SCL to 9 (High Confidence Spam).” This will ensure that anti-spam policies apply their “High Confidence Spam” action (e.g., quarantine or delete) to those messages, even if EOP’s default content filters haven’t yet caught up.
    • Overriding or Enhancing Actions: Mail flow rules can also take actions independently or in conjunction with anti-spam policies. For instance:

      • You might have an anti-spam policy that quarantines “high confidence spam.” A mail flow rule could say: “If an email is from badspammer.com AND it’s marked as ‘High Confidence Spam,’ also send a notification to the security team.”
      • You can create rules to completely bypass spam filtering for certain trusted senders or internal communication, preventing false positives (legitimate emails being mistaken for spam).
      • You can block messages outright based on criteria like sender domain, specific keywords, or attachments, even before EOP fully processes them for spam, providing a very direct defense.
      • You can tag messages with custom headers that can then be used by other systems or for further processing.
  • Order of Processing: It’s important to understand that mail flow rules have a priority, and they are processed before or alongside the standard anti-spam policies. This allows administrators to ensure critical rules are applied first.
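The newsletter and keyword examples above can be sketched in PowerShell (hypothetical rule names, sender and keyword values; assumes an active Exchange Online session):

```powershell
# Sketch 1: let a trusted newsletter sender bypass spam filtering entirely.
New-TransportRule -Name "AllowExampleNewsletter" `
    -From "newsletter@example.com" `
    -SetSCL "-1"   # -1 = bypass spam filtering

# Sketch 2: force a known spam phrase to be treated as high-confidence spam.
New-TransportRule -Name "BlockCryptoSpamPhrase" `
    -SubjectOrBodyContainsWords "Urgent crypto investment opportunity" `
    -SetSCL 9
```

Rule priority still matters here: the bypass rule should generally sit at a higher priority (lower number) than any rule that raises the SCL, so trusted mail is exempted before stricter checks apply.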

In essence:

  • EOP and Anti-Spam Policies provide the automated, intelligent, and broad-spectrum defense against spam.
  • Mail Flow Rules are your administrative scalpel, allowing you to fine-tune, customize, override, or supplement that broad defense for specific scenarios unique to your organization. They let you proactively respond to new threats, ensure delivery of critical legitimate mail, and implement your own nuanced email handling policies beyond the default spam filtering.

M365 Copilot Chat vs. Copilot Research Agent: Use Cases and Examples

bp1

Microsoft 365 Copilot serves as your AI-powered assistant across Office apps and Teams, helping with everyday tasks through a conversational chat interface. In contrast, the Copilot Research Agent is a specialized AI mode for deep, multi-step research that can comb through vast amounts of data (both your enterprise data and web) to produce comprehensive, evidence-backed reports. Choosing the right tool will ensure you get the best results for your needs. Below, we break down the strengths, ideal use cases, and examples for each, as well as when not to use one versus the other.

Overview of the Two Copilot Modes

M365 Copilot Chat (Standard Copilot): This is the default Copilot experience integrated into Microsoft 365 apps (such as Teams, Outlook, Word, etc.). It provides quick, near real-time responses in a conversational way[1]. Copilot Chat can draft content, answer questions, summarize information, and help with tasks in seconds using the context you provide or your work data via Microsoft Graph[2]. It’s like an AI assistant always available in-app to help you “work smarter” on everyday tasks.

Copilot Research Agent (Researcher Mode): This is an advanced reasoning agent for in-depth research. It uses a more powerful, iterative reasoning process to handle complex, multi-step queries that require analyzing multiple sources. The Research agent will take longer (often a few minutes per query) to gather information from across emails, chats, meetings, documents, enterprise systems, and even the web, then synthesize a thorough answer[1][3]. The output is usually a well-structured report or detailed response with sources cited for verification[1]. In short, Researcher acts like a diligent analyst digging through all data available to answer your question with high accuracy and detail – albeit with a slower response time than standard Chat.

Key Differences at a Glance

Aspect by aspect, M365 Copilot Chat (Standard) and the Copilot Research Agent (Researcher) compare as follows:

  • Response speed: Copilot Chat gives near-instant answers (usually seconds) and is optimized for real-time use, so you can get quick help while working. Researcher is slower, with deep processing that often takes 3–6 minutes for a full response; it spends more time reasoning, gathering and verifying information.

  • Complexity handling: Copilot Chat handles basic to moderate complexity and is great for straightforward or single-step questions and tasks; it can use context but generally handles one prompt at a time without extensive planning. Researcher is built for high-complexity, multi-step reasoning: it breaks questions into sub-tasks, looks up multiple sources and synthesises findings, performing chain-of-thought planning and iterative research.

  • Data scope: Copilot Chat works from the immediate context plus relevant enterprise data; it can tap into your recent emails, files and chats if needed (via Graph) but typically focuses on the content at hand (e.g., the document or thread you’re viewing). Researcher securely searches broad enterprise and external data: emails, documents, meeting transcripts, chat history, and even external connectors or web sources as needed. It will “search everywhere” to ensure no relevant info is missed.

  • Typical output: Copilot Chat returns brief replies or edits, e.g., a paragraph answering your question, a list of bullet points, or a draft email or document section; the style is often concise and may not always cite sources (it’s more like a quick assistant). Researcher delivers detailed reports or comprehensive answers, often a structured report with sections, detailed explanations, and inline citations to sources for fact-checking. It resembles what an analyst’s researched memo might look like.

  • Interaction style: Copilot Chat is conversational and interactive; you can have a back-and-forth, ask follow-ups instantly, or refine the output, making it suited to real-time collaboration while you work. Researcher runs task-focused sessions: it might ask clarifying questions up-front then deliver a final report. It’s less about continuous chat and more about digging for answers, though you can still follow up with additional questions (each may invoke a new deep research cycle).

  • Limitations: Copilot Chat may not fully answer very broad or data-heavy queries; its faster reasoning can sometimes mean less depth or context, and complex multi-source questions might get summary-level answers or require you to prompt multiple times. Researcher is not ideal for trivial or time-sensitive queries: because it takes longer and uses intensive resources (often even limited to a certain number of uses per month), it’s overkill for a one-line answer or tiny task you need immediately.

When to Use M365 Copilot Chat (with Examples)

Use Copilot Chat for day-to-day productivity tasks, especially when you need a quick, on-the-fly response or assistance within the flow of work. Here are the best use cases and examples:

  • Quick Summaries of Single Sources: When you want a fast summary of a specific item (an email thread, document, or meeting). For example, “Summarise this email chain for me” – Copilot Chat can instantly pull out the key points from a long email conversation[2]. Or in Teams, you might ask, “What were the main action items from the meeting I missed?”, and it will recap the meeting recording or chat for you in seconds. This is ideal for catching up on information without reading everything yourself.
  • Drafting and Composing Content: Copilot Chat excels at generating initial drafts and content ideas quickly. If you need to write something, you can instruct Copilot to draft it for you, then you refine it. For instance, you could say: “Draft an email to

References

[1] Researcher agent in Microsoft 365 Copilot

[2] Top 10 things to try first with Microsoft 365 Copilot

[3] Conversation Modes: Quick, Think Deeper, Deep Research

[4] Introducing Researcher and Analyst in Microsoft 365 Copilot

[5] Inside Copilot’s Researcher and Analyst Agents

Need to Know podcast–Episode 349

Explore the future of AI integration, Microsoft Cloud updates, and security innovations tailored for the SMB market. In this episode, we dive into the transformative role of AI MCP servers, the latest Microsoft 365 and Teams updates, and practical security and compliance strategies. Whether you’re an IT pro, business leader, or tech enthusiast, this episode delivers actionable insights and resources to stay ahead in the Microsoft ecosystem.

Brought to you by www.ciaopspatron.com

you can listen directly to this episode at:

https://ciaops.podbean.com/e/episode-349-mcp-is-for-me/

Subscribe via iTunes at:

https://itunes.apple.com/au/podcast/ciaops-need-to-know-podcasts/id406891445?mt=2

or Spotify:

https://open.spotify.com/show/7ejj00cOuw8977GnnE2lPb

Don’t forget to give the show a rating as well as send me any feedback or suggestions you may have for the show.

Resources

CIAOPS Need to Know podcast – CIAOPS – Need to Know podcasts | CIAOPS

X – https://www.twitter.com/directorcia

Join my Teams shared channel – Join my Teams Shared Channel – CIAOPS

CIAOPS Merch store – CIAOPS

Become a CIAOPS Patron – CIAOPS Patron

CIAOPS Blog – CIAOPS – Information about SharePoint, Microsoft 365, Azure, Mobility and Productivity from the Computer Information Agency

CIAOPS Brief – CIA Brief – CIAOPS

CIAOPS Labs – CIAOPS Labs – The Special Activities Division of the CIAOPS

Support CIAOPS – https://ko-fi.com/ciaops

Get your M365 questions answered via email

Show Notes

What’s new in Microsoft Entra – June 2025: Highlights include upcoming support for backing up account names in the Authenticator app using iCloud Keychain
Enhancing Defense Security with Entra ID Governance: Discusses how Entra ID Governance strengthens defense sector security
What’s New in Microsoft Teams | June 2025: Covers new Teams features and enhancements
What’s new in Microsoft Intune: June 2025: Summarizes Intune updates including device management improvements
Microsoft Intune data-driven management | Device Query & Copilot: Introduces new Copilot-powered device query features

Data Breach Reporting with Microsoft Data Security Investigations: Guidance on regulatory breach reporting
Modern, unified data security in the AI era: New Microsoft Purview capabilities for AI-driven data protection
Safeguarding data with Microsoft 365 Copilot: Focuses on compliance and security in Copilot deployments
Protection Against Email Bombs: Microsoft Defender for Office 365 introduces new protections
Introducing the Microsoft 365 Copilot App Learning Series: Learning resources for Copilot adoption
Making the Most of Attack Simulation Training: Best practices for security training
Processing status pane for SharePoint Autofill: New UI enhancements for SharePoint
Introducing the New SharePoint Template Gallery: Streamlined template discovery and usage
Planning your move to Microsoft Defender portal: Transition guidance for Sentinel customers
Jasper Sleet: North Korean IT infiltration tactics: Threat intelligence update
Managing warehouse devices with Microsoft Intune: Real-world Intune use case

Integrating Microsoft Learn Docs with Copilot Studio using MCP