Copilot Agents licensing usage update

 

Things have changed recently when it comes to licensing Copilot Agents. Here is the latest information I can find. In short, every user who needs access to tenant information for use with Copilot requires a license.


🔒 Confirmed Licensing Requirements

1. No Included Message Capacity with a Single M365 Copilot License

Confirmation: Correct. Your individual Microsoft 365 Copilot license does not include a pool of Copilot Studio message capacity that can be used by other users in the tenant who are unlicensed.

  • Your License Rights: Your M365 Copilot license grants you the right to:

    • Create and manage Copilot Studio agents for internal workflows at no extra charge for your own usage.

    • Access and use those agents yourself without incurring additional usage costs.

  • Colleagues' Consumption: Usage by your unlicensed colleagues is considered an organizational-level cost that must be covered by a separate organizational subscription for Copilot Studio.

2. Unlicensed Users Cannot Use Tenant-Grounded Agents Without Organizational Metering

Confirmation: Correct. Unlicensed users will not be able to use an agent that grounds its answers in shared tenant data (like SharePoint or OneDrive) unless the organization has set up a Copilot Studio billing subscription.

  • Agents that Access Tenant Data (SharePoint/OneDrive):

    • These agents access Graph-grounded data, which is considered a premium function and is billed on a metered basis (using “Copilot Credits”).

    • This metered consumption must be paid for by the organization.

  • The Required Organizational Licensing: To enable the unlicensed users to chat with your agent, the tenant administrator must set up one of the following Copilot Studio subscriptions:

    • Copilot Studio Message Pack (Pre-paid Capacity): Purchase packs of Copilot Credits (e.g., 25,000 credits per pack/month). The unlicensed users’ interactions are consumed from this central pool.

    • Copilot Studio Pay-As-You-Go (PAYG): Link a Copilot Studio environment to an Azure subscription. The interactions from the unlicensed users are billed monthly based on actual consumption (credits used) through Azure.

Official Licensing References

SharePoint / OneDrive Agent — Licensing & Usage Summary

Quick reference table describing what licenses and costs are required for users to access an agent that integrates with SharePoint or OneDrive.

Scenario 1 – Licensed User (You)
  • User's license: Microsoft 365 Copilot (add-on license)
  • Requirement to access the SharePoint/OneDrive agent: No additional license required.
  • Usage cost: No additional charges for using the agent you created.

Scenario 2 – Unlicensed User (Colleague)
  • User's license: Eligible M365 plan (e.g., E3/E5) without M365 Copilot
  • Requirement to access the SharePoint/OneDrive agent: An organizational Copilot Studio subscription (Pay-As-You-Go or Message Pack) must be enabled in the tenant.
  • Usage cost: Metered charges (Copilot Credits) are incurred against the organizational capacity / Azure subscription.

Key Reference: Microsoft documentation explicitly states: “If a user doesn’t have a Microsoft 365 Copilot license… if their organization enables metering through Copilot Studio, users can access agents in Copilot Chat that provide focused grounding on specific SharePoint sites, shared tenant files, or third-party data.” This confirms the unlicensed users’ access is contingent on the organizational metering being active.

Summary of Action Required

To make your agent available to your unlicensed colleagues, you need to inform your IT/licensing administrator that they must procure and enable Copilot Studio capacity (either Message Packs or Pay-As-You-Go metering) in your tenant. Your personal M365 Copilot license covers your creation and use, but not the consumption of others who are accessing premium, tenant-grounded data.

Microsoft agent usage estimator

The organizational consumption for agents created in Copilot Studio is measured in Copilot Credits.


💰 Copilot Studio Organizational Pricing (USD)

Microsoft offers two main ways for the organization to purchase the capacity consumed by unlicensed users accessing tenant-grounded data:

 

Copilot Credits — Pricing

Prepaid Capacity Pack
  • Cost: USD $200.00 per month (per pack)
  • Capacity provided: 25,000 Copilot Credits per month (tenant-wide pool)
  • Best for: Stable, predictable, moderate usage with budget control (lower cost per credit).

Pay-As-You-Go (PAYG)
  • Cost: USD $0.01 per Copilot Credit
  • Capacity provided: No upfront commitment; billed monthly based on actual usage.
  • Best for: Pilots, highly variable usage, or as an overage safety net for the Prepaid Packs.

Note: All prices are Estimated Retail Price (ERP) in USD and are subject to change. Your final price will depend on your specific Microsoft agreement (e.g., Enterprise Agreement) and local currency conversion.


📊 Copilot Credit Consumption Rates

The cost is based on the complexity of the agent’s response, not just the number of messages. Since your agent uses SharePoint/OneDrive data, the key consumption rate to note is for Tenant Graph grounding.

 

Copilot Credit consumption per agent action/scenario (credits consumed per event):

  • Tenant Graph Grounding (accessing SharePoint/OneDrive data): 10 Copilot Credits
  • Generative Answer (using an LLM to form a non-grounded answer): 2 Copilot Credits
  • Classic Answer (scripted topic response): 1 Copilot Credit
  • Agent Action (invoking tools/steps, e.g., a Power Automate flow): 5 Copilot Credits

Example Cost Calculation

Let's assume an unlicensed user asks the agent a question that requires it to search your SharePoint knowledge source (Tenant Graph Grounding) and generate a summary answer (Generative Answer).

Total Credits = (Credits for Grounding) + (Credits for Generative Answer)
Total Credits = 10 + 2 = 12 Credits per conversation

If 100 unlicensed users each have 5 conversations per day:

Daily Conversations: 100 users × 5 conversations = 500
Daily Credits: 500 conversations × 12 credits/conversation = 6,000 credits

Monthly Credits (approx): 6,000 credits/day × 30 days = 180,000 credits

Monthly Cost Estimate:

Using Prepaid Packs:
180,000 credits ÷ 25,000 credits per pack = 7.2 packs
The organization would need to buy 8 packs per month.

Monthly Cost: 8 packs × $200 = USD $1,600

Using Pay-As-You-Go (PAYG):
Monthly Cost: 180,000 credits × $0.01/credit = USD $1,800

The Prepaid Pack option is more economical for this level of steady, high usage. Your IT team will need to monitor usage and choose the appropriate mix of Prepaid Packs and PAYG overage protection.
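If your IT team wants to re-run this arithmetic with their own assumptions, the sums are easy to script. Below is a minimal sketch in Python that reproduces the example above; the credit rates, pack price, and PAYG rate are the illustrative figures quoted earlier and should be confirmed against current Microsoft pricing before budgeting.

```python
import math

# Illustrative rates from the worked example above (ERP USD); confirm current
# values against Microsoft's Copilot Studio licensing documentation.
CREDITS_PER_CONVERSATION = 10 + 2      # Tenant Graph grounding + generative answer
PACK_PRICE_USD = 200.00                # Prepaid capacity pack
PACK_CREDITS = 25_000                  # Credits included per pack per month
PAYG_RATE_USD = 0.01                   # Pay-as-you-go price per Copilot Credit


def estimate_monthly_cost(users: int, conversations_per_user_per_day: int, days: int = 30) -> dict:
    """Estimate monthly credit usage and compare prepaid packs with pay-as-you-go."""
    daily_conversations = users * conversations_per_user_per_day
    monthly_credits = daily_conversations * CREDITS_PER_CONVERSATION * days
    packs_needed = math.ceil(monthly_credits / PACK_CREDITS)
    return {
        "monthly_credits": monthly_credits,
        "prepaid_packs_needed": packs_needed,
        "prepaid_cost_usd": packs_needed * PACK_PRICE_USD,
        "payg_cost_usd": monthly_credits * PAYG_RATE_USD,
    }


# 100 unlicensed users with 5 conversations each per day -> 180,000 credits,
# 8 packs (USD $1,600) versus PAYG (USD $1,800), matching the calculation above.
print(estimate_monthly_cost(users=100, conversations_per_user_per_day=5))
```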

Here are the sources that were used to compile the information, each with a direct hyperlink:

  1. Copilot Studio licensing – Microsoft Learn

  2. Billing rates and management – Microsoft Copilot Studio

  3. Microsoft 365 Copilot Pricing – AI Agents | Copilot Studio

  4. Copilot Studio pricing & licensing (2025): packs and credits

  5. Copilot Credits consumption – LicenseVerse – Licensing School

  6. Get access to Copilot Studio – Microsoft Learn

  7. Manage Copilot Studio credits and capacity – Power Platform | Microsoft Learn

 

 

Unlock Anthropic AI in Microsoft Copilot: Step-by-Step Setup & Crucial Warnings!

In this video, I walk you through how to enable Anthropic’s powerful AI models—like Claude—inside Microsoft Copilot. I’ll show you exactly where to find the settings, how to activate new AI providers, and what features you unlock in Researcher and Copilot Studio. Plus, I share an important compliance warning you need to know before turning this on, so you can make informed decisions for your organization. If you want to supercharge your Copilot experience and stay ahead with the latest AI integrations, this guide is for you!

Video link = https://www.youtube.com/watch?v=Gxa9OrI6VJs

Need to Know podcast–Episode 352

In this episode of the CIAOPS “Need to Know” podcast, we dive into the latest updates across Microsoft 365, GitHub Copilot, and SMB-focused strategies for scaling IT services. From new Teams features to deep dives into DLP alerts and co-partnering models for MSPs, this episode is packed with insights for IT professionals and small business tech leaders looking to stay ahead of the curve. I also take a look at building an agent to help you work with frameworks like the ASD Blueprint for Secure Cloud.

Brought to you by www.ciaopspatron.com

You can listen directly to this episode at:

https://ciaops.podbean.com/e/episode-352-agents-to-the-rescue/

Subscribe via iTunes at:

https://itunes.apple.com/au/podcast/ciaops-need-to-know-podcasts/id406891445?mt=2

or Spotify:

https://open.spotify.com/show/7ejj00cOuw8977GnnE2lPb

Don’t forget to give the show a rating as well as send me any feedback or suggestions you may have for the show.

Resources

CIAOPS Need to Know podcast – CIAOPS – Need to Know podcasts | CIAOPS

X – https://www.twitter.com/directorcia

Join my Teams shared channel – Join my Teams Shared Channel – CIAOPS

CIAOPS Merch store – CIAOPS

Become a CIAOPS Patron – CIAOPS Patron

CIAOPS Blog – CIAOPS – Information about SharePoint, Microsoft 365, Azure, Mobility and Productivity from the Computer Information Agency

CIAOPS Brief – CIA Brief – CIAOPS

CIAOPS Labs – CIAOPS Labs – The Special Activities Division of the CIAOPS

Support CIAOPS – https://ko-fi.com/ciaops

Get your M365 questions answered via email

Microsoft 365 & GitHub Copilot Updates
GPT-5 in Microsoft 365 Copilot:
https://www.microsoft.com/en-us/microsoft-365/blog/2025/08/07/available-today-gpt-5-in-microsoft-365-copilot/

GPT-5 Public Preview for GitHub Copilot: https://github.blog/changelog/2025-08-07-openai-gpt-5-is-now-in-public-preview-for-github-copilot/

Microsoft Teams & UX Enhancements

Mic Volume Indicator in Teams: https://techcommunity.microsoft.com/blog/Microsoft365InsiderBlog/new-microphone-volume-indicator-in-teams/4442879

Pull Print in Universal Print: https://techcommunity.microsoft.com/blog/windows-itpro-blog/pull-print-is-now-available-in-universal-print/4441608

Audio Overview in Word via Copilot: https://techcommunity.microsoft.com/blog/Microsoft365InsiderBlog/listen-to-an-audio-overview-of-a-document-with-microsoft-365-copilot-in-word/4439362

Hidden OneDrive Features: https://techcommunity.microsoft.com/blog/microsoft365insiderblog/get-the-most-out-of-onedrive-with-these-little-known-features/4435197

SharePoint Header/Footer Enhancements: https://techcommunity.microsoft.com/blog/spblog/introducing-new-sharepoint-site-header–footer-enhancements/4444261

Security & Compliance

DLP Alerts Deep Dive (Part 1 & 2): https://techcommunity.microsoft.com/blog/microsoft-security-blog/deep-dive-dlp-incidents-alerts–events—part-1/4443691

https://techcommunity.microsoft.com/blog/microsoft-security-blog/deep-dive-dlp-incidents-alerts–events—part-2/4443700

Security Exposure Management Ninja Training: https://techcommunity.microsoft.com/blog/securityexposuremanagement/microsoft-security-exposure-management-ninja-training/4444285

Microsoft Entra Internet Access & Shadow AI Protection: https://techcommunity.microsoft.com/blog/microsoft-entra-blog/uncover-shadow-ai-block-threats-and-protect-data-with-microsoft-entra-internet-a/4440787

ASD Blueprint for Secure Cloud – https://blueprint.asd.gov.au/

Building a Collaborative Microsoft 365 Copilot Agent: A Step-by-Step Guide

Creating a Microsoft 365 Copilot agent (a custom AI assistant within Microsoft 365 Copilot) can dramatically streamline workflows. These agents are essentially customised versions of Copilot that combine specific instructions, knowledge, and skills to perform defined tasks or scenarios[1]. The goal here is to build an agent that multiple team members can collaboratively develop and easily maintain – even if the original creator leaves the business. This report provides:

  • Step-by-step guidelines to create a Copilot agent (using no-code/low-code tools).
  • Best practices for multi-user collaboration, including managing edit permissions.
  • Documentation and version control strategies for long-term maintainability.
  • Additional tips to ensure the agent remains robust and easy to update.

Step-by-Step Guide: Creating a Microsoft 365 Copilot Agent

To build your Copilot agent without code, you will use Microsoft 365 Copilot Studio's Agent Builder. This tool provides a guided interface to define the agent's behavior, knowledge, and appearance. In broad strokes, the steps are: describe the agent in plain language, give it a name and description, write its instructions, connect any knowledge sources (such as SharePoint sites or files), enable any additional capabilities, and then test it in the built-in test pane before publishing.

As a result of the steps above, you have a working Copilot agent with its name, description, instructions, and any connected data sources or capabilities configured. You built this agent in plain language and refined it with no code required, thanks to Copilot Studio’s declarative authoring interface[2].

Before rolling it out broadly, double-check the agent’s responses for accuracy and tone, especially if it’s using internal knowledge. Also verify that the knowledge sources cover the expected questions. (If the agent couldn’t answer a question in testing, you might need to add a missing document or adjust instructions.)

Note: Microsoft also provides pre-built templates in Copilot Studio that you can use as a starting point (for example, templates for an IT help desk bot, a sales assistant, etc.)[2]. Using a template can jump-start your project with common instructions and sample prompts already filled in, which you can then modify to suit your needs.


Collaborative Development and Access Management

One key to long-term maintainability is ensuring multiple people can access and work on the agent. You don’t want the agent tied solely to its creator. Microsoft 365 Copilot supports this through agent sharing and permission controls. Here’s how to enable collaboration and manage who can use or edit the agent:

  • Share the Agent for Co-Authoring: After creating the agent, the original author can invite colleagues as co-authors (editors). In Copilot Studio, use the Share menu on the agent and add specific users by name or email for “collaborative authoring” access[3]. (You can only add individuals for edit access, not groups, and those users must be within your organisation.) Once shared, these teammates are granted the necessary roles (Environment Maker/Bot Contributor in the underlying Power Platform environment) automatically so they can modify the agent[3]. Within a few minutes, the agent will appear in their Copilot Studio interface as well. Now your agent effectively has multiple owners — if one person leaves, others still have full editing rights.
  • Ensure Proper Permissions: When sharing for co-authoring, make sure the colleagues have appropriate permissions in the environment. Copilot Studio will handle most of this via the roles mentioned, but it’s good for an admin to know who has edit access. By design, editors can do everything the owner can: edit content, configure settings, and share the agent further. Viewers (users who are granted use but not edit rights) cannot make changes[4]. Use Editor roles for co-authors and Viewer roles for end users as needed to control access[4]. For example, you may grant your whole team viewer access to use the agent, but only a smaller group of power users get editor access to change it. (The platform currently only allows assigning Editor permission to individuals, not to a security group, for safety[4].)
  • Collaborative Editing in Real-Time: Once multiple people have edit access, Copilot Studio supports concurrent editing of the agent’s topics (the conversational flows or content nodes). The interface will show an “Editing” indicator with the co-authors’ avatars next to any topic being worked on[3]. This helps avoid stepping on each other’s toes. If two people do happen to edit the same piece at once, Copilot Studio prevents accidental overwrites by detecting the conflict and offering choices: you can discard your changes or save a copy of the topic[3]. For instance, if you and a colleague unknowingly both edited the FAQ topic, and they saved first, when you go to save, the system might tell you a newer version exists. You could then choose to keep your version as a separate copy, review differences, and merge as appropriate. This built-in change management ensures that multi-author collaboration is safe and manageable.
  • Sharing the Agent for Use: In addition to co-authors, you likely want to share the finished agent with other employees so they can use it in Copilot. You can share the agent via a link or through your tenant’s app catalog. In Copilot Studio’s share settings, choose who can chat with (use) the agent. Options include “Anyone in your organization” or specific security groups[5]. For example, you might initially share it with just the IT department group for a pilot, or with everyone if it’s broadly useful. When a user adds the shared agent, it will show up in their Microsoft 365 Copilot interface for them to interact with. Note that sharing for use does not grant edit rights – it only allows using the agent[5]. Keep the sharing scope to “Only me” if it’s a draft not ready for others, but otherwise switch it to an appropriate audience so the agent isn’t locked to one person’s account[5].
  • Manage Underlying Resources: If your agent uses additional resources like Power Automate flows (actions) or certain connectors that require separate permissions, remember to share those as well. Sharing an agent itself does not automatically share any connected flow or data source with co-authors[3]. For example, if the agent triggers a Power Automate flow to update a SharePoint list, you must go into that flow and add your colleagues as co-owners there too[3]. Otherwise, they might be able to edit the agent’s conversation, but not open or modify the flow. Similarly, ensure any SharePoint sites or files used as knowledge sources have the right sharing settings for your team. A good practice is to use common team-owned resources (not one person’s private OneDrive file) for any knowledge source, so access can be managed by the team or admins.
  • Administrative Oversight: Because these agents become part of your organisation’s tools, administrators have oversight of shared agents. In the Microsoft 365 admin center (under Integrated Apps > Shared Agents), admins can see a list of all agents that have been shared, along with their creators, status, and who they’re shared with[1]. This means if the original creator does leave the company, an admin can identify any orphaned agents and reassign ownership or manage them as needed. Admins can also block or disable an agent if it’s deemed insecure or no longer appropriate[1]. This governance is useful for ensuring continuity and compliance – your agent isn’t tied entirely to one user’s account. From a planning perspective, it’s wise to have at least two people with full access to every mission-critical agent (one primary and one backup person), plus ensure your IT admin team is aware of the agent’s existence.

By following these practices, you create a safety net around your Copilot agent. Multiple team members can improve or update it, and no single individual is irreplaceable for its maintenance. Should someone exit the team, the remaining editors (or an admin) can continue where they left off.


Documentation and Version Control Practices

Even with a collaborative platform, it’s important to document the agent’s design and maintain version control as if it were any other important piece of software. This ensures that knowledge about how the agent works is not lost and changes can be tracked over time. Here are key practices:

  • Create a Design & Usage Document: Begin a living document (e.g. in OneNote or a SharePoint wiki) that describes the agent in detail. This should include the agent's purpose, the problems it solves, and its scope (what it will and won't do). Document the instructions or logic you gave it – you might even copy the core parts of the agent's instruction text into this document for reference. Also list the knowledge sources connected (e.g. "SharePoint site X – HR Policies") and any capabilities/flows added. This way, if a new colleague takes over the agent, they can quickly understand its configuration and dependencies. Include screenshots of the agent's setup from Copilot Studio if helpful. If the agent goes through iterations, note what changed in each version ("Changelog: e.g. Added new Q&A section on 2025-08-16 to cover Covid policies"). This documentation will be invaluable if the original creator is not available to explain the agent's behavior down the line.
  • Use Source Control for Agent Configuration (ALM): Treat the agent as a configurable solution that can be exported and versioned. Microsoft 365 Copilot agents built in Copilot Studio actually reside in the Power Platform environment, which means you can leverage Power Platform's Application Lifecycle Management (ALM) features. Specifically, you can export the agent as a solution package and store that file for version control[6]. Using Copilot Studio, create a solution in the environment, add the agent to it, and export it as a solution (.zip) file. This exported solution contains the agent's definition (topics, flows, etc.). You can keep these solution files in a source repository (like a GitHub or Azure DevOps repo) to track changes over time, similar to how you'd version code. Whenever you make significant updates to the agent, export an updated solution file (with a version number or date in the filename) and commit it to the repository (see the sketch after this list). This provides a backup and a history. In case of any issue or if you need to restore or compare a previous version, you can import an older solution file into a sandbox environment[6]. Microsoft's guidance explicitly supports moving agents between environments using this export/import method, which can double as a backup mechanism[6].
  • Implement CI/CD for Complex Projects (Optional): If your organisation has the capacity, you can integrate the agent development into a Continuous Integration/Continuous Deployment process. Using tools like Azure DevOps or GitHub Actions, you can automate the export/import of agent solutions between Dev, Test, and Prod environments. This kind of pipeline ensures that all changes are logged and pass through proper testing stages. Microsoft recommends maintaining healthy ALM processes with versioning and deployment automation for Copilot agents, just as you would for other software[7]. For example, you might do initial editing in a development environment, export the solution, have it reviewed in code review (even though it’s mostly configuration, you can still check the diff on the solution components), then import into a production environment for the live agent. This way, any change is traceable. While not every team will need full DevOps for a simple Copilot agent, this approach becomes crucial if your agent grows in complexity or business importance.
  • Consider the Microsoft 365 Agents SDK for Code-Based Projects: Another approach to maintainability is building the agent via code. Microsoft offers an Agents SDK that allows developers to create Copilot agents using languages like C#, JavaScript, or Python, and integrate custom AI logic (with frameworks like Semantic Kernel or LangChain)[8]. This is a more advanced route, but it has the advantage that your agent's logic lives in code files that can be fully managed in source control. If your team has software engineers, they could use the SDK to implement the agent with standard dev practices (unit testing, code reviews, git version control, etc.). This isn't a no-code solution, but it's worth mentioning for completeness: a coded agent can be as collaborative and maintainable as any other software project. The SDK supports quick scaffolding of projects and deployment to Copilot, so you could even migrate a no-code agent to a coded one later if needed[8]. Only pursue this if you need functionality beyond what Copilot Studio offers or want deeper integration/testing – for most cases, the no-code approach is sufficient.
  • Keep the Documentation Updated: Whichever development path you choose, continuously update your documentation when changes occur. If a new knowledge source is added or a new capability toggled on, note it in the doc. Also record any design rationale (“We disabled the image generator on 2025-09-01 due to misuse”) so future maintainers understand past decisions. Good documentation ensures that even if original creators or key contributors leave, anyone new can come up to speed quickly by reading the material.
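To make the export-and-commit routine from the source-control bullet above concrete, here is a minimal Python sketch. It assumes you have already exported the agent as a solution .zip from Copilot Studio and that the target folder is an initialised Git repository; the file and folder names are hypothetical placeholders, and only standard Git CLI commands are used.

```python
import shutil
import subprocess
from datetime import date
from pathlib import Path

# Hypothetical paths: the solution was exported manually from Copilot Studio and
# saved as export/HRAgentSolution.zip; copilot-agent-repo is an existing Git repo.
EXPORTED_SOLUTION = Path("export/HRAgentSolution.zip")
REPO_DIR = Path("copilot-agent-repo")
VERSIONS_DIR = REPO_DIR / "versions"


def commit_solution_snapshot() -> Path:
    """Copy the exported solution into a dated filename and commit it to Git."""
    VERSIONS_DIR.mkdir(parents=True, exist_ok=True)
    snapshot = VERSIONS_DIR / f"HRAgentSolution_{date.today():%Y-%m-%d}.zip"
    shutil.copy2(EXPORTED_SOLUTION, snapshot)

    # Standard Git CLI calls, run from the repository root.
    subprocess.run(["git", "add", str(snapshot.relative_to(REPO_DIR))], cwd=REPO_DIR, check=True)
    subprocess.run(
        ["git", "commit", "-m", f"Snapshot Copilot agent solution {snapshot.name}"],
        cwd=REPO_DIR,
        check=True,
    )
    return snapshot


if __name__ == "__main__":
    print(f"Committed {commit_solution_snapshot()}")
```

A dated snapshot like this gives you the backup and history described above; teams that later adopt full ALM pipelines can replace the manual export step with automated tooling.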

By maintaining both a digital paper trail (documents) and technical version control (solution exports or code repositories), you safeguard the project’s knowledge. This prevents the “single point of failure” scenario where only one person knows how the agent really works. It also makes onboarding new team members to work on the agent much easier.


Additional Tips for a Robust, Maintainable Agent

Finally, here are additional recommendations to ensure your Copilot agent remains reliable and easy to manage in the long run:

  • Define a Clear Scope and Boundaries: A common pitfall is trying to make one agent do too much. It’s often better to have a focused agent that excels at a specific set of tasks than a catch-all that becomes hard to maintain. Clearly state what user needs the agent addresses. If later you find the scope creeping beyond original intentions (for example, your HR bot is suddenly expected to handle IT helpdesk questions), consider creating a separate agent for the new domain or using multi-agent orchestration, rather than overloading one agent. This keeps each agent simpler to troubleshoot and update. Also use the agent’s instructions to explicitly guard against out-of-scope requests (e.g., instruct it to politely decline questions unrelated to its domain) so that maintenance remains focused.
  • Follow Best Practices in Instruction Design: Well-structured instructions not only help the AI give correct answers, but also make the agent’s logic easier for humans to understand later. Use clear and action-oriented language in your instructions and avoid unnecessary complexity[9]. For example, instead of a vague instruction like “help with leaves,” write a specific rule: “If user asks about leave status, retrieve their leave request record from SharePoint and display the status.” Break down the agent’s workflow into ordered steps where necessary (using bullet or numbered lists in the instructions)[9]. This modular approach (goal → action → outcome for each step) acts like commenting your code – it will be much easier for someone else to modify the behavior if they can follow a logical sequence. Additionally, include a couple of example user queries and desired responses in the instructions (few-shot examples) for clarity, especially if the agent’s task is complex. This reduces ambiguity for both the AI and future editors.
  • Test Thoroughly and Collect Feedback: Continuous testing is key to robustness. Even after deployment, encourage users (or the team internally) to provide feedback if the agent gives an incorrect or confusing response. Periodically review the agent’s performance: pose new questions to it or check logs (if available) to see how it’s handling real queries. Microsoft 365 Copilot doesn’t yet provide full conversation logs to admins, but you can glean some insight via any integrated telemetry. If you have access to Azure Application Insights or the Power Platform CoE kit, use them – Microsoft suggests integrating these to monitor usage, performance, and errors for Copilot agents[7]. For example, Application Insights can track how often certain flows are called or if errors occur, and the Power Platform Center of Excellence toolkit can inventory your agent and its usage metrics[7]. Monitoring tools help you catch issues early (like an action failing because of a permissions error) and measure the agent’s value (how often it’s used, peak times, etc.). Use this data to guide maintenance priorities.
  • Implement Governance and Compliance Checks: Since Copilot agents can access organisational data, ensure that all security and compliance requirements are met. From a maintainability perspective, this means the agent should be built in accordance with IT policies (e.g., respecting Data Loss Prevention rules, not exposing sensitive info). Work with your admin to double-check that the agent’s knowledge sources and actions comply with company policy. Also, have a plan for regular review of content – for instance, if one of the knowledge base documents the agent relies on is updated or replaced, update the agent’s knowledge source to point to the new info. Remove any knowledge source that is outdated or no longer approved. Keeping the agent’s inputs current and compliant will prevent headaches (or forced takedowns) later on.
  • Plan for Handover: Since the question specifically addresses if the original creator leaves, plan for a smooth handover. This includes everything we’ve discussed (multiple editors, documentation, version history). Additionally, consider a short training session or demo for the team members who will inherit the agent. Walk them through the agent’s flows in Copilot Studio, show how to edit a topic, how to republish updates, etc. This will give them confidence to manage it. Also, make sure the agent’s ownership is updated if needed. Currently, the original creator remains the “Owner” in the system. If that person’s account is to be deactivated, it may be wise to have an admin transfer any relevant assets or at least note that co-owners are in place. Since admins can see the creator’s name on the agent, proactively communicate to IT that the agent has co-owners who will take over maintenance. This can avoid a scenario where an admin might accidentally disable an agent assuming no one can maintain it.
  • Regular Maintenance Schedule: Treat the agent as a product that needs occasional maintenance. Every few months (or whatever cadence fits your business), review if the agent’s knowledge or instructions need updates. For example, if processes changed or new common questions have emerged, update the agent to cover them. Also verify that all co-authors still have access and that their permissions are up to date (especially if your company uses role-based access that might change with team reorgs). A little proactive upkeep will keep the agent effective and prevent it from becoming obsolete or broken without anyone noticing.

By following the above tips, your Microsoft 365 Copilot agent will be well-positioned to serve users over the long term, regardless of team changes. You’ve built it with a collaborative mindset, documented its inner workings, and set up processes to manage changes responsibly. This not only makes the agent easy to edit and enhance by multiple people, but also ensures it continues to deliver value even as your organisation evolves.


Conclusion: Building a Copilot agent that stands the test of time requires forethought in both technology and teamwork. Using Microsoft’s no-code Copilot Studio, you can quickly create a powerful assistant tailored to your needs. Equally important is opening up the project to your colleagues, setting the right permissions so it’s a shared effort. Invest in documentation and consider leveraging export/import or even coding options to keep control of the agent’s “source.” And always design with clarity and governance in mind. By doing so, you create not just a bot, but a maintainable asset for your organisation – one that any qualified team member can pick up and continue improving, long after the original creator’s tenure. With these steps and best practices, your Copilot agent will remain helpful, accurate, and up-to-date, no matter who comes or goes on the team.

References

[1] Manage shared agents for Microsoft 365 Copilot – Microsoft 365 admin

[2] Use the Copilot Studio Agent Builder to Build Agents

[3] Share agents with other users – Microsoft Copilot Studio

[4] Control how agents are shared – Microsoft Copilot Studio

[5] Publish and Manage Copilot Studio Agent Builder Agents

[6] Export and import agents using solutions – Microsoft Copilot Studio

[7] Phase 4: Testing, deployment, and launch – learn.microsoft.com

[8] Create and deploy an agent with Microsoft 365 Agents SDK

[9] Write effective instructions for declarative agents

Crafting Effective Instructions for Copilot Studio Agents

Copilot Studio is Microsoft’s low-code platform for building AI-powered agents (custom “Copilots”) that extend Microsoft 365 Copilot’s capabilities[1]. These agents are specialized assistants with defined roles, tools, and knowledge, designed to help users with specific tasks or domains. A central element in building a successful agent is its instructions field – the set of written guidelines that define the agent’s behavior, capabilities, and boundaries. Getting this instructions field correct is absolutely critical for the agent to operate as designed.

In this report, we explain why well-crafted instructions are vital, illustrate good vs. bad instruction examples (and why they succeed or fail), and provide a detailed framework and best practices for writing effective instructions in Copilot Studio. We also cover how to test and refine instructions, accommodate different types of agents, and leverage resources to continuously improve your agent instructions.

Overview: Copilot Studio and the Instructions Field

What is Copilot Studio? Copilot Studio is a user-friendly environment (part of Microsoft Power Platform) that enables creators to build and deploy custom Copilot agents without extensive coding[1]. These agents leverage large language models (LLMs) and your configured tools/knowledge to assist users, but they are more scoped and specialized than the general-purpose Microsoft 365 Copilot[2]. For example, you could create an “IT Support Copilot” that helps employees troubleshoot tech issues, or a “Policy Copilot” that answers HR policy questions. Copilot Studio supports different agent types – commonly conversational agents (interactive chatbots that users converse with) and trigger/action agents (which run workflows or tasks based on triggers).

Role of the Instructions Field: Within Copilot Studio, the instructions field is where you define the agent’s guiding principles and behavior rules. Instructions are the central directions and parameters the agent follows[3]. In practice, this field serves as the agent’s “system prompt” or policy:

  • It establishes the agent’s identity, role, and purpose (what the agent is supposed to do and not do)[1].
  • It defines the agent’s capabilities and scope, referencing what tools or data sources to use (and in what situations)[3].
  • It sets the desired tone, style, and format of the agent’s responses (for consistent user experience).
  • It can include step-by-step workflows or decision logic the agent should follow for certain tasks[4].
  • It may impose restrictions or safety rules, such as avoiding certain content or escalating issues per policy[1].

In short, the instructions tell the agent how to behave and how to think when handling user queries or performing its automated tasks. Every time the agent receives a user input (or a trigger fires), the underlying AI references these instructions to decide:

  1. What actions to take – e.g. which tool or knowledge base to consult, based on what the instructions emphasize[3].
  2. How to execute those actions – e.g. filling in tool inputs with user context as instructed[3].
  3. How to formulate the final answer – e.g. style guidelines, level of detail, format (bullet list, table, etc.), as specified in the instructions.

Because the agent’s reasoning is grounded in the instructions, those instructions need to be accurate, clear, and aligned with the agent’s intended design. An agent cannot obey instructions to use tools or data it doesn’t have access to; thus, instructions must also stay within the bounds of the agent’s configured tools/knowledge[3].

Why Getting the Instructions Right is Critical

Writing the instructions field correctly is critical because it directly determines whether your agent will operate as intended. If the instructions are poorly written or wrong, the agent will likely deviate from the desired behavior. Here are key reasons why correct instructions are so important:

  • They are the Foundation of Agent Behavior: The instructions form the foundation or “brain” of your agent. Microsoft’s guidance notes that agent instructions “serve as the foundation for agent behavior, defining personality, capabilities, and operational parameters.”[1]. A well-formulated instructions set essentially hardcodes your agent’s expertise (what it knows), its role (what it should do), and its style (how it interacts). If this foundation is shaky, the agent’s behavior will be unpredictable or ineffective.
  • Ensuring Relevant and Accurate Responses: Copilot agents rely on instructions to produce responses that are relevant, accurate, and contextually appropriate to user queries[5]. Good instructions tell the agent exactly how to use your configured knowledge sources and when to invoke specific actions. Without clear guidance, the AI might rely on generic model knowledge or make incorrect assumptions, leading to hallucinations (made-up info) or off-target answers. In contrast, precise instructions keep the agent’s answers on track and grounded in the right information.
  • Driving the Correct Use of Tools/Knowledge: In Copilot Studio, agents can be given “skills” (API plugins, enterprise data connectors, etc.). The instructions essentially orchestrate these skills. They might say, for example, “If the user asks about an IT issue, use the IT Knowledge Base search tool,” or “When needing current data, call the WebSearch capability.” If these directions aren’t specified or are misspecified, the agent may not utilize the tools correctly (or at all). The instructions are how you, the creator, impart logic to the agent’s decision-making about tools and data. Microsoft documentation emphasizes that agents depend on instructions to figure out which tool or knowledge source to call and how to fill in its inputs[3]. So, getting this right is essential for the agent to actually leverage its configured capabilities in solving user requests.
  • Maintaining Consistency and Compliance: A Copilot agent often needs to follow particular tone or policy rules (e.g., privacy guidelines, company policy compliance). The instructions field is where you encode these. For instance, you can instruct the agent to always use a polite tone, or to only provide answers based on certain trusted data sources. If these rules are not clearly stated, the agent might inadvertently produce responses that violate style expectations or compliance requirements. For example, if an agent should never answer medical questions beyond a provided medical knowledge base, the instructions must say so explicitly; otherwise the agent might try to answer from general training data – a big risk in regulated scenarios. In short, correct instructions protect against undesirable outputs by outlining do’s and don’ts (though as a rule of thumb, phrasing instructions in terms of positive actions is preferred – more on that later).
  • Optimal User Experience: Finally, the quality of the instructions directly translates to the quality of the user’s experience with the agent. With well-crafted instructions, the agent will ask the right clarifying questions, present information in a helpful format, and handle edge cases gracefully – all of which lead to higher user satisfaction. Conversely, bad instructions can cause an agent to be confusing, unhelpful, or even completely off-base. Users may get frustrated if the agent requires too much guidance (because the instructions didn’t prepare it well), or if the agent’s responses are messy or incorrect. Essentially, instructions are how you design the user’s interaction with your agent. As one expert succinctly put it, clear instructions ensure the AI understands the user’s intent and delivers the desired output[5] – which is exactly what users want.

Bottom line: If the instructions field is right, the agent will largely behave and perform as designed – using the correct data, following the intended workflow, and speaking in the intended voice. If the instructions are wrong or incomplete, the agent’s behavior can diverge, leading to mistakes or an experience that doesn’t meet your goals. Now, let’s explore what good instructions look like versus bad instructions, to illustrate these points in practice.

Good vs. Bad Instructions: Examples and Analysis

Writing effective agent instructions is somewhat of an art and science. To understand the difference it makes, consider the following examples of a good instruction set versus a bad instruction set for an agent. We’ll then analyze why the good one works well and why the bad one falls short.

Example of Good Instructions

Imagine we are creating an IT Support Agent that helps employees with common technical issues. A good instructions set for such an agent might look like this (simplified excerpt):

You are an IT support specialist focused on helping employees with common technical issues. You have access to the company's IT knowledge base and troubleshooting guides.

Your responsibilities include:
  – Providing step-by-step troubleshooting assistance.
  – Escalating complex issues to the IT helpdesk when necessary.
  – Maintaining a helpful and patient demeanor.
  – Ensuring solutions follow company security policies.

When responding to requests:

  1. Ask clarifying questions to understand the issue.
  2. Provide clear, actionable solutions or instructions.
  3. Verify whether the solution worked for the user.
  4. If resolved, summarize the fix; if not, consider escalation or next steps.[1]

This is an example of well-crafted instructions. Notice several positive qualities:

  • Clear role and scope: It explicitly states the agent’s role (“IT support specialist”) and what it should do (help with tech issues using company knowledge)[1]. The agent’s domain and expertise are well-defined.
  • Specific responsibilities and guidelines: It lists responsibilities and constraints (step-by-step help, escalate if needed, be patient, follow security policy) in bullet form. This acts as general guidelines for behavior and ensures the agent adheres to important policies (like security rules)[1].
  • Actionable step-by-step approach: Under responding to requests, it breaks down the procedure into an ordered list of steps: ask clarifying questions, then give solutions, then verify, etc.[1]. This provides a clear workflow for the agent to follow on each query. Each step has a concrete action, reducing ambiguity.
  • Positive/constructive tone: The instructions focus on what the agent should do (“ask…”, “provide…”, “verify…”) rather than just what to avoid. This aligns with best practices that emphasize guiding the AI with affirmative actions[4]. (If there are things to avoid, they could be stated too, but in this example the necessary restrictions – like sticking to company guides and policies – are inherently covered.)
  • Aligned with configured capabilities: The instructions mention the knowledge base and troubleshooting guides, which presumably are set up as the agent’s connected data. Thus, the agent is directed to use available resources. (A good instruction set doesn’t tell the agent to do impossible things; here it wouldn’t, say, ask the agent to remote-control a PC unless such an action plugin exists.)

Overall, these instructions would likely lead the agent to behave helpfully and stay within bounds. It’s clear what the agent should do and how.

Example of Bad Instructions

Now consider a contrasting example. Suppose we tried to instruct the same kind of agent with this single instruction line:

“You are an agent that can help the user.”

This is obviously too vague and minimal, but it illustrates a “bad” instructions scenario. The agent is given virtually no guidance except a generic role. There are many issues here:

  • No clarification of domain or scope (help the user with what? anything?).
  • No detail on which resources or tools to use.
  • No workflow or process for handling queries.
  • No guidance on style, tone, or policy constraints.

Such an agent would be flying blind. It might respond generically to any question, possibly hallucinate answers because it's not instructed to stick to a knowledge base, and would not follow a consistent multi-step approach to problems. If a user asked it a technical question, the agent might not know to consult the IT knowledge base (since we never told it to). The result would be inconsistent and likely unsatisfactory.

Bad instructions can also occur in less obvious ways. Often, instructions are “bad” not because they are too short, but because they are unclear, overly complicated, or misaligned. For example, consider this more detailed but flawed instruction example (adapted from an official guidance of what not to do):

“If a user asks about coffee shops, focus on promoting Contoso Coffee in US locations, and list those shops in alphabetical order. Format the response as a series of steps, starting each step with Step 1:, Step 2: in bold. Don’t use a numbered list.”[6]

At first glance it’s detailed, but this is labeled as a weak instruction by Microsoft’s documentation. Why is this considered a bad/weak set of instructions?

  • It mixes multiple directives in one blob: It tells the agent what content to prioritize (Contoso Coffee in US) and prescribes a very specific formatting style (steps with “Step 1:”, but strangely “don’t use a numbered list” simultaneously). This could confuse the model or yield rigid responses. Good instructions would separate concerns (perhaps have a formatting rule separately and a content preference rule separately).
  • It’s too narrow and conditional: “If a user asks about coffee shops…” – what if the user asks something slightly different? The instruction is tied to a specific scenario, rather than a general principle. This reduces the agent’s flexibility or could even be ignored if the query doesn’t exactly match.
  • The presence of a negative directive (“Don’t use a numbered list”) could be stated in a clearer positive way. In general, saying what not to do is sometimes necessary, but overemphasizing negatives can lead the model to fixate incorrectly. (A better version might have been: “Format the list as bullet points rather than a numbered list.”)

In summary, bad instructions are those that lack clarity, completeness, or coherence. They might be too vague (leaving the AI to guess what you intended) or too convoluted/conditional (making it hard for the AI to parse the main intent). Bad instructions can also contradict the agent’s configuration (e.g., telling it to use a data source it doesn’t have) – such instructions will simply be ignored by the agent[3] but they waste precious prompt space and can confuse the model’s reasoning. Another failure mode is focusing only on what not to do without guiding what to do. For instance, an instructions set that says a lot of “Don’t do X, avoid Y, never say Z” and little else, may constrain the agent but not tell it how to succeed – the agent might then either do nothing useful or inadvertently do something outside the unmentioned bounds.

Why the Good Example Succeeds (and the Bad Fails)

The good instructions provide specificity and structure – the agent knows its role, has a procedure to follow, and boundaries to respect. This reduces ambiguity and aligns with how the Copilot engine decides on actions and outputs[3]. The bad instructions give either no direction or confusing direction, which means the model might revert to its generic training (not your custom data) or produce unpredictable outputs. In essence:

  • Good instructions guide the agent step-by-step to fulfill its purpose, covering various scenarios (normal case, if issue unclear, if issue resolved or needs escalation, etc.).
  • Bad instructions leave gaps or introduce confusion, so the agent may not behave consistently with the designer’s intent.

Next, we’ll delve into common pitfalls to avoid when writing instructions, and then outline best practices and a framework to craft instructions akin to the “good” example above.

Common Pitfalls to Avoid in Agent Instructions

When designing your agent’s instructions field in Copilot Studio, be mindful to avoid these frequent pitfalls:

1. Being Too Vague or Brief: As shown in the bad example, overly minimal instructions (e.g. one-liners like “You are a helpful agent”) do not set your agent up for success. Ambiguity in instructions forces the AI to guess your intentions, often leading to irrelevant or inconsistent behavior. Always provide enough context and detail so that the agent doesn’t have to “infer” what you likely want – spell it out.

2. Overwhelming with Irrelevant Details: The opposite of being vague is packing the instructions with extraneous or scenario-specific detail that isn’t generally applicable. For instance, hardcoding a very specific response format for one narrow case (like the coffee shop example) can actually reduce the agent’s flexibility for other cases. Avoid overly verbose instructions that might distract or confuse the model; keep them focused on the general patterns of behavior you want.

3. Contradictory or Confusing Rules: Ensure your instructions don’t conflict with themselves. Telling the agent “be concise” in one line and then later “provide as much detail as possible” is a recipe for confusion. Similarly, avoid mixing positive and negative instructions that conflict (e.g. “List steps as Step 1, Step 2… but don’t number them” from the bad example). If the logic or formatting guidance is complex, clarify it with examples or break it into simpler rules. Consistency in your directives will lead to consistent agent responses.

4. Focusing on Don’ts Without Do’s: As a best practice, try to phrase instructions proactively (“Do X”) rather than just prohibitions (“Don’t do Y”)[4]. Listing many “don’ts” can box the agent in or lead to odd phrasings as it contorts to avoid forbidden words. It’s often more effective to tell the agent what it should do instead. For example, instead of only saying “Don’t use a casual tone,” a better instruction is “Use a formal, professional tone.” That said, if there are hard no-go areas (like “do not provide medical advice beyond the provided guidelines”), you should include them – just make sure you’ve also told the agent how to handle those cases (e.g., “if asked medical questions outside the guidelines, politely refuse and refer to a doctor”).

5. Not Covering Error Handling or Unknowns: A common oversight is failing to instruct the agent on what to do if it doesn’t have an answer or if a tool returns no result. If not guided, the AI might hallucinate an answer when it actually doesn’t know. Mitigate this by adding instructions like: “If you cannot find the answer in the knowledge base, admit that and ask the user if they want to escalate.” This kind of error handling guidance prevents the agent from stalling or giving false answers[4]. Similarly, if the agent uses tools, instruct it about when to call them and when not to – e.g. “Only call the database search if the query contains a product name” to avoid pointless tool calls[4].

6. Ignoring the Agent’s Configured Scope: Sometimes writers accidentally instruct the agent beyond its capabilities. For example, telling an agent “search the web for latest news” when the agent doesn’t have a web search skill configured. The agent will simply not do that (it can’t), and your instruction is wasted. Always align instructions with the actual skills/knowledge sources configured for the agent[3]. If you update the agent to add new data sources or actions, update the instructions to incorporate them as well.

7. No Iteration or Testing: Treating the first draft of instructions as final is a mistake (we expand on this later). It’s a pitfall to assume you’ve written the perfect prompt on the first try. In reality, you’ll likely discover gaps or ambiguities when you test the agent. Not iterating is a pitfall in itself – it leads to suboptimal agents. Avoid this by planning for multiple refine-and-test cycles.

By being aware of these pitfalls, you can double-check your instructions draft and revise it to dodge these common errors. Now let’s focus on what to do: the best practices and a structured framework for writing high-quality instructions.

Best Practices for Writing Effective Instructions

Writing great instructions for Copilot Studio agents requires clarity, structure, and an understanding of how the AI interprets your prompts. Below are established best practices, gathered from Microsoft’s guidance and successful agent designers:

  • Use Clear, Actionable Language: Write instructions in straightforward terms and use specific action verbs. The agent should immediately grasp what action is expected. Microsoft recommends using precise verbs like “ask,” “search,” “send,” “check,” or “use” when telling the agent what to do[4]. For example, “Search the HR policy database for any mention of parental leave,” is much clearer than “Find info about leave” – the former explicitly tells the agent which resource to use and what to look for. Avoid ambiguity: if your organization uses unique terminology or acronyms, define them in the instructions so the AI knows what they mean[4].
  • Focus on What the Agent Should Do (Positive Instructions): As noted, frame rules in terms of desirable actions whenever possible[4]. E.g., say “Provide a brief summary followed by two recommendations,” instead of “Do not ramble or give too many options.” Positive phrasing guides the model along the happy path. Include necessary restrictions (compliance, safety) but balance them by telling the agent how to succeed within those restrictions.
  • Provide a Structured Template or Workflow: It often helps to break the agent’s task into step-by-step instructions or sections. This could mean outlining the conversation flow in steps (Step 1, Step 2, etc.) or dividing the instructions into logical sections (like “Objective,” “Response Guidelines,” “Workflow Steps,” “Closing”)[4]. Using Markdown formatting (headers, numbered lists, bullet points) in the instructions field is supported, and it can improve clarity for the AI[4]. For instance, you might have:
    • A Purpose section: describing the agent’s goal and overall approach.
    • Rules/Guidelines: bullet points for style and policy (like the do’s and don’ts).
    • A stepwise Workflow: if the agent needs to go through a sequence of actions (as we did in the IT support example with steps 1-4).
    • Perhaps Error Handling instructions: what to do if things go wrong or info is missing.
    • Example interactions (see below).
    This structured approach helps the model follow your intended order of operations. Each step should be unambiguous and ideally say when to move to the next step (a "transition" condition)[4]. For example, "Step 1: Do X… (if outcome is Y, then proceed to Step 2; if not, respond with Z and end)."
  • Highlight Key Entities and Terms: If your agent will use particular tools or reference specific data sources, call them out clearly by name in the instructions. For example: “Use the <ToolName> action to retrieve inventory data,” or “Consult the PolicyWiki knowledge base for policy questions.” By naming the tool/knowledge, you help the AI choose the correct resource at runtime. In technical terms, the agent matches your words with the names/descriptions of the tools and data sources you attached[3]. So if your knowledge base is called “Contoso FAQ”, instruct “search the Contoso FAQ for relevant answers” – this makes a direct connection. Microsoft’s best practices suggest explicitly referencing capabilities or data sources involved at each step[4]. Also, if your instructions mention any uncommon jargon, define it so the AI doesn’t misunderstand (e.g., “Note: ‘HCS’ refers to the Health & Care Service platform in our context” as seen in a sample[1]).
  • Set the Tone and Style: Don’t forget to tell your agent how to talk to the user. Is the tone friendly and casual, or formal and professional? Should answers be brief or very detailed? State these as guidelines. For example: “Maintain a conversational and encouraging tone, using simple language” or “Respond in a formal style suitable for executive communications.” If formatting is important (like always giving answers in a table or starting with a summary bullet list), include that instruction. E.g., “Present the output as a table with columns X, Y, Z,” or “Whenever listing items, use bullet points for readability.” In our earlier IT agent example, instructions included “provide clear, concise explanations” as a response approach[1]. Such guidance ensures consistency in output regardless of which AI model iteration is behind the scenes.
  • Incorporate Examples (Few-Shot Prompting): For complex agents or those handling nuanced tasks, providing example dialogs or cases in the instructions can significantly improve performance. This technique is known as few-shot prompting. Essentially, you append one or more example interactions (a sample user query and how the agent should respond) in the instructions. This helps the AI understand the pattern or style you expect. Microsoft suggests using examples especially for complex scenarios or edge cases[4]. For instance, if building a legal Q&A agent, you might give an example Q&A where the user asks a legal question and the agent responds citing a specific policy clause, to show the desired behavior. Be careful not to include too many examples (which can eat up token space) – use representative ones. In practice, even 1–3 well-chosen examples can guide the model. If your agent requires multi-turn conversational ability (asking clarifying questions, etc.), you might include a short dialogue example illustrating that flow[7][7]. Examples make instructions much more concrete and minimize ambiguity about how to implement the rules.
  • Anticipate and Prevent Common Failures: Based on known LLM behaviors, watch out for issues like:
    • Over-eager tool usage: Sometimes the model might call a tool too early or without needed info. Solution: explicitly instruct conditions for tool use (e.g., “Only use the translation API if the user actually provided text to translate”)[4].
    • Repetition: The model might parrot an example wording in its response. To counter this, encourage it to vary phrasing or provide multiple examples so it generalizes the pattern rather than copying verbatim[4].
    • Over-verbosity: If you fear the agent will give overly long explanations, add a constraint like “Keep answers under 5 sentences when possible” or “Be concise and to-the-point.” Providing an example of a concise answer can reinforce this[4]. Many of these issues can be tuned by small tweaks in instructions. The key is to be aware of them and adjust wording accordingly. For example, to avoid verbose outputs, you might include a bullet: “Limit the response to the essential information; do not elaborate with unnecessary background.”
  • Use Markdown for Emphasis and Clarity: We touched on structure with Markdown headings and lists. Additionally, you can use bold text in instructions to highlight critical rules the agent absolutely must not miss[4]. For instance: “**Always confirm with the user before closing the session.**” Using bold can give that rule extra weight in the AI’s processing. You can also put specific terms in backticks to indicate things like literal values or code (e.g., “set status to `Closed` in the ticketing system”). These formatting touches help the AI distinguish instruction content from plain narrative.
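
To tie these practices together, here is one way a structured instructions field might look, held in a Python string so it can be checked before being pasted into Copilot Studio. It is only a sketch: the IT-support scenario and the “Contoso FAQ” knowledge source echo the examples above, while the `TicketDesk` action name, the section headings, and the small `lint()` helper are hypothetical choices you would replace with your own.

```python
# A minimal sketch of a structured instructions field for a Copilot Studio agent.
# The "TicketDesk" action, section names, and length budget are illustrative
# placeholders -- adapt them to your own agent.

INSTRUCTIONS = """
# Purpose
You are an IT support agent for Contoso. Help employees resolve common IT issues
using only the knowledge sources and tools attached to you.

# Response guidelines
- Maintain a friendly, professional tone and keep answers under 5 sentences when possible.
- **Always confirm with the user before closing the session.**
- If a question is outside IT support, politely redirect the user to the right team.

# Workflow
1. Ask one clarifying question if the issue is ambiguous (device, error message, urgency).
2. Search the `Contoso FAQ` knowledge source for a relevant article.
   - If an article is found, summarise the fix in plain language and cite the article.
   - If nothing is found, proceed to step 3.
3. Use the `TicketDesk` action to create a ticket, then share the ticket number.
4. Confirm the user's issue is resolved or logged before ending the conversation.

# Example
User: "My VPN keeps disconnecting."
Agent: "Which VPN client are you using, and does it disconnect on Wi-Fi, wired, or both?"
"""

# Quick sanity check before pasting into the instructions field: every expected
# section is present and the text stays under a rough length budget.
REQUIRED_SECTIONS = ["# Purpose", "# Response guidelines", "# Workflow", "# Example"]

def lint(instructions: str, max_chars: int = 8000) -> list[str]:
    problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in instructions]
    if len(instructions) > max_chars:
        problems.append(f"instructions are {len(instructions)} chars; consider trimming")
    return problems

if __name__ == "__main__":
    print(lint(INSTRUCTIONS) or "instructions look structurally complete")
```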

Following these best practices will help you create a robust set of instructions. The next step is to approach the writing process systematically. We’ll introduce a simple framework to ensure you cover all bases when drafting instructions for a Copilot agent.

Framework for Crafting Agent Instructions (T-C-R Approach)

It can be helpful to follow a repeatable framework when drafting instructions for an agent. One useful approach is the T-C-R framework: Task – Clarity – Refine[5].

Using this T-C-R framework ensures you tackle instruction-writing methodically:

  • Task: You don’t forget any part of the agent’s job.
  • Clarity: You articulate exactly what’s expected for each part.
  • Refine: You catch issues and continuously improve the prompt.

It’s similar to how one might approach writing requirements for a software program – be thorough and clear, then test and revise.

Testing and Validation of Agent Instructions

Even the best-written first draft of instructions can behave unexpectedly when put into practice. Therefore, rigorous testing and validation is a crucial phase in developing Copilot Studio agents.

Use the Testing Tools: Copilot Studio provides a Test Panel where you can interact with your agent in real time, and for trigger-based agents, you can use test payloads or scenarios[3]. As soon as you write or edit instructions, test the agent with a variety of inputs:

  • Start with simple, expected queries: Does the agent follow the steps? Does it call the intended tools (you might see this in logs or the response content)? Is the answer well-formatted?
  • Then try edge cases or slightly off-beat inputs: If something is ambiguous or missing in the user’s question, does the agent ask the clarifying question as instructed? If the user asks something outside the agent’s scope, does it handle it gracefully (e.g., with a refusal or a redirect as per instructions)?
  • If your agent has multiple distinct functionalities (say, it both can fetch data and also compose emails), test each function individually.

Validate Against Design Expectations: As you test, compare the agent’s actual behavior to the design you intended. This can be done by creating a checklist of expected behaviors drawn from your instructions. For example: “Did the agent greet the user? ✅”, “Did it avoid giving unsupported medical advice? ✅”, “When I asked a second follow-up question, did it remember context? ✅” etc. Microsoft suggests comparing the agent’s answers to a baseline, like Microsoft 365 Copilot’s answers, to see if your specialized agent is adding the value it should[4]. If your agent isn’t outperforming the generic copilot or isn’t following your rules, that’s a sign the instructions need tweaking or the agent needs additional knowledge.

RAI (Responsible AI) Validation: When you publish an agent, Microsoft 365’s platform will likely run some automated checks for responsible AI compliance (for instance, ensuring no obviously disallowed instructions are present)[4]. Usually, if you stick to professional content and the domain of your enterprise data, this won’t be an issue. But it’s good to double-check that your instructions themselves don’t violate any policies (e.g., telling the agent to do something unethical). This is part of validation – making sure your instructions are not only effective but also compliant.

Iterate Based on Results: It’s rare to get the instructions perfect on the first try. You might observe during testing that the agent does something odd or suboptimal. Use those observations to refine the instructions (this is the “Refine” step of the T-C-R framework). For example, if the agent’s answers are too verbose, you might add a line in instructions: “Be brief in your responses, focusing only on the solution.” Test again and see if that helped. Or if the agent didn’t use a tool when it should have, maybe you need to mention that tool by name more explicitly or adjust the phrasing that cues it. This experimental mindset – tweak, test, tweak, test – is essential. Microsoft’s documentation illustration for declarative agents shows an iterative loop of designing instructions, testing, and modifying instructions to improve outcomes[4][4].

Document Your Tests: As your instructions get more complex, it’s useful to maintain a set of test cases or scenarios with expected outcomes. Each time you refine instructions, run through your test cases to ensure nothing regressed and new changes work as intended. Over time, this becomes a regression test suite for your agent’s behavior.
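
One lightweight way to maintain such a regression suite is a small script of test cases with expected behaviours, run whenever the instructions change. This is only a sketch: the `ask_agent()` helper is a hypothetical stand-in for however you actually obtain the agent’s replies (canned transcripts pasted from the Test Panel, or your own channel integration), and the substring checks are deliberately crude.

```python
# A minimal regression-suite sketch for agent instructions.
# TRANSCRIPTS holds canned replies captured from the Copilot Studio Test Panel;
# replace ask_agent() with a live integration if you have one.

TEST_CASES = [
    {
        "name": "clarifies a vague issue",
        "input": "My laptop is broken",
        "must_contain": ["?"],            # agent should ask a clarifying question
        "must_not_contain": ["legal"],    # stays within IT-support scope
    },
    {
        "name": "refuses out-of-scope request",
        "input": "Can you give me legal advice about my contract?",
        "must_contain": ["can't help"],   # polite refusal wording
        "must_not_contain": [],
    },
]

TRANSCRIPTS = {
    "My laptop is broken": "Sorry to hear that! Which device model is it, and what happens when you turn it on?",
    "Can you give me legal advice about my contract?": "I can't help with legal questions - I'm limited to IT support topics. Please contact your HR or legal team.",
}

def ask_agent(user_input: str) -> str:
    """Hypothetical helper: here it simply looks up a saved transcript."""
    return TRANSCRIPTS.get(user_input, "")

def run_suite() -> None:
    for case in TEST_CASES:
        reply = ask_agent(case["input"]).lower()
        missing = [s for s in case["must_contain"] if s.lower() not in reply]
        leaked = [s for s in case["must_not_contain"] if s.lower() in reply]
        status = "PASS" if not missing and not leaked else "FAIL"
        print(f"{status}: {case['name']} (missing={missing}, unexpected={leaked})")

if __name__ == "__main__":
    run_suite()
```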

By thoroughly testing and validating, you ensure the instructions truly yield an agent that operates as designed. Once initial testing is satisfactory, you can move to a pilot deployment or let some end-users try the agent, then gather their feedback – feeding into the next topic: improvement mechanisms.

Iteration and Feedback: Continuous Improvement of Instructions

An agent’s instructions are not a “write once, done forever” artifact. They should be viewed as living documentation that can evolve with user needs and as you discover what works best. Two key processes for continuous improvement are monitoring feedback and iterating instructions over time:

  • Gather User Feedback: After deploying the agent to real users (or a test group), collect feedback on its performance. This can be direct feedback (users rating responses or reporting issues) or indirect, like observing usage logs. Pay attention to questions the agent fails to answer or any time users seem confused by the agent’s output. These are golden clues that the instructions might need adjustment. For example, if users keep asking for clarification on the agent’s answers, maybe your instructions should tell the agent to be more explanatory on first attempt. If users trigger the agent in scenarios it wasn’t originally designed for, you might decide to broaden the instructions (or explicitly handle those out-of-scope cases in the instructions with a polite refusal).
  • Review Analytics and Logs: Copilot Studio (and related Power Platform tools) may provide analytics such as conversation transcripts, success rates of actions, etc. Microsoft advises makers to “regularly review your agent results and refine custom instructions based on desired outcomes”[6]. For instance, if analytics show a particular tool call failing frequently, the instructions may need to gate more tightly when that tool is used. Or if users drop off after the agent’s first answer, perhaps the agent is not engaging enough – you might tweak the tone or instruct it to ask a follow-up question. Treat these data points as feedback for improvement.
  • Incremental Refinements: Incorporate the feedback into improved instructions, and update the agent. Because Copilot Studio allows you to edit and republish instructions easily[3], you can make iterative changes even after deployment. Just like software updates, push instruction updates to fix “bugs” in agent behavior. Always test changes in a controlled way (in the studio test panel or with a small user group) before rolling out widely.
  • Keep Iterating: The process of testing and refining is cyclical. Your agent can always get better as you discover new user requirements or corner cases. Microsoft’s guidance strongly encourages an iterative approach, as illustrated by their steps: create -> test -> verify -> modify -> test again[4][4]. Over time, these tweaks lead to a very polished set of instructions that anticipates many user needs and failure modes.
  • Version Control Your Instructions: It’s good practice to keep track of changes (what was added, removed, or rephrased in each iteration). This way, if a change unexpectedly worsens the agent’s performance, you can roll back or adjust. You might use simple version comments or maintain the instructions text in a version-controlled repository (especially for complex custom agents).
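
If a full Git workflow feels heavy for a single instructions field, even a small snapshot script gives you something to roll back to. The sketch below is one arbitrary way to do it; the folder name, file naming, and changelog format are all hypothetical choices.

```python
# A minimal sketch for keeping a versioned history of an agent's instructions text.
# File names and the changelog format are arbitrary choices for illustration.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

HISTORY_DIR = Path("instruction_history")

def save_version(instructions: str, note: str) -> Path:
    """Store a timestamped copy of the instructions plus a changelog entry."""
    HISTORY_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    digest = hashlib.sha256(instructions.encode("utf-8")).hexdigest()[:8]
    path = HISTORY_DIR / f"instructions_{stamp}_{digest}.md"
    path.write_text(instructions, encoding="utf-8")

    changelog = HISTORY_DIR / "changelog.jsonl"
    entry = {"timestamp": stamp, "hash": digest, "note": note, "file": path.name}
    with changelog.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return path

# Example usage:
# save_version(instructions_text, "Added brevity rule after verbose answers in testing")
```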

In summary, don’t treat instruction-writing as a one-off task. Embrace user feedback and analytic insights to continually hone your agent. Many successful Copilot agents likely went through numerous instruction revisions. Each iteration brings the agent’s behavior closer to the ideal.

Tailoring Instructions to Different Agent Types and Scenarios

No one-size-fits-all set of instructions will work for every agent – the content and style of the instructions should be tailored to the type of agent you’re building and the scenario it operates in[3]. Consider the following variations and how instructions might differ:

  • Conversational Q&A Agents: These are agents that engage in a back-and-forth chat with users (for example, a helpdesk chatbot or a personal finance Q&A assistant). Instructions for conversational agents should prioritize dialog flow, context handling, and user interaction. They often include guidance like how to greet the user, how to ask clarifying questions one at a time, how not to overwhelm the user with too much info at once, and how to confirm if the user’s need was met. The example instructions we discussed (IT support agent, ShowExpert recommendation agent) fall in this category – note how they included steps for asking questions and confirming understanding[4][1]. Also, conversational agents might need instructions on maintaining context over multiple turns (e.g. “remember the user’s last answer about their preference when formulating the next suggestion”).
  • Task/Action (Trigger) Agents: Some Copilot Studio agents aren’t chatting with a user in natural dialogue, but instead get triggered by an event or command and then perform a series of actions silently or output a result. For instance, an agent that, when triggered, gathers data from various sources and emails a report. Instructions for these agents may be more like a script of what to do: step 1 do X, step 2 do Y, etc., with less emphasis on language tone and conversation, and more on correct execution. You’d focus on instructions that detail workflow logic and error handling, since user interaction is minimal. However, you might still include some instruction about how to format the final output or what to log.
  • Declarative vs Custom Agents: In Copilot Studio, Declarative agents use mostly natural language instructions to declare their behavior (with the platform handling orchestration), whereas Custom agents might involve more developer-defined logic or even code. Declarative agent instructions might be more verbose and rich in language (since the model is reading them to drive logic), whereas a custom agent might offload some logic to code and use instructions mainly for higher-level guidance. That said, in both cases the principles of clarity and completeness apply. Declarative agents, in particular, benefit from well-structured instructions since they heavily rely on them for generative reasoning[7].
  • Different Domains Require Different Details: An agent’s domain will dictate what must be included in instructions. For example, a medical information agent should have instructions emphasizing accuracy, sourcing from medical guidelines, and perhaps disclaimers (and definitely instructions not to venture outside provided medical content)[1][1]. A customer service agent might need a friendly empathetic tone and instructions to always ask if the user is satisfied at the end. A coding assistant agent might have instructions to format answers in code blocks and not to provide theoretical info not found in the documentation provided. Always infuse domain-specific best practices into the instruction. If unsure, consult with subject matter experts about what an agent in that domain must or must not do.

In essence, know your agent’s context and tailor the instructions accordingly. Copilot Studio’s own documentation notes that “How best to write your instructions depends on the type of agent and your goals for the agent.”[3]. An easy way to approach this is to imagine a user interacting with your agent and consider what that agent needs to excel in that scenario – then ensure those points are in the instructions.

Resources and Tools for Improving Agent Instructions

Writing effective AI agent instructions is a skill you can develop by learning from others and using available tools. Here are some resources and aids:

  • Official Microsoft Documentation: Microsoft Learn has extensive materials on Copilot Studio and writing instructions. Key articles include “Write agent instructions”[3], “Write effective instructions for declarative agents”[4], and “Optimize prompts with custom instructions”[6]. These provide best practices (many cited in this report) straight from the source. They often include examples, do’s and don’ts, and are updated as the platform evolves. Make it a point to read these guides; they reinforce many of the principles we’ve discussed.
  • Copilot Prompt Gallery/Library: There are community-driven repositories of prompt examples. In the Copilot community, a “Prompt Library” has been referenced[7] which contains sample agent prompts. Browsing such examples can inspire how to structure your instructions. Microsoft’s Copilot Developer Camp content (like the one for ShowExpert we cited) is an excellent, practical walkthrough of iteratively improving instructions[7][7]. Following those labs can give you hands-on practice.
  • GitHub Best Practice Repos: The community has also created best practice guides, such as the Agents Best Practices repo[1]. This provides a comprehensive guide with examples of good instructions for various scenarios (IT support, HR policy, etc.)[1][1]. Seeing multiple examples of “sample agent instructions” can help you discern patterns of effective prompts.
  • Peer and Expert Reviews: If possible, get a colleague to review your instructions. A fresh pair of eyes can spot ambiguities or potential misunderstandings you overlooked. Within a large organization, you might even form a small “prompt review board” when developing important agents – to ensure instructions align with business needs and are clearly written. There are also growing online forums (such as the Microsoft Tech Community for Power Platform/Copilot) where you could ask for advice (without sharing sensitive details).
  • AI Prompt Engineering Tools: Some tools can simulate how an LLM might parse your instructions. For example, prompt analysis tools (often used in general AI prompt engineering) can highlight which words are influencing the model. While not specific to Copilot Studio, experimenting with your instruction text in something like the Azure OpenAI Playground with the same model (if known) can give insight (see the sketch after this list). Keep in mind that Copilot Studio has its own orchestration (such as combining your instructions with the user query and tool descriptions), so results outside the platform may not match exactly – but it’s a way to sanity-check whether any wording is confusing.
  • Testing Harness: Use the Copilot Studio test chat repeatedly as a tool. Try intentionally weird inputs to see how your agent handles them. If your agent is a Teams bot, you might sideload it in Teams and test the user experience there as well. Treat the test framework as a tool to refine your prompt – it’s essentially a rapid feedback loop.
  • Telemetry and Analytics: Post-deployment, the telemetry (if available) is a tool. Some enterprises integrate Copilot agent interactions with Application Insights or other monitoring. Those logs can reveal how the agent is being used and where it falls short, guiding you to adjust instructions.
  • Keep Example Collections: Over time, accumulate a personal collection of instruction snippets that worked well. You can often reuse patterns (for example, the generic structure of “Your responsibilities include: X, Y, Z” or a nicely phrased workflow step). Microsoft’s examples (like those in this text and docs) are a great starting point.
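
Returning to the prompt-engineering-tools point above: if you have an Azure OpenAI resource of your own, a few lines of the openai Python package are enough to sanity-check how a model reads your instruction text. This is a rough approximation only – Copilot Studio adds its own orchestration, tool descriptions, and grounding that this call does not reproduce – and the endpoint, API version, key, and deployment name below are placeholders for your own values.

```python
# Rough sanity check of instruction wording against a model you control.
# This does NOT reproduce Copilot Studio's orchestration; it only shows how a model
# responds when given your instructions as the system message.

from pathlib import Path
from openai import AzureOpenAI  # pip install openai

instructions = Path("instructions.md").read_text(encoding="utf-8")

client = AzureOpenAI(
    api_key="<your-api-key>",
    api_version="2024-02-01",
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the Azure deployment name, not the base model
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "My VPN keeps disconnecting."},
    ],
)
print(response.choices[0].message.content)
```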

By leveraging these resources and tools, you can improve not only a given agent’s instructions but your overall skill in writing effective AI instructions.

Staying Updated with Best Practices

The field of generative AI and platforms like Copilot Studio is rapidly evolving. New features, models, or techniques can emerge that change how we should write instructions. It’s important to stay updated on best practices:

  • Follow Official Updates: Keep an eye on the official Microsoft Copilot Studio documentation site and blog announcements. Microsoft often publishes new guidelines or examples as they learn from real-world usage. The documentation pages we referenced have dates (e.g., updated June 2025) – revisiting them periodically can inform you of new tips (for instance, newer versions might have refined advice on formatting or new capabilities you can instruct the agent to use).
  • Community and Forums: Join communities of makers who are building Copilot agents. Microsoft’s Power Platform community forums, LinkedIn groups, or even Twitter (following hashtags like #CopilotStudio) can surface discussions where people share experiences. The Practical 365 blog[2] and the Power Platform Learners YouTube series are examples of community-driven content that can provide insights and updates. Engaging in these communities allows you to ask questions and learn from others’ mistakes and successes.
  • Continuous Learning: Microsoft sometimes offers training modules or events (like hackathons, the Powerful Devs series, etc.) around Copilot Studio. Participating in these can expose you to the latest features. For instance, if Microsoft releases a new type of “skill” that agents can use, there might be new instruction patterns associated with that – you’d want to incorporate those.
  • Experimentation: Finally, don’t hesitate to experiment on your own. Create small test agents to try out new instruction techniques or to see how a particular phrasing affects outcome. The more you play with the system, the more intuitive writing good instructions will become. Keep notes of what you learn and share it where appropriate so others can benefit (and also validate your findings).

By staying informed and agile, you ensure that your agents continue to perform well as the underlying technology or user expectations change over time.


Conclusion: Writing the instructions field for a Copilot Studio agent is a critical task that requires careful thought and iteration. The instructions are effectively the “source code” of your AI agent’s behavior. When done right, they enable the agent to use its tools and knowledge effectively, interact with users appropriately, and achieve the intended outcomes. We’ve examined how good instructions are constructed (clear role, rules, steps, examples) and why bad instructions fail. We established best practices and a T-C-R framework to approach writing instructions systematically. We also emphasized testing and continuous refinement – because even with guidelines, every use case may need fine-tuning. By avoiding common pitfalls and leveraging available resources and feedback loops, you can craft instructions that make your Copilot agent a reliable and powerful assistant. In sum, the instructions field is the single most important factor in whether your Copilot Studio agent operates as designed, and with the insights and methods outlined here you’re well-equipped to write instructions that set your agent up for success. Good luck with your Copilot agent, and happy prompting!

References

[1] GitHub – luishdemetrio/agentsbestpractices

[2] A Microsoft 365 Administrator’s Beginner’s Guide to Copilot Studio

[3] Write agent instructions – Microsoft Copilot Studio

[4] Write effective instructions for declarative agents

[5] From Scribbles to Spells: Perfecting Instructions in Copilot Studio

[6] Optimize prompts with custom instructions – Microsoft Copilot Studio

[7] Level 1 – Simple agent instructions – Copilot Developer Camp

Robert.Agent now recommends improved questions


I continue to work on my autonomous email agent created with Copilot Studio. A recent addition is that you might now get a response that includes something like this at the end of the information returned:

[Screenshot: the agent appends a suggested, improved version of the user’s question to the end of its response]

It is a suggestion for an improved prompt to generate better answers based on the original question.

The reason I created this was that I noticed many submissions were not written as ‘good’ prompts. In fact, most seem better suited to a search engine than to AI. The easy solution was to get Copilot to suggest how to ask better questions.
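
For anyone wanting to try the same idea, the change amounts to one extra rule in the agent’s instructions. The wording below is a hypothetical example rather than the exact text used in Robert.Agent:

```python
# Hypothetical example only -- not the actual rule used in Robert.Agent.
# Appending a coaching rule like this to the instructions field is enough to make
# the agent end each answer with a suggested, better-phrased version of the question.

PROMPT_COACHING_RULE = """
After answering, add a short final section titled "Try asking it this way" that
rewrites the user's original question as a stronger prompt: make it specific,
mention the relevant time range or product, and state the desired output format.
"""
```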

Give it a go and let me know what you think.

How I Built a Free Microsoft 365 Copilot Chat Agent to Instantly Search My Blog!

Video URL = https://www.youtube.com/watch?v=_A1pSltpcmg

In this video, I walk you through my step-by-step process for creating a powerful, no-cost Microsoft 365 Copilot chat agent that searches my blog and delivers instant, well-formatted answers to technical questions. Watch as I demonstrate how to set up the agent, configure it to use your own public website as a knowledge source, and leverage AI to boost productivity—no extra licenses required! Whether you want to streamline your workflow, help your team access information faster, or just see what’s possible with Microsoft 365’s built-in AI, this guide will show you how to get started and make the most of your content. If you want a copy of the ‘How to’ document for this video, use this link – https://forms.office.com/r/fqJXdCPAtU

Impact of Microsoft 365 Copilot Licensing on Copilot Studio Agent Responses in Microsoft Teams


 

Executive Summary

The deployment of Copilot Studio agents within Microsoft Teams introduces a nuanced dynamic concerning data access and response completeness, particularly when interacting with users holding varying Microsoft 365 Copilot licenses. This report provides a comprehensive analysis of these interactions, focusing on the differential access to work data and the agent’s notification behavior regarding partial answers.

A primary finding is that a user possessing a Microsoft 365 Copilot license will indeed receive more comprehensive and contextually relevant responses from a Copilot Studio agent. This enhanced completeness is directly attributable to Microsoft 365 Copilot’s inherent capability to leverage the Microsoft Graph, enabling access to a user’s authorized organizational data, including content from SharePoint, OneDrive, and Exchange.1 Conversely, users without this license will experience limitations in accessing such personalized work data, resulting in responses that are less complete, more generic, or exclusively derived from publicly available information or pre-defined knowledge sources.3

A critical observation is that Copilot Studio agents are not designed to explicitly notify users when a response is partial or incomplete due to licensing constraints or insufficient data access permissions. Instead, the agent’s operational model involves silently omitting any content from knowledge sources that the querying user is not authorized to access.4 In situations where the agent cannot retrieve pertinent information, it typically defaults to generic fallback messages, such as “I’m sorry. I’m not sure how to help with that. Can you try rephrasing?”.5 This absence of explicit, context-specific notification poses a notable challenge for managing user expectations and ensuring a transparent user experience.

Furthermore, while it is technically feasible to make Copilot Studio agents accessible to users without a full Microsoft 365 Copilot license, interactions that involve accessing shared tenant data (e.g., content from SharePoint or via Copilot connectors) will incur metered consumption charges. These charges are typically billed through Copilot Studio’s pay-as-you-go model.3 In stark contrast, users with a Microsoft 365 Copilot license benefit from “zero-rated usage” for these types of interactions when conducted within Microsoft 365 services, eliminating additional costs for accessing internal organizational data.6 These findings underscore the importance of strategic licensing, robust governance, and clear user communication for effective AI agent deployment.

Introduction

The integration of artificial intelligence (AI) agents into enterprise workflows is rapidly transforming how organizations operate, particularly within collaborative platforms like Microsoft Teams. Platforms such as Microsoft Copilot Studio empower businesses to develop and deploy intelligent conversational agents that enhance employee productivity, streamline information retrieval, and automate routine tasks. As these AI capabilities become increasingly central to organizational efficiency, a thorough understanding of their operational characteristics, especially concerning data interaction and user experience, becomes paramount.

This report is specifically designed to provide a definitive and comprehensive analysis of how Copilot Studio agents behave when deployed within Microsoft Teams. The central inquiry revolves around the impact of varying Microsoft 365 Copilot licensing statuses on an agent’s ability to access and utilize enterprise work data. A key objective is to clarify whether a licensed user receives a more complete response compared to a non-licensed user and, crucially, if the agent provides any notification when a response is partial due to data access limitations. This detailed examination aims to equip IT administrators and decision-makers with the necessary insights for strategic planning, deployment, and governance of AI solutions within their enterprise environments.

Understanding Copilot Studio Agents and Data Grounding

Microsoft Copilot Studio is a robust, low-code graphical tool engineered for the creation of sophisticated conversational AI agents and their underlying automated processes, known as agent flows.7 These agents are highly adaptable, capable of interacting with users across numerous digital channels, with Microsoft Teams being a prominent deployment environment.7 Beyond simple question-and-answer functionalities, these agents can be configured to execute complex tasks, address common organizational inquiries, and significantly enhance productivity by integrating with diverse data sources. This integration is facilitated through a range of prebuilt connectors or custom plugins, allowing for tailored access to specific datasets.7 A notable capability of Copilot Studio agents is their ability to extend the functionalities of Microsoft 365 Copilot, enabling the delivery of customized responses and actions that are deeply rooted in specific enterprise data and scenarios.7

How Agents Access Data: The Principle of User-Based Permissions and the Role of Microsoft Graph

A fundamental principle governing how Copilot agents, including those developed within Copilot Studio and deployed through Microsoft 365 Copilot, access information is their strict adherence to the end-user’s existing permissions. This means that the agent operates within the security context of the individual user who is interacting with it.4 Consequently, the agent will only retrieve and present data that the user initiating the query is explicitly authorized to access.1 This design choice is a deliberate architectural decision to embed security and data privacy at the core of the Copilot framework, ensuring that the system is engineered to prevent unauthorized data access by design, leveraging existing Microsoft 365 security models. This robust, security-by-design approach significantly mitigates the critical risk of unintended data exfiltration, a paramount concern for enterprises adopting AI solutions. For IT administrators, this implies a reliance on established Microsoft 365 permission structures for data security when deploying Copilot Studio agents, rather than needing to implement entirely new, AI-specific permission layers for content accessed via the Microsoft Graph. This establishes a strong foundation of trust in the platform’s ability to handle sensitive organizational data.

Microsoft 365 Copilot achieves this secure data grounding by leveraging the Microsoft Graph, which acts as the gateway to a user’s personalized work data. This encompasses a broad spectrum of information, including emails, chat histories, and documents stored within the Microsoft 365 ecosystem.1 This grounding mechanism ensures that organizational data boundaries, security protocols, compliance requirements, and privacy standards are meticulously preserved throughout the interaction.1 The agent respects the end user’s information and sensitivity privileges, meaning if the user lacks access to a particular knowledge source, the agent will not include content from it when generating a response.4
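
You cannot call Copilot’s internal grounding pipeline directly, but the delegated-permission principle it relies on is easy to observe with an ordinary Microsoft Graph request: a call made with a user’s token only ever returns items that user is already authorized to open. The sketch below assumes you have obtained a delegated access token (for example via MSAL) and is purely illustrative of that principle, not of how Copilot retrieves data internally.

```python
# Illustration of delegated (user-context) access against Microsoft Graph.
# This is NOT how Copilot grounds responses internally; it simply demonstrates that
# a request made with a user's token returns only that user's authorized items.
# ACCESS_TOKEN is assumed to be a delegated token obtained elsewhere (e.g. via MSAL).

import requests

ACCESS_TOKEN = "<delegated-user-access-token>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/drive/root/children",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Items shared with other people but not with this user simply never appear here.
for item in resp.json().get("value", []):
    print(item["name"])
```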

Distinction between Public/Web Data and Enterprise Work Data

Copilot Studio agents can be configured to draw knowledge from publicly available websites, serving as a broad knowledge base.10 When web search is enabled, the agent can fetch information from services like Bing, thereby enhancing the quality and breadth of responses grounded in public web content.11 This allows agents to provide general information or answers based on external, non-proprietary sources.

In contrast, enterprise work data, which includes sensitive and proprietary information residing in SharePoint, OneDrive, and Exchange, is accessed exclusively through the Microsoft Graph. Access to this internal data is strictly governed by the individual user’s explicit permissions, creating a clear delineation between publicly available information and internal organizational knowledge.1 This distinction is fundamental to understanding the varying levels of response completeness based on licensing. The agent’s ability to access and synthesize information from these disparate sources is contingent upon the user’s permissions and, as will be discussed, their specific Microsoft 365 Copilot licensing.

Impact of Microsoft 365 Copilot Licensing on Agent Responses

The licensing structure for Microsoft Copilot profoundly influences the depth and completeness of responses provided by Copilot Studio agents, particularly when those agents are designed to interact with an organization’s internal data.

Licensed User Experience: Comprehensive Access to Work Data

Users who possess a Microsoft 365 Copilot license gain access to a fully integrated AI-powered productivity tool. This tool seamlessly combines large language models with the user’s existing data within the Microsoft Graph and across various Microsoft 365 applications, including Word, Excel, PowerPoint, Outlook, and Teams.1 This deep integration is the cornerstone for delivering highly personalized and comprehensive responses, directly grounded in the user’s work emails, chat histories, and documents.1 The system is designed to provide real-time intelligent assistance, enhancing creativity, productivity, and skills.9

Furthermore, the Microsoft 365 Copilot license encompasses the usage rights for agents developed in Copilot Studio when deployed within Microsoft 365 products such as Microsoft Teams, SharePoint, and Microsoft 365 Copilot Chat. Crucially, interactions involving classic answers, generative answers, or tenant Microsoft Graph grounding for these licensed users are designated as “zero-rated usage”.6 This means that these specific types of interactions do not incur additional charges against Copilot Studio message meters or message packs. This comprehensive inclusion allows licensed users to fully harness the potential of these agents for retrieving information from their authorized internal data sources without incurring unexpected consumption costs. The Microsoft 365 Copilot license therefore functions not just as a feature unlocker but also as a significant cost-efficiency mechanism, particularly for high-frequency interactions with internal enterprise data. Organizations with a substantial user base expected to frequently interact with internal data via Copilot Studio agents should conduct a thorough Total Cost of Ownership (TCO) analysis, as the perceived higher per-user cost of a Microsoft 365 Copilot license might be strategically offset by avoiding unpredictable and potentially substantial pay-as-you-go charges.

Non-Licensed User Experience: Limitations in Accessing Work Data

Users who do not possess the Microsoft 365 Copilot add-on license will not benefit from the same deep, integrated access to their personalized work data via the Microsoft Graph. While these users may still be able to interact with Copilot Studio agents (particularly if the agent’s knowledge base relies on public information or pre-defined, non-Graph-dependent instructions), their capacity to receive responses comprehensively grounded in their specific enterprise work data is significantly restricted.3 This establishes a tiered system for data access within the Copilot ecosystem, where the richness and completeness of an agent’s response are directly linked to the user’s individual licensing status and their underlying data access rights within the organization.

A critical distinction arises for users who have an eligible Microsoft 365 subscription but lack the full Copilot add-on, often categorized as “Microsoft 365 Copilot Chat” users. If such a user interacts with an agent that accesses shared tenant data (e.g., content from SharePoint or through Copilot connectors), these interactions will trigger metered consumption charges, which are tracked via Copilot Studio meters.3 This transforms a functional limitation (less complete answers) into a direct financial consequence. The ability to access some internal data comes at a per-message cost. This means organizations must meticulously evaluate the financial implications of deploying agents to a mixed-license user base. If non-licensed users frequently query internal data via these agents, the cumulative pay-as-you-go (PAYG) charges could become substantial and unpredictable, making the “partial answer” scenario potentially a “costly answer” scenario.

Agents that exclusively draw information from instructions or public websites, however, do not incur these additional costs for any user.3 For individuals with no Copilot license or even a foundational Microsoft 365 subscription, access to Copilot features and its extensibility options, including agents leveraging M365 data, may not be guaranteed or might be entirely unavailable.3 A potential point of user experience friction arises because an agent might appear discoverable or “addable” within the Teams interface, creating an expectation of full functionality, even if the underlying licensing restricts its actual utility for that user.8 This discrepancy between apparent availability and actual capability can lead to significant user frustration and an increase in support requests.

The following table summarizes the comparative data access and cost implications across different license types:

Comparative Data Access and Cost by License Type

| License Type | Access to Personalized Work Data (Microsoft Graph) | Access to Shared Tenant Data (SharePoint, Connectors) | Access to Public/Instruction-based Data | Additional Usage Charges for Agent Interactions | Response Completeness (Relative) |
| --- | --- | --- | --- | --- | --- |
| Microsoft 365 Copilot (Add-on) | Comprehensive | Comprehensive (zero-rated) | Yes | No | High (rich, contextually grounded) |
| Microsoft 365 Copilot Chat (included w/ eligible M365) | Limited/No | Yes (metered charges apply via Copilot Studio meters) | Yes | Yes (for shared tenant data interactions) | Moderate (limited by work data access) |
| No Copilot License / No M365 Subscription | No | Not guaranteed / No | Yes (if agent accessible) | N/A (likely no access) | Low (limited to public/instructional data) |

Agent Behavior Regarding Partial Answers and Notifications

A critical aspect of user experience with AI agents is how they communicate limitations or incompleteness in their responses. The analysis reveals specific behaviors of Copilot Studio agents in this regard.

Absence of Explicit Partial Answer Notifications

The available information consistently indicates that Copilot Studio agents are not designed to provide explicit notifications to users when a response is partial or incomplete due to the user’s lack of permissions to access underlying knowledge sources.4 Instead, the agent’s operational model dictates that it simply omits any content that the querying user is not authorized to access. This means the user receives a response that is, by design, incomplete from the perspective of the agent’s full knowledge base, but without any direct indication of this omission.

This design choice is a deliberate trade-off, prioritizing stringent data security and privacy protocols. It ensures that the agent never inadvertently reveals the existence of restricted information or the specific reason for its omission to an unauthorized user, thereby preventing potential information leakage or inference attacks. However, this creates a significant information asymmetry: end-users are left unaware of why an answer might be incomplete or why the agent could not fully address their query. They lack the context to understand if the limitation stems from a permission issue, a limitation of the agent’s knowledge, or a technical fault. This places a substantial burden on IT administrators and agent owners to proactively manage user expectations. Without transparent communication regarding the scope and limitations of agents for different user profiles, users may perceive the agent as unreliable, inconsistent, or broken, potentially leading to decreased adoption rates and an increase in support requests.

Generic Error Messages and Implicit Limitations

When a Copilot Studio agent encounters a scenario where it cannot fulfill a query comprehensively, whether due to inaccessible data, a lack of relevant information in its knowledge sources, or other technical issues, it typically defaults to generic, non-specific responses. A common example cited is “I’m sorry. I’m not sure how to help with that. Can you try rephrasing?”.5 Crucially, this message does not explicitly attribute the inability to provide a full answer to licensing limitations or specific data access permissions.

Other forms of service denial can manifest if the agent’s underlying capacity limits are reached. For instance, an agent might display a message stating, “This agent is currently unavailable. It has reached its usage limit. Please try again later”.12 While this is a clear notification of service unavailability, it pertains to a broader capacity issue rather than the specific scenario of partial data due to user permissions. When an agent responds with vague messages in situations where the underlying cause is a data access limitation, the actual reason for the failure remains opaque to the user. This effectively turns the agent’s decision-making and data retrieval process into a “black box” from the end-user’s perspective regarding data access. This lack of transparency directly hinders effective user interaction and self-service, as users cannot intelligently rephrase their questions, understand if they need a different license, or determine if they should seek information elsewhere.

Information for Makers/Admins vs. End-User Experience

Copilot Studio provides robust analytics capabilities designed for agent makers and administrators to monitor and assess agent performance.13 These analytics offer valuable insights into the quality of generative answers, capable of identifying responses that are “incomplete, irrelevant, or not fully grounded”.13 This diagnostic information is crucial for the continuous improvement of the agent.

However, a key distinction is that these analytics results are strictly confined to the administrative and development interfaces; “Users of agents don’t see analytics results; they’re available to agent makers and admins only”.13 This means that while administrators can discern why an agent might be providing incomplete answers (e.g., due to data access issues), this critical diagnostic information is not conveyed to the end-user. This reinforces the need for clear guidance on what types of questions agents can answer for different user profiles and what data sources they are grounded in.

Licensing and Cost Implications for Agent Usage

Understanding the licensing models for Copilot Studio and Microsoft 365 Copilot is essential for managing the financial implications of deploying AI agents, especially in environments with diverse user licensing.

Overview of Copilot Studio Licensing Models

Microsoft Copilot Studio offers a flexible licensing framework comprising three primary models: Pay-as-you-go, Message Packs, and inclusion within the Microsoft 365 Copilot license.6 The Pay-as-you-go model provides highly flexible consumption-based billing at $0.01 per message, requiring no upfront commitment and allowing organizations to scale usage dynamically based on actual consumption.6 Alternatively, Message Packs offer a prepaid capacity, with a standard pack providing 25,000 messages per month for $200.6 For additional capacity beyond message packs, organizations are recommended to sign up for pay-as-you-go to ensure business continuity.6

Significantly, the Microsoft 365 Copilot license, an add-on priced at $30 per user per month, includes the usage rights for Copilot Studio agents when utilized within core Microsoft 365 products such as Teams, SharePoint, and Copilot Chat. Crucially, interactions involving classic answers, generative answers, or tenant Microsoft Graph grounding for these licensed users are “zero-rated,” meaning they do not consume from Copilot Studio message meters or incur additional charges.6 This provides a distinct cost advantage for organizations with a high number of Microsoft 365 Copilot licensed users.
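
Using the list prices cited above ($0.01 per pay-as-you-go message, $200 for a 25,000-message pack, and $30 per user per month for the add-on), a back-of-the-envelope comparison can frame the TCO discussion. Treat the sketch below as illustrative arithmetic only: real bills depend on how many billable messages or credits each interaction actually consumes, and on current pricing.

```python
# Back-of-the-envelope cost comparison using the list prices cited in this report.
# Actual consumption depends on how many billable messages/credits each interaction
# uses, so these figures are illustrative only.

PAYG_PER_MESSAGE = 0.01      # USD, pay-as-you-go
PACK_PRICE = 200.0           # USD per pack per month
PACK_MESSAGES = 25_000       # messages included per pack per month
COPILOT_LICENSE = 30.0       # USD per user per month (M365 Copilot add-on)

def monthly_costs(users: int, messages_per_user: int) -> dict[str, float]:
    total_messages = users * messages_per_user
    packs_needed = -(-total_messages // PACK_MESSAGES)  # ceiling division
    return {
        "pay_as_you_go": total_messages * PAYG_PER_MESSAGE,
        "message_packs": packs_needed * PACK_PRICE,
        "copilot_licenses": users * COPILOT_LICENSE,  # agent usage zero-rated
    }

if __name__ == "__main__":
    # e.g. 100 unlicensed users, each sending 300 agent messages a month
    print(monthly_costs(users=100, messages_per_user=300))
    # -> {'pay_as_you_go': 300.0, 'message_packs': 400.0, 'copilot_licenses': 3000.0}
```

Under these list prices the per-user license only breaks even against pay-as-you-go at roughly 3,000 billable messages per user per month, so the comparison should also weigh the broader Microsoft 365 Copilot functionality the license unlocks, not just agent message costs.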

It is important to differentiate between a Copilot Studio user license (which is free of charge) and the Microsoft 365 Copilot license. The free Copilot Studio user license is primarily for individuals who need access to create and manage agents.14 This does not imply free consumption of agent responses for all users, particularly when those agents interact with enterprise data. This distinction is vital for IT administrators to communicate clearly within their organizations to prevent false expectations about “free” AI agent usage and potentially unexpected costs or functional limitations for end-users.

Discussion of Metered Charges for Non-Licensed Users Accessing Shared Tenant Data

While a dedicated Copilot Studio user license is primarily for authoring and managing agents 14 and not strictly required for interacting with a published agent, the user’s Microsoft 365 Copilot license status profoundly impacts the cost structure when the agent accesses shared tenant data.3 For users who possess an eligible Microsoft 365 subscription but do not have the Microsoft 365 Copilot add-on (i.e., those utilizing “Microsoft 365 Copilot Chat”), interactions with agents that retrieve information grounded in shared tenant data (such as SharePoint content or data via Copilot connectors) will trigger metered consumption charges. These charges are tracked and billed based on Copilot Studio meters.3 This is explicitly stated: “If people that the agent is shared with are not licensed with a Microsoft 365 Copilot license, they will start consuming on a PAYG subscription per message they receive from the agent”.8 Conversely, agents that rely exclusively on pre-defined instructions or publicly available website content do not incur these additional costs for any user, regardless of their Copilot license status.3

A significant governance concern arises when users share agents. If users share their agent with SharePoint content attached to it, the system may propose to “break the SharePoint permission on the assets attached and share the SharePoint resources directly with the audience group”.8 When combined with the metered PAYG model for non-licensed users accessing shared tenant data, this creates a potent dual risk. A well-meaning but uninformed user could inadvertently share an agent linked to sensitive internal data with a broad audience, potentially circumventing existing SharePoint permissions and exposing data, while simultaneously triggering unexpected and significant metered charges for those non-licensed users who then interact with the agent. This highlights a severe governance vulnerability, despite Microsoft’s statement that “security fears are gone” due to access inheritance.8 The acknowledgment of a “roadmap to address this security gap” 16 indicates that this remains an active area of concern for Microsoft.

Capacity Enforcement and Service Denial

Organizations must understand that Copilot Studio’s purchased capacity, particularly through message packs, is enforced on a monthly basis, and any unused messages do not roll over to the subsequent month.6 Should an organization’s actual usage exceed its purchased capacity, technical enforcement mechanisms will be triggered, which “might result in service denial”.6 This can manifest to the end-user as an agent becoming unavailable, accompanied by a message such as “This agent is currently unavailable. It has reached its usage limit. Please try again later”.12 This underscores the critical importance of proactive capacity management to ensure service continuity and avoid disruptions to user access.

The following table provides a detailed breakdown of Copilot Studio licensing and its associated usage cost implications:

Copilot Studio Licensing and Usage Cost Implications

| License Type | Primary Purpose | Cost Model | Agent Usage of Personalized Work Data (Microsoft Graph) | Agent Usage of Shared Tenant Data (SharePoint, Connectors) | Agent Usage of Public/Instructional Data | Capacity Enforcement | Target User Type |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Microsoft 365 Copilot (Add-on) | Full M365 integration & AI | $30/user/month (add-on) | Zero-rated | Zero-rated (for the licensed user’s interactions) | Zero-rated | N/A (unlimited for licensed features) | Frequent users of M365 apps |
| Microsoft 365 Copilot Chat (included w/ eligible M365) | Web-based Copilot Chat & limited work data access | Included with M365 subscription | N/A | Metered charges apply (via Copilot Studio meters) | No extra charges | N/A (unlimited for web, metered for work) | Occasional Copilot users |
| Copilot Studio Message Packs | Pre-purchased message capacity for agents | $200/tenant/month (25,000 messages) | Consumes message packs | Consumes message packs | Consumes message packs | Monthly enforcement (unused messages don’t carry over) | Broad internal/external agent users |
| Copilot Studio Pay-as-you-go | On-demand message capacity for agents | $0.01/message | Consumes PAYG | Consumes PAYG | Consumes PAYG | Monthly enforcement (based on actual usage) | Flexible/scalable agent users |

Key Considerations for IT Administrators and Deployment

The complexities of licensing, data access, and agent behavior necessitate strategic planning and robust management by IT administrators to ensure successful deployment and optimal user experience.

Managing User Expectations Regarding Agent Capabilities Based on Licensing

Given the tiered data access model and the agent’s silent omission of inaccessible content, it is paramount for IT administrators to proactively and clearly communicate the precise capabilities and inherent limitations of Copilot Studio agents to different user groups, explicitly linking these to their licensing status. This communication strategy must encompass educating users on the types of questions agents can answer comprehensively (e.g., those based on public information or general, universally accessible company policies) versus those queries that necessitate a Microsoft 365 Copilot license for personalized, internal data grounding. Setting accurate expectations can significantly mitigate user frustration and enhance perceived agent utility.17

Strategies for Data Governance and Access Control for Copilot Studio Agents

It is crucial to continually reinforce and leverage the fundamental principle of user-based permissions for data access within the Copilot ecosystem.1 This means that existing security policies and permission structures within SharePoint, OneDrive, and the broader Microsoft Graph environment remain the authoritative control points. Organizations must implement and rigorously enforce Data Loss Prevention (DLP) policies within the Power Platform. These policies are vital for granularly controlling how Copilot Studio agents interact with external APIs and sensitive internal data.16 Administrators should also remain vigilant about the acknowledged “security gap” related to API plugins and monitor Microsoft’s roadmap for addressing these improvements.16

Careful management of agent sharing permissions is non-negotiable. Administrators must be acutely aware of the potential for agents to prompt users to “break permissions” on SharePoint content when sharing, which could inadvertently broaden data access beyond intended boundaries.4 Comprehensive training for agent creators on the implications of sharing agents linked to internal data sources is essential. Administrators possess granular control over agent availability and access within the Microsoft 365 admin center, allowing for precise deployment to “All users,” “No users,” or “Specific users or groups”.18 This administrative control point is critical for ensuring that agents are only discoverable and usable by their intended audience, aligning with organizational security policies.

Best Practices for Deploying Agents in Mixed-License Environments

To optimize agent deployment and user experience in environments with mixed licensing, several best practices are recommended:

  • Purpose-Driven Agent Design: Design agents with a clear understanding of their intended audience and the data sources they will access. For broad deployment across a mixed-license user base, prioritize agents primarily grounded in public information, general company FAQs, or non-sensitive, universally accessible internal data. For agents requiring personalized work data access, specifically target their deployment to Microsoft 365 Copilot licensed users.
  • Proactive Cost Monitoring: Establish robust mechanisms for actively monitoring Copilot Studio message consumption, particularly if non-licensed users are interacting with agents that access shared tenant data. This proactive monitoring is crucial for avoiding unexpected and potentially significant pay-as-you-go charges.6 (A simple consumption-alerting sketch follows this list.)
  • Comprehensive User Training and Education: Develop and deliver comprehensive training programs that clearly outline the capabilities and limitations of AI agents, the direct impact of licensing on data access, and what users can realistically expect from agent interactions based on their specific access levels. This proactive education is key to mitigating user frustration stemming from partial answers.
  • Structured Admin Approval Workflows: Implement mandatory admin approval processes for the submission and deployment of all Copilot Studio agents, especially those configured to access internal organizational data. This ensures that agents are compliant with company policies, properly configured for data access, and thoroughly tested before broad release.17
  • Strategic Environment Management: Consider establishing separate Power Platform environments within the tenant for different categories of agents (e.g., internal-facing vs. external-facing, or agents with varying levels of data sensitivity). This strategy enhances governance, simplifies access control, and helps prevent unintended data interactions across different use cases.8 It is also important to ensure that the “publish Copilots with AI features” setting is enabled for makers building agents with generative AI capabilities.16
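
As a companion to the cost-monitoring recommendation above, the sketch below shows the kind of lightweight projection an administrator could run. It assumes consumption figures are exported periodically to a CSV file with date and messages columns – that export format is hypothetical, so substitute whatever capacity and consumption reporting your tenant actually provides.

```python
# A simple alerting sketch for Copilot Studio message consumption.
# Assumes a periodic export to CSV with columns: date (ISO), messages.
# The export mechanism and file format are hypothetical placeholders.

import csv
from datetime import date

PACK_CAPACITY = 25_000   # messages per month in a standard message pack

def projected_monthly_usage(csv_path: str) -> int:
    """Project this month's total from the messages logged so far."""
    used = 0
    days_seen = set()
    today = date.today()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["date"])
            if d.year == today.year and d.month == today.month:
                used += int(row["messages"])
                days_seen.add(d.day)
    if not days_seen:
        return 0
    return round(used / len(days_seen) * 30)

if __name__ == "__main__":
    projection = projected_monthly_usage("copilot_consumption.csv")
    if projection > PACK_CAPACITY:
        print(f"WARNING: projected {projection} messages exceeds pack capacity ({PACK_CAPACITY}).")
    else:
        print(f"Projected monthly usage: {projection} messages.")
```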

Conclusion

This report confirms that Microsoft 365 Copilot licensing directly and significantly impacts the completeness and richness of responses provided by Copilot Studio agents, primarily by governing a user’s access to personalized work data via the Microsoft Graph. Licensed users benefit from comprehensive, contextually grounded answers, while non-licensed users face inherent limitations in accessing this internal data.

A critical finding is the absence of explicit notifications from Copilot Studio agents when a response is partial or incomplete due to licensing constraints or insufficient data access permissions. The agent employs a “silent omission” mechanism. While this approach benefits security by preventing unauthorized disclosure of data existence, it creates an information asymmetry for the end-user, who receives an incomplete answer without explanation.

Furthermore, the analysis reveals significant cost implications: interactions by non-licensed users with agents that access shared tenant data will incur metered consumption charges, contrasting sharply with the “zero-rated usage” for Microsoft 365 Copilot licensed users. This highlights that licensing directly affects not only functionality but also operational expenditure.

To optimize agent deployment and user experience, the following recommendations are provided:

  • Proactive User Communication: Organizations must implement comprehensive communication strategies to clearly articulate the capabilities and limitations of AI agents based on user licensing. This includes setting realistic expectations for response completeness and data access to prevent frustration and build trust in the AI solutions.
  • Robust Data Governance: It is imperative to strengthen existing data governance frameworks, including Data Loss Prevention (DLP) policies within the Power Platform, and to meticulously manage agent sharing controls. This proactive approach is crucial for mitigating security risks and controlling unexpected costs in environments with mixed license types.
  • Strategic Licensing Evaluation: IT leaders should conduct a thorough total cost of ownership analysis to evaluate the long-term financial benefits of broader Microsoft 365 Copilot adoption for users who frequently require access to internal organizational data through AI agents. This analysis should weigh the upfront license costs against the unpredictable nature of pay-as-you-go charges that would otherwise accumulate.
  • Continuous Monitoring and Refinement: Leverage Copilot Studio’s built-in analytics to continuously monitor agent performance, identify instances of incomplete or ungrounded responses, and use these observations to refine agent configurations, optimize knowledge sources, and further enhance user education (a minimal transcript-scanning sketch follows this list).
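
As a starting point for that monitoring recommendation, the sketch below pulls recent Copilot Studio conversation transcripts from the agent’s Dataverse environment and counts conversations that contain fallback-style replies. It assumes transcripts are stored in the standard ConversationTranscript table exposed through the Dataverse Web API (the `conversationtranscripts` entity set with `content` and `createdon` columns), that a valid bearer token has already been obtained for the environment, and that the listed fallback phrases match your agent’s actual wording. Treat it as an illustration to adapt, not a supported monitoring API.

```python
# Minimal sketch: flag likely fallback / ungrounded answers in recent Copilot Studio
# conversation transcripts stored in Dataverse. The environment URL, token acquisition,
# and fallback phrases are placeholders/assumptions - adapt them to your tenant.
import requests

ENV_URL = "https://yourorg.crm.dynamics.com"   # hypothetical Dataverse environment URL
ACCESS_TOKEN = "<bearer token from your auth flow>"

FALLBACK_PHRASES = [                           # assumed wording of fallback replies
    "I'm sorry, I couldn't find",
    "I can't find that information",
]

def fetch_recent_transcripts(top: int = 50) -> list[dict]:
    """Query the ConversationTranscript table via the Dataverse Web API."""
    resp = requests.get(
        f"{ENV_URL}/api/data/v9.2/conversationtranscripts"
        f"?$select=conversationtranscriptid,content,createdon"
        f"&$orderby=createdon desc&$top={top}",
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

def count_fallbacks(transcripts: list[dict]) -> int:
    """Count transcripts whose content contains one of the assumed fallback phrases."""
    hits = 0
    for record in transcripts:
        content = (record.get("content") or "").lower()
        if any(phrase.lower() in content for phrase in FALLBACK_PHRASES):
            hits += 1
    return hits

if __name__ == "__main__":
    transcripts = fetch_recent_transcripts()
    print(f"{count_fallbacks(transcripts)} of {len(transcripts)} recent conversations "
          "contained a fallback-style reply")
```

A rising fallback count is a signal to revisit knowledge sources and agent instructions, or, in mixed-license tenants, to check whether the affected users simply lack the license or metered capacity needed to reach the grounded data.
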
Works cited
  1. What is Microsoft 365 Copilot? | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/copilot/microsoft-365/microsoft-365-copilot-overview
  2. Retrieve grounding data using the Microsoft 365 Copilot Retrieval API, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/api-reference/copilotroot-retrieval
  3. Licensing and Cost Considerations for Copilot Extensibility Options …, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/cost-considerations
  4. Publish and Manage Copilot Studio Agent Builder Agents | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-365-copilot/extensibility/copilot-studio-agent-builder-publish
  5. Agent accessed via Teams not able to access Sharepoint : r/copilotstudio – Reddit, accessed on July 3, 2025, https://www.reddit.com/r/copilotstudio/comments/1l1gm82/agent_accessed_via_teams_not_able_to_access/
  6. Copilot Studio licensing | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/billing-licensing
  7. Overview – Microsoft Copilot Studio | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-what-is-copilot-studio
  8. Copilot agents on enterprise level : r/microsoft_365_copilot – Reddit, accessed on July 3, 2025, https://www.reddit.com/r/microsoft_365_copilot/comments/1l7du4v/copilot_agents_on_enterprise_level/
  9. Microsoft 365 Copilot – Service Descriptions, accessed on July 3, 2025, https://learn.microsoft.com/en-us/office365/servicedescriptions/office-365-platform-service-description/microsoft-365-copilot
  10. Quickstart: Create and deploy an agent – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/fundamentals-get-started
  11. Data, privacy, and security for web search in Microsoft 365 Copilot and Microsoft 365 Copilot Chat | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/copilot/microsoft-365/manage-public-web-access
  12. Understand error codes – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/error-codes
  13. FAQ for analytics – Microsoft Copilot Studio, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/faqs-analytics
  14. Assign licenses and manage access to Copilot Studio | Microsoft Learn, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/requirements-licensing
  15. Access to agents in M365 Copilot Chat for all business users? : r/microsoft_365_copilot, accessed on July 3, 2025, https://www.reddit.com/r/microsoft_365_copilot/comments/1i3gu63/access_to_agents_in_m365_copilot_chat_for_all/
  16. A Microsoft 365 Administrator’s Beginner’s Guide to Copilot Studio, accessed on July 3, 2025, https://practical365.com/copilot-studio-beginner-guide/
  17. Connect and configure an agent for Teams and Microsoft 365 Copilot, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-copilot-studio/publication-add-bot-to-microsoft-teams
  18. Manage agents for Microsoft 365 Copilot in the Microsoft 365 admin center, accessed on July 3, 2025, https://learn.microsoft.com/en-us/microsoft-365/admin/manage/manage-copilot-agents-integrated-apps?view=o365-worldwide