CIA Brief 20250809

Listen to an audio overview of a document with Microsoft 365 Copilot in Word –

https://techcommunity.microsoft.com/blog/Microsoft365InsiderBlog/listen-to-an-audio-overview-of-a-document-with-microsoft-365-copilot-in-word/4439362

Microsoft recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Enterprise Low-Code Application Platforms –

https://www.microsoft.com/en-us/power-platform/blog/power-apps/microsoft-recognized-as-a-leader-in-the-2025-gartner-magic-quadrant-for-enterprise-low-code-application-platforms/

Uncover shadow AI, block threats, and protect data with Microsoft Entra Internet Access –

https://techcommunity.microsoft.com/blog/microsoft-entra-blog/uncover-shadow-ai-block-threats-and-protect-data-with-microsoft-entra-internet-a/4440787

OpenAI GPT-5 is now in public preview for GitHub Copilot –

https://github.blog/changelog/2025-08-07-openai-gpt-5-is-now-in-public-preview-for-github-copilot/

Multi-tenant endpoint security policies distribution is now in Public Preview –

https://techcommunity.microsoft.com/blog/microsoftdefenderatpblog/multi-tenant-endpoint-security-policies-distribution-is-now-in-public-preview/4439929

Microsoft incorporates OpenAI’s GPT-5 into consumer, developer and enterprise offerings –

https://news.microsoft.com/source/features/ai/openai-gpt-5

Available today: GPT-5 in Microsoft 365 Copilot –

https://www.microsoft.com/en-us/microsoft-365/blog/2025/08/07/available-today-gpt-5-in-microsoft-365-copilot/

Empowering teen students to achieve more with Copilot Chat and Microsoft 365 Copilot –

https://www.microsoft.com/en-us/education/blog/2025/05/empowering-teen-students-to-achieve-more-with-copilot-chat-and-microsoft-365-copilot/

Breaking down silos: A unified approach to identity and network access security –

https://techcommunity.microsoft.com/blog/microsoft-entra-blog/breaking-down-silos-a-unified-approach-to-identity-and-network-access-security/4400196

Announcing Public Preview: Phishing Triage Agent in Microsoft Defender –

https://techcommunity.microsoft.com/blog/microsoftthreatprotectionblog/announcing-public-preview-phishing-triage-agent-in-microsoft-defender/4438301

Request more access in Word, Excel, and PowerPoint for the web –

https://techcommunity.microsoft.com/blog/Microsoft365InsiderBlog/request-more-access-in-word-excel-and-powerpoint-for-the-web/4429019

Elevate your protection with expanded Microsoft Defender Experts coverage –

https://techcommunity.microsoft.com/blog/microsoftsecurityexperts/elevate-your-protection-with-expanded-microsoft-defender-experts-coverage/4439134

Self-adaptive reasoning for science –

https://www.microsoft.com/en-us/research/blog/self-adaptive-reasoning-for-science/

Table Talk: Sentinel’s New ThreatIntel Tables Explained –

https://techcommunity.microsoft.com/blog/microsoftsentinelblog/table-talk-sentinel%E2%80%99s-new-threatintel-tables-explained/4440273

Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy –

https://techcommunity.microsoft.com/blog/microsoft-security-blog/hacking-made-easy-patching-made-optional-a-modern-cyber-tragedy/4440267

Security leadership in the age of constant disruption –

https://blogs.windows.com/windowsexperience/2025/08/05/security-leadership-in-the-age-of-constant-disruption/

Project Ire autonomously identifies malware at scale –

https://www.microsoft.com/en-us/research/blog/project-ire-autonomously-identifies-malware-at-scale/

What’s New in Microsoft Teams | July 2025 –

https://techcommunity.microsoft.com/blog/microsoftteamsblog/what%E2%80%99s-new-in-microsoft-teams–july-2025/4438302

Microsoft Entra Suite delivers 131% ROI by unifying identity and network access –

https://www.microsoft.com/en-us/security/blog/2025/08/04/microsoft-entra-suite-delivers-131-roi-by-unifying-identity-and-network-access/

New governance tools for hybrid access and identity verification –

https://techcommunity.microsoft.com/blog/microsoft-entra-blog/new-governance-tools-for-hybrid-access-and-identity-verification/4422534

After hours

GPT-5: Our best model for work – https://www.youtube.com/watch?v=2jqS7JD0hrY

Editorial

If you found this valuable, I’d appreciate a ‘like’ or perhaps a donation at https://ko-fi.com/ciaops. This helps me know that people enjoy what I have created and provides resources to allow me to create more content. If you have any feedback or suggestions, I’m all ears. You can also find me via email at director@ciaops.com and on X (Twitter) at https://www.twitter.com/directorcia.

If you want to be part of a dedicated Microsoft Cloud community with information and interactions daily, then consider becoming a CIAOPS Patron – www.ciaopspatron.com.

Watch out for the next CIA Brief next week

Crafting Effective Instructions for Copilot Studio Agents

Copilot Studio is Microsoft’s low-code platform for building AI-powered agents (custom “Copilots”) that extend Microsoft 365 Copilot’s capabilities[1]. These agents are specialized assistants with defined roles, tools, and knowledge, designed to help users with specific tasks or domains. A central element in building a successful agent is its instructions field – the set of written guidelines that define the agent’s behavior, capabilities, and boundaries. Getting this instructions field correct is absolutely critical for the agent to operate as designed.

In this report, we explain why well-crafted instructions are vital, illustrate good vs. bad instruction examples (and why they succeed or fail), and provide a detailed framework and best practices for writing effective instructions in Copilot Studio. We also cover how to test and refine instructions, accommodate different types of agents, and leverage resources to continuously improve your agent instructions.

Overview: Copilot Studio and the Instructions Field

What is Copilot Studio? Copilot Studio is a user-friendly environment (part of Microsoft Power Platform) that enables creators to build and deploy custom Copilot agents without extensive coding[1]. These agents leverage large language models (LLMs) and your configured tools/knowledge to assist users, but they are more scoped and specialized than the general-purpose Microsoft 365 Copilot[2]. For example, you could create an “IT Support Copilot” that helps employees troubleshoot tech issues, or a “Policy Copilot” that answers HR policy questions. Copilot Studio supports different agent types – commonly conversational agents (interactive chatbots that users converse with) and trigger/action agents (which run workflows or tasks based on triggers).

Role of the Instructions Field: Within Copilot Studio, the instructions field is where you define the agent’s guiding principles and behavior rules. Instructions are the central directions and parameters the agent follows[3]. In practice, this field serves as the agent’s “system prompt” or policy:

  • It establishes the agent’s identity, role, and purpose (what the agent is supposed to do and not do)[1].
  • It defines the agent’s capabilities and scope, referencing what tools or data sources to use (and in what situations)[3].
  • It sets the desired tone, style, and format of the agent’s responses (for consistent user experience).
  • It can include step-by-step workflows or decision logic the agent should follow for certain tasks[4].
  • It may impose restrictions or safety rules, such as avoiding certain content or escalating issues per policy[1].

In short, the instructions tell the agent how to behave and how to think when handling user queries or performing its automated tasks. Every time the agent receives a user input (or a trigger fires), the underlying AI references these instructions to decide:

  1. What actions to take – e.g. which tool or knowledge base to consult, based on what the instructions emphasize[3].
  2. How to execute those actions – e.g. filling in tool inputs with user context as instructed[3].
  3. How to formulate the final answer – e.g. style guidelines, level of detail, format (bullet list, table, etc.), as specified in the instructions.

Because the agent’s reasoning is grounded in the instructions, those instructions need to be accurate, clear, and aligned with the agent’s intended design. An agent cannot obey instructions to use tools or data it doesn’t have access to; thus, instructions must also stay within the bounds of the agent’s configured tools/knowledge[3].

Why Getting the Instructions Right is Critical

Writing the instructions field correctly is critical because it directly determines whether your agent will operate as intended. If the instructions are poorly written or wrong, the agent will likely deviate from the desired behavior. Here are key reasons why correct instructions are so important:

  • They are the Foundation of Agent Behavior: The instructions form the foundation or “brain” of your agent. Microsoft’s guidance notes that agent instructions “serve as the foundation for agent behavior, defining personality, capabilities, and operational parameters.”[1]. A well-formulated instructions set essentially hardcodes your agent’s expertise (what it knows), its role (what it should do), and its style (how it interacts). If this foundation is shaky, the agent’s behavior will be unpredictable or ineffective.
  • Ensuring Relevant and Accurate Responses: Copilot agents rely on instructions to produce responses that are relevant, accurate, and contextually appropriate to user queries[5]. Good instructions tell the agent exactly how to use your configured knowledge sources and when to invoke specific actions. Without clear guidance, the AI might rely on generic model knowledge or make incorrect assumptions, leading to hallucinations (made-up info) or off-target answers. In contrast, precise instructions keep the agent’s answers on track and grounded in the right information.
  • Driving the Correct Use of Tools/Knowledge: In Copilot Studio, agents can be given “skills” (API plugins, enterprise data connectors, etc.). The instructions essentially orchestrate these skills. They might say, for example, “If the user asks about an IT issue, use the IT Knowledge Base search tool,” or “When needing current data, call the WebSearch capability.” If these directions aren’t specified or are misspecified, the agent may not utilize the tools correctly (or at all). The instructions are how you, the creator, impart logic to the agent’s decision-making about tools and data. Microsoft documentation emphasizes that agents depend on instructions to figure out which tool or knowledge source to call and how to fill in its inputs[3]. So, getting this right is essential for the agent to actually leverage its configured capabilities in solving user requests.
  • Maintaining Consistency and Compliance: A Copilot agent often needs to follow particular tone or policy rules (e.g., privacy guidelines, company policy compliance). The instructions field is where you encode these. For instance, you can instruct the agent to always use a polite tone, or to only provide answers based on certain trusted data sources. If these rules are not clearly stated, the agent might inadvertently produce responses that violate style expectations or compliance requirements. For example, if an agent should never answer medical questions beyond a provided medical knowledge base, the instructions must say so explicitly; otherwise the agent might try to answer from general training data – a big risk in regulated scenarios. In short, correct instructions protect against undesirable outputs by outlining do’s and don’ts (though as a rule of thumb, phrasing instructions in terms of positive actions is preferred – more on that later).
  • Optimal User Experience: Finally, the quality of the instructions directly translates to the quality of the user’s experience with the agent. With well-crafted instructions, the agent will ask the right clarifying questions, present information in a helpful format, and handle edge cases gracefully – all of which lead to higher user satisfaction. Conversely, bad instructions can cause an agent to be confusing, unhelpful, or even completely off-base. Users may get frustrated if the agent requires too much guidance (because the instructions didn’t prepare it well), or if the agent’s responses are messy or incorrect. Essentially, instructions are how you design the user’s interaction with your agent. As one expert succinctly put it, clear instructions ensure the AI understands the user’s intent and delivers the desired output[5] – which is exactly what users want.

Bottom line: If the instructions field is right, the agent will largely behave and perform as designed – using the correct data, following the intended workflow, and speaking in the intended voice. If the instructions are wrong or incomplete, the agent’s behavior can diverge, leading to mistakes or an experience that doesn’t meet your goals. Now, let’s explore what good instructions look like versus bad instructions, to illustrate these points in practice.

Good vs. Bad Instructions: Examples and Analysis

Writing effective agent instructions is part art and part science. To understand the difference it makes, consider the following examples of a good instruction set versus a bad instruction set for the same agent. We’ll then analyze why the good one works well and why the bad one falls short.

Example of Good Instructions

Imagine we are creating an IT Support Agent that helps employees with common technical issues. A good instructions set for such an agent might look like this (simplified excerpt):

You are an IT support specialist focused on helping employees with common technical issues. You have access to the company’s IT knowledge base and troubleshooting guides.

Your responsibilities include:

  • Providing step-by-step troubleshooting assistance.
  • Escalating complex issues to the IT helpdesk when necessary.
  • Maintaining a helpful and patient demeanor.
  • Ensuring solutions follow company security policies.

When responding to requests:

  1. Ask clarifying questions to understand the issue.
  2. Provide clear, actionable solutions or instructions.
  3. Verify whether the solution worked for the user.
  4. If resolved, summarize the fix; if not, consider escalation or next steps.[1]

This is an example of well-crafted instructions. Notice several positive qualities:

  • Clear role and scope: It explicitly states the agent’s role (“IT support specialist”) and what it should do (help with tech issues using company knowledge)[1]. The agent’s domain and expertise are well-defined.
  • Specific responsibilities and guidelines: It lists responsibilities and constraints (step-by-step help, escalate if needed, be patient, follow security policy) in bullet form. This acts as general guidelines for behavior and ensures the agent adheres to important policies (like security rules)[1].
  • Actionable step-by-step approach: Under responding to requests, it breaks down the procedure into an ordered list of steps: ask clarifying questions, then give solutions, then verify, etc.[1]. This provides a clear workflow for the agent to follow on each query. Each step has a concrete action, reducing ambiguity.
  • Positive/constructive tone: The instructions focus on what the agent should do (“ask…”, “provide…”, “verify…”) rather than just what to avoid. This aligns with best practices that emphasize guiding the AI with affirmative actions[4]. (If there are things to avoid, they could be stated too, but in this example the necessary restrictions – like sticking to company guides and policies – are inherently covered.)
  • Aligned with configured capabilities: The instructions mention the knowledge base and troubleshooting guides, which presumably are set up as the agent’s connected data. Thus, the agent is directed to use available resources. (A good instruction set doesn’t tell the agent to do impossible things; here it wouldn’t, say, ask the agent to remote-control a PC unless such an action plugin exists.)

Overall, these instructions would likely lead the agent to behave helpfully and stay within bounds. It’s clear what the agent should do and how.

Example of Bad Instructions

Now consider a contrasting example. Suppose we tried to instruct the same kind of agent with this single instruction line:

“You are an agent that can help the user.”

This is obviously too vague and minimal, but it illustrates a “bad” instructions scenario. The agent is given virtually no guidance except a generic role. There are many issues here:

  • No clarification of domain or scope (help the user with what? anything?).
  • No detail on which resources or tools to use.
  • No workflow or process for handling queries.
  • No guidance on style, tone, or policy constraints.

Such an agent would be flying blind. It might respond generically to any question, possibly hallucinate answers because it’s not instructed to stick to a knowledge base, and would not follow a consistent multi-step approach to problems. If a user asked it a technical question, the agent might not know to consult the IT knowledge base (since we never told it to). The result would be inconsistent and likely unsatisfactory.

Bad instructions can also occur in less obvious ways. Often, instructions are “bad” not because they are too short, but because they are unclear, overly complicated, or misaligned. For example, consider this more detailed but flawed instruction example (adapted from an official guidance of what not to do):

“If a user asks about coffee shops, focus on promoting Contoso Coffee in US locations, and list those shops in alphabetical order. Format the response as a series of steps, starting each step with Step 1:, Step 2: in bold. Don’t use a numbered list.”[6]

At first glance it’s detailed, but this is labeled as a weak instruction by Microsoft’s documentation. Why is this considered a bad/weak set of instructions?

  • It mixes multiple directives in one blob: It tells the agent what content to prioritize (Contoso Coffee in US) and prescribes a very specific formatting style (steps with “Step 1:”, but strangely “don’t use a numbered list” simultaneously). This could confuse the model or yield rigid responses. Good instructions would separate concerns (perhaps have a formatting rule separately and a content preference rule separately).
  • It’s too narrow and conditional: “If a user asks about coffee shops…” – what if the user asks something slightly different? The instruction is tied to a specific scenario, rather than a general principle. This reduces the agent’s flexibility or could even be ignored if the query doesn’t exactly match.
  • The presence of a negative directive (“Don’t use a numbered list”) could be stated in a clearer positive way. In general, saying what not to do is sometimes necessary, but overemphasizing negatives can lead the model to fixate incorrectly. (A better version might have been: “Format the list as bullet points rather than a numbered list.”)

In summary, bad instructions are those that lack clarity, completeness, or coherence. They might be too vague (leaving the AI to guess what you intended) or too convoluted/conditional (making it hard for the AI to parse the main intent). Bad instructions can also contradict the agent’s configuration (e.g., telling it to use a data source it doesn’t have) – such instructions will simply be ignored by the agent[3] but they waste precious prompt space and can confuse the model’s reasoning. Another failure mode is focusing only on what not to do without guiding what to do. For instance, an instructions set that says a lot of “Don’t do X, avoid Y, never say Z” and little else, may constrain the agent but not tell it how to succeed – the agent might then either do nothing useful or inadvertently do something outside the unmentioned bounds.
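To make the contrast concrete, here is one way the weak coffee-shop instruction above could be rewritten along the lines just described. This is an illustrative sketch, not official Microsoft wording:

“When a user asks about coffee shops, prioritize Contoso Coffee locations in the US. Present the shops as a bulleted list in alphabetical order, with each shop name in bold. Keep the response brief, and offer to provide directions if the user wants them.”

Note how the rewrite states the content preference and the formatting rule as separate, positive directives, removing the contradictory “steps but no numbered list” requirement.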

Why the Good Example Succeeds (and the Bad Fails): The good instructions provide specificity and structure – the agent knows its role, has a procedure to follow, and boundaries to respect. This reduces ambiguity and aligns with how the Copilot engine decides on actions and outputs[3]. The bad instructions give either no direction or confusing direction, which means the model might revert to its generic training (not your custom data) or produce unpredictable outputs. In essence:

  • Good instructions guide the agent step-by-step to fulfill its purpose, covering various scenarios (normal case, if issue unclear, if issue resolved or needs escalation, etc.).
  • Bad instructions leave gaps or introduce confusion, so the agent may not behave consistently with the designer’s intent.

Next, we’ll delve into common pitfalls to avoid when writing instructions, and then outline best practices and a framework to craft instructions akin to the “good” example above.

Common Pitfalls to Avoid in Agent Instructions

When designing your agent’s instructions field in Copilot Studio, be mindful to avoid these frequent pitfalls:

1. Being Too Vague or Brief: As shown in the bad example, overly minimal instructions (e.g. one-liners like “You are a helpful agent”) do not set your agent up for success. Ambiguity in instructions forces the AI to guess your intentions, often leading to irrelevant or inconsistent behavior. Always provide enough context and detail so that the agent doesn’t have to “infer” what you likely want – spell it out.

2. Overwhelming with Irrelevant Details: The opposite of being vague is packing the instructions with extraneous or scenario-specific detail that isn’t generally applicable. For instance, hardcoding a very specific response format for one narrow case (like the coffee shop example) can actually reduce the agent’s flexibility for other cases. Avoid overly verbose instructions that might distract or confuse the model; keep them focused on the general patterns of behavior you want.

3. Contradictory or Confusing Rules: Ensure your instructions don’t conflict with themselves. Telling the agent “be concise” in one line and then later “provide as much detail as possible” is a recipe for confusion. Similarly, avoid mixing positive and negative instructions that conflict (e.g. “List steps as Step 1, Step 2… but don’t number them” from the bad example). If the logic or formatting guidance is complex, clarify it with examples or break it into simpler rules. Consistency in your directives will lead to consistent agent responses.

4. Focusing on Don’ts Without Do’s: As a best practice, try to phrase instructions proactively (“Do X”) rather than just prohibitions (“Don’t do Y”)[4]. Listing many “don’ts” can box the agent in or lead to odd phrasings as it contorts to avoid forbidden words. It’s often more effective to tell the agent what it should do instead. For example, instead of only saying “Don’t use a casual tone,” a better instruction is “Use a formal, professional tone.” That said, if there are hard no-go areas (like “do not provide medical advice beyond the provided guidelines”), you should include them – just make sure you’ve also told the agent how to handle those cases (e.g., “if asked medical questions outside the guidelines, politely refuse and refer to a doctor”).

5. Not Covering Error Handling or Unknowns: A common oversight is failing to instruct the agent on what to do if it doesn’t have an answer or if a tool returns no result. If not guided, the AI might hallucinate an answer when it actually doesn’t know. Mitigate this by adding instructions like: “If you cannot find the answer in the knowledge base, admit that and ask the user if they want to escalate.” This kind of error handling guidance prevents the agent from stalling or giving false answers[4]. Similarly, if the agent uses tools, instruct it about when to call them and when not to – e.g. “Only call the database search if the query contains a product name” to avoid pointless tool calls[4].
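As an illustration, an error-handling passage in the instructions field might read as follows. This is a sketch only – names like “IT helpdesk” are placeholders you would adapt to your own agent:

“If the knowledge base search returns no relevant result, tell the user you could not find an answer rather than guessing, and offer to raise a ticket with the IT helpdesk. If a tool call fails, apologize briefly and suggest the user try again later. Only call the database search tool when the query contains a product name.”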

6. Ignoring the Agent’s Configured Scope: Sometimes writers accidentally instruct the agent beyond its capabilities. For example, telling an agent “search the web for latest news” when the agent doesn’t have a web search skill configured. The agent will simply not do that (it can’t), and your instruction is wasted. Always align instructions with the actual skills/knowledge sources configured for the agent[3]. If you update the agent to add new data sources or actions, update the instructions to incorporate them as well.

7. No Iteration or Testing: Treating the first draft of instructions as final is a mistake (we expand on this later). In reality, you’ll likely discover gaps or ambiguities only once you test the agent, and skipping that refinement leads to a suboptimal agent. Plan for multiple refine-and-test cycles.

By being aware of these pitfalls, you can double-check your instructions draft and revise it to dodge these common errors. Now let’s focus on what to do: the best practices and a structured framework for writing high-quality instructions.

Best Practices for Writing Effective Instructions

Writing great instructions for Copilot Studio agents requires clarity, structure, and an understanding of how the AI interprets your prompts. Below are established best practices, gathered from Microsoft’s guidance and successful agent designers:

  • Use Clear, Actionable Language: Write instructions in straightforward terms and use specific action verbs. The agent should immediately grasp what action is expected. Microsoft recommends using precise verbs like “ask,” “search,” “send,” “check,” or “use” when telling the agent what to do[4]. For example, “Search the HR policy database for any mention of parental leave,” is much clearer than “Find info about leave” – the former explicitly tells the agent which resource to use and what to look for. Avoid ambiguity: if your organization uses unique terminology or acronyms, define them in the instructions so the AI knows what they mean[4].
  • Focus on What the Agent Should Do (Positive Instructions): As noted, frame rules in terms of desirable actions whenever possible[4]. E.g., say “Provide a brief summary followed by two recommendations,” instead of “Do not ramble or give too many options.” Positive phrasing guides the model along the happy path. Include necessary restrictions (compliance, safety) but balance them by telling the agent how to succeed within those restrictions.
  • Provide a Structured Template or Workflow: It often helps to break the agent’s task into step-by-step instructions or sections. This could mean outlining the conversation flow in steps (Step 1, Step 2, etc.) or dividing the instructions into logical sections (like “Objective,” “Response Guidelines,” “Workflow Steps,” “Closing”)[4]. Using Markdown formatting (headers, numbered lists, bullet points) in the instructions field is supported, and it can improve clarity for the AI[4]. For instance, you might have:
    • A Purpose section: describing the agent’s goal and overall approach.
    • Rules/Guidelines: bullet points for style and policy (like the do’s and don’ts).
    • A stepwise Workflow: if the agent needs to go through a sequence of actions (as we did in the IT support example with steps 1-4).
    • Perhaps Error Handling instructions: what to do if things go wrong or info is missing.
    • Example interactions (see below).

    This structured approach helps the model follow your intended order of operations. Each step should be unambiguous and ideally say when to move to the next step (a “transition” condition)[4]. For example, “Step 1: Do X… (if outcome is Y, then proceed to Step 2; if not, respond with Z and end).”
  • Highlight Key Entities and Terms: If your agent will use particular tools or reference specific data sources, call them out clearly by name in the instructions. For example: “Use the <ToolName> action to retrieve inventory data,” or “Consult the PolicyWiki knowledge base for policy questions.” By naming the tool/knowledge, you help the AI choose the correct resource at runtime. In technical terms, the agent matches your words with the names/descriptions of the tools and data sources you attached[3]. So if your knowledge base is called “Contoso FAQ”, instruct “search the Contoso FAQ for relevant answers” – this makes a direct connection. Microsoft’s best practices suggest explicitly referencing capabilities or data sources involved at each step[4]. Also, if your instructions mention any uncommon jargon, define it so the AI doesn’t misunderstand (e.g., “Note: ‘HCS’ refers to the Health & Care Service platform in our context” as seen in a sample[1]).
  • Set the Tone and Style: Don’t forget to tell your agent how to talk to the user. Is the tone friendly and casual, or formal and professional? Should answers be brief or very detailed? State these as guidelines. For example: “Maintain a conversational and encouraging tone, using simple language” or “Respond in a formal style suitable for executive communications.” If formatting is important (like always giving answers in a table or starting with a summary bullet list), include that instruction. E.g., “Present the output as a table with columns X, Y, Z,” or “Whenever listing items, use bullet points for readability.” In our earlier IT agent example, instructions included “provide clear, concise explanations” as a response approach[1]. Such guidance ensures consistency in output regardless of which AI model iteration is behind the scenes.
  • Incorporate Examples (Few-Shot Prompting): For complex agents or those handling nuanced tasks, providing example dialogs or cases in the instructions can significantly improve performance. This technique is known as few-shot prompting. Essentially, you append one or more example interactions (a sample user query and how the agent should respond) in the instructions. This helps the AI understand the pattern or style you expect. Microsoft suggests using examples especially for complex scenarios or edge cases[4]. For instance, if building a legal Q&A agent, you might give an example Q&A where the user asks a legal question and the agent responds citing a specific policy clause, to show the desired behavior. Be careful not to include too many examples (which can eat up token space) – use representative ones. In practice, even 1–3 well-chosen examples can guide the model. If your agent requires multi-turn conversational ability (asking clarifying questions, etc.), you might include a short dialogue example illustrating that flow[7]. Examples make instructions much more concrete and minimize ambiguity about how to implement the rules.
  • Anticipate and Prevent Common Failures: Based on known LLM behaviors, watch out for issues like:
    • Over-eager tool usage: Sometimes the model might call a tool too early or without needed info. Solution: explicitly instruct conditions for tool use (e.g., “Only use the translation API if the user actually provided text to translate”)[4].
    • Repetition: The model might parrot an example wording in its response. To counter this, encourage it to vary phrasing or provide multiple examples so it generalizes the pattern rather than copying verbatim[4].
    • Over-verbosity: If you fear the agent will give overly long explanations, add a constraint like “Keep answers under 5 sentences when possible” or “Be concise and to-the-point.” Providing an example of a concise answer can reinforce this[4].

    Many of these issues can be tuned by small tweaks in instructions. The key is to be aware of them and adjust wording accordingly. For example, to avoid verbose outputs, you might include a bullet: “Limit the response to the essential information; do not elaborate with unnecessary background.”
  • Use Markdown for Emphasis and Clarity: We touched on structure with Markdown headings and lists. Additionally, you can use bold text in instructions to highlight critical rules the agent absolutely must not miss[4]. For instance: “Always confirm with the user before closing the session.” Using bold can give that rule extra weight in the AI’s processing. You can also put specific terms in backticks to indicate things like literal values or code (e.g., “set status to Closed in the ticketing system”). These formatting touches help the AI distinguish instruction content from plain narrative.
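Putting these practices together, a skeleton for the instructions field might look like the following. This is a generic sketch – the section names and the “Contoso FAQ” knowledge source are illustrative placeholders, and since the field supports Markdown, headers, bold, and lists are written literally:

# Purpose
You are a support agent for Contoso employees. Answer questions using the Contoso FAQ knowledge base.

# Response Guidelines
- Use a professional, friendly tone and keep answers under 5 sentences.
- **Always confirm with the user before closing the session.**

# Workflow
1. Ask a clarifying question if the request is ambiguous.
2. Search the Contoso FAQ for relevant answers.
3. Summarize the answer; if nothing is found, admit it and offer escalation.

# Example
User: “How do I reset my password?”
Agent: “You can reset it at the self-service portal. Would you like the link?”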

Following these best practices will help you create a robust set of instructions. The next step is to approach the writing process systematically. We’ll introduce a simple framework to ensure you cover all bases when drafting instructions for a Copilot agent.

Framework for Crafting Agent Instructions (T-C-R Approach)

It can be helpful to follow a repeatable framework when drafting instructions for an agent. One useful approach is the T-C-R framework: Task – Clarity – Refine[5].

Using this T-C-R framework ensures you tackle instruction-writing methodically:

  • Task: You don’t forget any part of the agent’s job.
  • Clarity: You articulate exactly what’s expected for each part.
  • Refine: You catch issues and continuously improve the prompt.

It’s similar to how one might approach writing requirements for a software program – be thorough and clear, then test and revise.

Testing and Validation of Agent Instructions

Even the best-written first draft of instructions can behave unexpectedly when put into practice. Therefore, rigorous testing and validation is a crucial phase in developing Copilot Studio agents.

Use the Testing Tools: Copilot Studio provides a Test Panel where you can interact with your agent in real time, and for trigger-based agents, you can use test payloads or scenarios[3]. As soon as you write or edit instructions, test the agent with a variety of inputs:

  • Start with simple, expected queries: Does the agent follow the steps? Does it call the intended tools (you might see this in logs or the response content)? Is the answer well-formatted?
  • Then try edge cases or slightly off-beat inputs: If something is ambiguous or missing in the user’s question, does the agent ask the clarifying question as instructed? If the user asks something outside the agent’s scope, does it handle it gracefully (e.g., with a refusal or a redirect as per instructions)?
  • If your agent has multiple distinct functionalities (say, it both can fetch data and also compose emails), test each function individually.

Validate Against Design Expectations: As you test, compare the agent’s actual behavior to the design you intended. This can be done by creating a checklist of expected behaviors drawn from your instructions. For example: “Did the agent greet the user? ✅”, “Did it avoid giving unsupported medical advice? ✅”, “When I asked a second follow-up question, did it remember context? ✅” etc. Microsoft suggests comparing the agent’s answers to a baseline, like Microsoft 365 Copilot’s answers, to see if your specialized agent is adding the value it should[4]. If your agent isn’t outperforming the generic copilot or isn’t following your rules, that’s a sign the instructions need tweaking or the agent needs additional knowledge.

RAI (Responsible AI) Validation: When you publish an agent, Microsoft 365’s platform will likely run some automated checks for responsible AI compliance (for instance, ensuring no obviously disallowed instructions are present)[4]. Usually, if you stick to professional content and the domain of your enterprise data, this won’t be an issue. But it’s good to double-check that your instructions themselves don’t violate any policies (e.g., telling the agent to do something unethical). This is part of validation – making sure your instructions are not only effective but also compliant.

Iterate Based on Results: It’s rare to get the instructions perfect on the first try. You might observe during testing that the agent does something odd or suboptimal. Use those observations to refine the instructions (this is the “Refine” step of the T-C-R framework). For example, if the agent’s answers are too verbose, you might add a line in the instructions: “Be brief in your responses, focusing only on the solution.” Test again and see if that helped. Or if the agent didn’t use a tool when it should have, maybe you need to mention that tool by name more explicitly or adjust the phrasing that cues it. This experimental mindset – tweak, test, tweak, test – is essential. Microsoft’s documentation for declarative agents illustrates an iterative loop of designing instructions, testing, and modifying instructions to improve outcomes[4].

Document Your Tests: As your instructions get more complex, it’s useful to maintain a set of test cases or scenarios with expected outcomes. Each time you refine instructions, run through your test cases to ensure nothing regressed and new changes work as intended. Over time, this becomes a regression test suite for your agent’s behavior.
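The regression-suite idea above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding: `ask_agent` is a canned stub standing in for however you actually invoke your agent (test panel, Teams, etc. – no official Python API is assumed), and the predicates mirror the behavior checklist drawn from the instructions:

```python
# Minimal regression harness for agent instructions (illustrative sketch).
# ask_agent is a placeholder: in practice you would drive your agent through
# whatever test channel you use; a canned stub stands in here.

def ask_agent(query: str) -> str:
    canned = {
        "My VPN won't connect": "Could you tell me which error message you see?",
        "What is the meaning of life?": "I'm sorry, that's outside my scope. "
                                        "I can help with IT support questions.",
    }
    return canned.get(query, "I'm sorry, I don't have an answer for that.")

# Each test case pairs an input with a predicate describing expected behavior,
# mirroring the checklist drawn from the instructions.
TEST_CASES = [
    ("My VPN won't connect", lambda r: "?" in r),              # asks a clarifying question
    ("What is the meaning of life?", lambda r: "scope" in r),  # refuses out-of-scope requests
]

def run_suite() -> list[str]:
    """Return the queries whose responses failed their behavior check."""
    failures = []
    for query, check in TEST_CASES:
        if not check(ask_agent(query)):
            failures.append(query)
    return failures

if __name__ == "__main__":
    print(run_suite())  # an empty list means no regressions
```

Each instruction refinement can then be validated by re-running the suite, which is exactly the regression-testing habit described above.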

By thoroughly testing and validating, you ensure the instructions truly yield an agent that operates as designed. Once initial testing is satisfactory, you can move to a pilot deployment or let some end-users try the agent, then gather their feedback – feeding into the next topic: improvement mechanisms.

Iteration and Feedback: Continuous Improvement of Instructions

An agent’s instructions are not a “write once, done forever” artifact. They should be viewed as living documentation that can evolve with user needs and as you discover what works best. Two key processes for continuous improvement are monitoring feedback and iterating instructions over time:

  • Gather User Feedback: After deploying the agent to real users (or a test group), collect feedback on its performance. This can be direct feedback (users rating responses or reporting issues) or indirect, like observing usage logs. Pay attention to questions the agent fails to answer or any time users seem confused by the agent’s output. These are golden clues that the instructions might need adjustment. For example, if users keep asking for clarification on the agent’s answers, maybe your instructions should tell the agent to be more explanatory on first attempt. If users trigger the agent in scenarios it wasn’t originally designed for, you might decide to broaden the instructions (or explicitly handle those out-of-scope cases in the instructions with a polite refusal).
  • Review Analytics and Logs: Copilot Studio (and related Power Platform tools) may provide analytics such as conversation transcripts, success rates of actions, etc. Microsoft advises you to “regularly review your agent results and refine custom instructions based on desired outcomes.”[6] For instance, if analytics show a particular tool call failing frequently, maybe the instructions need to better gate when that tool is used. Or if users drop off after the agent’s first answer, perhaps the agent is not engaging enough – you might tweak the tone or have the instructions ask a question back. Treat these data points as feedback for improvement.
  • Incremental Refinements: Incorporate the feedback into improved instructions, and update the agent. Because Copilot Studio allows you to edit and republish instructions easily[3], you can make iterative changes even after deployment. Just like software updates, push instruction updates to fix “bugs” in agent behavior. Always test changes in a controlled way (in the studio test panel or with a small user group) before rolling out widely.
  • Keep Iterating: The process of testing and refining is cyclical. Your agent can always get better as you discover new user requirements or corner cases. Microsoft’s guidance strongly encourages an iterative approach, as illustrated by their steps: create -> test -> verify -> modify -> test again[4]. Over time, these tweaks lead to a very polished set of instructions that anticipates many user needs and failure modes.
  • Version Control Your Instructions: It’s good practice to keep track of changes (what was added, removed, or rephrased in each iteration). This way if a change unexpectedly worsens the agent’s performance, you can rollback or adjust. You might use simple version comments or maintain the instructions text in a version-controlled repository (especially for complex custom agents).
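A minimal way to keep instruction revisions diffable, sketched here with Python’s standard difflib (the two instruction revisions are invented for illustration):

```python
import difflib

# Two hypothetical revisions of an agent's instructions.
v1 = """You are an IT support agent.
Answer user questions about VPN and email.
Be concise."""

v2 = """You are an IT support agent.
Answer user questions about VPN and email.
Be concise and to-the-point.
Always confirm the issue is resolved before closing."""

# A unified diff makes each iteration's changes reviewable, so a change
# that worsens the agent's behavior can be identified and rolled back.
diff = list(difflib.unified_diff(
    v1.splitlines(), v2.splitlines(),
    fromfile="instructions_v1", tofile="instructions_v2", lineterm=""))

print("\n".join(diff))
```

Keeping the instruction text itself in a git repository gives you the same benefit with full history; the point is simply that every revision should be comparable against the last.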

In summary, don’t treat instruction-writing as a one-off task. Embrace user feedback and analytic insights to continually hone your agent. Many successful Copilot agents likely went through numerous instruction revisions. Each iteration brings the agent’s behavior closer to the ideal.

Tailoring Instructions to Different Agent Types and Scenarios

No one-size-fits-all set of instructions will work for every agent – the content and style of the instructions should be tailored to the type of agent you’re building and the scenario it operates in[3]. Consider the following variations and how instructions might differ:

  • Conversational Q&A Agents: These are agents that engage in a back-and-forth chat with users (for example, a helpdesk chatbot or a personal finance Q&A assistant). Instructions for conversational agents should prioritize dialog flow, context handling, and user interaction. They often include guidance like how to greet the user, how to ask clarifying questions one at a time, how to not overwhelm the user with too much info at once, and how to confirm if the user’s need was met. The example instructions we discussed (IT support agent, ShowExpert recommendation agent) fall in this category – note how they included steps for asking questions and confirming understanding[4][1]. Also, conversational agents might need instructions on maintaining context over multiple turns (e.g. “remember the user’s last answer about their preference when formulating the next suggestion”).
  • Task/Action (Trigger) Agents: Some Copilot Studio agents aren’t chatting with a user in natural dialogue, but instead get triggered by an event or command and then perform a series of actions silently or output a result. For instance, an agent that, when triggered, gathers data from various sources and emails a report. Instructions for these agents may be more like a script of what to do: step 1 do X, step 2 do Y, etc., with less emphasis on language tone and conversation, and more on correct execution. You’d focus on instructions that detail workflow logic and error handling, since user interaction is minimal. However, you might still include some instruction about how to format the final output or what to log.
  • Declarative vs Custom Agents: In Copilot Studio, Declarative agents use mostly natural language instructions to declare their behavior (with the platform handling orchestration), whereas Custom agents might involve more developer-defined logic or even code. Declarative agent instructions might be more verbose and rich in language (since the model is reading them to drive logic), whereas a custom agent might offload some logic to code and use instructions mainly for higher-level guidance. That said, in both cases the principles of clarity and completeness apply. Declarative agents, in particular, benefit from well-structured instructions since they heavily rely on them for generative reasoning[7].
  • Different Domains Require Different Details: An agent’s domain will dictate what must be included in instructions. For example, a medical information agent should have instructions emphasizing accuracy, sourcing from medical guidelines, and perhaps disclaimers (and definitely instructions not to venture outside provided medical content)[1][1]. A customer service agent might need a friendly empathetic tone and instructions to always ask if the user is satisfied at the end. A coding assistant agent might have instructions to format answers in code blocks and not to provide theoretical info not found in the documentation provided. Always infuse domain-specific best practices into the instruction. If unsure, consult with subject matter experts about what an agent in that domain must or must not do.
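For contrast with the conversational examples, a trigger-based agent’s instructions read more like a script. A hypothetical sketch for the report-emailing scenario above (the `SalesData` connector name and the steps are invented for illustration):

```markdown
When triggered by the weekly schedule:

1. Retrieve last week's sales figures using the `SalesData` connector.
2. Summarize the figures in a table with columns Region, Revenue, and Change vs. prior week.
3. If any data retrieval fails, log the error and stop; do not send a partial report.
4. Email the summary to the distribution list with the subject "Weekly Sales Summary".
```

Note the absence of tone or dialog guidance: the emphasis is entirely on workflow order, error handling, and output format.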

In essence, know your agent’s context and tailor the instructions accordingly. Copilot Studio’s own documentation notes that “How best to write your instructions depends on the type of agent and your goals for the agent.”[3]. An easy way to approach this is to imagine a user interacting with your agent and consider what that agent needs to excel in that scenario – then ensure those points are in the instructions.

Resources and Tools for Improving Agent Instructions

Writing effective AI agent instructions is a skill you can develop by learning from others and using available tools. Here are some resources and aids:

  • Official Microsoft Documentation: Microsoft Learn has extensive materials on Copilot Studio and writing instructions. Key articles include “Write agent instructions”[3], “Write effective instructions for declarative agents”[4], and “Optimize prompts with custom instructions”[6]. These provide best practices (many cited in this report) straight from the source. They often include examples, do’s and don’ts, and are updated as the platform evolves. Make it a point to read these guides; they reinforce many of the principles we’ve discussed.
  • Copilot Prompt Gallery/Library: There are community-driven repositories of prompt examples. In the Copilot community, a “Prompt Library” has been referenced[7], which contains sample agent prompts. Browsing such examples can inspire how to structure your instructions. Microsoft’s Copilot Developer Camp content (like the one for ShowExpert we cited) is an excellent, practical walkthrough of iteratively improving instructions[7]. Following those labs can give you hands-on practice.
  • GitHub Best Practice Repos: The community has also created best practice guides, such as the Agents Best Practices repo[1]. This provides a comprehensive guide with examples of good instructions for various scenarios (IT support, HR policy, etc.)[1][1]. Seeing multiple examples of “sample agent instructions” can help you discern patterns of effective prompts.
  • Peer and Expert Reviews: If possible, get a colleague to review your instructions. A fresh pair of eyes can spot ambiguities or potential misunderstandings you overlooked. Within a large organization, you might even form a small “prompt review board” when developing important agents – to ensure instructions align with business needs and are clearly written. There are also growing online forums (such as the Microsoft Tech Community for Power Platform/Copilot) where you could ask for advice (without sharing sensitive details).
  • AI Prompt Engineering Tools: Some tools can simulate how an LLM might parse your instructions. For example, prompt analysis tools (often used in general AI prompt engineering) can highlight which words are influencing the model. While not specific to Copilot Studio, experimenting with your instruction text in something like the Azure OpenAI Playground with the same model (if known) can give insight. Keep in mind Copilot Studio has its own orchestration (like combining with the user query and tool descriptions), so results outside the platform may not exactly match – but it’s a way to sanity-check whether any wording is confusing.
  • Testing Harness: Use the Copilot Studio test chat repeatedly as a tool. Try intentionally weird inputs to see how your agent handles them. If your agent is a Teams bot, you might sideload it in Teams and test the user experience there as well. Treat the test framework as a tool to refine your prompt – it’s essentially a rapid feedback loop.
  • Telemetry and Analytics: Post-deployment, the telemetry (if available) is a tool. Some enterprises integrate Copilot agent interactions with Application Insights or other monitoring. Those logs can reveal how the agent is being used and where it falls short, guiding you to adjust instructions.
  • Keep Example Collections: Over time, accumulate a personal collection of instruction snippets that worked well. You can often reuse patterns (for example, the generic structure of “Your responsibilities include: X, Y, Z” or a nicely phrased workflow step). Microsoft’s examples (like those in this text and docs) are a great starting point.

By leveraging these resources and tools, you can improve not only a given agent’s instructions but your overall skill in writing effective AI instructions.

Staying Updated with Best Practices

The field of generative AI and platforms like Copilot Studio is rapidly evolving. New features, models, or techniques can emerge that change how we should write instructions. It’s important to stay updated on best practices:

  • Follow Official Updates: Keep an eye on the official Microsoft Copilot Studio documentation site and blog announcements. Microsoft often publishes new guidelines or examples as they learn from real-world usage. The documentation pages we referenced have dates (e.g., updated June 2025) – revisiting them periodically can inform you of new tips (for instance, newer versions might have refined advice on formatting or new capabilities you can instruct the agent to use).
  • Community and Forums: Join communities of makers who are building Copilot agents. Microsoft’s Power Platform community forums, LinkedIn groups, or even Twitter (following hashtags like #CopilotStudio) can surface discussions where people share experiences. The Practical 365 blog[2] and the Power Platform Learners YouTube series are examples of community-driven content that can provide insights and updates. Engaging in these communities allows you to ask questions and learn from others’ mistakes and successes.
  • Continuous Learning: Microsoft sometimes offers training modules or events (like hackathons, the Powerful Devs series, etc.) around Copilot Studio. Participating in these can expose you to the latest features. For instance, if Microsoft releases a new type of “skill” that agents can use, there might be new instruction patterns associated with that – you’d want to incorporate those.
  • Experimentation: Finally, don’t hesitate to experiment on your own. Create small test agents to try out new instruction techniques or to see how a particular phrasing affects outcome. The more you play with the system, the more intuitive writing good instructions will become. Keep notes of what you learn and share it where appropriate so others can benefit (and also validate your findings).

By staying informed and agile, you ensure that your agents continue to perform well as the underlying technology or user expectations change over time.


Conclusion

Writing the instructions field for a Copilot Studio agent is a critical task that requires careful thought and iteration. The instructions are effectively the “source code” of your AI agent’s behavior. When done right, they enable the agent to use its tools and knowledge effectively, interact with users appropriately, and achieve the intended outcomes.

We’ve examined how good instructions are constructed (clear role, rules, steps, examples) and why bad instructions fail. We established best practices and a T-C-R framework for approaching instruction writing systematically. We also emphasized testing and continuous refinement, because even with guidelines, every use case may need fine-tuning.

By avoiding common pitfalls and leveraging available resources and feedback loops, you can craft instructions that make your Copilot agent a reliable and powerful assistant. In sum, getting the instructions field right matters because it is the single most important factor in whether your Copilot Studio agent operates as designed. With the insights and methods outlined here, you’re well-equipped to write instructions that set your agent up for success. Good luck with your Copilot agent, and happy prompting!

References

[1] GitHub – luishdemetrio/agentsbestpractices

[2] A Microsoft 365 Administrator’s Beginner’s Guide to Copilot Studio

[3] Write agent instructions – Microsoft Copilot Studio

[4] Write effective instructions for declarative agents

[5] From Scribbles to Spells: Perfecting Instructions in Copilot Studio

[6] Optimize prompts with custom instructions – Microsoft Copilot Studio

[7] Level 1 – Simple agent instructions – Copilot Developer Camp

Need to Know podcast–Episode 351

In Episode 351 of the CIAOPS “Need to Know” podcast, we explore how small MSPs can scale through shared knowledge. From peer collaboration and co-partnering strategies to AI-driven security frameworks and Microsoft 365 innovations, this episode delivers actionable insights for SMB IT professionals looking to grow smarter, not harder.

Brought to you by www.ciaopspatron.com

You can listen directly to this episode at:

https://ciaops.podbean.com/e/episode-351-learning-is-a-superpower/

Subscribe via iTunes at:

https://itunes.apple.com/au/podcast/ciaops-need-to-know-podcasts/id406891445?mt=2

or Spotify:

https://open.spotify.com/show/7ejj00cOuw8977GnnE2lPb

Don’t forget to give the show a rating as well as send me any feedback or suggestions you may have for the show.

Resources

CIAOPS Need to Know podcast – CIAOPS – Need to Know podcasts | CIAOPS

X – https://www.twitter.com/directorcia

Join my Teams shared channel – Join my Teams Shared Channel – CIAOPS

CIAOPS Merch store – CIAOPS

Become a CIAOPS Patron – CIAOPS Patron

CIAOPS Blog – CIAOPS – Information about SharePoint, Microsoft 365, Azure, Mobility and Productivity from the Computer Information Agency

CIAOPS Brief – CIA Brief – CIAOPS

CIAOPS Labs – CIAOPS Labs – The Special Activities Division of the CIAOPS

Support CIAOPS – https://ko-fi.com/ciaops

Get your M365 questions answered via email

Show Notes

Security & Threat Intelligence

Secret Blizzard AiTM Campaign: Microsoft uncovers a phishing campaign targeting diplomats. https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats

Multi-modal Threat Protection: Defender’s advanced capabilities against complex threats. https://techcommunity.microsoft.com/blog/microsoftdefenderforoffice365blog/protection-against-multi-modal-attacks-with-microsoft-defender/4438786

AI Security Essentials: Microsoft’s approach to AI-related security concerns. https://techcommunity.microsoft.com/blog/microsoft-security-blog/ai-security-essentials-what-companies-worry-about-and-how-microsoft-helps/4436639

macOS TCC Vulnerability: Spotlight-based flaw analysis. https://www.microsoft.com/en-us/security/blog/2025/07/28/sploitlight-analyzing-a-spotlight-based-macos-tcc-vulnerability/

Copilot Security Assessment: Microsoft’s framework for secure AI deployments. https://security-for-ai-assessment.microsoft.com/

Identity & Access Management

Modern Identity Defense: New threat detection tools from Microsoft. https://www.microsoft.com/en-us/security/blog/2025/07/31/modernize-your-identity-defense-with-microsoft-identity-threat-detection-and-response/

AI Agents & Identity: How AI is reshaping identity management. https://techcommunity.microsoft.com/blog/microsoft-entra-blog/ai-agents-and-the-future-of-identity-what%E2%80%99s-on-the-minds-of-your-peers/4436815

Token Protection in Entra: Preview of enhanced conditional access. https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-token-protection

Microsoft Earnings & Business Updates

FY25 Q4 Earnings: Strong growth in cloud and AI revenue. https://www.microsoft.com/en-us/Investor/earnings/FY-2025-Q4/press-release-webcast

Copilot & AI Enhancements

Copilot Without Recording: Use Copilot in Teams without meeting recordings. https://support.microsoft.com/en-us/office/use-copilot-without-recording-a-teams-meeting-a59cb88c-0f6b-4a20-a47a-3a1c9a818bd9

Copilot Search Features: Acronyms and bookmarks walkthrough. https://www.youtube.com/watch?v=nftEC73Cjxo

Copilot Search Launch: General availability announcement. https://techcommunity.microsoft.com/blog/microsoft365copilotblog/announcing-microsoft-365-copilot-search-general-availability-a-new-era-of-search/4435537

Productivity & Power Platform

PowerPoint Tips: 7 hidden features to elevate your presentations. https://techcommunity.microsoft.com/blog/microsoft365insiderblog/take-your-presentation-skills-to-the-next-level-with-these-7-lesser-known-powerp/4433700

New Power Apps: Generative AI meets enterprise-grade trust. https://www.microsoft.com/en-us/power-platform/blog/power-apps/introducing-the-new-power-apps-generative-power-meets-enterprise-grade-trust/

CIA Brief 20250803

image

Protection against multi-modal attacks with Microsoft Defender –

https://techcommunity.microsoft.com/blog/microsoftdefenderforoffice365blog/protection-against-multi-modal-attacks-with-microsoft-defender/4438786

Copilot Search: Acronyms and bookmarks –

https://www.youtube.com/watch?v=nftEC73Cjxo

Modernize your identity defense with Microsoft Identity Threat Detection and Response –

https://www.microsoft.com/en-us/security/blog/2025/07/31/modernize-your-identity-defense-with-microsoft-identity-threat-detection-and-response/

Frozen in transit: Secret Blizzard’s AiTM campaign against diplomats –

https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats/

AI agents and the future of identity: What’s on the minds of your peers? –

https://techcommunity.microsoft.com/blog/microsoft-entra-blog/ai-agents-and-the-future-of-identity-what%E2%80%99s-on-the-minds-of-your-peers/4436815

Enhance your LMS with the power of Microsoft 365 –

https://techcommunity.microsoft.com/blog/educationblog/preview-the-new-microsoft-365-lti%C2%AE-for-your-lms/4434606

Earnings Release FY25 Q4 –

https://www.microsoft.com/en-us/Investor/earnings/FY-2025-Q4/press-release-webcast

Use Copilot without recording a Teams meeting –

https://support.microsoft.com/en-us/office/use-copilot-without-recording-a-teams-meeting-a59cb88c-0f6b-4a20-a47a-3a1c9a818bd9

How Microsoft’s customers and partners accelerated AI Transformation in FY25 to innovate with purpose and shape their future success –

https://blogs.microsoft.com/blog/2025/07/28/how-microsofts-customers-and-partners-accelerated-ai-transformation-in-fy25-to-innovate-with-purpose-and-shape-their-future-success/

AI Security Essentials: What Companies Worry About and How Microsoft Helps –

https://techcommunity.microsoft.com/blog/microsoft-security-blog/ai-security-essentials-what-companies-worry-about-and-how-microsoft-helps/4436639

Sploitlight: Analyzing a Spotlight-based macOS TCC vulnerability –

https://www.microsoft.com/en-us/security/blog/2025/07/28/sploitlight-analyzing-a-spotlight-based-macos-tcc-vulnerability/

Security for AI Assessment | Microsoft 365 Copilot –

https://security-for-ai-assessment.microsoft.com/

After hours

Team Water – https://www.youtube.com/watch?v=KRhofr57Na8

Editorial

If you found this valuable, I’d appreciate a ‘like’ or perhaps a donation at https://ko-fi.com/ciaops. This helps me know that people enjoy what I have created and provides resources to allow me to create more content. If you have any feedback or suggestions around this, I’m all ears. You can also find me via email director@ciaops.com and on X (Twitter) at https://www.twitter.com/directorcia.

If you want to be part of a dedicated Microsoft Cloud community with information and interactions daily, then consider becoming a CIAOPS Patron – www.ciaopspatron.com.

Watch out for the next CIA Brief next week

Updated PowerShell PnP connection script

image

One of the challenging things about manipulating SharePoint items with PowerShell is that you need to use PnP PowerShell. I have always found this tricky to get working and connected, and it seems things have changed again.

Now, when you want to use PnP PowerShell, you can only do so using an Azure AD app! This is additionally challenging to set up manually, so I modified my free connection script:

https://github.com/directorcia/Office365/blob/master/o365-connect-pnp.ps1

to allow the creation of the Azure AD app as part of the process, and also to let you specify an existing Azure AD app previously created by the process as the way to connect. The documentation is here:

https://github.com/directorcia/Office365/wiki/SharePoint-Online-PnP-Connection-Script

but in essence, you run the script the first time and it will create an Azure AD app for you like this:

image

On subsequent runs you can simply reuse the app by specifying its ClientID GUID on the command line to make the connection. If you don’t, the script will create a new Azure AD app each time. So I suggest creating an Azure AD app the first time and using that same app going forward. Of course, consult the full online documentation for all the details.

Hopefully, this makes it a little easier to use PnP PowerShell in your environment.

Robert.Agent now recommends improved questions

image

I continue to work on my autonomous email agent created with Copilot Studio. A recent addition is that you might now get a response that includes something like this at the end of the information returned:

image

It is a suggestion for an improved prompt to generate better answers based on the original question.

The reason I created this was that I noticed many submissions were not ‘good’ prompts. In fact, most seem better suited to search engines than to AI. The easy solution was to get Copilot to suggest how to ask better questions.

Give it a go and let me know what you think.

Analysis of Intune Windows 10/11 Compliance Policy Settings for Strong Security

This report examines each setting in the provided Intune Windows 10/11 compliance policy JSON and evaluates whether it represents best practice for strong security on a Windows device. For each setting, we explain its purpose, configuration options, and why the chosen value helps ensure maximum security.


Device Health Requirements (Boot Security & Encryption)

Require BitLocker – BitLocker Drive Encryption is mandated on the OS drive (Require BitLocker: Yes). BitLocker uses the system’s TPM to encrypt all data on disk and locks encryption keys unless the system’s integrity is verified at boot[1]. The policy setting “Require BitLocker” ensures that data at rest is protected – if a laptop is lost or stolen, an unauthorized person cannot read the disk contents without proper authorization[1]. Options: Not configured (default, don’t check encryption) or Require (device must be encrypted with BitLocker)[1]. Setting this to “Require” is considered best practice for strong security, as unencrypted devices pose a high data breach risk[1]. In our policy JSON, BitLocker is indeed required[2], aligning with industry recommendations to encrypt all sensitive devices.

Require Secure Boot – This ensures the PC is using UEFI Secure Boot (Require Secure Boot: Yes). Secure Boot forces the system to boot only trusted, signed bootloaders. During startup, the UEFI firmware will verify that bootloader and critical kernel files are signed by a trusted authority and have not been modified[1]. If any boot file is tampered with (e.g. by a bootkit or rootkit malware), Secure Boot will prevent the OS from booting[1]. Options: Not configured (don’t enforce) or Require (must boot in secure mode)[1]. The policy requires Secure Boot[2], which is a best-practice security measure to maintain boot-time integrity. This setting helps ensure the device boots to a trusted state and is not running malicious firmware or bootloaders[1]. Requiring Secure Boot is recommended in frameworks like Microsoft’s security baselines and the CIS benchmarks for Windows, provided the hardware supports it (most modern PCs do)[1].

Require Code Integrity – Code integrity (a Device Health Attestation setting) validates the integrity of Windows system binaries and drivers each time they are loaded into memory. Enforcing this (Require code integrity: Yes) means that if any system file or driver is unsigned or has been altered by malware, the device will be reported as non-compliant[1]. Essentially, it helps detect kernel-level rootkits or unauthorized modifications to critical system components. Options: Not configured or Require (must enforce code integrity)[1]. The policy requires code integrity to be enabled[2], which is a strong security practice. This setting complements Secure Boot by continuously verifying system integrity at runtime, not just at boot. Together, Secure Boot and Code Integrity reduce the risk of persistent malware or unauthorized OS tweaks going undetected[1].

By enabling BitLocker, Secure Boot, and Code Integrity, the compliance policy ensures devices have a trusted startup environment and encrypted storage – foundational elements of a secure endpoint. These Device Health requirements align with best practices like Microsoft’s recommended security baselines (which also require BitLocker and Secure Boot) and are critical to protect against firmware malware, bootkits, and data theft[1][1]. Note: Devices that lack a TPM or do not support Secure Boot will be marked noncompliant, meaning this policy effectively excludes older, less secure hardware from the compliant device pool – which is intentional for a high-security stance.
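The combined effect of the three Device Health rules can be sketched in a few lines, assuming a hypothetical attestation report with illustrative field names (Intune’s actual attestation schema is richer than this):

```python
# Hypothetical Device Health evaluation: all three attested signals must be
# True, or the device is reported noncompliant. Field names are illustrative.
REQUIRED_HEALTH_SIGNALS = ("bitlocker_enabled", "secure_boot_enabled", "code_integrity_enabled")

def device_health_compliant(attestation: dict) -> bool:
    """A missing or False signal fails the whole Device Health check."""
    return all(attestation.get(signal) is True for signal in REQUIRED_HEALTH_SIGNALS)
```

For example, a device that attests BitLocker and Code Integrity but not Secure Boot fails the check as a whole, which is exactly the "excludes older, less secure hardware" behavior described above.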

Device OS Version Requirements

Minimum OS version – This policy defines the oldest Windows OS build allowed on a device. In the JSON, the Minimum OS version is set to 10.0.19043.10000 (which corresponds roughly to Windows 10 21H1 with a certain patch level)[2]. Any Windows device reporting an OS version lower than this (e.g. 20H2 or an unpatched 21H1) will be marked non-compliant. The purpose is to block outdated Windows versions that lack recent security fixes. End users on older builds will be prompted to upgrade to regain compliance[1]. Options: admin can specify any version string; leaving it blank means no minimum enforcement[1]. Requiring a minimum OS version is a best practice to ensure devices have received important security patches and are not running end-of-life releases[1]. The chosen minimum (10.0.19043) suggests that Windows 10 versions older than 21H1 are not allowed, which is reasonable for strong security since Microsoft no longer supports very old builds. This helps reduce vulnerabilities – for example, a device stuck on an early 2019 build would miss years of defenses (like improved ransomware protection in later releases). The policy’s min OS requirement aligns with guidance to keep devices updated to at least the N-1 Windows version or newer.

Maximum OS version – In this policy, no maximum OS version is configured (set to “Not configured”)[2]. That means devices running newer OS versions than the admin initially tested are not automatically flagged noncompliant. This is usually best, because setting a max OS version is typically used only to temporarily block very new OS upgrades that might be unapproved. Leaving it not configured (no upper limit) is often a best practice unless there’s a known issue with a future Windows release[1]. In terms of strong security, not restricting the maximum OS allows devices to update to the latest Windows 10/11 feature releases, which usually improves security. (If an organization wanted to pause Windows 11 adoption, they might set a max version to 10.x temporarily, but that’s a business decision, not a security improvement.) So the policy’s approach – no max version limit – is fine and does align with security best practice in most cases, as it encourages up-to-date systems rather than preventing them.

Why enforce OS versions? Keeping OS versions current ensures known vulnerabilities are patched. For example, requiring at least build 19043 means any device on 19042 or earlier (which have known exposures fixed in 19043+) will be blocked until updated[1]. This reduces the attack surface. The compliance policy will show a noncompliant device “OS version too low” with guidance to upgrade[1], helping users self-remediate. Overall, the OS version rules in this policy push endpoints to stay on supported, secure Windows builds, which is a cornerstone of strong device security.

(The policy also lists “Minimum/Maximum OS version for mobile devices” with the same values (10.0.19043.10000 / Not configured)[2]. This likely refers to Windows 10 Mobile or Holographic devices. It’s largely moot since Windows 10 Mobile is deprecated, but having the same minimum for “mobile” ensures something like a HoloLens or Surface Hub also requires an up-to-date OS. In our case, both fields mirror the desktop OS requirement, which is fine.)
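One subtlety worth noting is that build strings like 10.0.19043.10000 must be compared numerically, part by part – a plain string comparison would mis-order builds. A minimal sketch of the min/max check (the helper and names are illustrative, not Intune’s implementation):

```python
from typing import Optional

def parse_version(v: str) -> tuple:
    """Split a dotted build string into a tuple of ints for correct ordering."""
    return tuple(int(part) for part in v.split("."))

MINIMUM_OS = parse_version("10.0.19043.10000")  # value set in the policy JSON

def os_version_compliant(reported: str, maximum: Optional[str] = None) -> bool:
    version = parse_version(reported)
    if version < MINIMUM_OS:
        return False  # e.g. 20H2 (build 19042) or an unpatched 21H1 build
    if maximum is not None and version > parse_version(maximum):
        return False  # never triggers here, since maximum is "Not configured"
    return True
```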

Configuration Manager Compliance (Co-Management)

Require device compliance from Configuration Manager – This setting is Not configured in the JSON (i.e. it’s left at default)[2]. It applies only if the Windows device is co-managed with Microsoft Endpoint Configuration Manager (ConfigMgr/SCCM) in addition to Intune. Options: Not configured (Intune ignores ConfigMgr’s compliance state) or Require (device must also meet all ConfigMgr compliance policies)[1].

In our policy, leaving it not configured means Intune will not check ConfigMgr status – effectively the device only has to satisfy the Intune rules to be marked compliant. Is this best practice? For purely Intune-managed environments, yes – if you aren’t using SCCM baselines, there’s no need to require this. If an organization is co-managed and has on-premises compliance settings in SCCM (like additional security baselines or antivirus status monitored by SCCM), a strong security stance might enable this to ensure those are met too[1]. However, enabling it without having ConfigMgr compliance policies could needlessly mark devices noncompliant as “not reporting” (Intune would wait for a ConfigMgr compliance signal that might not exist).

So, the best practice depends on context: In a cloud-only or lightly co-managed setup, leaving this off (Not Configured) is correct[1]. If the organization heavily uses Configuration Manager to enforce other critical security settings, then best practice would be to turn this on so Intune treats any SCCM failure as noncompliance. Since this policy likely assumes modern management primarily through Intune, Not configured is appropriate and not a security gap. (Admins should ensure that either Intune covers all needed checks, or if not, integrate ConfigMgr compliance by requiring it. Here Intune’s own checks are quite comprehensive.)
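The co-management semantics described above condense to a small truth table – a sketch with illustrative names, where `None` models a co-managed device that never sends a ConfigMgr compliance signal:

```python
from typing import Optional

def overall_compliant(intune_ok: bool, configmgr_ok: Optional[bool], require_configmgr: bool) -> bool:
    """configmgr_ok=None models a device that reports no ConfigMgr state."""
    if not intune_ok:
        return False
    if require_configmgr:
        return configmgr_ok is True  # a missing signal counts as noncompliant
    return True  # Not configured: Intune's own rules decide alone
```

This makes the pitfall in the paragraph above concrete: turning the requirement on without ConfigMgr compliance policies in place leaves `configmgr_ok` as `None`, so otherwise-healthy devices fail.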

System Security: Password Requirements

A very important part of device security is controlling access with strong credentials. This policy enforces a strict device password/PIN policy under the “System Security” category:

  • Require a password to unlock – Yes (Required). This means the device cannot be unlocked without a password or PIN. Users must authenticate on wake or login[1]. Options: Not configured (no compliance check on whether a device has a lock PIN/password set) or Require (device must have a lock screen password/PIN)[1]. Requiring a password is absolutely a baseline security requirement – a device with no lock screen PIN is extremely vulnerable (anyone with physical access could get in). The policy correctly sets this to Require[2]. Intune will flag any device without a password as noncompliant, likely forcing the user to set a Windows Hello PIN or password. This is undeniably best practice; all enterprise devices should be password/PIN protected.
  • Block simple passwords – Yes (Block). “Simple passwords” refers to very easy PINs like 0000 or 1234 or repeating characters. The setting is Simple passwords: Block[1]. When enabled, Intune will require that the user’s PIN/passcode is not one of those trivial patterns. Options: Not configured (allow any PIN) or Block (disallow common simple PINs)[1]. Best practice is to block simple PINs because those are easily guessable if someone steals the device. This policy does so[2], meaning a PIN like “1111” or “12345” would not be considered compliant. Instead, users must choose less predictable codes. This is a straightforward security best practice (also recommended by Microsoft’s baseline and many standards) to defeat casual guessing attacks.
  • Password type – Alphanumeric. This setting specifies what kinds of credentials are acceptable. “Alphanumeric” in Intune means the user must set a password or PIN that includes a mix of letters and numbers (not just digits)[1]. The other options are “Device default” (which on Windows typically allows a PIN of just numbers) or explicitly Numeric (only numbers allowed)[1]. Requiring Alphanumeric effectively forces a stronger Windows Hello PIN – it must include at least one letter or symbol in addition to digits. The policy sets this to Alphanumeric[2], which is a stronger stance than a simple numeric PIN. It expands the space of possible combinations, making it much harder for an attacker to brute-force or guess a PIN. This is aligned with best practice especially if using shorter PIN lengths – requiring letters and numbers significantly increases PIN entropy. (If a device only allows numeric PINs, a 6-digit PIN has a million possibilities; an alphanumeric 6-character PIN has far more.) By choosing Alphanumeric, the admin is opting for maximum complexity in credentials.
    • Note: When Alphanumeric is required, Intune enables additional complexity rules (next setting) like requiring symbols, etc. If instead it was set to “Numeric”, those complexity sub-settings would not apply. So this choice unlocks the strongest password policy options[1].
  • Password complexity requirements – Require digits, lowercase, uppercase, and special characters. This policy is using the most stringent complexity rule available. Under Intune, for alphanumeric passwords/PINs you can require various combinations: the default is “digits & lowercase letters”; but here it’s set to “require digits, lowercase, uppercase, and special characters”[1]. That means the user’s password (or PIN, if using Windows Hello PIN as an alphanumeric PIN) must include at least one lowercase letter, one uppercase letter, one number, and one symbol. This is essentially a classic complex password policy. Options: a range from requiring just some character types up to all four categories[1]. Requiring all four types is generally seen as a strict best practice for high security (it aligns with many compliance standards that mandate a mix of character types in passwords). The idea is to prevent users from choosing, say, all letters or all numbers; a mix of character types increases password strength. Our policy indeed sets the highest complexity level[2]. This ensures credentials are harder to crack via brute force or dictionary attacks, albeit at the cost of memorability. It’s worth noting modern NIST guidance allows passphrases (which might not have all char types) as an alternative, but in many organizations, this “at least one of each” rule remains a common security practice for device passwords.
  • Minimum password length – 14 characters. This defines the shortest password or PIN allowed. The compliance policy requires the device’s unlock PIN/password to be 14 or more characters long[1]. Fourteen is a relatively high minimum; by comparison, many enterprise policies set min length 8 or 10. By enforcing 14, this policy is going for very strong password length, which is consistent with guidance for high-security environments (some standards suggest 12+ or 14+ characters for administrative or highly sensitive accounts). Options: 1–16 characters can be set (the admin chooses a number)[1]. Longer is stronger – increasing length exponentially strengthens resistance to brute-force cracking. At 14 characters with the complexity rules above, the space of possible passwords is enormous, making targeted cracking virtually infeasible. This is absolutely a best practice for strong security, though 14 might be considered slightly beyond typical user-friendly lengths. It aligns with guidance like using passphrases or very long PINs for device unlock. Our policy’s 14-char minimum[2] indicates a high level of security assurance (for context, the U.S. DoD STIGs often require 15-character passwords on Windows – 14 is on par with such strict standards).
  • Maximum minutes of inactivity before password is required – 15 minutes. This controls the device’s idle timeout, i.e. how long a device can sit idle before it auto-locks and requires re-authentication. The policy sets 15 minutes[2]. Options: The admin can define a number of minutes; when not set, Intune doesn’t enforce an inactivity lock (though Windows may have its own default)[1]. Requiring a password after 15 minutes of inactivity is a common security practice to balance security with usability. It means if a user steps away, at most 15 minutes can pass before the device locks itself and demands a password again. Shorter timers (5 or 10 min) are more secure (less window for an attacker to sit at a logged-in machine), whereas longer (30+ min) are more convenient but risk someone opportunistically using an unlocked machine. 15 minutes is a reasonable best-practice value for enterprises – it’s short enough to limit unauthorized access, yet not so short that it frustrates users excessively. Many security frameworks recommend 15 minutes or less for session locks. This policy’s 15-minute setting is in line with those recommendations and thus supports a strong security posture. It ensures a lost or unattended laptop will lock itself in a timely manner, reducing the chance for misuse.
  • Password expiration (days) – 365 days. This setting forces users to change their device password after a set period. Here it is one year[2]. Options: 1–730 days or not configured[1]. Requiring password change every 365 days is a moderate approach to password aging. Traditional policies often used 90 days, but that can lead to “password fatigue.” Modern NIST guidelines actually discourage frequent forced changes (unless there’s evidence of compromise) because overly frequent changes can cause users to choose weaker passwords or cycle old ones. However, annual expiration (365 days) is relatively relaxed and can be seen as a best practice in some environments to ensure stale credentials eventually get refreshed[1]. It’s basically saying “change your password once a year.” Many organizations still enforce yearly or biannual password changes as a precaution. In terms of strong security, this setting provides some safety net (in case a password was compromised without the user knowing, it won’t work indefinitely). It’s not as critical as the other settings; one could argue that with a 14-char complex password, forced expiration isn’t strictly necessary. But since it’s set, it reflects a security mindset of not letting any password live forever. Overall, 365 days is a reasonable compromise – it’s long enough that users can memorize a strong password, and short enough to ensure a refresh if by chance a password leaked over time. This is largely aligned with best practice, though some newer advice would allow no expiration if other controls (like multifactor auth) are in place. In a high-security context, annual changes remain common policy.
  • Number of previous passwords to prevent reuse – 5. This means when a password is changed (due to expiration or manual change), the user cannot reuse any of their last 5 passwords[1]. Options: Typically can set a value like 1–50 previous passwords to disallow. The policy chose 5[2]. This is a standard part of password policy – preventing reuse of recent passwords helps ensure that when users do change their password, they don’t just alternate between a couple of favorites. A history of 5 is pretty typical in best practices (common ranges are 5–10) to enforce genuine password updates. This setting is definitely a best practice in any environment with password expiration – otherwise users might just swap back and forth between two passwords. By disallowing the last 5, it will take at least 6 cycles (in this case 6 years, given 365-day expiry) before one could reuse an old password, by which time it’s hoped that password would have lost any exposure or the user comes up with a new one entirely. The policy’s value of 5 is fine and commonly recommended.
  • Require password when device returns from idle state – Yes (Required). This particularly applies to mobile or Holographic devices, but effectively it means a password is required upon device wake from an idle or sleep state[1]. On Windows PCs, this corresponds to the “require sign-in on wake” setting. Since our idle timeout is 15 minutes, this ensures that when the device is resumed (after sleeping or being idle past that threshold), the user must sign in again. Options: Not configured or Require[1]. The policy sets it to Require[2], which is certainly what we want – it’d be nonsensical to have all the above password rules but then not actually lock on wake! In short, this enforces that the password/PIN prompt appears after the idle period or sleep, which is absolutely a best practice. (Without this, a device could potentially wake up without a login prompt, which would undermine the idle timeout.) Windows desktop devices are indeed impacted by this on next sign-in after an idle, as noted in docs[1]. So this setting ties the loop on the secure password policy: not only must devices have strong credentials, but those credentials must be re-entered after a period of inactivity, ensuring continuous protection.
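Taken together, the rules above amount to a validation routine along these lines – a sketch only, since Intune evaluates the credential on-device and its exact “simple password” list is not published (the patterns below are illustrative):

```python
import string

SIMPLE_PATTERNS = {"0000", "1111", "1234", "12345", "123456"}  # illustrative, not Intune's list

def password_meets_policy(pw: str) -> bool:
    if len(pw) < 14:                                   # minimum length: 14
        return False
    if pw in SIMPLE_PATTERNS or len(set(pw)) == 1:     # block simple / repeating codes
        return False
    return (any(c.islower() for c in pw)               # complexity: all four
            and any(c.isupper() for c in pw)           # character classes required
            and any(c.isdigit() for c in pw)
            and any(c in string.punctuation for c in pw))
```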

Summary of Password Policy: The compliance policy highly prioritizes strong access control. It mandates a login on every device (no password = noncompliant), and that login must be complex (not guessable, not short, contains diverse characters). The combination of Alphanumeric, 14+ chars, all character types, no simple PINs is about as strict as Intune allows for Windows user sign-in credentials[1][2]. This definitely meets the definition of best practice for strong security – it aligns with standards like CIS benchmarks which also suggest enforcing password complexity and length. Users might need to use passphrases or a mix of PIN with letters to meet this, but that is intended. The idle lock at 15 minutes and requirement to re-authenticate on wake ensure that even an authorized session can’t be casually accessed if left alone for long. The annual expiration and password history add an extra layer to prevent long-term use of any single password or recycling of old credentials, which is a common corporate security requirement.

One could consider slight adjustments: e.g., some security frameworks (like NIST SP 800-63) would possibly allow no expiration if the password is sufficiently long and unique (to avoid users writing it down or making minor changes). However, given this is a “strong security” profile, the chosen settings err on the side of caution, which is acceptable. Another improvement for extreme security could be shorter idle time (like 5 minutes) to lock down faster, but 15 minutes is generally acceptable and strikes a balance. Overall, these password settings significantly harden the device against unauthorized access and are consistent with best practices.
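The search-space arithmetic behind these choices is easy to check (94 printable ASCII characters per position is an illustrative upper bound, not a figure from the policy):

```python
# Rough search-space sizes for the credential classes discussed above.
numeric_6_digit_pin = 10 ** 6          # 1,000,000 combinations
alphanumeric_6_char = 36 ** 6          # digits + lowercase: ~2.2 billion
complex_14_char = 94 ** 14             # printable-ASCII upper bound for 14 chars

# The 14-character complex space dwarfs any PIN space by many orders of magnitude.
```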

Encryption of Data Storage on Device

Require encryption of data storage on device – Yes (Required). Separate from the BitLocker requirement in Device Health, Intune also has a general encryption compliance rule. Enabling this means the device’s drives must be encrypted (with BitLocker, in the case of Windows) or else it’s noncompliant[1]. In our policy, “Encryption: Require” is set[2]. Options: Not configured or Require[1]. This is effectively a redundant safety net given BitLocker is also specifically required. According to Microsoft, the “Encryption of data storage” check looks for any encryption present (on the OS drive), and specifically on Windows it checks BitLocker status via a device report[1]. It’s slightly less robust than the Device Health attestation for BitLocker (which needs a reboot to register, etc.), but it covers the scenario generally[1].

From a security perspective, requiring device encryption is unquestionably best practice. It ensures that if a device’s drive isn’t encrypted (for example, BitLocker not enabled or turned off), the device will be flagged. This duplicates the BitLocker rule; having both doesn’t hurt – in fact, Microsoft documentation suggests the simpler encryption compliance might catch the state even if attestation hasn’t updated (though the BitLocker attestation is more reliable for TPM verification of encryption)[1].

In practice, an admin could use one or the other. This policy enables both, which indicates a belt-and-suspenders approach: either way, an unencrypted device will not slip through. This is absolutely aligned with strong security – all endpoints must have storage encryption, mitigating the risk of data exposure from lost or stolen hardware. Modern best practices (e.g. CIS, regulatory requirements like GDPR for laptops with personal data) often mandate full-disk encryption; here it’s enforced twice. The documentation even notes that relying on the BitLocker-specific attestation is more robust (it checks at the TPM level and knows the device booted with BitLocker enabled)[1][1]. The generic encryption check is a bit more broad but for Windows equates to BitLocker anyway. The key point is the policy requires encryption, which we already confirmed is a must-have security control. If BitLocker was somehow not supported on a device (very rare on Windows 10/11, since even Home edition has device encryption now), that device would simply fail compliance – again, meaning only devices capable of encryption and actually encrypted are allowed, which is appropriate for a secure environment.

(Note: Since both “Require BitLocker” and “Require encryption” are turned on, an Intune admin should be aware that a device might show two noncompliance messages for essentially the same issue if BitLocker is off. Users would see that they need to turn on encryption to comply. Once BitLocker is enabled and the device rebooted, both checks will pass[1][1]. The rationale for using both might be to ensure that even if the more advanced attestation didn’t report, the simpler check would catch it.)
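The dual-check behavior in the note above can be sketched as two independent findings for one root cause (the message strings are illustrative, not Intune’s exact wording):

```python
def encryption_findings(attested_bitlocker: bool, reported_encrypted: bool) -> list:
    """Each enabled rule produces its own noncompliance finding."""
    findings = []
    if not attested_bitlocker:
        findings.append("Require BitLocker: noncompliant")
    if not reported_encrypted:
        findings.append("Require encryption of data storage: noncompliant")
    return findings
```

An unencrypted device trips both rules at once, which is why a user may see two messages until BitLocker is enabled and the device reboots.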

Device Security Settings (Firewall, TPM, AV, Anti-spyware)

This section of the policy ensures that essential security features of Windows are active:

  • Firewall – Require. The policy mandates that the Windows Defender Firewall is enabled on the device (Firewall: Require)[1]. This means Intune will mark the device noncompliant if the firewall is turned off or if a user/app tries to disable it. Options: Not configured (do not check firewall status) or Require (firewall must be on)[1]. Requiring the firewall is definitely best practice – a host-based firewall is a critical first line of defense against network-based attacks. The Windows Firewall helps block unwanted inbound connections and can enforce outbound rules as well. By ensuring it’s always on (and preventing users from turning it off), the policy guards against scenarios where an employee might disable the firewall and expose the machine to threats[1]. This setting aligns with Microsoft recommendations and CIS Benchmarks, which also advise that Windows Firewall be enabled on all profiles. Our policy sets it to Require[2], which is correct for strong security. (One thing to note: if there were any conflicting GPO or config that turns the firewall off or allows all traffic, Intune would consider that noncompliant even if Intune’s own config profile tries to enable it[1] – essentially, Intune checks the effective state. Best practice is to avoid conflicts and keep the firewall defaults to block inbound unless necessary[1].)
  • Trusted Platform Module (TPM) – Require. This check ensures the device has a TPM chip present and enabled (TPM: Require)[1]. Intune will look for a TPM security chip and mark the device noncompliant if none is found or it’s not active. Options: Not configured (don’t verify TPM) or Require (TPM must exist)[1]. TPM is a hardware security module used for storing cryptographic keys (like BitLocker keys) and for platform integrity (measured boot). Requiring a TPM is a strong security stance because it effectively disallows devices that lack modern hardware security support. Most Windows 10/11 PCs do have TPM 2.0 (Windows 11 even requires it), so this is feasible and aligns with best practices. It ensures features like BitLocker are using TPM protection and that the device can do hardware attestation. The policy sets TPM to required[2], which is a best practice consistent with Microsoft’s own baseline (they recommend excluding non-TPM machines, as those are typically older or less secure). By enforcing this, you guarantee that keys and sensitive operations can be hardware-isolated. A device without TPM could potentially store BitLocker keys in software (less secure) or not support advanced security like Windows Hello with hardware-backed credentials. So from a security viewpoint, this is the right call. Any device without a TPM (or with it disabled) will need remediation or replacement, which is acceptable in a high-security environment. This reflects a zero-trust hardware approach: only modern, TPM-equipped devices can be trusted fully[1].
  • Antivirus – Require. The compliance policy requires that antivirus protection is active and up-to-date on the device (Antivirus: Require)[1]. Intune checks the Windows Security Center status for antivirus. If no antivirus is registered, or if the AV is present but disabled/out-of-date, the device is noncompliant[1]. Options: Not configured (don’t check AV) or Require (must have AV on and updated)[1]. It’s hard to overstate the importance of this: running a reputable, active antivirus/antimalware is absolutely best practice on Windows. The policy’s requirement means every device must have an antivirus engine running and not report any “at risk” state. Windows Defender Antivirus or a third-party AV that registers with Security Center will satisfy this. If a user has accidentally turned off real-time protection or if the AV signatures are old, Intune will flag it[1]. Enforcing AV is a no-brainer for strong security. This matches all industry guidance (e.g., CIS Controls highlight the need for anti-malware on all endpoints). Our policy does enforce it[2].
  • Antispyware – Require. Similar to antivirus, this ensures anti-spyware (malware protection) is on and healthy (Antispyware: Require)[1]. In modern Windows terms, “antispyware” is essentially covered by Microsoft Defender Antivirus as well (Defender handles viruses, spyware, all malware). But Intune treats it as a separate compliance item to check in Security Center. This setting being required means the anti-malware software’s spyware detection component (like Defender’s real-time protection for spyware/PUPs) must also be enabled and not outdated[1]. Options: Not configured or Require, analogous to antivirus[1]. The policy sets it to Require[2]. This is again best practice – it ensures comprehensive malware protection is in place. In effect, having both AV and antispyware required just double-checks that the endpoint’s security suite is fully active. If using Defender, it covers both; if using a third-party suite, as long as it reports to Windows Security Center for both AV and antispyware status, it will count. This redundancy helps catch any scenario where maybe virus scanning is on but spyware definitions are off (though that’s rare with unified products). For our purposes, requiring antispyware is simply reinforcing the “must have anti-malware” rule – clearly aligned with strong security standards.
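A compact way to picture how these four requirements combine, assuming a hypothetical Windows Security Center snapshot (the key names are illustrative, not the actual WMI/Security Center schema):

```python
REQUIRED_FEATURES = ("firewall", "tpm", "antivirus", "antispyware")

def failing_features(status: dict) -> list:
    """Return the required features that are missing or not reporting healthy."""
    return [feature for feature in REQUIRED_FEATURES if status.get(feature) is not True]
```

An empty return list means the device passes this section; any entry (say, a disabled firewall) makes the whole device noncompliant.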

Collectively, these Device Security settings (Firewall, TPM, AV, antispyware) ensure that critical protective technologies are in place on every device:

  • The firewall requirement guards against network attacks and unauthorized connections[1].
  • The TPM requirement ensures hardware-based security for encryption and identity[1].
  • The AV/antispyware requirements ensure continuous malware defense and that no device is left unprotected against viruses or spyware[1].

All are definitely considered best practices. In fact, running without any of these (no firewall, no AV, etc.) would be considered a serious security misconfiguration. This policy wisely enforces all of them. Any device not meeting these (e.g., someone attempts to disable Defender Firewall or uninstall AV) will get swiftly flagged, which is exactly what we want in a secure environment.

(Side note: The policy’s reliance on Windows Security Center means it’s vendor-agnostic; e.g., if an organization uses Symantec or another AV, as long as that product reports a good status to Security Center, Intune will see the device as compliant for AV/antispyware. If a third-party AV is used that disables Windows Defender, that’s fine because Security Center will show another AV is active. The compliance rule will still require that one of them is active. So this is a flexible but strict enforcement of “you must have one”.)

Microsoft Defender Anti-malware Requirements

The policy further specifies settings under Defender (Microsoft Defender Antivirus) to tighten control of the built-in anti-malware solution:

  • Microsoft Defender Antimalware – Require. This means the Microsoft Defender Antivirus service must be running and cannot be turned off by the user[1]. If the device’s primary AV is Defender (as is default on Windows 10/11 when no other AV is installed), this ensures it stays on. Options: Not configured (Intune doesn’t ensure Defender is on) or Require (Defender AV must be enabled)[1]. Our policy sets it to Require[2], which is a strong choice. If a third-party AV is present, how does this behave? Typically, when a third-party AV is active, Defender goes into a passive mode but is still not “disabled” in Security Center terms – or it might hand over status. This setting primarily aims to prevent someone from turning off Defender without another AV in place. Requiring Defender antivirus to be on is a best practice if your organization relies on Defender as the standard AV. It ensures no one (intentionally or accidentally) shuts off Windows’ built-in protection[1]. It essentially overlaps with the “Antivirus: Require” setting, but is more specific. The fact that both are set implies this environment expects to use Microsoft Defender on all machines (which is common for many businesses). In a scenario where a user installed a 3rd party AV that doesn’t properly report to Security Center, having this required might actually conflict (because Defender might register as off due to third-party takeover, thus Intune might mark noncompliant). But assuming standard behavior, if third-party AV is present and reporting, Security Center usually shows “Another AV is active” – Intune might consider the AV check passed but the “Defender Antimalware” specifically could possibly see Defender as not the active engine and flag it. In any case, for strong security, the ideal is to have a consistent AV (Defender) across all devices. So requiring Defender is a fine security best practice, and our policy reflects that intention.
It aligns with Microsoft’s own baseline for Intune when organizations standardize on Defender. If you weren’t standardized on Defender, you might leave this not configured and just rely on the generic AV requirement. Here it’s set, indicating a Defender-first strategy for antimalware.
  • Microsoft Defender Antimalware minimum version – 4.18.0.0. This setting specifies the lowest acceptable version of the Defender Anti-Malware client. The policy has defined 4.18.0.0 as the minimum[2]. Effect: If a device has an older Defender engine below that version, it’s noncompliant. Version 4.18.x is basically the Defender client that ships with Windows 10 and above (Defender’s engine is updated through Windows Update periodically, but the major/minor version has been 4.18 for a long time). By setting 4.18.0.0, essentially any Windows 10/11 with Defender should meet it (since 4.18 was introduced years ago). This catches only truly outdated Defender installations (perhaps if a machine had not updated its Defender platform in a very long time, or is running Windows 8.1/7, which had older Defender versions – though those OS wouldn’t be in a Win10 policy anyway). Options: Admin can input a specific version string, or leave blank (no version enforcement)[1]. The policy chose 4.18.0.0, presumably because that covers all modern Windows builds (for example, Windows 10 21H2 uses Defender engine 4.18.x). Requiring a minimum Defender version is a good practice to ensure the anti-malware engine itself isn’t outdated. Microsoft occasionally releases new engine versions with improved capabilities; if a machine somehow fell way behind (e.g., an offline machine that missed engine updates), it could have known issues or be missing detection techniques. By enforcing a minimum, you compel those devices to update their Defender platform. Version 4.18.0.0 is effectively the baseline for Windows 10, so this is a reasonable choice. It’s likely every device will already have a later version (like 4.18.210 or similar). As a best practice, some organizations might set this to an even more recent build number if they want to ensure a certain monthly platform update is installed.
In any case, including this setting in the policy shows thoroughness – it’s making sure Defender isn’t an old build. This contributes to security by catching devices that might have the Defender service but not the latest engine improvements. Since the policy’s value is low (4.18.0.0), practically all supported Windows 10/11 devices comply, but it sets a floor that excludes any unsupported OS or really old install. This aligns with best practice: keep security software up-to-date, both signatures and the engine. (The admin should update this minimum version over time if needed – e.g., if Microsoft releases Defender 4.19 or 5.x in the future, they might raise the bar.)
  • Microsoft Defender security intelligence up-to-date – Require. This ensures Defender’s virus definitions (security intelligence) are current (Security intelligence up-to-date: Yes)[1]. If Defender’s definitions are out of date, Intune marks the device noncompliant. “Up-to-date” typically means the signatures are no older than a threshold of a few days, as defined by the Windows Security Center criteria. Options: Not configured (don’t check definition currency) or Require (must have the latest definitions)[1]. It’s set to Require in our policy[2]. This is clearly a best practice – an antivirus is only as good as its latest definitions. The setting catches devices that, for instance, haven’t been online in a while or are failing to update Defender signatures; such devices are at risk from newer malware until they update. Marking them noncompliant forces an admin or user to act (e.g., connect to the internet to get updates)[1]. This contributes directly to security by keeping anti-malware defenses sharp, and it aligns with common guidance that AV must be kept current. Since Windows usually updates Defender signatures at least daily, this compliance rule likely treats a device as noncompliant if signatures are older than roughly three days (the Security Center flag). This policy absolutely should have this on, and it does – another check in the box for strong security practice.
  • Real-time protection – Require. This ensures that Defender’s real-time protection is enabled (Real-time protection: Require)[1]. Real-time protection means the antivirus actively scans files and processes as they are accessed, rather than only running periodic scans. If a user manually turned off real-time protection (which Windows allows for troubleshooting, and which malware sometimes attempts), this compliance rule flags the device. Options: Not configured or Require[1]. Our policy requires it[2]. This is a crucial setting: real-time protection is a must for proactive malware defense. Without it, viruses or spyware could execute without immediate detection, and you’d only catch them on the next scheduled scan (if at all). Best practice is never to leave real-time protection off except perhaps briefly to install certain software – and even then, compliance would catch it and mark the device noncompliant with policy. Enforcing it matches Microsoft’s baseline and any sane security policy: you want continuous scanning for threats in real time. The Intune CSP for this ensures that the “Real-time protection” toggle in Windows Security stays on[1]. Even a local admin who turns it off will flip the device to noncompliant (and possibly trigger Conditional Access to cut off corporate resource access), strongly incentivizing them not to. Good move.
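As a rough illustration, the minimum-version comparison implied by the 4.18.0.0 setting can be sketched as a numeric, segment-by-segment check. This is a hypothetical helper, not Intune’s actual implementation – the point is that "4.18.2210.6" must compare higher than "4.12.x" numerically, not lexically:

```python
def meets_minimum(installed: str, minimum: str = "4.18.0.0") -> bool:
    """Compare dotted version strings segment by segment (numeric, not lexical)."""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(installed) >= parse(minimum)

# A device on an older Defender platform fails; any modern 4.18.x build passes.
print(meets_minimum("4.12.17007.18022"))  # False
print(meets_minimum("4.18.2210.6"))       # True
```

Tuple comparison handles the carry between segments correctly (4.18 beats 4.12 even though "12" sorts after "18" as text character by character in some schemes), which is why a plain string comparison would be the wrong tool here.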

In summary, the Defender-specific settings in this policy double down on malware protection:

  • The Defender AV engine must be active (and presumably they expect to use Defender on all devices)[1].
  • Defender must stay updated – both engine version and malware definitions[1][1].
  • Real-time scanning must be on at all times[1].

These are all clearly best practices for endpoint security; they ensure the built-in Windows security stack is fully utilized. The overlap with the general Antivirus/Antispyware checks means coverage is comprehensive: if a device doesn’t have Defender, the general AV-required check catches it; if it does, these specific settings enforce its quality and operation. No device should run with outdated or disabled Defender in a secure environment, and this compliance policy guarantees that.

(If an organization did use a third-party AV instead of Defender, they might not use these Defender-specific settings. The presence of these in the JSON indicates alignment with using Microsoft Defender as the standard. That is indeed a good practice nowadays, as Defender has top-tier ratings and seamless integration. Many “best practice” guides, including government blueprints, now assume Defender is the AV to use, due to its strong performance and integration with Defender for Endpoint.)
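For reference, the Defender-related checks discussed above correspond to properties of the `windows10CompliancePolicy` resource in Microsoft Graph, which is how such a policy appears in JSON form. A minimal sketch follows – the values mirror the policy discussed here, but this is illustrative only, not a complete or deployable policy body:

```python
# Sketch of the Defender-related fields in a windows10CompliancePolicy
# payload (Microsoft Graph). Illustrative, not a full policy.
defender_checks = {
    "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
    "defenderEnabled": True,        # Microsoft Defender Antimalware: Require
    "defenderVersion": "4.18.0.0",  # minimum Defender client version
    "signatureOutOfDate": True,     # require security intelligence up to date
    "rtpEnabled": True,             # require real-time protection
}
```

Consult the Graph API reference for the full schema; the property names above come from the documented resource type, while the surrounding structure (assignments, scheduled actions, and the non-Defender settings) is omitted.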

Microsoft Defender for Endpoint (MDE) – Device Threat Risk Level

Finally, the policy integrates with Microsoft Defender for Endpoint (MDE) by using the setting:

  • Require the device to be at or under the machine risk score – Medium. This ties into MDE’s threat intelligence, which assesses each managed device’s risk level based on threats detected on that endpoint. The compliance policy requires a device’s risk level to be Medium or lower to be considered compliant[1]. If MDE flags a device as High risk, Intune marks it noncompliant and can trigger protections (such as Conditional Access blocking that device). Options: Not configured (don’t use MDE risk in compliance) or one of Clear, Low, Medium, High as the maximum allowed threat level[1]. The chosen value, Medium, means any device with a threat rated High is noncompliant, while devices with Low or Medium threats remain compliant[1]. (Clear is the strictest – requiring absolutely no threats; High is the most lenient – tolerating even high threats.)[1]

Setting this to Medium is a somewhat balanced security stance. Let’s interpret it: MDE categorizes threats on devices (malware, suspicious activity) into risk levels. By allowing up to Medium, the policy is saying if a device has only low or medium-level threats, we still consider it compliant; but if it has any high-level threat, that’s unacceptable. High usually indicates serious malware outbreaks or multiple alerts, whereas low may indicate minimal or contained threats. From a security best-practice perspective, using MDE’s risk as a compliance criterion is definitely recommended – it adds an active threat-aware dimension to compliance. The choice of Medium as the cutoff is probably to avoid overly frequent lockouts for minor issues, while still reacting to major incidents.
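The “at or under” comparison can be pictured as a simple ordering over the threat levels. This is a hypothetical sketch of the logic – the real evaluation happens inside the Intune/MDE services:

```python
# Ordering of MDE machine risk levels, most secure first.
RISK_ORDER = ["clear", "low", "medium", "high"]

def is_compliant(device_risk: str, max_allowed: str = "medium") -> bool:
    """Device is compliant when its risk is at or under the allowed maximum."""
    return RISK_ORDER.index(device_risk) <= RISK_ORDER.index(max_allowed)

print(is_compliant("low"))   # True  (low is under medium)
print(is_compliant("high"))  # False (high exceeds medium)
```

Read this way, the policy’s choice of Medium is just where the cutoff sits on that ordering: Clear would move it to the far left (nothing tolerated), High to the far right (everything tolerated).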

Many security experts would advocate for even stricter: e.g. require Low or Clear (meaning even medium threats would cause noncompliance), especially in highly secure environments where any malware is concerning. In fact, Microsoft’s documentation notes “Clear is the most secure, as the device can’t have any threats”[1]. Medium is a reasonable compromise – it will catch machines with serious infections but not penalize ones that had a low-severity event that might have already been remediated. For example, if a single low-level adware was detected and quarantined, risk might be low and the device remains compliant; but if ransomware or multiple high-severity alerts are active, risk goes high and the device is blocked until cleaned[1].

In our policy JSON, it’s set to Medium[2], which is in line with many best practice guides (some Microsoft baseline recommendations also use Medium as the default, to balance security and usability). This is still considered a strong security practice because any device under an active high threat will immediately be barred. It leverages real-time threat intelligence from Defender for Endpoint to enhance compliance beyond just configuration. That means even if a device meets all the config settings above, it could still be blocked if it’s actively compromised – which is exactly what we want. It’s an important part of a Zero Trust approach: continuously monitor device health and risk, not just initial compliance.

One could tighten this to Low for maximum security (meaning even medium threats cause noncompliance); an organization with low tolerance for any malware might do that. However, Medium is often chosen to avoid too many disruptions. For our evaluation: including this setting at all is a best practice (many forget to use it), and the Medium threshold is acceptable for strong security – it catches big problems while allowing IT some leeway to investigate medium-level threats without immediate lockout. Importantly, at Medium only devices with severe threats (such as active malware that hasn’t been neutralized) are cut off, and those are exactly the devices that should be isolated until fixed.

To summarize, the Defender for Endpoint integration means this compliance policy isn’t just checking the device’s configuration, but also its security posture in real-time. This is a modern best practice: compliance isn’t static. The policy ensures that if a device is under attack or compromised (per MDE signals), it will lose its compliant status and thus can be auto-remediated or blocked from sensitive resources[1]. This greatly strengthens the security model. Medium risk tolerance is a balanced choice – it’s not the absolute strictest, but it is still a solid security stance and likely appropriate to avoid false positives blocking users unnecessarily.

(Note: Organizations must have Microsoft Defender for Endpoint properly set up and the devices onboarded for this to work. Given it’s in the policy, we assume that’s the case, which is itself a security best practice – having EDR (Endpoint Detection & Response) on all endpoints.)

Actions for Noncompliance and Additional Considerations

The JSON policy likely includes Actions for noncompliance (the blueprint shows an action “Mark device noncompliant (1)” meaning immediate)[2]. By default, Intune always marks a device as noncompliant if it fails a setting – which is what triggers Conditional Access or other responses. The policy can also be configured to send email notifications, or after X days perform device retire/wipe, etc. The snippet indicates the default action to mark noncompliant is at day 1 (immediately)[2]. This is standard and aligns with security best practice – you want noncompliant devices to be marked as such right away. Additional actions (like notifying user, or disabling the device) could be considered but are not listed.
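In Graph terms, actions for noncompliance live in the policy’s `scheduledActionsForRule` collection, and marking a device noncompliant without delay corresponds to a block action with no grace period. A hypothetical sketch (shape based on the Graph device compliance schema; the rule name and values are illustrative):

```python
# Sketch of an immediate "mark noncompliant" action in Graph terms.
# gracePeriodHours of 0 means no scheduled delay before the device
# is treated as noncompliant; other actionType values (notification,
# retire, wipe) could be added with their own grace periods.
scheduled_actions = [
    {
        "ruleName": "PasswordRequired",  # conventional placeholder rule name
        "scheduledActionConfigurations": [
            {"actionType": "block", "gracePeriodHours": 0},
        ],
    }
]
```

Additional entries in `scheduledActionConfigurations` would express the email notifications or delayed retire/wipe actions mentioned above.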

It’s worth noting a few maintenance and dependency points:

  • Updating the Policy: As new Windows versions release, the admin should review the Minimum OS version field and advance it when appropriate (for example, when Windows 10 21H1 becomes too old, they might raise the minimum to 21H2 or Windows 11). Similarly, the Defender minimum version can be updated over time. Best practice is to review compliance policies at least annually (or along with major new OS updates)[1][1] to keep them effective.
  • Device Support: Some settings have hardware prerequisites (TPM, Secure Boot, etc.). In a strong security posture, devices that don’t meet these (older hardware) should ideally be phased out. This policy enforces that by design. If an organization still has a few legacy devices without TPM, they might temporarily drop the TPM requirement or grant an exception group – but from a pure security standpoint, it’s better to upgrade those devices.
  • User Impact and Change Management: Enforcing these settings can pose adoption challenges. For example, requiring a 14-character complex password might generate more IT support queries or user friction initially. It is best practice to accompany such policy with user education and perhaps rollout in stages. The policy as given is quite strict, so ensuring leadership backing and possibly implementing self-service password reset (to handle expiry) would be wise. These aren’t policy settings per se, but operational best practices.
  • Complementary Policies: A compliance policy like this ensures baseline security configuration, but it doesn’t directly configure the settings on the device (except for password requirement which the user is prompted to set). It checks and reports compliance. To actually turn on things like BitLocker or firewall if they’re off, one uses Configuration Profiles or Endpoint Security policies in Intune. Best practice is to pair compliance policies with configuration profiles that enable the desired settings. For instance, enabling BitLocker via an Endpoint Security policy and then compliance verifies it’s on. The question focuses on compliance policy, so our scope is those checks, but it’s assumed the organization will also deploy policies to turn on BitLocker, firewall, Defender, etc., making it easy for devices to become compliant.
  • Protected Characteristics: Every setting here targets technical security and does not discriminate or involve user personal data, so no concerns there. From a privacy perspective, the compliance data is standard device security posture info.

Conclusion

Overall, each setting in this Windows compliance policy aligns with best practices for securing Windows 10/11 devices. The policy requires strong encryption, up-to-date and secure OS versions, robust password/PIN policies, active firewall and anti-malware, and even ties into advanced threat detection (Defender for Endpoint)[2][2]. These controls collectively harden the devices against unauthorized access, data loss, malware infections, and unpatched vulnerabilities.

Almost all configurations are set to their most secure option (e.g., requiring vs not, or maximum complexity) as one would expect in a high-security baseline:

  • Data protection is ensured by BitLocker encryption on disk[1].
  • Boot integrity is assured via Secure Boot and Code Integrity[1].
  • Only modern, supported OS builds are allowed[1].
  • Users must adhere to a strict password policy (complex, long, regularly changed)[1].
  • Critical security features (firewall, AV, antispyware, TPM) must be in place[1][1].
  • Endpoint Defender is kept running in real-time and up-to-date[1].
  • Devices under serious threat are quarantined via noncompliance[1].

All these are considered best practices by standards such as the CIS Benchmark for Windows and government cybersecurity guidelines (for example, the ASD Essential Eight in Australia, which this policy closely mirrors, calls for application control, patching, and admin privilege restriction – many of which this policy supports by ensuring fundamental security hygiene on devices).

Are there any settings that might not align with best practice? Perhaps the only debatable one is the 365-day password expiration – modern NIST guidelines suggest you don’t force changes on a schedule unless needed. However, many organizations still view an annual password change as reasonable policy in a defense-in-depth approach. It’s a mild requirement and not draconian, so it doesn’t significantly detract from security; if anything, it adds a periodic refresh which can be seen as positive (with the understanding that user education is needed to avoid predictable changes). Thus, we wouldn’t call it a wrong practice – it’s an accepted practice in many “strong security” environments, even if some experts might opt not to expire passwords arbitrarily. Everything else is straightforwardly as per best practice or even exceeding typical baseline requirements (e.g., 14 char min is quite strong).

Improvements or additions: The policy as given is already thorough. An organization could tighten the Defender for Endpoint risk level to Low (so that only essentially clean devices are compliant) if it wanted to be extra careful – though that could increase operational noise if minor issues trigger noncompliance too often[1]. It could also reduce the idle timeout to, say, 5 or 10 minutes for devices in very sensitive environments (15 minutes is standard, though stricter is always an option). Jailbreak detection, another common compliance setting, applies to mobile operating systems rather than Windows, which has no equivalent beyond what we covered (Device Health Attestation covers some integrity checks). Everything major in Windows compliance is covered here.

One more setting outside of this device policy that’s a tenant-wide setting is “Mark devices with no compliance policy as noncompliant”, which we would assume is enabled at the Intune tenant level for strong security (so that any device that somehow doesn’t get this policy is still not trusted)[3]. The question didn’t include that, but it’s a part of best practices – likely the organization would have set it to Not compliant at the tenant setting to avoid unmanaged devices slipping through[3].

In conclusion, each listed setting is configured in line with strong security best practices for Windows devices. The policy reflects an aggressive security posture: it imposes strict requirements that greatly reduce the risk of compromise. Devices that meet all these conditions will be quite well-hardened against common threats. Conversely, any device failing these checks is rightfully flagged for remediation, which helps the IT team maintain a secure fleet. This compliance policy, especially when combined with Conditional Access (to prevent noncompliant devices from accessing corporate data) and proper configuration policies (to push these settings onto devices), provides an effective enforcement of security standards across the Windows estate[3][3]. It aligns with industry guidelines and should substantially mitigate risks such as data breaches, malware incidents, and unauthorized access. Each setting plays a role: from protecting data encryption and boot process to enforcing user credentials and system health – together forming a comprehensive security baseline that is indeed consistent with best practices.


References

[1] Windows compliance settings in Microsoft Intune

[2] Windows 10/11 Compliance Policy | ASD’s Blueprint for Secure Cloud

[3] Device compliance policies in Microsoft Intune

Using AI Tools vs. Search Engines: A Comprehensive Guide

In today’s digital workspace, AI-powered assistants like Microsoft 365 Copilot and traditional search engines serve different purposes and excel in different scenarios. This guide explains why you should not treat an AI tool such as Copilot as a general web search engine, and details when to use AI over a normal search process. We also provide example Copilot prompts that outperform typical search queries in answering common questions.


Understanding AI Tools (Copilot) vs. Traditional Search Engines

AI tools like Microsoft 365 Copilot are conversational, context-aware assistants, whereas search engines are designed for broad information retrieval. Copilot is an AI-powered tool that helps with work tasks, generating responses in real-time using both internet content and your work content (emails, documents, etc.) that you have permission to access[1]. It is embedded within Microsoft 365 apps (Word, Excel, Outlook, Teams, etc.), enabling it to produce outputs relevant to what you’re working on. For example, Copilot can draft a document in Word, suggest formulas in Excel, summarize an email thread in Outlook, or recap a meeting in Teams, all by understanding the context in those applications[1]. It uses large language models (like GPT-4) combined with Microsoft Graph (your organizational data) to provide personalized assistance[1].

On the other hand, a search engine (like Google or Bing) is a software system specifically designed to search the World Wide Web for information based on keywords in a query[2]. A search engine crawls and indexes billions of web pages and, when you ask a question, it returns a list of relevant documents or links ranked by algorithms. The search engine’s goal is to help you find relevant information sources – you then read or navigate those sources to get your answer.

Key differences in how they operate:

  • Result Format: A traditional search engine provides you with a list of website links, snippets, or media results. You must click through to those sources to synthesize an answer. In contrast, Copilot provides a direct answer or content output (e.g. a summary, draft, or insight), often in a conversational format, without requiring you to manually open multiple documents. It can combine information from multiple sources (including your files and the web) into a single cohesive response on the spot[3].
  • Context and Personalization: Search engines can use your location or past behavior for minor personalization, but largely they respond the same way to anyone asking a given query. Copilot, however, is deeply personalized to your work context – it can pull data from your emails, documents, meetings, and chats via Microsoft Graph to tailor its responses[1]. For example, if you ask “Who is my manager and what is our latest project update?”, Copilot can look up your manager’s name from your Office 365 profile and retrieve the latest project info from your internal files or emails, giving a personalized answer. A public search engine would not know these personal details.
  • Understanding of Complex Language: Both modern search engines and AI assistants handle natural language, but Copilot (AI) can engage in a dialogue. You can ask Copilot follow-up questions or make iterative requests in a conversation, refining what you need, which is not how one interacts with a search engine. Copilot can remember context from earlier in the conversation for additional queries, as long as you stay in the same chat session or document, enabling complex multi-step interactions (e.g., first “Summarize this report,” then “Now draft an email to the team with those key points.”). A search engine treats each query independently and doesn’t carry over context from previous searches.
  • Learning and Adaptability: AI tools can adapt outputs based on user feedback or organization-specific training. Copilot uses advanced AI (LLMs) which can be “prompted” to adjust style or content. For instance, you can tell Copilot “rewrite this in a formal tone” or “exclude budget figures in the summary”, and it will attempt to comply. Traditional search has no such direct adaptability in generating content; it can only show different results if you refine your keywords.
  • Output Use Cases: Perhaps the biggest difference is in what you use them for: Copilot is aimed at productivity tasks and analysis within your workflow, while search is aimed at information lookup. If you need to compose, create, or transform content, an AI assistant shines. If you need to find where information resides on the web, a search engine is the go-to tool. The next sections will dive deeper into these distinctions, especially why Copilot is not a straight replacement for a search engine.

Limitations of Using Copilot as a Search Engine

While Copilot is powerful, you should not use it as a one-to-one substitute for a search engine. There are several reasons and limitations that explain why:

  • Accuracy and “Hallucinations”: AI tools sometimes generate incorrect information very confidently – a phenomenon often called hallucination. They do not simply fetch verified facts; instead, they predict answers based on patterns in training data. A recent study found that generative AI search tools were inaccurate about 60% of the time when answering factual queries, often presenting wrong information with great confidence[4]. In that evaluation, Microsoft’s Copilot (in a web search context) answered certain news queries completely inaccurately about 70% of the time[4]. In contrast, a normal search engine would have just pointed to the actual news articles. This highlights that Copilot may give an answer that sounds correct but isn’t, especially on topics outside your work context or beyond its training. Using Copilot as a general fact-finder can thus be risky without verification.
  • Lack of Source Transparency: When you search the web, you get a list of sources and can evaluate the credibility of each (e.g., you see it’s from an official website, a recent date, etc.). With Copilot, the answer comes fused together, and although Copilot does provide citations in certain interfaces (for instance, Copilot in Teams chat will show citations for the sources it used[1]), it’s not the same as scanning multiple different sources yourself. If you rely on Copilot alone, you might miss the nuance and multi-perspective insight that multiple search results would offer. In short, Copilot might tell you “According to the data, Project Alpha increased sales by 5%”, whereas a search engine would show you the report or news release so you can verify that 5% figure in context. Over-reliance on AI’s one-shot answer could be misleading if the answer is incomplete or taken out of context.
  • Real-Time Information and Knowledge Cutoff: Search engines are constantly updated – they crawl news sites, blogs, and the entire web continuously, meaning if something happened minutes ago, a search engine will likely surface it. Copilot’s AI model has a knowledge cutoff (it doesn’t automatically know information published after a certain point unless it performs a live web search on-demand). Microsoft 365 Copilot can fetch information from Bing when needed, but this is an optional feature under admin control[3][3], and Copilot has to decide to invoke it. If web search is disabled or if Copilot doesn’t recognize that it should look online, it will answer from its existing knowledge base and your internal data alone. Thus, for breaking news or very recent events, Copilot might give outdated info or no info at all, whereas a web search would be the appropriate tool. Even with web search enabled, Copilot generates a query behind the scenes and might not capture the exact detail you want, whereas you could manually refine a search engine query. In summary, Copilot is not as naturally in tune with the latest information as a dedicated search engine[5].
  • Breadth of Information: Copilot is bounded by what it has been trained on and what data you provide to it. It is excellent on enterprise data you have access to and general knowledge up to its training date, but it is not guaranteed to know about every obscure topic on the internet. A search engine indexes virtually the entire public web; if you need something outside of Copilot’s domain (say, a niche academic paper or a specific product review), a traditional search is more likely to find it. If you ask Copilot an off-topic question unrelated to your work or its training, it might struggle or give a generic answer. It’s not an open portal to all human knowledge in the way Google is.
  • Multiple Perspectives and Depth: Some research questions or decisions benefit from seeing diverse sources. For example, before making a decision you might want to read several opinions or analyses. Copilot will tend to produce a single synthesized answer or narrative. If you only use that, you could miss out on alternative viewpoints or conflicting data that a search could reveal. Search engines excel at exploratory research – scanning results can give you a quick sense of consensus or disagreement on a topic, something an AI’s singular answer won’t provide.
  • Interaction Style: Using Copilot is a conversation, which is powerful but can also be a limitation when you just need a quick fact with zero ambiguity. Sometimes, you might know exactly what you’re looking for (“ISO standard number for PDF/A format”, for instance). Typing that into a search engine will instantly yield the precise fact. Asking Copilot might result in a verbose answer or an attempt to be helpful beyond what you need. For quick, factoid-style queries (dates, definitions, simple facts), a search engine or a structured Q&A database might be faster and cleaner.
  • Cost and Access: While not a technical limitation, it’s worth noting that Copilot (and similar AI services) often comes with licensing costs or usage limits[6]. Microsoft 365 Copilot is a premium feature for businesses or certain Microsoft 365 plans. Conducting a large number of general searches through Copilot could be inefficient cost-wise if a free search engine could do the job. In some consumer scenarios, Copilot access might even be limited (for example, personal Microsoft accounts have a capped number of Copilot uses per month without an upgrade[6]). So, from a practical standpoint, you wouldn’t want to spend your limited Copilot queries on trivial lookups that Bing or Google could handle at no cost.
  • Ethical and Compliance Factors: Copilot is designed to respect organizational data boundaries – it won’t show you content from your company that you don’t have permission to access[1]. On the flip side, if you try to use it like a search engine to dig up information you shouldn’t access, it won’t bypass security (which is a good thing). A search engine might find publicly available info on a topic, but Copilot won’t violate privacy or compliance settings to fetch data. Also, in an enterprise, all Copilot interactions are auditable by admins for security[3]. This means your queries are logged internally. If you were using Copilot to search the web for personal reasons, that might be visible to your organization’s IT – another reason to use a personal device or external search for non-work-related queries.

Bottom line: Generative AI tools like Copilot are not primarily fact-finding tools – they are assistants for generating and manipulating content. Use them for what they’re good at (as we’ll detail next), and use traditional search when you need authoritative information discovery, multiple source verification, or the latest updates. If you do use Copilot to get information, be prepared to double-check important facts against a reliable source.


When to Use AI Tools (Copilot) vs. When to Use Search Engines

Given the differences and limitations above, there are distinct scenarios where using an AI assistant like Copilot is advantageous, and others where a traditional search is better. Below are detailed reasons and examples for each, to guide you on which tool to use for a given need:

Scenarios Where Copilot (AI) Excels:
  • Synthesizing Information and Summarization: When you have a large amount of information and need a concise summary or insight, Copilot shines. For instance, if you have a lengthy internal report or a 100-thread email conversation, Copilot can instantly generate a summary of key points or decisions. This saves you from manually reading through tons of text. One of Copilot’s standout uses is summarizing content; reviewers noted the ability to condense long PDFs into bulleted highlights as “indispensable, offering a significant boost in productivity”[7]. A search engine can’t summarize your private documents – that’s a job for AI.
  • Using Internal and Contextual Data: If your question involves data that is internal to your organization or personal workflow, use Copilot. No search engine can index your company’s SharePoint files or your Outlook inbox (those are private). Copilot, however, can pull from these sources (with proper permissions) to answer questions. For example, *“What decision did

References

[1] What is Microsoft 365 Copilot? | Microsoft Learn

[2] AI vs. Search Engine – What’s the Difference? | This vs. That

[3] Data, privacy, and security for web search in Microsoft 365 Copilot and …

[4] AI search engines fail accuracy test, study finds 60% error rate

[5] AI vs. Traditional Web Search: How Search Is Evolving – Kensium

[6] Microsoft 365 Copilot Explained: Features, Limitations and your choices

[7] Microsoft Copilot Review: Best Features for Smarter Workflows – Geeky …