How to Configure Microsoft 365 Business Premium to Block AI Browsers: A Complete Guide to Stopping Comet and Other Agentic Browsers

Executive Summary

In December 2025, Gartner issued an urgent advisory recommending that organizations “block all AI browsers for the foreseeable future” due to critical cybersecurity risks. AI browsers like Perplexity’s Comet and OpenAI’s ChatGPT Atlas introduce threats including irreversible data loss, prompt injection vulnerabilities, and unauthorized credential access. With 27.7% of organizations already having at least one user with an AI browser installed, the time to act is now. [computerworld.com]

This comprehensive guide provides step-by-step instructions for configuring Microsoft 365 Business Premium (M365 BP), specifically Microsoft Defender for Cloud Apps, to detect, monitor, and block AI-enabled browsers like Comet from accessing your enterprise resources.


Understanding the AI Browser Threat Landscape

Why AI Browsers Are Dangerous

According to Gartner analysts, “The real issue is that the loss of sensitive data to AI services can be irreversible and untraceable. Organizations may never recover lost data.” [computerworld.com]

Key Security Concerns:

  1. Autonomous Actions Without Oversight – AI browsers can autonomously navigate websites, fill out forms, and complete transactions while authenticated, creating accountability concerns for erroneous or malicious actions [computerworld.com]
  2. Traditional Controls Are Inadequate – “Traditional controls are inadequate for the new risks introduced by AI browsers, and solutions are only beginning to emerge,” according to Gartner’s senior director analyst Evgeny Mirolyubov [computerworld.com]
  3. Multi-Modal Communication Gaps – A major gap exists in inspecting multi-modal communications with browsers, including voice commands to AI browsers [computerworld.com]
  4. Immature Security Posture – Discovered vulnerabilities highlight broader concerns about the maturity of AI browser technology, with solutions likely taking “a matter of years rather than months” to mature [computerworld.com]

Prerequisites and Licensing Requirements

Required Licenses

To implement comprehensive AI browser blocking, you need: [wolkenman….dpress.com]

License Option | What’s Included
Microsoft 365 Business Premium + E5 Security Add-on | Defender for Cloud Apps + Defender for Endpoint
Microsoft 365 E5 / A5 / G5 | Full suite including Conditional Access App Control
Enterprise Mobility + Security E5 | Defender for Cloud Apps + Defender for Endpoint
Microsoft 365 F5 Security & Compliance | All required components
Microsoft 365 Business Premium + Defender for Cloud Apps Add-on | Minimum required configuration

Technical Prerequisites

Before implementing blocking policies, ensure: [learn.microsoft.com], [learn.microsoft.com]

  • Microsoft Defender for Cloud Apps license (standalone or bundled)
  • Microsoft Entra ID P1 license (standalone or bundled)
  • Microsoft Defender for Endpoint deployed and configured
  • Cloud Protection enabled in Defender for Endpoint [learn.microsoft.com]
  • Network Protection enabled in Defender for Endpoint [learn.microsoft.com]
  • Admin permissions – Global Administrator or Security Administrator role
  • Microsoft Defender Browser Protection extension installed on non-Edge browsers [learn.microsoft.com]
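If you want to verify licensing programmatically, the short Python sketch below lists your tenant’s subscribed SKUs through the Microsoft Graph API so you can confirm the Defender for Cloud Apps and Defender for Endpoint entitlements are present. It assumes an app registration with the Organization.Read.All permission; the acquire_token() helper is a placeholder you would replace with your own MSAL or Azure CLI token logic.

```python
# Minimal sketch: list tenant license SKUs via Microsoft Graph to confirm the
# Defender for Cloud Apps / Defender for Endpoint entitlements listed above.
# acquire_token() is a placeholder for your own token acquisition.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def acquire_token() -> str:
    # Placeholder: return a Graph access token obtained via MSAL or Azure CLI.
    raise NotImplementedError("Supply your own token acquisition here")

def list_subscribed_skus(token: str) -> None:
    resp = requests.get(
        f"{GRAPH}/subscribedSkus",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for sku in resp.json().get("value", []):
        # skuPartNumber identifies the product (e.g. SPB = Business Premium)
        print(f'{sku["skuPartNumber"]}: '
              f'{sku["consumedUnits"]}/{sku["prepaidUnits"]["enabled"]} assigned')

if __name__ == "__main__":
    list_subscribed_skus(acquire_token())
```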

Multi-Layered Defense Strategy

Blocking AI browsers requires a comprehensive, defense-in-depth approach that layers multiple Microsoft 365 security controls, as detailed in the phases below.


Configuration Guide: Step-by-Step Implementation

Phase 1: Enable Cloud Discovery for AI Applications

Objective: Gain visibility into which AI browsers and applications are being used in your organization.

Step 1.1: Access Cloud Discovery Dashboard

  1. Navigate to Microsoft Defender Portal (https://security.microsoft.com)
  2. Go to Cloud Apps → Cloud Discovery → Dashboard
  3. Set the time range to Last 90 days for comprehensive analysis [wolkenman….dpress.com]

Step 1.2: Filter for Generative AI Applications

  1. In the Cloud Discovery dashboard, click Category filter
  2. Select “Generative AI” from the category list [wolkenman….dpress.com]
  3. Review discovered AI applications with their risk scores
  4. Note applications with High Risk status (red indicators) [wolkenman….dpress.com]

Step 1.3: Identify AI Model Providers and MCP Servers

Beyond browsers, also identify: [wolkenman….dpress.com]

  • AI – Model Providers (Azure OpenAI API, Google Gemini API, Anthropic Claude API)
  • AI – MCP Servers (Model Context Protocol servers)

Navigate to: Cloud Apps → Cloud App Catalog → Filter by “AI – Model Providers” and “AI – MCP Servers”
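If you prefer to review the discovery data offline, the export from the Cloud Discovery dashboard can be sliced with a few lines of Python. The sketch below is illustrative only: the column names (“App name”, “Category”, “Risk score”) are assumptions about the CSV layout and may need adjusting to match your actual export.

```python
# Sketch: flag high-risk Generative AI apps from a Cloud Discovery CSV export.
# Column names below ("App name", "Category", "Risk score") are assumptions
# about the export layout -- adjust them to match your actual file headers.
import csv

def risky_genai_apps(path: str, max_risk_score: int = 6):
    with open(path, newline="", encoding="utf-8-sig") as fh:
        for row in csv.DictReader(fh):
            if row.get("Category", "").strip() != "Generative AI":
                continue
            try:
                score = int(row.get("Risk score", "10"))
            except ValueError:
                continue
            if score <= max_risk_score:   # lower score = riskier app
                yield row.get("App name", "<unknown>"), score

if __name__ == "__main__":
    for name, score in risky_genai_apps("cloud_discovery_export.csv"):
        print(f"{name}: risk score {score}")
```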


Phase 2: Configure Defender for Endpoint Integration

Objective: Enable automatic blocking of unsanctioned apps through network-level enforcement.

Step 2.1: Enable Enforce App Access

  1. In the Microsoft Defender Portal, navigate to Settings → Cloud Apps → Cloud Discovery → Microsoft Defender for Endpoint
  2. Toggle “Automatically block unsanctioned apps” to ON
  3. This creates automatic indicators in Defender for Endpoint when apps are marked as unsanctioned [wolkenman….dpress.com]

Step 2.2: Verify Network Protection Status

Ensure Network Protection is enabled for all browsers: [wolkenman….dpress.com]

  1. Navigate to Settings → Endpoints → Configuration Management
  2. Go to Enforcement Scope → Network Protection
  3. Verify status is set to “Block mode” (not just Audit mode)
  4. Apply to All devices or specific device groups

Why This Matters: Network Protection ensures that blocks work across all browsers (Chrome, Firefox, etc.), not just Microsoft Edge. [wolkenman….dpress.com]
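As a quick spot check on a single Windows test device, you can read the current Network Protection state locally by wrapping the built-in Get-MpPreference cmdlet, as in the sketch below (0 = disabled, 1 = block, 2 = audit). This is a convenience check only and does not replace the Intune/Defender configuration described above.

```python
# Sketch: spot-check Network Protection on a Windows test device by wrapping
# the built-in Get-MpPreference cmdlet. EnableNetworkProtection: 0 = off,
# 1 = block mode, 2 = audit mode.
import subprocess

def network_protection_state() -> int:
    out = subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         "(Get-MpPreference).EnableNetworkProtection"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    state = network_protection_state()
    labels = {0: "Disabled", 1: "Block mode", 2: "Audit mode"}
    print(f"Network Protection: {labels.get(state, state)}")
```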


Phase 3: Unsanction and Block Comet Browser

Objective: Mark Comet and other AI browsers as unsanctioned to trigger automatic blocking.

Step 3.1: Search for Comet in Cloud App Catalog

  1. Go to Cloud Apps → Cloud App Catalog
  2. Use the search function to find “Comet” or “Perplexity”
  3. Click on the application to review its risk assessment

Note: If Comet hasn’t been discovered yet in your environment, you can still add custom URLs for blocking (see Phase 6).

Step 3.2: Unsanction the Application

  1. Click the three dots (⋮) at the end of the application row
  2. Select “Unsanctioned” [learn.microsoft.com]
  3. A confirmation dialog will appear indicating the app will be blocked by Defender for Endpoint [wolkenman….dpress.com]
  4. Click Confirm

Step 3.3: Verify Indicator Creation

  1. Navigate to Settings → Endpoints → Indicators → URLs/Domains [wolkenman….dpress.com]
  2. Confirm that domains associated with Comet appear with action “Block execution”
  3. Processing may take 5-15 minutes

Example domains that may be blocked:

  • *.perplexity.ai
  • comet.perplexity.ai
  • Related CDN and API endpoints
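To confirm programmatically that the unsanctioning step produced the expected indicators, the sketch below lists existing indicators through the Defender for Endpoint API and prints any that reference Perplexity domains. It assumes an app registration with a threat-indicator (Ti.*) read permission on the WindowsDefenderATP API; acquire_token() is a placeholder you supply.

```python
# Sketch: confirm that Perplexity/Comet domains now exist as "Block" indicators
# in Defender for Endpoint after the app was unsanctioned.
import requests

MDE_API = "https://api.securitycenter.microsoft.com/api"

def acquire_token() -> str:
    # Placeholder: return an access token for the Defender for Endpoint API.
    raise NotImplementedError("Supply your own token acquisition here")

def comet_block_indicators(token: str):
    resp = requests.get(
        f"{MDE_API}/indicators",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    for ind in resp.json().get("value", []):
        if "perplexity" in ind.get("indicatorValue", "").lower():
            yield ind["indicatorValue"], ind.get("action")

if __name__ == "__main__":
    for value, action in comet_block_indicators(acquire_token()):
        print(f"{value}: {action}")
```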

Phase 4: Create Conditional Access Policies

Objective: Route traffic through Defender for Cloud Apps proxy for deep inspection and control.

Step 4.1: Create Base Conditional Access Policy

  1. Sign in to Microsoft Entra Admin Center (https://entra.microsoft.com)
  2. Navigate to Protection → Conditional Access → Policies
  3. Click + New policy [learn.microsoft.com]

Step 4.2: Configure Policy Settings

Policy Name: Block AI Browsers via Session Control

Assignments: [learn.microsoft.com]

Setting | Configuration
Users | Select All users (exclude break-glass accounts)
Target Resources | Select Office 365, SharePoint Online, Exchange Online
Conditions | Optional: Add device platform, location filters

Access Controls: [learn.microsoft.com]

  • Under Session → Select “Use Conditional Access App Control”
  • Choose “Use custom policy”
  • Click Select

Enable Policy: Set to Report-only initially for testing [learn.microsoft.com]
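If you manage Conditional Access as code, the same policy can be created through the Microsoft Graph API. The sketch below is a minimal example under the assumption that you have an app registration with Policy.ReadWrite.ConditionalAccess; acquire_token() and the break-glass object ID are placeholders, and the Office365 application group stands in for the individual workloads listed above.

```python
# Sketch: create the Step 4.2 policy with the Microsoft Graph API instead of
# the portal. Placeholders: acquire_token() and BREAK_GLASS_IDS.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
BREAK_GLASS_IDS = ["<object-id-of-break-glass-account>"]  # placeholder

def acquire_token() -> str:
    raise NotImplementedError("Return a Graph access token")

policy = {
    "displayName": "Block AI Browsers via Session Control",
    "state": "enabledForReportingButNotEnforced",   # Report-only for testing
    "conditions": {
        "users": {"includeUsers": ["All"], "excludeUsers": BREAK_GLASS_IDS},
        "applications": {"includeApplications": ["Office365"]},  # Office 365 app group
        "clientAppTypes": ["all"],
    },
    "sessionControls": {
        # "mcasConfigured" = Use Conditional Access App Control: Use custom policy
        "cloudAppSecurity": {"isEnabled": True, "cloudAppSecurityType": "mcasConfigured"},
    },
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {acquire_token()}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Created policy", resp.json()["id"])
```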

Step 4.3: Save and Validate

  1. Click Create
  2. Wait 5-10 minutes for policy propagation
  3. Test with a pilot user account

Critical Note: Ensure the “Microsoft Defender for Cloud Apps – Session Controls” application is NOT blocked by other Conditional Access policies, or session controls will fail. [learn.microsoft.com]


Phase 5: Create Session Policies to Block AI Browser User Agents

Objective: Create real-time session policies that identify and block AI browsers based on user-agent strings and behavioral patterns.

Step 5.1: Create Access Policy for User-Agent Blocking

This is one of the most effective methods to block specific browsers like Comet. [securityhq.com]

  1. In the Microsoft Defender Portal, navigate to Cloud Apps → Policies → Policy management
  2. Click Create policy → Access policy [learn.microsoft.com]

Step 5.2: Configure Access Policy Details

Basic Information: [learn.microsoft.com]

Field | Value
Policy Name | Block AI Browsers - Comet and Similar Agents
Policy Severity | High
Category | Access control
Description | Blocks access attempts from AI-enabled browsers including Comet, Atlas, and other agentic browsers based on user-agent detection

Step 5.3: Set Activity Filters

Activities matching all of the following: [learn.microsoft.com]

  1. App: Select applications to protect
    • Office 365
    • Exchange Online
    • SharePoint Online
    • Microsoft Teams
    • OneDrive for Business
  2. Client app: Select Browser [learn.microsoft.com]
  3. User agent tag:
    • Contains “Comet”
    • Or create custom user-agent filter (see Step 5.4)
  4. Device type: (Optional) Apply to specific device types

Step 5.4: Create Custom User-Agent String Filters

While Defender for Cloud Apps doesn’t expose direct user-agent string matching in the UI by default, you can leverage activity filters: [securityhq.com]

Known AI Browser User-Agent Patterns to Block:

User-Agent patterns (Create separate policies or use contains logic):
- Contains "Comet"
- Contains "Perplexity"
- Contains "axios" (common in automated tools)
- Contains "ChatGPT" (for Atlas browser)
- Contains "AI-Browser"
- Contains "agentic"
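Before encoding these patterns in policies, it can help to test them against user-agent strings you have actually captured. The sketch below simply compiles the substrings above into one case-insensitive matcher; it runs locally against any exported UA values and makes no calls to Microsoft services.

```python
# Sketch: compile the user-agent substrings above into one case-insensitive
# matcher so captured UA strings (e.g. from an Activity Log export) can be
# tested before you turn them into policies.
import re

AI_BROWSER_UA_PATTERNS = [
    "Comet", "Perplexity", "axios", "ChatGPT", "AI-Browser", "agentic",
]
AI_BROWSER_UA_RE = re.compile(
    "|".join(map(re.escape, AI_BROWSER_UA_PATTERNS)), re.IGNORECASE
)

def looks_like_ai_browser(user_agent: str) -> bool:
    return bool(AI_BROWSER_UA_RE.search(user_agent or ""))

if __name__ == "__main__":
    sample = "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 Comet/1.2.3"
    print(looks_like_ai_browser(sample))  # True
```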

Advanced Method – Using Session Policy with Inspection:

  1. Create a Session Policy instead of Access Policy
  2. Set Session control type to “Block activities” [learn.microsoft.com]
  3. Under Activity type, select relevant activities
  4. In Inspection method, configure content inspection rules

Step 5.5: Set Actions

Actions:

  • Select “Block”
  • Enable “Notify users” with custom message:
Access Denied: AI-Enabled Browser Detected

Your organization's security policy prohibits the use of AI-enabled browsers 
(such as Comet, Atlas, or similar tools) to access corporate resources due to 
data security and compliance requirements.

Please use Microsoft Edge, Chrome, or Firefox to access this resource.

If you believe this is an error, contact your IT helpdesk.

Step 5.6: Enable Governance Actions

  • Select “Send email to user”
  • Select “Alert severity” as High
  • Enable “Create an alert for each matching event”

Step 5.7: Activate the Policy

  1. Review all settings
  2. Click Create
  3. Policy becomes active immediately
  4. Monitor via Activity Log for matches

Phase 6: Block Comet Domains via Custom Indicators

Objective: Manually add Comet-related domains to Defender for Endpoint indicators for network-level blocking.

Step 6.1: Identify Comet-Related Domains

Based on Perplexity’s infrastructure, key domains include: [computerworld.com]

Primary Domains:
- perplexity.ai
- www.perplexity.ai
- comet.perplexity.ai
- api.perplexity.ai

CDN and Supporting Infrastructure:
- *.perplexity.ai (wildcard)
- assets.perplexity.ai
- cdn.perplexity.ai

Step 6.2: Create URL/Domain Indicators

  1. Navigate to Settings → Endpoints → Indicators → URLs/Domains
  2. Click + Add item

For each domain, configure:

Field | Value
Indicator | perplexity.ai
Action | Block
Scope | All device groups (or specific groups)
Title | Block Perplexity Comet Browser
Description | Blocks access to Perplexity Comet AI browser per organizational security policy
Severity | High
Generate alert | Yes
  3. Click Save
  4. Repeat for all identified domains
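If you have many domains to add, the Defender for Endpoint “Submit Indicator” API can create the same Block indicators in bulk. The sketch below assumes an app registration with the Ti.ReadWrite permission and a placeholder acquire_token(); wildcard entries such as *.perplexity.ai are left to the portal steps above.

```python
# Sketch: bulk-create the Step 6.2 indicators with the Defender for Endpoint
# Submit Indicator API rather than the portal.
import requests

MDE_API = "https://api.securitycenter.microsoft.com/api"
DOMAINS = ["perplexity.ai", "www.perplexity.ai", "comet.perplexity.ai", "api.perplexity.ai"]

def acquire_token() -> str:
    # Placeholder: return an access token for the Defender for Endpoint API.
    raise NotImplementedError("Supply your own token acquisition here")

def block_domain(token: str, domain: str) -> None:
    body = {
        "indicatorValue": domain,
        "indicatorType": "DomainName",
        "action": "Block",
        "title": "Block Perplexity Comet Browser",
        "description": "Blocks access to Perplexity Comet AI browser "
                       "per organizational security policy",
        "severity": "High",
        "generateAlert": True,
    }
    resp = requests.post(f"{MDE_API}/indicators",
                         headers={"Authorization": f"Bearer {token}"},
                         json=body, timeout=30)
    resp.raise_for_status()
    print(f"Created Block indicator for {domain}")

if __name__ == "__main__":
    token = acquire_token()
    for d in DOMAINS:
        block_domain(token, d)
```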

Step 6.3: Test Domain Blocking

  1. From a test device with Defender for Endpoint installed
  2. Navigate to https://www.perplexity.ai in any browser
  3. You should see a block notification similar to one of the following: [wolkenman….dpress.com]
This site has been blocked by your organization
Microsoft Defender SmartScreen blocked this unsafe site

This web page was blocked by Microsoft Defender Application Control
perplexity.ai has been blocked by your IT administrator


Phase 7: Create Cloud Discovery Policies for Alerting

Objective: Set up automated alerts when AI browsers are detected in your environment.

Step 7.1: Create App Discovery Policy

  1. Navigate to Cloud Apps → Policies → Policy Management
  2. Click Create policy → App discovery policy [learn.microsoft.com]

Step 7.2: Configure Discovery Policy

Policy Template: Use “New risky app” template or create custom [learn.microsoft.com]

Field | Configuration
Policy Name | Alert on New AI Browser Detection
Category | Cloud discovery
Risk score | High and Medium
App category | Select “Generative AI”
Traffic volume | Greater than 10 MB (adjust as needed)

Filters:

  • App category equals Generative AI
  • Risk score less than or equal to 6 (out of 10)
  • App tag equals Unsanctioned

Governance Actions:

  • Send email to security team
  • Create alert with High severity

Testing and Validation

Validation Checklist

Monitoring and Reporting

Activity Log Monitoring:

  1. Cloud Apps → Activity Log
  2. Filter by:
    • Policy: Select your AI browser blocking policies
    • Action taken: Block
    • Date range: Last 7 days

Defender for Endpoint Alerts:

  1. Incidents & Alerts → Alerts
  2. Filter by:
    • Category: Custom indicator block
    • Title: Contains “Perplexity” or “Comet”
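For scripted monitoring, recent alerts can also be pulled from the Microsoft Graph security API and filtered for Comet/Perplexity mentions client-side, as in the sketch below. It assumes an app registration with SecurityAlert.Read.All; acquire_token() is a placeholder.

```python
# Sketch: pull recent alerts from the Microsoft Graph security API (alerts_v2)
# and keep the ones whose titles mention Comet or Perplexity. Filtering is done
# client-side to keep the query simple.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
KEYWORDS = ("perplexity", "comet")

def acquire_token() -> str:
    raise NotImplementedError("Return a Graph access token")

def ai_browser_alerts(token: str):
    url = f"{GRAPH}/security/alerts_v2?$top=100"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    for alert in resp.json().get("value", []):
        title = (alert.get("title") or "").lower()
        if any(k in title for k in KEYWORDS):
            yield alert["createdDateTime"], alert.get("severity"), alert["title"]

if __name__ == "__main__":
    for created, severity, title in ai_browser_alerts(acquire_token()):
        print(created, severity, title)
```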

Advanced Configuration Options

Option 1: Device Compliance Requirements

Combine AI browser blocking with device compliance:

  1. In Conditional Access policy, add Conditions → Device platforms
  2. Require devices to be Compliant or Hybrid Azure AD Joined
  3. Use Intune compliance policies to check for:
    • Comet browser installation (custom script detection)
    • Other AI browser installations

Option 2: Warn and Educate Mode

Before full blocking, consider “Warn and Educate” mode: [learn.microsoft.com]

  1. Set indicators to “Warn” instead of “Block”
  2. Users see warning message but can proceed (with logging)
  3. Collect usage data for 2-4 weeks
  4. Transition to Block mode after user education

Option 3: Scoped Blocking by Device Groups

Target specific departments first:

  1. In Defender for Endpoint, create device groups:
    • Finance Team
    • Executive Leadership
    • High-Risk Departments
  2. Apply indicators only to these groups initially
  3. Expand gradually after validation

Option 4: DLP Integration for Data Leaving via AI Browsers

Even with blocks, ensure data leakage prevention:

  1. Create Microsoft Purview DLP policies
  2. Target “All locations” including endpoints
  3. Configure rules to detect sensitive data:
    • Credit card numbers
    • Social Security numbers
    • Confidential project names
  4. Block upload/sharing of sensitive content

Identifying Comet Browser Technical Indicators

User-Agent String Analysis

While official Comet user-agent strings aren’t publicly documented by Perplexity, AI browsers typically exhibit these patterns:

Common AI Browser User-Agent Characteristics:

Mozilla/5.0 (Platform) ... Comet/[version]
Mozilla/5.0 (Platform) ... Perplexity/[version]
Chromium-based with custom identifiers
May contain "AI", "Agent", "Agentic" in UA string

Detection Strategy:

  1. Review Activity Log in Defender for Cloud Apps
  2. Filter for unknown/suspicious user agents
  3. Export activity data with user-agent strings
  4. Analyze patterns using PowerShell or Excel
  5. Update policies based on findings

Network Traffic Patterns

Comet communicates with Perplexity cloud infrastructure: [computerworld.com]

  • High-frequency API calls to api.perplexity.ai
  • WebSocket connections for real-time AI responses
  • Upload of page content and browsing context
  • Telemetry to Perplexity servers

Monitor via Defender for Cloud Apps:

  • Cloud Apps → Activity Log
  • Filter by IP address ranges (if known)
  • Look for unusual upload patterns

Troubleshooting Common Issues

Issue 1: Blocks Not Working in Chrome/Firefox

Symptom: Comet/Perplexity sites accessible in non-Edge browsers

Solution: [wolkenman….dpress.com]

  1. Verify Network Protection is enabled in Defender for Endpoint
  2. Check Settings → Endpoints → Configuration Management
  3. Ensure status is “Block” not “Audit”
  4. Restart browser and test again

Issue 2: Conditional Access Policy Not Triggering

Symptom: Users can access M365 apps without session controls

Solution:

  1. Verify Conditional Access policy is in “On” mode (not Report-only) [learn.microsoft.com]
  2. Check that “Microsoft Defender for Cloud Apps – Session Controls” app is not blocked
  3. Ensure apps are listed as “Monitored” in Conditional Access App Control [securityhq.com]
  4. Clear browser cache and test in incognito mode

Issue 3: Legitimate Traffic Being Blocked

Symptom: False positives blocking valid user activity

Solution:

  1. Review Activity Log for specific blocked events
  2. Refine user-agent filters to be more specific
  3. Create exception policies for legitimate tools
  4. Use “Exclude” filters in policies for specific users/groups

Issue 4: Indicators Not Appearing in Defender for Endpoint

Symptom: Unsanctioned apps don’t create indicators

Solution:

  1. Verify “Enforce App Access” is enabled [wolkenman….dpress.com]
  2. Check that Defender for Endpoint integration is active
  3. Wait 15-30 minutes for synchronization
  4. Manually create indicators if automatic creation fails

Best Practices and Recommendations

Strategic Recommendations

  1. Phased Rollout Approach
    • Week 1-2: Report-only mode, gather usage data
    • Week 3-4: Warn mode for user education
    • Week 5+: Full block mode enforcement
  2. User Communication Strategy [computerworld.com]
    • Send organization-wide email explaining policy
    • Provide approved alternatives
    • Create FAQ document
    • Offer training on secure browsing practices
  3. Continuous Monitoring
    • Review Cloud Discovery weekly for new AI apps
    • Monitor activity logs daily for policy violations
    • Track emerging AI browser releases
    • Update indicators quarterly
  4. Exception Process
    • Create formal request process for exceptions
    • Require executive approval for high-risk apps
    • Document business justification
    • Apply additional controls for approved exceptions (DLP, session monitoring)
  5. Defense in Depth [wolkenman….dpress.com]
    • Don’t rely solely on browser blocking
    • Implement data loss prevention (DLP)
    • Use endpoint detection and response (EDR)
    • Enable Microsoft Purview for data governance
    • Deploy insider risk management

Policy Comparison Table

Method | Scope | Effectiveness | User Experience | Management Overhead
Cloud Discovery + Unsanctioning | Network-wide | ⭐⭐⭐⭐⭐ | Transparent (blocked before access) | Low (automated)
Session Policies | M365 Apps only | ⭐⭐⭐⭐ | May show warning messages | Medium (requires tuning)
Access Policies | M365 Apps only | ⭐⭐⭐⭐⭐ | Blocks before session starts | Medium
Manual Indicators | All network traffic | ⭐⭐⭐⭐ | Transparent | High (manual updates)
Conditional Access | Cloud apps only | ⭐⭐⭐⭐ | May require re-authentication | Low

Recommended Combination: Use Cloud Discovery + Unsanctioning AND Access Policies for comprehensive coverage.


Staying Current: Monitoring New AI Browsers

AI browsers are rapidly evolving. Stay ahead of threats:

Monthly Review Checklist

Cloud App Catalog Updates

  • Review newly discovered apps in Generative AI category
  • Check for new AI Model Providers
  • Assess risk scores of emerging tools

Threat Intelligence

  • Monitor Gartner reports on AI browser security [gartner.com]
  • Follow Microsoft Security Blog
  • Subscribe to CISA alerts
  • Track CVE databases for AI browser vulnerabilities

Policy Effectiveness

  • Review blocked connection attempts
  • Analyze bypass attempts
  • Update user-agent filters
  • Refine domain lists

Emerging AI Browsers to Monitor

Beyond Comet and Atlas, watch for:

  • Brave Leo Browser (AI-enhanced features)
  • Opera One (integrated AI)
  • Arc Browser (with AI capabilities)
  • SigmaOS (AI-powered browsing)
  • Browser Company products

Compliance and Documentation

Required Documentation

Maintain these records for audit purposes:

  1. Policy Documentation
    • Policy names, purposes, and justifications
    • Configuration settings and filters
    • Approval chains and stakeholder sign-offs
  2. Change Log
    • Policy modifications
    • Domain additions/removals
    • Exception approvals
  3. Incident Reports
    • Blocked access attempts
    • Policy violations
    • User complaints and resolutions
  4. Risk Assessment
    • Why AI browsers are blocked
    • Business impact analysis
    • Alternative solutions provided to users

Regulatory Considerations

Consider these compliance frameworks:

Framework | Relevance
GDPR | Data processing outside organization control
HIPAA | Protected health information exfiltration risk
SOX | Financial data protection requirements
PCI DSS | Cardholder data security
NIST 800-53 | Access control requirements

Conclusion: Taking Action Against AI Browser Risks

The threat posed by AI browsers like Perplexity’s Comet is real, immediate, and growing. With security experts uniformly recommending that organizations “block all AI browsers for the foreseeable future,” the time for action is now—not later. [pcmag.com], [gartner.com]

Key Takeaways:

  1. Gartner’s Warning is Clear: AI browsers introduce “irreversible and untraceable” data loss risks that traditional controls cannot adequately mitigate [computerworld.com]
  2. Multi-Layered Defense is Essential: Combining Cloud Discovery, Session Policies, Access Policies, and Network Protection provides comprehensive coverage
  3. Microsoft 365 Business Premium Provides the Tools: With Defender for Cloud Apps and Defender for Endpoint, you have enterprise-grade capabilities to detect and block AI browsers
  4. User Education is Critical: Technical controls must be paired with clear communication about why AI browsers pose risks and what alternatives are approved
  5. Continuous Vigilance Required: The AI browser landscape evolves rapidly; monthly reviews of your defenses are essential [computerworld.com]

Immediate Action Steps

This Week:

  1. ✅ Enable Cloud Discovery and filter for Generative AI apps
  2. ✅ Review current AI browser usage in your organization
  3. ✅ Enable “Enforce App Access” in Defender for Cloud Apps
  4. ✅ Verify Network Protection is enabled in Defender for Endpoint

Next Week:

  1. ✅ Create Conditional Access policy routing traffic to MDCA
  2. ✅ Unsanction Comet and other AI browsers
  3. ✅ Create custom domain indicators for Perplexity infrastructure
  4. ✅ Deploy in Report-only mode for pilot group

Within 30 Days:

  1. ✅ Create Access Policies with user-agent filtering
  2. ✅ Enable full blocking mode organization-wide
  3. ✅ Communicate policy to all users
  4. ✅ Establish ongoing monitoring processes

Additional Resources

Microsoft Documentation:

Security Research:

Community Resources:


Step-by-Step Guide: Setting Up Entra ID Conditional Access for Small Businesses

Understanding Conditional Access

Conditional Access is Microsoft’s Zero Trust policy engine that evaluates signals from users, devices, and locations to make automated access decisions and enforce organizational policies. Think of it as intelligent “if-then” statements: If a user wants to access a resource, then they must complete an action (like multifactor authentication).

For SMBs using Microsoft 365 Business Premium, Conditional Access provides enterprise-grade security without requiring complex infrastructure; Microsoft reports that enforcing MFA through policies like these blocks more than 99.9% of identity-based attacks.

Prerequisites

  • License Requirements: Microsoft 365 Business Premium (includes Entra ID P1) or Microsoft 365 E3/E5
  • Admin Role: Conditional Access Administrator or Global Administrator privileges
  • Preparation: Ensure all users have registered for MFA before implementing policies
  • Emergency Access Account: Create at least one break-glass account excluded from all policies

Phase 1: Initial Setup and Planning (Week 1)

Step 1: Turn Off Security Defaults

  1. Navigate to Microsoft Entra admin center (entra.microsoft.com)
  2. Go to Entra ID → Properties
  3. Select Manage security defaults
  4. Toggle Security defaults to Disabled
  5. Select My organization is using Conditional Access as the reason
  6. Click Save

Important: Only disable security defaults once you are ready to create your Conditional Access policies immediately afterward, so the tenant is not left without baseline protection.

Step 2: Create Emergency Access Accounts

  1. Create two cloud-only accounts with complex passwords
  2. Assign Global Administrator role to both accounts
  3. Store credentials securely (separate locations)
  4. Document these accounts for emergency use only
  5. Exclude these accounts from ALL Conditional Access policies
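Because a single policy that misses the exclusion can lock you out, it is worth periodically verifying the exclusions in bulk. The sketch below reads every Conditional Access policy through Microsoft Graph and flags any policy that does not exclude your break-glass accounts; it assumes Policy.Read.All permission, and the object IDs and acquire_token() helper are placeholders.

```python
# Sketch: verify that the emergency access (break-glass) accounts are excluded
# from every Conditional Access policy in the tenant.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
BREAK_GLASS_IDS = {"<object-id-1>", "<object-id-2>"}  # placeholders

def acquire_token() -> str:
    raise NotImplementedError("Return a Graph access token")

def policies_missing_exclusion(token: str):
    resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies",
                        headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    for policy in resp.json().get("value", []):
        excluded = set(policy["conditions"]["users"].get("excludeUsers", []))
        if not BREAK_GLASS_IDS <= excluded:   # not all break-glass IDs excluded
            yield policy["displayName"]

if __name__ == "__main__":
    for name in policies_missing_exclusion(acquire_token()):
        print(f"Break-glass accounts NOT excluded from: {name}")
```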

Step 3: Access the Conditional Access Portal

  1. Sign in to entra.microsoft.com
  2. Navigate to Entra ID → Conditional Access
  3. Select Policies to view the main dashboard

Phase 2: Create Baseline Policies (Week 1-2)

Policy 1: Require MFA for All Users

  1. Click New policy from templates
  2. Select Require multifactor authentication for all users template
  3. Name your policy: “Baseline: MFA for All Users”
  4. Under Assignments:
    • Users: All users
    • Exclude: Select your emergency access accounts
  5. Under Target resources:
    • Select All resources (formerly ‘All cloud apps’)
  6. Under Access controls → Grant:
    • Select Require multifactor authentication
  7. Set Enable policy to Report-only
  8. Click Create

Policy 2: Block Legacy Authentication

  1. Click New policy from templates
  2. Select Block legacy authentication template
  3. Name your policy: “Security: Block Legacy Authentication”
  4. Under Assignments:
    • Users: All users
    • Exclude: Emergency access accounts
  5. Under Conditions → Client apps:
    • Configure: Yes
    • Select Exchange ActiveSync clients and Other clients
  6. Under Access controls → Grant:
    • Select Block access
  7. Set Enable policy to Report-only
  8. Click Create
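For reference, the same legacy-authentication block can be expressed as a Microsoft Graph request if you script policy creation. The sketch below assumes Policy.ReadWrite.ConditionalAccess; the excluded object ID and acquire_token() helper are placeholders, and the policy is created in report-only state to match the steps above.

```python
# Sketch: the "Block legacy authentication" policy expressed as a Microsoft
# Graph request body, for scripted policy creation.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def acquire_token() -> str:
    raise NotImplementedError("Return a Graph access token")

policy = {
    "displayName": "Security: Block Legacy Authentication",
    "state": "enabledForReportingButNotEnforced",   # start in Report-only
    "conditions": {
        "users": {"includeUsers": ["All"],
                  "excludeUsers": ["<break-glass-object-id>"]},
        "applications": {"includeApplications": ["All"]},
        # Legacy protocols surface as these two client app types
        "clientAppTypes": ["exchangeActiveSync", "other"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}

resp = requests.post(f"{GRAPH}/identity/conditionalAccess/policies",
                     headers={"Authorization": f"Bearer {acquire_token()}"},
                     json=policy, timeout=30)
resp.raise_for_status()
print("Created", resp.json()["displayName"])
```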

Policy 3: Require MFA for Administrators

  1. Click New policy from templates
  2. Select Require multifactor authentication for admins template
  3. Name your policy: “Security: MFA for Admin Roles”
  4. Under Assignments:
    • Users: Select users and groups
    • Select Directory roles
    • Choose all administrative roles
    • Exclude: Emergency access accounts
  5. Under Access controls → Grant:
    • Select Require multifactor authentication
  6. Set Enable policy to Report-only
  7. Click Create

Phase 3: Testing and Validation (Week 2)

Step 1: Use the What If Tool

  1. Navigate to Conditional Access → Policies → What If
  2. Enter test scenarios:
    • Select a test user
    • Choose target applications
    • Set device platform and location
  3. Click What If to see which policies would apply
  4. Review both “Policies that will apply” and “Policies that will not apply”
  5. Document results for each test scenario

Step 2: Monitor Report-Only Mode

  1. Leave policies in Report-only mode for at least 7 days
  2. Navigate to Entra ID → Sign-in logs
  3. Filter by Conditional Access = Report-only
  4. Review impacts:
    • Check for “Report-only: Success” entries
    • Investigate any “Report-only: Failure” entries
    • Look for “Report-only: User action required” entries
  5. Address any issues before enforcement
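If you prefer to review report-only impact programmatically, the sketch below pulls recent sign-ins from the Microsoft Graph auditLogs/signIns endpoint and prints the ones where a policy result came back as reportOnlyFailure. It assumes AuditLog.Read.All permission; acquire_token() is a placeholder.

```python
# Sketch: scan recent sign-ins for report-only Conditional Access failures.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def acquire_token() -> str:
    raise NotImplementedError("Return a Graph access token")

def report_only_failures(token: str):
    url = f"{GRAPH}/auditLogs/signIns?$top=200"
    resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
    resp.raise_for_status()
    for signin in resp.json().get("value", []):
        for applied in signin.get("appliedConditionalAccessPolicies", []):
            if applied.get("result") == "reportOnlyFailure":
                yield (signin["createdDateTime"],
                       signin.get("userPrincipalName"),
                       applied.get("displayName"))

if __name__ == "__main__":
    for when, user, policy in report_only_failures(acquire_token()):
        print(f"{when}  {user}  would be blocked by: {policy}")
```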

Step 3: Pilot Testing

  1. Create a pilot group with 5-10 users
  2. Create a duplicate policy targeting only the pilot group
  3. Set this pilot policy to On (enforced)
  4. Monitor for 3-5 days
  5. Gather feedback from pilot users
  6. Address any issues identified

Phase 4: Production Deployment (Week 3)

Step 1: Enable Policies

  1. After successful testing, return to each policy
  2. Change Enable policy from Report-only to On
  3. Start with one policy at a time
  4. Wait 2-4 hours between enabling each policy
  5. Monitor sign-in logs after each activation

Step 2: Communicate to Users

  1. Send announcement email before enforcement
  2. Include:
    • What’s changing and when
    • Why it’s important for security
    • What users need to do (register for MFA)
    • Support contact information
  3. Provide MFA registration instructions
  4. Schedule optional training sessions

Phase 5: Advanced Policies (Week 4+)

Optional: Require Compliant Devices

Only implement after basic policies are stable

  1. Create new policy: “Security: Require Compliant Devices”
  2. Target high-value applications first
  3. Under Grant controls:
    • Select Require device to be marked as compliant
  4. Test thoroughly before enforcement

Optional: Location-Based Access

  1. Define trusted locations (office IP addresses)
  2. Create policies based on location:
    • Block access from specific countries
    • Require MFA when not in trusted location

Troubleshooting Common Issues

Users Can’t Sign In

  • Check sign-in logs for specific error messages
  • Use What If tool to identify blocking policies
  • Verify user has completed MFA registration
  • Temporarily exclude user while investigating

Policy Not Applying

  • Verify policy is set to “On” not “Report-only”
  • Check assignment conditions match user scenario
  • Review excluded users and groups
  • Wait 1-2 hours for policy propagation

Emergency Rollback

  1. Navigate to problematic policy
  2. Set Enable policy to Off
  3. Or exclude affected users temporarily
  4. Document issue for resolution
  5. Re-enable after fixing configuration

Training Resources

Microsoft Learn Modules (Free)

Documentation and Guides

Video Resources

Best Practices Summary

  • ✅ Always maintain emergency access accounts excluded from all policies
  • ✅ Test every policy in Report-only mode for at least 7 days
  • ✅ Use the What If tool before and after creating policies
  • ✅ Start with Microsoft’s template policies – they represent best practices
  • ✅ Document all policies and their business justification
  • ✅ Monitor sign-in logs regularly for anomalies
  • ✅ Communicate changes to users before enforcement
  • ✅ Have a rollback plan for every policy
  • ✅ Implement policies gradually, not all at once
  • ✅ Review and update policies quarterly

Building a Collaborative Microsoft 365 Copilot Agent: A Step-by-Step Guide

Creating a Microsoft 365 Copilot agent (a custom AI assistant within Microsoft 365 Copilot) can dramatically streamline workflows. These agents are essentially customised versions of Copilot that combine specific instructions, knowledge, and skills to perform defined tasks or scenarios[1]. The goal here is to build an agent that multiple team members can collaboratively develop and easily maintain – even if the original creator leaves the business. This report provides:

  • Step-by-step guidelines to create a Copilot agent (using no-code/low-code tools).
  • Best practices for multi-user collaboration, including managing edit permissions.
  • Documentation and version control strategies for long-term maintainability.
  • Additional tips to ensure the agent remains robust and easy to update.

Step-by-Step Guide: Creating a Microsoft 365 Copilot Agent

To build your Copilot agent without code, you will use Microsoft 365 Copilot Studio’s Agent Builder. This tool provides a guided interface to define the agent’s behavior, knowledge, and appearance. Follow these steps to create your agent:

As a result of the steps above, you have a working Copilot agent with its name, description, instructions, and any connected data sources or capabilities configured. You built this agent in plain language and refined it with no code required, thanks to Copilot Studio’s declarative authoring interface[2].

Before rolling it out broadly, double-check the agent’s responses for accuracy and tone, especially if it’s using internal knowledge. Also verify that the knowledge sources cover the expected questions. (If the agent couldn’t answer a question in testing, you might need to add a missing document or adjust instructions.)

Note: Microsoft also provides pre-built templates in Copilot Studio that you can use as a starting point (for example, templates for an IT help desk bot, a sales assistant, etc.)[2]. Using a template can jump-start your project with common instructions and sample prompts already filled in, which you can then modify to suit your needs.


Collaborative Development and Access Management

One key to long-term maintainability is ensuring multiple people can access and work on the agent. You don’t want the agent tied solely to its creator. Microsoft 365 Copilot supports this through agent sharing and permission controls. Here’s how to enable collaboration and manage who can use or edit the agent:

  • Share the Agent for Co-Authoring: After creating the agent, the original author can invite colleagues as co-authors (editors). In Copilot Studio, use the Share menu on the agent and add specific users by name or email for “collaborative authoring” access[3]. (You can only add individuals for edit access, not groups, and those users must be within your organisation.) Once shared, these teammates are granted the necessary roles (Environment Maker/Bot Contributor in the underlying Power Platform environment) automatically so they can modify the agent[3]. Within a few minutes, the agent will appear in their Copilot Studio interface as well. Now your agent effectively has multiple owners — if one person leaves, others still have full editing rights.
  • Ensure Proper Permissions: When sharing for co-authoring, make sure the colleagues have appropriate permissions in the environment. Copilot Studio will handle most of this via the roles mentioned, but it’s good for an admin to know who has edit access. By design, editors can do everything the owner can: edit content, configure settings, and share the agent further. Viewers (users who are granted use but not edit rights) cannot make changes[4]. Use Editor roles for co-authors and Viewer roles for end users as needed to control access[4]. For example, you may grant your whole team viewer access to use the agent, but only a smaller group of power users get editor access to change it. (The platform currently only allows assigning Editor permission to individuals, not to a security group, for safety[4].)
  • Collaborative Editing in Real-Time: Once multiple people have edit access, Copilot Studio supports concurrent editing of the agent’s topics (the conversational flows or content nodes). The interface will show an “Editing” indicator with the co-authors’ avatars next to any topic being worked on[3]. This helps avoid stepping on each other’s toes. If two people do happen to edit the same piece at once, Copilot Studio prevents accidental overwrites by detecting the conflict and offering choices: you can discard your changes or save a copy of the topic[3]. For instance, if you and a colleague unknowingly both edited the FAQ topic, and they saved first, when you go to save, the system might tell you a newer version exists. You could then choose to keep your version as a separate copy, review differences, and merge as appropriate. This built-in change management ensures that multi-author collaboration is safe and manageable.
  • Sharing the Agent for Use: In addition to co-authors, you likely want to share the finished agent with other employees so they can use it in Copilot. You can share the agent via a link or through your tenant’s app catalog. In Copilot Studio’s share settings, choose who can chat with (use) the agent. Options include “Anyone in your organization” or specific security groups[5]. For example, you might initially share it with just the IT department group for a pilot, or with everyone if it’s broadly useful. When a user adds the shared agent, it will show up in their Microsoft 365 Copilot interface for them to interact with. Note that sharing for use does not grant edit rights – it only allows using the agent[5]. Keep the sharing scope to “Only me” if it’s a draft not ready for others, but otherwise switch it to an appropriate audience so the agent isn’t locked to one person’s account[5].
  • Manage Underlying Resources: If your agent uses additional resources like Power Automate flows (actions) or certain connectors that require separate permissions, remember to share those as well. Sharing an agent itself does not automatically share any connected flow or data source with co-authors[3]. For example, if the agent triggers a Power Automate flow to update a SharePoint list, you must go into that flow and add your colleagues as co-owners there too[3]. Otherwise, they might be able to edit the agent’s conversation, but not open or modify the flow. Similarly, ensure any SharePoint sites or files used as knowledge sources have the right sharing settings for your team. A good practice is to use common team-owned resources (not one person’s private OneDrive file) for any knowledge source, so access can be managed by the team or admins.
  • Administrative Oversight: Because these agents become part of your organisation’s tools, administrators have oversight of shared agents. In the Microsoft 365 admin center (under Integrated Apps > Shared Agents), admins can see a list of all agents that have been shared, along with their creators, status, and who they’re shared with[1]. This means if the original creator does leave the company, an admin can identify any orphaned agents and reassign ownership or manage them as needed. Admins can also block or disable an agent if it’s deemed insecure or no longer appropriate[1]. This governance is useful for ensuring continuity and compliance – your agent isn’t tied entirely to one user’s account. From a planning perspective, it’s wise to have at least two people with full access to every mission-critical agent (one primary and one backup person), plus ensure your IT admin team is aware of the agent’s existence.

By following these practices, you create a safety net around your Copilot agent. Multiple team members can improve or update it, and no single individual is irreplaceable for its maintenance. Should someone exit the team, the remaining editors (or an admin) can continue where they left off.


Documentation and Version Control Practices

Even with a collaborative platform, it’s important to document the agent’s design and maintain version control as if it were any other important piece of software. This ensures that knowledge about how the agent works is not lost and changes can be tracked over time. Here are key practices:

  • Create a Design & Usage Document: Begin a living document (e.g. in OneNote or a SharePoint wiki) that describes the agent in detail. This should include the agent’s purpose, the problems it solves, and its scope (what it will and won’t do). Document the instructions or logic you gave it – you might even copy the core parts of the agent’s instruction text into this document for reference. Also list the knowledge sources connected (e.g. “SharePoint site X – HR Policies”) and any capabilities/flows added. This way, if a new colleague takes over the agent, they can quickly understand its configuration and dependencies. Include screenshots of the agent’s setup from Copilot Studio if helpful. If the agent goes through iterations, note what changed in each version (“Changelog: e.g. Added new Q&A section on 2025-08-16 to cover Covid policies”). This documentation will be invaluable if the original creator is not available to explain the agent’s behavior down the line.
  • Use Source Control for Agent Configuration (ALM): Treat the agent as a configurable solution that can be exported and versioned. Microsoft 365 Copilot agents built in Copilot Studio actually reside in the Power Platform environment, which means you can leverage Power Platform’s Application Lifecycle Management (ALM) features. Specifically, you can export the agent as a solution package and store that file for version control[6]. Using Copilot Studio, create a solution in the environment, add the agent to it, and export it as a solution .zip file. This exported solution contains the agent’s definition (topics, flows, etc.). You can keep these solution files in a source repository (like a GitHub or Azure DevOps repo) to track changes over time, similar to how you’d version code. Whenever you make significant updates to the agent, export an updated solution file (with a version number or date in the filename) and commit it to the repository. This provides a backup and a history. In case of any issue or if you need to restore or compare a previous version, you can import an older solution file into a sandbox environment[6]. Microsoft’s guidance explicitly supports moving agents between environments using this export/import method, which can double as a backup mechanism[6].
  • Implement CI/CD for Complex Projects (Optional): If your organisation has the capacity, you can integrate the agent development into a Continuous Integration/Continuous Deployment process. Using tools like Azure DevOps or GitHub Actions, you can automate the export/import of agent solutions between Dev, Test, and Prod environments. This kind of pipeline ensures that all changes are logged and pass through proper testing stages. Microsoft recommends maintaining healthy ALM processes with versioning and deployment automation for Copilot agents, just as you would for other software[7]. For example, you might do initial editing in a development environment, export the solution, have it reviewed in code review (even though it’s mostly configuration, you can still check the diff on the solution components), then import into a production environment for the live agent. This way, any change is traceable. While not every team will need full DevOps for a simple Copilot agent, this approach becomes crucial if your agent grows in complexity or business importance.
  • Consider the Microsoft 365 Agents SDK for Code-Based Projects: Another approach to maintainability is building the agent via code. Microsoft offers an Agents SDK that allows developers to create Copilot agents using languages like C#, JavaScript, or Python, and integrate custom AI logic (with frameworks like Semantic Kernel or LangChain)[8]. This is a more advanced route, but it has the advantage that your agent’s logic lives in code files that can be fully managed in source control. If your team has software engineers, they could use the SDK to implement the agent with standard dev practices (unit testing, code reviews, git version control, etc.). This isn’t a no-code solution, but it’s worth mentioning for completeness: a coded agent can be as collaborative and maintainable as any other software project. The SDK supports quick scaffolding of projects and deployment to Copilot, so you could even migrate a no-code agent to a coded one later if needed[8]. Only pursue this if you need functionality beyond what Copilot Studio offers or want deeper integration/testing – for most cases, the no-code approach is sufficient.
  • Keep the Documentation Updated: Whichever development path you choose, continuously update your documentation when changes occur. If a new knowledge source is added or a new capability toggled on, note it in the doc. Also record any design rationale (“We disabled the image generator on 2025-09-01 due to misuse”) so future maintainers understand past decisions. Good documentation ensures that even if original creators or key contributors leave, anyone new can come up to speed quickly by reading the material.

By maintaining both a digital paper trail (documents) and technical version control (solution exports or code repositories), you safeguard the project’s knowledge. This prevents the “single point of failure” scenario where only one person knows how the agent really works. It also makes onboarding new team members to work on the agent much easier.
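To make the export-and-commit habit easier to keep, a small helper script can timestamp and hash each exported solution file and append a changelog entry before you commit it. The sketch below is generic housekeeping, not part of any Microsoft tooling; the file and folder names are illustrative.

```python
# Sketch: archive an exported Copilot Studio solution .zip with a timestamp and
# a content hash, and append a changelog line, before committing to your repo.
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

def snapshot_solution(export_path: str, archive_dir: str = "agent-solution-history") -> Path:
    src = Path(export_path)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    digest = hashlib.sha256(src.read_bytes()).hexdigest()[:12]
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(exist_ok=True)
    dest = dest_dir / f"{src.stem}_{stamp}_{digest}.zip"
    shutil.copy2(src, dest)
    with open(dest_dir / "CHANGELOG.txt", "a", encoding="utf-8") as log:
        log.write(f"{stamp}  {dest.name}  (sha256:{digest})\n")
    return dest

if __name__ == "__main__":
    print("Archived:", snapshot_solution("CopilotAgentSolution.zip"))
```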


Additional Tips for a Robust, Maintainable Agent

Finally, here are additional recommendations to ensure your Copilot agent remains reliable and easy to manage in the long run:

  • Define a Clear Scope and Boundaries: A common pitfall is trying to make one agent do too much. It’s often better to have a focused agent that excels at a specific set of tasks than a catch-all that becomes hard to maintain. Clearly state what user needs the agent addresses. If later you find the scope creeping beyond original intentions (for example, your HR bot is suddenly expected to handle IT helpdesk questions), consider creating a separate agent for the new domain or using multi-agent orchestration, rather than overloading one agent. This keeps each agent simpler to troubleshoot and update. Also use the agent’s instructions to explicitly guard against out-of-scope requests (e.g., instruct it to politely decline questions unrelated to its domain) so that maintenance remains focused.
  • Follow Best Practices in Instruction Design: Well-structured instructions not only help the AI give correct answers, but also make the agent’s logic easier for humans to understand later. Use clear and action-oriented language in your instructions and avoid unnecessary complexity[9]. For example, instead of a vague instruction like “help with leaves,” write a specific rule: “If user asks about leave status, retrieve their leave request record from SharePoint and display the status.” Break down the agent’s workflow into ordered steps where necessary (using bullet or numbered lists in the instructions)[9]. This modular approach (goal → action → outcome for each step) acts like commenting your code – it will be much easier for someone else to modify the behavior if they can follow a logical sequence. Additionally, include a couple of example user queries and desired responses in the instructions (few-shot examples) for clarity, especially if the agent’s task is complex. This reduces ambiguity for both the AI and future editors.
  • Test Thoroughly and Collect Feedback: Continuous testing is key to robustness. Even after deployment, encourage users (or the team internally) to provide feedback if the agent gives an incorrect or confusing response. Periodically review the agent’s performance: pose new questions to it or check logs (if available) to see how it’s handling real queries. Microsoft 365 Copilot doesn’t yet provide full conversation logs to admins, but you can glean some insight via any integrated telemetry. If you have access to Azure Application Insights or the Power Platform CoE kit, use them – Microsoft suggests integrating these to monitor usage, performance, and errors for Copilot agents[7]. For example, Application Insights can track how often certain flows are called or if errors occur, and the Power Platform Center of Excellence toolkit can inventory your agent and its usage metrics[7]. Monitoring tools help you catch issues early (like an action failing because of a permissions error) and measure the agent’s value (how often it’s used, peak times, etc.). Use this data to guide maintenance priorities.
  • Implement Governance and Compliance Checks: Since Copilot agents can access organisational data, ensure that all security and compliance requirements are met. From a maintainability perspective, this means the agent should be built in accordance with IT policies (e.g., respecting Data Loss Prevention rules, not exposing sensitive info). Work with your admin to double-check that the agent’s knowledge sources and actions comply with company policy. Also, have a plan for regular review of content – for instance, if one of the knowledge base documents the agent relies on is updated or replaced, update the agent’s knowledge source to point to the new info. Remove any knowledge source that is outdated or no longer approved. Keeping the agent’s inputs current and compliant will prevent headaches (or forced takedowns) later on.
  • Plan for Handover: Since the question specifically addresses if the original creator leaves, plan for a smooth handover. This includes everything we’ve discussed (multiple editors, documentation, version history). Additionally, consider a short training session or demo for the team members who will inherit the agent. Walk them through the agent’s flows in Copilot Studio, show how to edit a topic, how to republish updates, etc. This will give them confidence to manage it. Also, make sure the agent’s ownership is updated if needed. Currently, the original creator remains the “Owner” in the system. If that person’s account is to be deactivated, it may be wise to have an admin transfer any relevant assets or at least note that co-owners are in place. Since admins can see the creator’s name on the agent, proactively communicate to IT that the agent has co-owners who will take over maintenance. This can avoid a scenario where an admin might accidentally disable an agent assuming no one can maintain it.
  • Regular Maintenance Schedule: Treat the agent as a product that needs occasional maintenance. Every few months (or whatever cadence fits your business), review if the agent’s knowledge or instructions need updates. For example, if processes changed or new common questions have emerged, update the agent to cover them. Also verify that all co-authors still have access and that their permissions are up to date (especially if your company uses role-based access that might change with team reorgs). A little proactive upkeep will keep the agent effective and prevent it from becoming obsolete or broken without anyone noticing.

By following the above tips, your Microsoft 365 Copilot agent will be well-positioned to serve users over the long term, regardless of team changes. You’ve built it with a collaborative mindset, documented its inner workings, and set up processes to manage changes responsibly. This not only makes the agent easy to edit and enhance by multiple people, but also ensures it continues to deliver value even as your organisation evolves.


Conclusion: Building a Copilot agent that stands the test of time requires forethought in both technology and teamwork. Using Microsoft’s no-code Copilot Studio, you can quickly create a powerful assistant tailored to your needs. Equally important is opening up the project to your colleagues, setting the right permissions so it’s a shared effort. Invest in documentation and consider leveraging export/import or even coding options to keep control of the agent’s “source.” And always design with clarity and governance in mind. By doing so, you create not just a bot, but a maintainable asset for your organisation – one that any qualified team member can pick up and continue improving, long after the original creator’s tenure. With these steps and best practices, your Copilot agent will remain helpful, accurate, and up-to-date, no matter who comes or goes on the team.

References

[1] Manage shared agents for Microsoft 365 Copilot – Microsoft 365 admin

[2] Use the Copilot Studio Agent Builder to Build Agents

[3] Share agents with other users – Microsoft Copilot Studio

[4] Control how agents are shared – Microsoft Copilot Studio

[5] Publish and Manage Copilot Studio Agent Builder Agents

[6] Export and import agents using solutions – Microsoft Copilot Studio

[7] Phase 4: Testing, deployment, and launch – learn.microsoft.com

[8] Create and deploy an agent with Microsoft 365 Agents SDK

[9] Write effective instructions for declarative agents

Comprehensive Guide to Microsoft Autopilot v2 Deployment in Microsoft 365 Business

Microsoft’s Windows Autopilot is a cloud-based suite of technologies designed to streamline the deployment and configuration of new Windows devices for organizations[1]. This guide provides a detailed look at the latest updates to Windows Autopilot – specifically the new Autopilot v2 (officially called Windows Autopilot Device Preparation) – and offers step-by-step instructions for implementing it in a Microsoft 365 Business environment. We will cover the core concepts, new features in Autopilot v2, benefits for businesses, the implementation process (from prerequisites to deployment), troubleshooting tips, and best practices for managing devices with Autopilot v2.

1. Overview of Microsoft Autopilot and Its Purpose

Windows Autopilot simplifies the Windows device lifecycle from initial deployment through end-of-life. It leverages cloud services (like Microsoft Intune and Microsoft Entra ID) to pre-configure devices out-of-box without traditional imaging. When a user unboxes a new Windows 10/11 device and connects to the internet, Autopilot can automatically join it to Azure/Microsoft Entra ID, enroll it in Intune (MDM), apply corporate policies, install required apps, and tailor the out-of-box experience (OOBE) to the organization[1][1]. This zero-touch deployment means IT personnel no longer need to manually image or set up each PC, drastically reducing deployment time and IT overhead[2]. In short, Autopilot’s purpose is to get new devices “business-ready” with minimal effort, offering benefits such as:

  • Reduced IT Effort – No need to maintain custom images for every model; devices use the OEM’s factory image and are configured via cloud policies[1][1].
  • Faster Deployment – Users only perform a few quick steps (like network connection and sign-in), and everything else is automated, so employees can start working sooner[1].
  • Consistency & Compliance – Ensures each device receives standard configurations, security policies, and applications, so they immediately meet organizational standards upon first use[2].
  • Lifecycle Management – Autopilot can also streamline device resets, repurposing for new users, or recovery scenarios (for example, using Autopilot Reset to wipe and redeploy a device)[1].

2. Latest Updates: Introduction of Autopilot v2 (Device Preparation)

Microsoft has recently introduced a next-generation Autopilot deployment experience called Windows Autopilot Device Preparation (commonly referred to as Autopilot v2). This new version is essentially a re-architected Autopilot aimed at simplifying and improving deployments based on customer feedback[3]. Autopilot v2 offers new capabilities and architectural changes that enhance consistency, speed, and reliability of device provisioning. Below is an overview of what’s new in Autopilot v2:

  • No Hardware Hash Import Required: Unlike the classic Autopilot (v1) which required IT admins or OEMs to register devices in Autopilot (upload device IDs/hardware hashes) beforehand, Autopilot v2 eliminates this step[4]. Devices do not need to be pre-registered in Intune; instead, enrollment can be triggered simply by the user logging in with their work account. This streamlines onboarding by removing the tedious hardware hash import process[3]. (If a device is already registered in the old Autopilot, the classic profile will take precedence – so using v2 means not importing the device beforehand[5].)
  • Cloud-Only (Entra ID) Join: Autopilot v2 currently supports Microsoft Entra ID (Azure AD) join only – it’s designed for cloud-based identity scenarios. Hybrid Azure AD Join (on-prem AD) is not supported in v2 at this time[3]. This focus on cloud join aligns with modern, cloud-first management in Microsoft 365 Business environments.
  • Single Unified Deployment Profile: The new Autopilot Device Preparation uses a single profile to define all deployment settings and OOBE customization, rather than separate “Deployment” and “ESP” profiles as in legacy Autopilot[3]. This unified profile encapsulates join type, user account type, and OOBE preferences, plus it lets you directly select which apps and scripts should install during the setup phase.
  • Enrollment Time Grouping: Autopilot v2 introduces an “Enrollment Time Grouping” mechanism. When a user signs in during OOBE, the device is automatically added to a specified Azure AD group on the fly, and any applications or configurations assigned to that group are immediately applied[5][5]. This replaces the old dependence on dynamic device groups (which could introduce delays while membership queries run). Result: faster and more predictable delivery of apps/policies during provisioning[5].
  • Selective App Installation (OOBE): With Autopilot v1, all targeted device apps would try to install during the initial device setup, possibly slowing things down. In Autopilot v2, the admin can pick up to 10 essential apps (Win32, MSI, Store apps, etc.) to install during OOBE; any apps not selected will be deferred until after the user reaches the desktop[3][6]. By limiting to 10 critical apps, Microsoft aimed to increase success rates and speed (as their telemetry showed ~90% of deployments use 10 or fewer apps initially)[6].
  • PowerShell Scripts Support in ESP: Autopilot v2 can also execute PowerShell scripts during the Enrollment Status Page (ESP) phase of setup[3]. This means custom configuration scripts can run as part of provisioning before the device is handed to the user – a capability that simplifies advanced setup tasks (like configuring registry settings, installing agent software, etc., via script).
  • Improved Progress & UX: The OOBE experience is updated – Autopilot v2 provides a simplified progress display (percentage complete) during provisioning[6]. Users can clearly see that the device is installing apps/configurations. Once the critical steps are done, it informs the user that setup is complete and they can start using the device[6]. (Because the device isn’t identified as Autopilot-managed until after the user sign-in, some initial Windows setup screens like EULA or privacy settings may appear in Autopilot v2 that were hidden in v1[3]. These are automatically suppressed only after the Autopilot policy arrives during login.)
  • Near Real-Time Deployment Reporting: Autopilot v2 greatly enhances monitoring. Intune now offers an Autopilot deployment report that shows status per device in near real time[6]. Administrators can see which devices have completed Autopilot, which stage they’re in, and detailed results for each selected app and script (success/failure), as well as overall deployment duration[5]. This granular reporting makes troubleshooting easier, as you can immediately identify if (for example) a particular app failed to install during OOBE[5].
  • Availability in Government Clouds: The new Device Preparation approach is available in GCC High and DoD government cloud environments[6][5], which was not possible with Autopilot previously. This broadens Autopilot use to more regulated customers and is one reason Microsoft undertook this redesign (Autopilot v2 originated as a project to meet government cloud requirements and then expanded to all customers)[7].

The table below summarizes key differences between Autopilot v1 (classic) and Autopilot v2:

| Feature/Capability | Autopilot v1 (Classic) | Autopilot v2 (Device Preparation) |
|---|---|---|
| Device preregistration (hardware hash upload) | Required (devices must be registered in the Autopilot device list before use)[4] | Not required (user can enroll the device directly; the device should not be pre-added, or the v2 profile won’t apply)[5] |
| Supported join types | Azure AD Join; Hybrid Azure AD Join (with Intune Connector)[3] | Microsoft Entra ID (Azure AD) Join only (no on-prem AD support yet)[3] |
| Self-deploying / pre-provisioning (White Glove) | Supported (TPM attestation-based self-deploy; technician pre-provision mode available) | Not supported in the initial release[3] (future support is planned for these scenarios) |
| Deployment profiles | Separate Deployment Profile + ESP Profile (configuration split) | Single Device Preparation Policy (one profile for all settings: join, account type, OOBE, app selection)[3] |
| App installation during OOBE | Installs all required apps targeted to the device (could be many; admin chooses which are “blocking”) | Installs selected apps only (up to 10) during OOBE; non-selected apps wait until after OOBE[3][6] |
| PowerShell scripts in OOBE | Not natively supported in ESP (workarounds needed) | Supported – can run PowerShell scripts during provisioning (via the device prep profile)[3] |
| Policy application in OOBE | Some device policies (Wi-Fi, certs, etc.) could block in ESP; user-targeted configs had limited support | Device policies sync at OOBE without blocking[3]; user-targeted policies/apps install after the user reaches the desktop[3] |
| Out-of-box experience (UI) | Branding applied and many Windows setup screens skipped (profile applies from the start of OOBE) | Some Windows setup screens appear by default (no profile until sign-in)[3]; afterwards shows the new progress bar and completion summary[6] |
| Reporting & monitoring | Basic tracking via Enrollment Status Page; limited real-time info | Detailed deployment report in Intune with near real-time status of apps, scripts, and device info[5] |

Why these updates? The changes in Autopilot v2 address common pain points from Autopilot v1. By removing the dependency on upfront registration and dynamic groups, Microsoft has made provisioning more robust and “hands-off”. The new architecture “locks in” the admin’s intended config at enrollment time and provides better error handling and reporting[6]. In summary, Autopilot v2 is simpler, faster, more observable, and more reliable – the guiding principles of its design[5] – making device onboarding easier for both IT admins and end-users.

3. Benefits of Using Autopilot v2 in a Microsoft 365 Business Environment

Implementing Autopilot v2 brings significant advantages, especially for organizations using Microsoft 365 Business or Business Premium (which include Intune for device management). Here are the key benefits:

  • Ease of Deployment – Less IT Effort: Autopilot v2’s no-registration model is ideal for businesses that procure devices ad-hoc or in small batches. IT admins no longer need to collect hardware hashes or coordinate with OEMs to register devices. A user can unbox a new Windows 11 device, connect to the internet, and sign in with their work account to trigger enrollment. This self-service enrollment reduces the workload on IT staff, which is especially valuable for small IT teams.
  • Faster Device Setup: By limiting installation to essential apps during OOBE and using enrollment time grouping, Autopilot v2 gets devices ready more quickly. End-users see a shorter setup time before reaching the desktop and can start working sooner with all critical tools in place (e.g., Office apps and security software installed during setup)[7]. Non-critical apps or large software can install in the background later, avoiding long waits up-front.
  • Improved Reliability and Fewer Errors: The new deployment process is designed to “fail fast” with better error details[6]. If something is going to go wrong (for example, an app that fails to install), Autopilot v2 surfaces that information quickly in the Intune report and does not leave the user guessing. The enrollment time grouping also avoids timing issues that could occur with dynamic Azure AD groups. Overall, this means higher success rates for device provisioning and less troubleshooting compared to the old Autopilot. In addition, by standardizing on cloud join only, many potential complexities (like on-prem domain connectivity during OOBE) are removed.
  • Enhanced User Experience: Autopilot v2 provides a more transparent and reassuring experience to employees receiving new devices. The OOBE progress bar with a percentage complete indicator lets users know that the device is configuring (rather than appearing to be stuck). Once the critical setup is done, Autopilot informs the user that the device is ready to go[6]. This clarity can reduce helpdesk calls from users unsure if they should wait or reboot during setup. Also, because devices are delivered pre-configured with corporate settings and apps, users can be productive on Day 1 without needing IT to personally assist.
  • Better Monitoring for IT: In Microsoft 365 Business environments, often a single admin oversees device management. The Autopilot deployment report in Intune gives that admin a real-time dashboard to monitor deployments. They can see if a new laptop issued to an employee enrolled successfully, which apps/scripts ran, and if any step failed[5]. For any errors, the admin can drill down immediately and troubleshoot (for instance, if an app didn’t install, they know to check that installer or assign it differently). This reduces guesswork and allows proactive support, contributing to a smoother deployment process across the organization.
  • Security and Control: Autopilot v2 includes support for corporate device identification. By uploading known device identifiers (e.g., serial numbers) into Intune and enabling enrollment restrictions, a business can ensure only company-owned devices enroll via Autopilot[4]. This prevents personal or unauthorized devices from accidentally being enrolled. Although this requires a bit of setup (covered below), it gives small organizations an easy way to enforce that Autopilot v2 is used only for approved hardware, adding an extra layer of security and compliance. Furthermore, Autopilot v2 automatically makes the Azure AD account a standard user by default (not local admin), which improves security on the endpoint[5].

In summary, Autopilot v2 is well-suited for Microsoft 365 Business scenarios: it’s cloud-first and user-driven, aligning with the needs of modern SMBs that may not have complex on-prem infrastructure. It lowers the barrier to deploying new devices (no imaging or device ID admin work) while improving the speed, consistency, and security of device provisioning.

4. Implementing Autopilot v2: Step-by-Step Guide

In this section, we’ll walk through how to implement Windows Autopilot Device Preparation (Autopilot v2) in your Microsoft 365 Business/Intune environment. The process involves: verifying prerequisites, configuring Intune with the new profile and required settings, and then enrolling devices. Each step is detailed below.

4.1 Prerequisites and Initial Setup

Before enabling Autopilot v2, ensure the following prerequisites are met:

  1. Windows Version Requirements: Autopilot v2 requires Windows 11. Supported versions are Windows 11 22H2 or 23H2 with the latest updates (specifically, installing KB5035942 or later)[3][5], or any later version (Windows 11 24H2+). New devices should be shipped with a compatible Windows 11 build (or be updated to one) to use Autopilot v2. Windows 10 devices cannot use Autopilot v2; they would fall back to the classic Autopilot method.
  2. Microsoft Intune: You need an Intune subscription (Microsoft Endpoint Manager) as part of your M365 Business. Intune will serve as the Mobile Device Management (MDM) service to manage Autopilot profiles and device enrollment.
  3. Azure AD/Microsoft Entra ID: Devices will be Azure AD joined. Ensure your users have Microsoft Entra ID accounts with appropriate Intune licenses (e.g., Microsoft 365 Business Premium includes Intune licensing) and that automatic MDM enrollment is enabled for Azure AD join. In the Entra (Azure AD) portal, under Mobility (MDM and MAM) > Microsoft Intune, set the MDM user scope to include these users so that joining a device to Entra ID automatically enrolls it into Intune.
  4. No Pre-Registration of Devices: Do not import the device hardware IDs into the Intune Autopilot devices list for devices you plan to enroll with v2. If you previously obtained a hardware hash (.CSV) from your device or your hardware vendor registered the device to your tenant, you should deregister those devices to allow Autopilot v2 to take over[5]. (Autopilot v2 will not apply if an Autopilot deployment profile from v1 is already assigned to the device.)
  5. Intune Connector for Active Directory: Since Autopilot v2 doesn’t support Hybrid AD join, you do not need the Intune Connector for Active Directory for these devices. (If you have the connector running for other hybrid-join Autopilot scenarios, that’s fine; it simply won’t be used for v2 deployments.)
  6. Network and Access: New devices must have internet connectivity during OOBE (Ethernet or Wi-Fi accessible from the initial setup). Ensure that the network allows connection to Azure AD and Intune endpoints. If using Wi-Fi, users will need to join a Wi-Fi network in the first OOBE steps. (Consider using a provisioning SSID or instructing users to connect to an available network.)
  7. Plan for Device Identification (Optional but Recommended): Decide if you will restrict Autopilot enrollment to corporate-owned devices only. For better control (and to prevent personal device enrollment), it’s best practice to use Intune’s enrollment restrictions to block personal Windows enrollments and use Corporate device identifiers to flag your devices. We will cover how to set this up in the steps below. If you plan to use this, gather a list of device serial numbers (and manufacturers/models) for the PCs you intend to enroll.

4.2 Configuring the Autopilot v2 (Device Preparation) Profile in Intune

Once prerequisites are in place, the core setup work is done in Microsoft Intune. This involves creating Azure AD groups and then creating a Device Preparation profile (Autopilot v2 profile) and configuring it. Follow these steps:

1. Create Azure AD Groups for Autopilot: We need two security groups to manage Autopilot v2 deployment:

  • User Group – contains the users who will be enrolling devices via Autopilot v2.
  • Device Group – will dynamically receive devices at enrollment time and be used to assign apps/policies.

In the Azure AD or Intune portal, navigate to “Groups” and create a new group for users. For example, “Autopilot Device Preparation – Users”. Add all relevant user accounts (e.g., all employees or the subset who will use Autopilot) to this group[4]. Use Assigned membership for explicit control.

Next, create another security group for devices, e.g., “Autopilot Device Preparation – Devices”. Use Assigned membership – devices are added to this group automatically at enrollment time, so a dynamic membership rule isn’t needed. An important detail: Intune’s Autopilot v2 mechanism uses an application identity called “Intune Provisioning Client” to add devices to this group during enrollment[4], so add that service principal as an owner of the group when you create it so the device preparation policy can use the group.
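
If you prefer scripting over the portal, both groups can also be created with the Microsoft Graph PowerShell SDK. The snippet below is a minimal sketch, assuming the Microsoft Graph PowerShell module is installed and you have Group.ReadWrite.All consent; the display names and mail nicknames are examples only.

```powershell
# Minimal sketch: create the two Autopilot v2 groups via Microsoft Graph PowerShell.
Connect-MgGraph -Scopes 'Group.ReadWrite.All'

# User group (assigned membership): people allowed to enroll devices via Autopilot v2
New-MgGroup -DisplayName 'Autopilot Device Preparation - Users' `
    -MailEnabled:$false -MailNickname 'apdp-users' -SecurityEnabled

# Device group (assigned membership): the Intune provisioning service adds devices here at enrollment.
# Remember to add the "Intune Provisioning Client" service principal as this group's owner afterwards.
New-MgGroup -DisplayName 'Autopilot Device Preparation - Devices' `
    -MailEnabled:$false -MailNickname 'apdp-devices' -SecurityEnabled
```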

2. Create the Device Preparation (Autopilot v2) Profile: In the Intune admin center, go to Devices > Windows > Windows Enrollment (or Endpoint Management > Enrollment). There should be a section for “Windows Autopilot Device Preparation (Preview)” or “Device Preparation Policies”. Choose to Create a new profile/policy[4].

  • Name and Group Assignment: Give the profile a clear name (e.g., “Autopilot Device Prep Policy – Cloud PCs”). For the target group, select the Device group created in step 1 as the group to assign devices to at enrollment[4]. (In some interfaces, you might first choose the device group in the profile so the system knows where to add devices.)
  • Deployment Mode: Choose User-Driven (since user-driven Azure AD join is the scenario for M365 Business). Autopilot v2 also has an “Automatic” mode intended for Windows 365 Cloud PCs or scenarios without user interaction, but for physical devices in a business, user-driven is typical.
  • Join Type: Select Azure AD (Microsoft Entra ID) join. (This is the only option in v2 currently – Hybrid AD join is not available).
  • User Account Type: Choose whether the end user should be a standard user or local admin on the device. Best practice is to select Standard (non-admin) to enforce least privilege[5]. (In classic Autopilot, this was an option in the deployment profile as well. Autopilot v2 defaults to standard user by design, but confirm the setting if presented.)
  • Out-of-box Experience (OOBE) Settings: Configure the OOBE customization settings as desired:
    • You can typically configure Language/Region (or set to default to device’s settings), Privacy settings, End-User License Agreement (EULA) acceptance, and whether users see the option to configure for personal use vs. organization. Note: In Autopilot v2, some of these screens may not be fully suppressible as they are in v1, but set your preferences here. For instance, you might hide the privacy settings screen and accept EULA automatically to streamline user experience.
    • If the profile interface allows it, enable “Skip user enrollment if device is known corporate” or similar, to avoid the personal/work question (this ties in with using corporate identifiers).
    • Optionally, set a device naming template if available. However, Autopilot v2 may not support custom naming at this stage (and users can be given the ability to name the device during setup)[3]. Check Intune’s settings; if not present, plan to rename devices via Intune policy later if needed.
  • Applications & Scripts (Device Preparation): Select the apps and PowerShell scripts that you want to be installed during the device provisioning (OOBE) phase[4]. Intune will present a list of existing apps and scripts you’ve added to Intune. Here, pick only your critical or required applications – remember the limit is 10 apps max for the OOBE phase. Common choices are:
    • Company Portal (for user self-service and additional app access)[4].
    • Microsoft 365 Apps (Office suite)[4].
    • Endpoint protection software (antivirus/EDR agent, if not already part of Windows).
    • Any other crucial line-of-business app that the user needs immediately. Also select any PowerShell onboarding scripts you want to run – for example, a script to set a custom registry value or install a specific agent that isn’t packaged as an app (a short sample script appears after this list). These selected items will be tracked in the deployment. (Make sure any app you select is assigned in Intune to the device group we created, or available for all devices – more on app assignment in the next step.)
  • Assign the Profile: Finally, assign the Device Preparation profile to the User group created in step 1[4]. This targeting means any user in that group who signs into a Windows 11 device during OOBE will trigger this Autopilot profile. (The device will get added to the specified device group, and the selected apps will install.)

Save/create the profile. At this point, Intune has the Autopilot v2 policy in place, waiting to apply at next enrollment for your user group.
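
As referenced in the app/script selection above, here is an example of the kind of PowerShell onboarding script you might attach to the profile. It simply writes a provisioning marker to the registry; the Contoso key path and value names are placeholders rather than required settings, and the script runs during device setup before the user reaches the desktop.

```powershell
# Hypothetical ESP-phase script: stamp a registry marker so later automation can
# tell that the device was provisioned via Autopilot device preparation.
$path = 'HKLM:\SOFTWARE\Contoso\Provisioning'   # placeholder path
New-Item -Path $path -Force | Out-Null
Set-ItemProperty -Path $path -Name 'Method' -Value 'AutopilotDevicePreparation'
Set-ItemProperty -Path $path -Name 'ProvisionedOn' -Value (Get-Date -Format 'yyyy-MM-dd')
```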

3. Assign Required Applications to Device Group: Creating the profile in step 2 defines which apps should install, but Intune still needs those apps to be deployed as “Required” to the device group for them to actually push down. In Intune:

  • Go to Apps > Windows (or Apps section in MEM portal).
  • For each critical app you included in the profile (Company Portal, Office, etc.), check its Properties > Assignments. Make sure to assign the app to the Autopilot Devices group (as Required installation)[4]. For example, set Company Portal – Required for [Autopilot Device Preparation – Devices][4].
  • Repeat for Microsoft 365 Apps and any other selected application[4]. If you created a PowerShell script configuration in Intune, ensure that script is assigned to the device group as well.

Essentially, this step ensures Intune knows to push those apps to any device that appears in the devices group. Autopilot v2 will add the new device to the group during enrollment, and Intune will then immediately start installing those required apps. (Without this step, the profile alone wouldn’t install apps, since the profile itself only “flags” which apps to wait for but the apps still need to be assigned to devices.)

4. Configure Enrollment Restrictions (Optional – Corporate-only): If you want to block personal devices from enrolling (so that only corporately owned devices can use Autopilot), set up an enrollment restriction in Intune:

  • In Intune portal, navigate to Devices > Enrollment restrictions.
  • Create a new Device Type or Platform restriction policy (or edit the existing default one) for Windows. Under Personal device enrollment, set Personally owned Windows enrollment to Blocked[4].
  • Assign this restriction to All Users (or at least all users in the Autopilot user group)[4].

This means if a user tries to Azure AD join a device that Intune doesn’t recognize as corporate, the enrollment will be refused. This is a good security measure, but it requires the next step (uploading corporate identifiers) to work smoothly with Autopilot v2.

5. Upload Corporate Device Identifiers: With personal devices blocked, you must tell Intune which devices are corporate. Since we are not pre-registering the full Autopilot hardware hash, Intune can rely on manufacturer, model, and serial number to recognize a device as corporate-owned during Autopilot v2 enrollment. To upload these identifiers:

  • Gather device info: For each new device, obtain the serial number plus the manufacturer and model strings. You can get this from order information or by running a command on the device – for example, wmic csproduct get vendor,name,identifyingnumber outputs the vendor (manufacturer), name (model), and identifying number (serial)[4]. Because wmic is deprecated on newer Windows builds, a PowerShell alternative is sketched after this list. Many OEMs provide this info in packing lists, or you can scan a barcode from the box.
  • Prepare CSV: Create a CSV file with columns for Manufacturer, Model, Serial Number, listing each device on its own line[4]. For example:
    Dell Inc.,Latitude 7440,ABCDEFG1234
    Microsoft Corporation,Surface Pro 9,1234567890
    (Use the exact strings as reported by the device/OEM to avoid mismatches.)
  • Upload in Intune: In the Intune admin center, go to Devices > Enrollment > Corporate device identifiers. Choose Add then Upload CSV. Select the format “Manufacturer, model, and serial number (Windows only)”[4] and upload your CSV file. Once processed, Intune will list those identifiers as corporate.
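
As mentioned in the first bullet, wmic is deprecated on newer Windows builds, so the same values can be collected with a short PowerShell sketch instead. This is illustrative only: Win32_ComputerSystemProduct returns the same vendor/model/serial fields, and the output path is an example you should adapt.

```powershell
# Sketch: read the manufacturer, model, and serial number used for corporate device
# identifiers and append them as one CSV line (Manufacturer,Model,SerialNumber).
$p = Get-CimInstance -ClassName Win32_ComputerSystemProduct
$line = '{0},{1},{2}' -f $p.Vendor, $p.Name, $p.IdentifyingNumber
$line                                                      # e.g. Dell Inc.,Latitude 7440,ABCDEFG1234
Add-Content -Path "$env:USERPROFILE\Desktop\CorporateIdentifiers.csv" -Value $line
```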

With this in place, when a user signs in on a device, Intune checks the device’s hardware info. If it matches one of these entries, it’s flagged as corporate-owned and allowed to enroll despite the personal device block[4]. If it’s not in the list, the enrollment will be blocked (the user will get a message that enrolling personal devices is not allowed). Important: Until you have corporate identifiers set up, do not enable the personal device block, or Autopilot device preparation will fail for new devices[6]. Always upload the identifiers first or simultaneously.

At this stage, you have completed the Intune configuration for Autopilot v2. You have:

  • A user group allowed to use Autopilot.
  • A device preparation profile linking that user group to a device group, with chosen settings and apps.
  • Required apps assigned to deploy.
  • Optional restrictions in place to ensure only known devices will enroll.

4.3 Enrollment and Device Setup Process (Using Autopilot v2)

With the above configuration done, the actual device enrollment process is straightforward for the end-user. Here’s what to expect when adding a new device to your Microsoft 365 environment via Autopilot v2:

  1. Out-of-Box Experience (Initial Screens): When the device is turned on for the first time (or after a factory reset), the Windows OOBE begins. The user will select their region and keyboard (unless the profile pre-configured these). The device will prompt for a network connection. The user should connect to the internet (Ethernet or Wi-Fi). Once connected, the device might check for updates briefly, then reach the “Sign in” screen.
  2. User Sign-In (Azure AD): The user enters their work or school (Microsoft Entra ID/Azure AD) credentials – i.e., their Microsoft 365 Business account email and password. This is the trigger for Autopilot Device Preparation. Upon signing in, the device joins your organization’s Azure AD. Because the user is in the “Autopilot Users” group and an Autopilot Device Preparation profile is active, Intune will now kick off the device preparation process in the background.
  3. Device Preparation Phase (ESP): After credentials are verified, the user sees the Enrollment Status Page (ESP) which now reflects “Device preparation” steps. In Autopilot v2, the ESP will show the progress of the configuration. A key difference in v2 is the presence of a percentage progress indicator that gives a clearer idea of progress[6]. Behind the scenes, several things happen:
    • The device is automatically added to the specified Device group (“Autopilot Device Preparation – Devices”) in Azure AD[5]. The “Intune Provisioning Agent” does this within seconds of the user signing in.
    • Intune immediately starts deploying the selected applications and PowerShell scripts to the device (those that were marked for installation during OOBE). The ESP screen will typically list the device setup steps, which may include device configuration, app installation, etc. The apps you marked as required (Company Portal, Office, etc.) will download and install one by one. Their status can often be viewed on the screen (e.g., “Installing Office 365… 50%”).
    • Any device configuration policies assigned to the device group (e.g., configuration profiles or compliance policies you set to target that group) will also begin to apply. Note: Autopilot v2 does not pause for all policies to complete; it mainly ensures the selected apps and scripts complete. Policies will apply in parallel or afterwards without blocking the ESP[3].
    • If you enabled BitLocker or other encryption policies, those might kick off during this phase as well (depending on your Intune configuration for encryption on Azure AD join).
    • The user remains at the ESP screen until the critical steps finish. With the 10-app limit and no dynamic group delay, this phase should complete relatively quickly (typically a few minutes to perhaps an hour for large Office downloads on slower connections). The progress bar will reach 100%.
  4. Completion and First Desktop Launch: Once the selected apps and scripts have finished deploying, Autopilot signals that device setup is complete. The ESP will indicate it’s done, and the user will be allowed to proceed to log on to Windows (or it may automatically log them in if credentials were cached from step 2). In Autopilot v2, a final screen can notify the user that critical setup is finished and they can start using the device[6]. The user then arrives at the Windows desktop.
  5. Post-Enrollment (Background Tasks): Now the device is fully Azure AD joined and enrolled in Intune as a managed device. Any remaining apps or policies that were not part of the initial device preparation will continue to install in the background. For example, if you targeted some less critical user-specific apps (say, the OneDrive client or Webex) via user groups, those will download via Intune management without interrupting the user. The user can begin working, and they’ll likely see additional apps appearing or software finishing installations within the first hour of use.
  6. Verification: The IT admin can verify the device in the Intune portal. It should appear under Devices with the user assigned and compliance/policies applying. The Autopilot deployment report in Intune will show this device’s status as successful if all selected apps/scripts succeeded, or flagged if any failures occurred[5]. The user should see applications like Office, Teams, Outlook, and the Company Portal already installed on the Start Menu[4]. If all looks good, the device is effectively ready and managed. (A quick local verification sketch follows this list.)
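
For the quick local check mentioned in the verification step, the following sketch (run on the device) confirms the Entra ID join and that the key management services are present. The service names are the standard ones; the Sense service only exists if Defender for Endpoint/Business onboarding has also been deployed.

```powershell
# Quick local verification after Autopilot v2 provisioning (run on the device):
dsregcmd /status | Select-String 'AzureAdJoined', 'MdmUrl'            # Entra ID join and Intune MDM enrollment
Get-Service -Name IntuneManagementExtension, Sense -ErrorAction SilentlyContinue |
    Select-Object Name, Status                                        # Intune agent and (if onboarded) Defender sensor
```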

4.4 Troubleshooting Common Issues in Autopilot v2

While Autopilot v2 is designed to be simpler and more reliable, you may encounter some issues during setup. Here are common issues and how to address them:

  • Device is blocked as “personal” during enrollment: If you enabled the enrollment restriction to block personal devices, a new device might fail to enroll at user sign-in with a message that personal devices are not allowed. This typically means Intune did not recognize the device as corporate. Ensure you have uploaded the correct device serial, model, and manufacturer under corporate identifiers before the user attempts enrollment[4]. Typos or mismatches (e.g., “HP Inc.” vs “Hewlett-Packard”) can cause the check to fail. If an expected corporate device was blocked, double-check its identifier in Intune and re-upload if needed, then have the user try again (after a reset). If you cannot get the identifiers loaded in time, you may temporarily toggle the restriction to allow personal Windows enrollments to let the device through, then re-enable once fixed.
  • Autopilot profile not applying (device does standard Azure AD join without ESP): This can happen if the user is not in the group assigned to the Autopilot Device Prep profile, or if the device was already registered with a classic Autopilot profile. To troubleshoot:
    • Verify that the user who is signing in is indeed a member of the Autopilot Users group that you targeted. If not, add them and try again.
    • Check Intune’s Autopilot devices list. If the device’s hardware hash was previously imported and has an old deployment profile assigned, the device might be using Autopilot v1 behavior (which could skip the ESP or conflict). Solution: Remove the device from the Autopilot devices list (deregister it) so that v2 can proceed[5].
    • Also ensure the device meets OS requirements. If someone somehow tried with an out-of-date Windows 10, the new profile wouldn’t apply.
  • One of the apps failed to install during OOBE: If an app (or script) that was selected in the profile fails, the Autopilot ESP might show an error or might eventually time out. Autopilot v2 doesn’t explicitly block on policies, but it does expect the chosen apps to install. If an app installation fails (perhaps due to an MSI error or content download issue), the user may eventually be allowed to log in, but Intune’s deployment report will mark the deployment as failed for that device[5]. Use the Autopilot deployment report in Intune to see which app or step failed[5]. Then:
    • Check the Intune app assignment for that app. For instance, was the app installer file reachable and valid? Did it have correct detection rules? Remedy any packaging error.
    • If the issue was network (e.g., large app timed out), consider not deploying that app during OOBE (remove it from the profile’s selected apps so it installs later instead).
    • The user can still proceed to work after skipping the failed step (in many cases), but you’ll want to push the necessary app afterward or instruct the user to install via Company Portal if possible.
  • User sees unexpected OOBE screens (e.g., personal vs organization choice): As noted, Autopilot v2 can show some default Windows setup prompts that classic Autopilot hid. For example, early in OOBE the user might be asked “Is this a personal or work device?” If they see this, they should select Work/School (which leads to the Azure AD sign-in). Similarly, the user might have to accept the Windows 11 license agreement. To avoid confusion, prepare users with guidance that they may see a couple of extra screens and how to proceed. Once the user signs in, the rest will be automated. After the device preparation profile has applied, those screens may not appear on subsequent resets, but they can appear on the first run. This is expected behavior, not a failure.
  • Autopilot deployment hangs or takes too long: If the process seems stuck on the ESP for an inordinate time:
    • Check if it’s downloading a large update or app. Sometimes Windows might be applying a critical update in the background. If internet is slow, Office download (which can be ~2GB) might simply be taking time. If possible, ensure a faster connection or use Ethernet for initial setup.
    • If it’s truly hung (no progress increase for a long period), you may need to power cycle. The good news is Autopilot v2 is resilient – it has more retry logic for applying the profile[8]. On reboot, it often picks up where it left off, or attempts the step again. Frequent hanging might indicate a problematic step (again, refer to Intune’s report).
    • Ensure the device’s time and region were set correctly; incorrect time can cause Azure AD token issues. Autopilot v2 does try to sync time more reliably during ESP[8].
  • Post-enrollment policy issues: Because Autopilot v2 doesn’t wait for all policies, you might notice things like BitLocker taking place only after login, or certain configurations applying slightly later. This is normal. However, if certain device configurations never apply, verify that those policies are targeted correctly (likely to the device group). If they were user-targeted, they should apply after the user logs in. If something isn’t applying at all, treat it as a standard Intune troubleshooting case (e.g., check for scope tags, licensing, or conflicts).

Overall, many issues can be avoided by testing Autopilot v2 on a pilot device before mass rollout. Run through the deployment yourself with a test user and device to catch any application installation failures or unexpected prompts, and adjust your profile or process accordingly.

5. Best Practices for Maintaining and Managing Autopilot v2 Devices

After deploying devices with Windows Autopilot Device Preparation, your work isn’t done – you’ll want to manage and maintain those devices for the long term. Here are some best practices to ensure ongoing success:

  • Establish Clear Autopilot Processes: Because Autopilot v2 and v1 may coexist (for different use cases), document your process. For example, decide: will all new devices use Autopilot v2 going forward, or only certain ones? Communicate to your procurement and IT teams that new devices should not be registered via the old process. If you buy through an OEM with Autopilot registration service, pause that for devices you’ll enroll via v2 to avoid conflicts.
  • Keep Windows and Intune Updated: Autopilot v2 capabilities may expand with new Windows releases and Intune updates. Ensure devices get Windows quality updates regularly (this keeps the Autopilot agent up-to-date and compatible). Watch Microsoft Intune release notes for any Autopilot-related improvements or changes. For instance, if/when Microsoft enables features like self-deploying or hybrid join in Autopilot v2, it will likely come via an update[6] – staying current allows you to take advantage.
  • Limit and Optimize Apps in the Profile: Be strategic about the apps you include during the autopilot phase. The 10-app limit forces some discipline – include only truly essential software that users need immediately or that is required for security/compliance. Everything else can install later via normal Intune assignment or be made available in Company Portal. This ensures the provisioning is quick and has fewer chances to fail. Also prefer Win32 packaged apps for reliability and to avoid Windows Store dependencies during OOBE[2]. In general, simpler is better for the OOBE phase.
  • Use Device Categories/Tags if Needed: Intune supports tagging devices during enrollment (in classic Autopilot, there was “Convert all targeted devices to Autopilot” and grouping by order ID). In Autopilot v2, since devices aren’t pre-registered, you might use dynamic groups or naming conventions post-enrollment to organize devices (e.g., by department or location). Consider leveraging Azure AD group rules or Intune filters if you need to deploy different apps to different sets of devices after enrollment.
  • Monitor Deployment Reports and Logs: Take advantage of the new Autopilot deployment report in Intune for each rollout[5]. After onboarding a batch of new devices, review the report to see if any had issues (e.g., one device’s Office install failed due to a network glitch). Address any failures proactively (rerun a script, push a missed app, etc.). Additionally, users or IT can export Autopilot logs easily from the device if needed[5] (there’s a troubleshooting option during OOBE, or see the diagnostics sketch after this list). Collecting logs can help Microsoft support or your IT team diagnose deeper issues.
  • Maintain Corporate Identifier Lists: If you’re using the corporate device identifiers method, keep your Azure AD device inventory synchronized with Intune’s list. For every new device coming in, add its identifiers; for devices being retired or sold, remove them. Also coordinate this with the enrollment restriction – for example, if a top executive wants to enroll a personal device and you have blocking enabled, you’d need to explicitly allow it (by excluding that user from the restriction policy or by adding the device’s identifiers to the corporate list). Regularly update the CSV as you purchase hardware to avoid last-minute scrambling when a user is setting up a new PC.
  • Plan for Feature Gaps: Recognize the current limitations of Autopilot v2 and plan accordingly:
    • If you require Hybrid AD Join (joining on-prem AD) for certain devices, those devices should continue using the classic Autopilot (with hardware hash and Intune Connector) for now, since v2 can’t do it[3].
    • If you utilize Autopilot Pre-Provisioning (White Glove) via IT staff or partner to pre-setup devices before handing to users (common for larger orgs or complex setups), note that Autopilot v2 doesn’t support that yet[3]. You might use Autopilot v1 for those scenarios until Microsoft adds it to v2.
    • Self-Deploying Mode (for kiosks or shared devices that enroll without user credentials) is also not in v2 presently[3]. Continue using classic Autopilot with TPM attestation for kiosk devices as needed. It’s perfectly fine to run both Autopilot methods side-by-side; just carefully target which devices or user groups use which method. Microsoft is likely to close these gaps in future updates, so keep an eye on announcements.
  • End-User Training and Communication: Even though Autopilot is automated, let your end-users know what to expect. Provide a one-page instruction with their new laptop: e.g., “1) Connect to Wi-Fi, 2) Log in with your work account, 3) Wait while we set up your device (about 15-30 minutes), 4) You’ll see a screen telling you when it’s ready.” Setting expectations helps reduce support tickets. Also inform them that the device will be managed by the company (which is standard, but transparency helps trust).
  • Device Management Post-Deployment: After Autopilot enrollment, manage the devices like any Intune-managed endpoints. Set up compliance policies (for OS version, AV status, etc.), Windows Update rings or feature update policies to keep them up-to-date, and use Intune’s Endpoint analytics or Windows Update for Business reports to track device health. Autopilot has done the job of onboarding; from then on, treat the devices as part of your standard device management lifecycle. For instance, if a device is reassigned to a new user, you can invoke Autopilot Reset via Intune to wipe user data and redo the OOBE for the new user—Autopilot v2 will once again apply (just ensure the new user is in the correct group).
  • Continuous Improvement: Gather feedback from users about the Autopilot process. If many report that a certain app wasn’t ready or some setting was missing on first login, adjust your Autopilot profile or Intune assignments. Autopilot v2’s flexibility allows you to tweak which apps/scripts are in the initial provision vs. post-login. Aim to find the right balance where devices are secure and usable as soon as possible, without overloading the provisioning. Also consider pilot testing Windows 11 feature updates early, since Autopilot behavior can change or improve with new releases (for example, a future Windows 11 update might reduce the appearance of some initial screens in Autopilot v2, etc.).
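
For the log export mentioned in the monitoring best practice above, the built-in MDM diagnostics tool can bundle Autopilot and enrollment logs into a single archive. The command below is indicative; run it from an elevated prompt and adjust the area list and output path to your environment.

```powershell
# Sketch: collect Autopilot/enrollment diagnostics into one archive for review or support.
New-Item -ItemType Directory -Path 'C:\Temp' -Force | Out-Null
MdmDiagnosticsTool.exe -area 'Autopilot;DeviceEnrollment;DeviceProvisioning' -zip 'C:\Temp\AutopilotDiag.zip'
```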

By following these best practices, you’ll ensure that your organization continues to benefit from Autopilot v2’s efficiencies long after the initial setup. The result is a modern device deployment strategy with minimal hassle, aligned to the cloud-first, zero-touch ethos of Microsoft 365.


Conclusion: Microsoft Autopilot v2 (Windows Autopilot Device Preparation) represents a significant step forward in simplifying device onboarding. By leveraging it in your Microsoft 365 Business environment, you can add new Windows 11 devices with ease – end-users take them out of the box, log in, and within minutes have a fully configured, policy-compliant workstation. The latest updates bring more reliability, insight, and speed to this process, making life easier for IT admins and employees alike. By understanding the new features, following the implementation steps, and adhering to best practices outlined in this guide, you can successfully deploy Autopilot v2 and streamline your device deployment workflow[4][5]. Happy deploying!

References

[1] Overview of Windows Autopilot | Microsoft Learn

[2] Windows Autopilot Best Practices: 2025 Updated

[3] Intune Autopilot V1 vs Intune Autopilot V2- What is changing?

[4] How to Set Up Autopilot Device Preparation in Microsoft Intune

[5] Overview of Windows Autopilot device preparation

[6] Announcing new Windows Autopilot onboarding experience for government …

[7] What’s new in Microsoft Intune: January 2025

[8] What’s new in Windows Autopilot | Microsoft Learn

Troubleshooting Microsoft Defender for Business: Step-by-Step Guide

Microsoft Defender for Business is a security solution designed for small and medium businesses to protect against cyber threats. When issues arise, a systematic troubleshooting approach helps identify root causes and resolve problems efficiently. This guide provides a step-by-step process to troubleshoot common Defender for Business issues, highlights where to find relevant logs and alerts, and suggests advanced techniques for complex situations. All steps are factual and based on Microsoft’s latest guidance as of 2025.

Table of Contents

  • Common Issues and Symptoms
  • Key Locations for Logs and Alerts
  • Step-by-Step Troubleshooting Process
    1. Identify the Issue and Gather Information
    2. Check the Microsoft 365 Defender Portal for Alerts
    3. Verify Device Status and Protection Settings
    4. Examine Device Logs (Event Viewer)
    5. Resolve Configuration or Policy Issues
    6. Verify Issue Resolution
    7. Escalate to Advanced Troubleshooting if Needed
  • Advanced Troubleshooting Techniques
  • Best Practices to Prevent Future Issues
  • Additional Resources and Support

Common Issues and Symptoms

These are some typical problems administrators encounter with Defender for Business:

  • Setup and Onboarding Failures: The initial setup or device onboarding process fails. An error like “Something went wrong, and we couldn’t complete your setup” may appear, indicating a configuration channel or integration issue (often with Intune)[1]. Devices that should be onboarded don’t show up in the portal.
  • Devices Showing As Unprotected: In the Microsoft Defender portal, you might see notifications that certain devices are not protected even though they were onboarded[1]. This often happens when real-time protection is turned off (for instance, if a non-Microsoft antivirus is running, it may disable Microsoft Defender’s real-time protection).
  • Mobile Device Onboarding Issues: Users cannot onboard their iOS or Android devices using the Microsoft Defender app. A symptom is that mobile enrollment doesn’t complete, possibly due to provisioning not finished on the backend[1]. For example, if the portal shows a message “Hang on! We’re preparing new spaces for your data…”, it means the Defender for Business service is still provisioning mobile support (which can take up to 24 hours) and devices cannot be added until provisioning is complete[1].
  • Defender App Errors on Mobile: The Microsoft Defender app on mobile devices may crash or show errors. Users report issues like app not updating threats or not connecting. (Microsoft provides separate troubleshooting guides for the mobile Defender for Endpoint app on Android/iOS in such cases[1].)
  • Policy Conflicts: If you have multiple security management tools, you might see conflicting policies. For instance, an admin who was managing devices via Intune and then enabled Defender for Business’s simplified configuration could encounter conflicts where settings in Intune and Defender for Business overlap or contradict[1]. This can result in devices flipping between policy states or compliance errors.
  • Intune Integration Errors: During the setup process, an error indicating an integration issue between Defender for Business and Microsoft Intune might occur[1]. This often requires enabling certain settings (detailed in Step 5 below) to establish a proper configuration channel.
  • Onboarding or Reporting Delays: A device appears to onboard successfully but doesn’t show up in the portal or is missing from the device list even after some time. This could indicate a communication issue where the device is not reporting in. It might be caused by connectivity problems or by an issue with the Microsoft Defender for Endpoint service (sensor) on the device.
  • Performance or Scan Issues: (Less common with Defender for Business, but possible) – Devices might experience high CPU or scans get stuck, which could indicate an issue with Defender Antivirus on the endpoint that needs further diagnosis (this overlaps with Defender for Endpoint troubleshooting).

Understanding which of these scenarios matches your situation will guide where to look first. Next, we’ll cover where to find the logs and alerts that contain clues for diagnosis.


Key Locations for Logs and Alerts

Effective troubleshooting relies on checking both cloud portal alerts and on-device logs. Microsoft Defender for Business provides information in multiple places:

Microsoft 365 Defender Portal (security.microsoft.com): This is the cloud portal where Defender for Business is managed. The Incidents & alerts section is especially important. Here you can monitor all security incidents and alerts in one place[2]. For each alert, you can click to see details in a flyout pane – including the alert title, severity, affected assets (devices or users), and timestamps[2]. The portal often provides recommended actions or one-click remediation for certain alerts[2]. It’s the first place to check if you suspect Defender is detecting threats or if something triggered an alert that correlates with the issue.

Device Logs via Windows Event Viewer: On each Windows device protected by Defender for Business, Windows keeps local event logs for Defender components. Access these by opening Event Viewer (Start > eventvwr.msc). Key logs include:

  • Microsoft-Windows-SENSE/Operational – This log records events from the Defender for Endpoint sensor (“SENSE” is the internal code name for the sensor)[3]. If a device isn’t showing up in the portal or has onboarding issues, this log is crucial. It contains events for service start/stop, onboarding success/failure, and connectivity to the cloud. For example, Event ID 6 means the service isn’t onboarded (no onboarding info found), which indicates the device failed to onboard and needs the onboarding script rerun[3]. Event ID 3 means the service failed to start entirely[3], and Event ID 5 means it couldn’t connect to the cloud (network issue)[3]. We will discuss how to interpret and act on these later.
  • Windows Defender/Operational – This is the standard Windows Defender Antivirus log under Applications and Services Logs > Microsoft > Windows > Windows Defender > Operational. It logs malware detections and actions taken on the device[4]. For troubleshooting, this log is helpful if you suspect Defender’s real-time protection or scans are causing an issue or to confirm if a threat was detected on a device. You might see events like “Malware detected” (Event ID 1116) or “Malware action taken” (Event ID 1117) which correspond to threats found and actions (like quarantine) taken[4]. This can explain, for instance, if a file was blocked and that’s impacting a user’s work.
  • Other system logs: Standard Windows logs (System, Application) might also record errors (for example, if a service fails or crashes, or if there are network connectivity issues that could affect Defender). A PowerShell sketch for querying these Defender channels from the console follows this list.
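
If you’d rather query these channels from a console (or remotely) than browse Event Viewer, the built-in Get-WinEvent cmdlet can pull the same entries. A minimal sketch, using the standard channel names and the event IDs described above:

```powershell
# Recent Defender for Endpoint sensor (SENSE) events on an affected device
Get-WinEvent -LogName 'Microsoft-Windows-SENSE/Operational' -MaxEvents 25 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message

# Defender Antivirus detection events (1116 = malware detected, 1117 = action taken)
Get-WinEvent -LogName 'Microsoft-Windows-Windows Defender/Operational' -MaxEvents 200 |
    Where-Object { $_.Id -in 1116, 1117 } |
    Select-Object TimeCreated, Id, Message
```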

Alerts in Microsoft 365 Defender: Defender for Business surfaces alerts in the portal for various issues, not only malware. For example, if real-time protection is turned off on a device, the portal will flag that device as not fully protected[1]. If a device hasn’t reported in for a long time, it might show in the device inventory with a stale last-seen timestamp. Additionally, if an advanced attack is detected, multiple alerts will be correlated as an incident; an incident might be tagged with “Attack disruption” if Defender automatically contained devices to stop the spread[2] – such context can validate if an ongoing security issue is causing what you’re observing.

Intune or Endpoint Manager (if applicable): Since Defender for Business can integrate with Intune (Endpoint Manager) for device management and policy deployment, some issues (especially around onboarding and policy conflicts) may require checking Intune logs:

  • In Intune admin center, review the device’s Enrollment status and Device configuration profiles (for instance, if a security profile failed to apply, it could cause Defender settings to not take effect).
  • Intune’s Troubleshooting + support blade for a device can show error codes if a policy (like onboarding profile) failed.
  • If there’s a known integration issue (like the one mentioned earlier), ensure the Intune connection and settings are enabled as described in the next sections.

Advanced Hunting and Audit (for advanced users): If you have access to Microsoft 365 Defender’s advanced hunting (which might require an upgraded license beyond Defender for Business’s standard features), you could query tables such as DeviceEvents and AlertInfo for deeper investigation. Also, the Audit Logs in the Defender portal record configuration changes (useful to see if someone changed a policy right before issues started).

Now, with an understanding of where to get information, let’s proceed with a systematic troubleshooting process.


Step-by-Step Troubleshooting Process

The following steps outline a logical process to troubleshoot issues in Microsoft Defender for Business. Adjust the steps as needed based on the specific symptoms you are encountering.

Step 1: Identify the Issue and Gather Information

Before jumping into configuration changes, clearly define the problem. Understanding the nature of the issue will focus your investigation:

  • What are the symptoms? For example, “Device X is not appearing in the Defender portal”, “Users are getting no protection on their phones”, or “We see an alert that one device isn’t protected”, etc.
  • When did it start? Did it coincide with any changes (onboarding new devices, changing policies, installing another antivirus, etc.)?
  • Who or what is affected? A single device, multiple devices, all mobile devices, a specific user?
  • Any error messages? Note any message in the portal or on the device. For instance, an error code during setup, or the portal banner saying “some devices aren’t protected”[1]. These messages often hint at the cause.

Gathering this context will guide you on where to look first. For example, an issue with one device might mean checking that device’s status and logs, whereas a widespread issue might suggest a configuration problem affecting many devices.

Step 2: Check the Microsoft 365 Defender Portal for Alerts

Log in to the Microsoft 365 Defender portal (https://security.microsoft.com) with appropriate admin credentials. This centralized portal often surfaces the problem:

  1. Go to Incidents & alerts: In the left navigation pane, click “Incidents & alerts”, then select “Alerts” (or “Incidents” for grouped alerts)[2]. Look for any recent alerts that correspond to your issue. For example, if a device isn’t protected or hasn’t reported, there may be an alert about that device.
  2. Review alert details: If you see relevant alerts, click on one to open the details flyout. Check the alert title and description – these describe what triggered it (e.g. “Real-time protection disabled on Device123” or “Malware detected and quarantined”). Note the severity (Informational, Low, Medium, High) and the affected device or user[2]. The portal will list the device name and perhaps the user associated with it.
  3. Take recommended actions: The alert flyout often includes recommended actions or a direct link to “Open incident page” or “Take action”. For instance, for a malware alert, it may suggest running a scan or isolating the device. For a configuration alert (like real-time protection off), it might recommend turning it back on. Make note of these suggestions as they directly address the issue described[2].
  4. Check the device inventory: Still in the Defender portal, navigate to Devices (under Assets). Find the device in question. The device page can show its onboarding status, last seen time, OS, and any outstanding issues. If the device is missing entirely, that confirms an onboarding problem – skip to Step 4 to troubleshoot that.
  5. Inspect Incidents: If multiple alerts have been triggered around the same time or on the same device, the portal might have grouped them into an Incident (visible under the Incidents tab). Open the incident to see a timeline of what happened. This can give broader context, especially if a security threat is involved (e.g., an incident might show that malware was detected and then real-time protection was turned off – indicating the malware might have attempted to disable Defender).

Example: Suppose the portal shows an alert “Real-time protection was turned off on DeviceXYZ”. This is a clear indicator – the device is onboarded but not actively protecting in real-time[1]. The recommended action would likely be to turn real-time protection back on. Alternatively, if an alert says “New malware found on DeviceXYZ”, you’d know the issue is a threat detection, and the portal might guide you to remediate or confirm that malware was handled. In both cases, you’ve gathered an essential clue before even touching the device.

If you do not see any alert or indicator in the portal related to your problem, the issue might not be something Defender is reporting on (for example, if the problem is an onboarding failure, there may not be an alert – the device just isn’t present at all). In such cases, proceed to the next steps.

Step 3: Verify Device Status and Protection Settings

Next, ensure that the devices in question are configured correctly and not in a state that would cause issues:

  1. Confirm onboarding completion: If a device doesn’t appear in the portal’s device list, ensure that the onboarding process was done on that device. Re-run the onboarding script or package on the device if needed. (Defender for Business devices are typically onboarded via the local script, Intune, Group Policy, etc. If this step wasn’t done or failed, the device won’t show up in the portal.)
  2. Check provisioning status for mobile: If the issue is with mobile devices (Android/iOS) not onboarding, verify that Defender for Business provisioning is complete. As mentioned, the portal (under Devices) might show a message “preparing new spaces for your data” if the service setup is still ongoing[1]. Provisioning can take up to 24 hours for a new tenant. If you see that message, the best course is to wait until it disappears (i.e., until provisioning finishes) before troubleshooting further. Once provisioning is done, the portal will prompt to onboard devices, and then users should be able to add their mobile devices normally[1].
  3. Verify real-time protection setting: On any Windows device showing “not protected” in the portal, log onto that device and open Windows Security > Virus & threat protection. Check if Real-time protection is on (a PowerShell check is sketched after this list). If it’s off and cannot be turned on, check if another antivirus is installed. By design, onboarding a device running a third-party AV can cause Defender’s real-time protection to be automatically disabled to avoid conflict[1]. In Defender for Business, Microsoft expects Defender Antivirus to be active alongside the service for best protection (“better together” scenario)[1]. If a third-party AV is present, decide whether you will remove it or accept Defender running in passive mode (which reduces protection and triggers those alerts). Ideally, ensure Microsoft Defender Antivirus is enabled.
  4. Policy configuration review: If you suspect a policy conflict or misconfiguration, review the policies applied:
    • In the Microsoft 365 Defender portal, go to Endpoints > Settings > Rules & policies (or in Intune’s Endpoint security if that’s used). Ensure that you haven’t defined contradictory policies in multiple places. For example, if Intune had a policy disabling something but Defender for Business’s simplified setup has another setting, prefer one system. In a known scenario, an admin had Intune policies and then used the simplified Defender for Business policies concurrently, leading to conflicts[1]. The resolution was to delete or turn off the redundant policies in Intune and let Defender for Business policies take precedence (or vice versa) to eliminate conflicts[1].
    • Also verify tamper protection status – by default, tamper protection is on (preventing unauthorized changes to Defender settings). If someone turned it off for troubleshooting and forgot to re-enable, settings could be changed without notice.
  5. Intune onboarding profile (if applicable): If devices were onboarded via Intune (which should be the case if you connected Defender for Business with Intune), check the Endpoint security > Microsoft Defender for Endpoint section in Intune. Ensure there’s an onboarding profile and that those devices show as onboarded. If a device is stuck in a pending state, you may need to re-enroll or manually onboard.
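
For the real-time protection check in step 3, the local Defender Antivirus state can also be read with the built-in Defender PowerShell module instead of the Windows Security UI. A minimal sketch:

```powershell
# Read the local Defender Antivirus state on a device reported as "not protected".
Get-MpComputerStatus |
    Select-Object AMRunningMode, AntivirusEnabled, RealTimeProtectionEnabled, AntivirusSignatureLastUpdated
# An AMRunningMode of "Passive Mode" usually means a third-party antivirus is active and
# Microsoft Defender Antivirus has stepped back, matching the scenario described above.
```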

By verifying these settings, you either fix simple oversights (like turning real-time protection back on) or gather evidence of a deeper issue (for example, confirming a device is properly onboarded, yet still not visible, implying a reporting issue, or confirming there’s a policy conflict that needs resolution in the next step).
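
If you’d rather check these settings from the command line, the following rough sketch uses the built-in Defender PowerShell cmdlets and the documented onboarding-state registry value to show the same information in one pass. Run it in an elevated PowerShell session on the affected device; exact property names can vary slightly between Windows builds.

```powershell
# Quick local health check for a device that shows as "not protected" or is missing
# from the portal. Run in an elevated PowerShell session on the affected device.

# Defender Antivirus state: running mode (Normal vs. Passive), real-time protection,
# and signature freshness. "Passive Mode" usually means a third-party AV is registered.
Get-MpComputerStatus |
    Select-Object AMRunningMode, RealTimeProtectionEnabled, AntivirusEnabled,
                  AntivirusSignatureLastUpdated

# The Defender for Endpoint sensor (SENSE) service should be Running on an onboarded device.
Get-Service -Name Sense | Select-Object Name, Status, StartType

# Onboarding status written during onboarding: OnboardingState = 1 means onboarded;
# 0 or a missing value means the onboarding package never completed.
$statusKey = 'HKLM:\SOFTWARE\Microsoft\Windows Advanced Threat Protection\Status'
Get-ItemProperty -Path $statusKey -Name OnboardingState -ErrorAction SilentlyContinue |
    Select-Object OnboardingState
```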

Step 4: Examine Device Logs (Event Viewer)

If the issue is not yet resolved by the above steps, or if you need more insight into why something is wrong, dive into the device’s event logs for Microsoft Defender. Perform these checks on an affected device (or a sample of affected devices if multiple):

  1. Open Event Viewer (Local logs): On the Windows device, press Win + R, type eventvwr.msc and hit Enter. Navigate to Applications and Services Logs > Microsoft > Windows and scroll through the sub-folders.
  2. Check “SENSE” Operational log: Locate Microsoft > Windows > SENSE > Operational and click it to open the Microsoft Defender for Endpoint service log[3]. Look for recent Error or Warning events in the list:
    • Event ID 3: “Microsoft Defender for Endpoint service failed to start.” This means the sensor service didn’t fully start on boot[3]. Check if the Sense service is running (in Services.msc). If not, an OS issue or missing prerequisites might be at fault.
    • Event ID 5: “Failed to connect to the server at <URL>.” This indicates the endpoint could not reach the Defender cloud service URLs[3]. This can be a network or proxy issue – ensure the device has internet access and that security.microsoft.com and related endpoints are not blocked by a firewall or proxy.
    • Event ID 6: “Service isn’t onboarded and no onboarding parameters were found.” This tells us the device never got the onboarding info – effectively it’s not onboarded in the service[3]. Possibly the onboarding script never ran successfully. Solution: rerun onboarding and ensure it completes (the event will change to ID 11 on success).
    • Event ID 7: “Service failed to read onboarding parameters”[3] – similar to ID 6, means something went wrong reading the config. Redeploy the onboarding package.
    • Other SENSE events might point to registry permission issues or missing components – for example, Event ID 15 means the service could not start its command channel with the cloud URL, usually a network or proxy issue (handled the same way as Event ID 5 in Step 5)[5]. The event description will usually suggest the remedy, such as enabling a feature, applying a Windows update, or fixing connectivity[5].
    Each event has a description. Compare the event’s description against Microsoft’s documentation of Defender for Endpoint event IDs to get specific guidance[3]. Many event descriptions (like the examples above) already hint at the resolution (e.g., check connectivity, redeploy the onboarding script).
  3. Check “Windows Defender” Operational log: Next, open Microsoft > Windows > Windows Defender > Operational. Look for recent entries, especially around the time the issue occurred:
    • If the issue is related to threat detection or a failed update, you might see events in the 1000-2000 range (these correspond to malware detection events and update events).
    • For example, Event ID 1116 (MALWAREPROTECTION_STATE_MALWARE_DETECTED) means malware was detected, and ID 1117 means an action was taken on malware[4]. These confirm whether Defender actually caught something malicious, which might have triggered further issues.
    • You might also see events indicating that a user or admin changed settings. Event IDs in the 5001-5004 range relate to protection-state changes – for example, Event ID 5001 is logged when real-time protection is disabled.
    The Windows Defender log is more about security events than errors; if your problem is purely a configuration or onboarding issue, this log might not show anything unusual. But it’s useful to confirm if, say, Defender is working up to the point of detecting threats or if it’s completely silent (which could mean it’s not running at all on that device).
  4. Additional log locations: If troubleshooting a device connectivity or performance issue, also check the System log in Event Viewer for any relevant entries (e.g., Service Control Manager errors if the Defender service failed repeatedly). Also, the Security log might show Audit failures if, for example, Defender attempted an action.
  5. Analyze patterns: If multiple devices have issues, compare logs. Are they all failing to contact the service (Event ID 5)? That could point to a common network issue. Are they all showing not onboarded (ID 6/7)? Maybe the onboarding instruction wasn’t applied to that group of devices or a script was misconfigured.

By scrutinizing Event Viewer, you gather concrete evidence of what’s happening at the device level. For instance, you might confirm “Device A isn’t in the portal because it has been failing to reach the Defender service due to proxy errors – as Event ID 5 shows.” Or “Device B had an event indicating onboarding never completed (Event 6), explaining why it’s missing from portal – need to re-onboard.” This will directly inform the fix.
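
To collect these events from the command line (handy when comparing several devices), a sketch along the following lines works; it assumes the standard channel names for the SENSE and Defender Antivirus operational logs and filters to the event IDs mentioned above.

```powershell
# Pull the Defender for Endpoint (SENSE) and Defender Antivirus events discussed in this
# step. Run in an elevated PowerShell session on the affected device.

# SENSE operational log: onboarding and sensor connectivity events (IDs 3, 5, 6, 7, 11, 15).
Get-WinEvent -LogName 'Microsoft-Windows-SENSE/Operational' -MaxEvents 500 |
    Where-Object { $_.Id -in 3, 5, 6, 7, 11, 15 } |
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-Table -Wrap

# Defender Antivirus operational log: detections (1116/1117) and protection-state
# changes (5001/5004).
Get-WinEvent -LogName 'Microsoft-Windows-Windows Defender/Operational' -MaxEvents 500 |
    Where-Object { $_.Id -in 1116, 1117, 5001, 5004 } |
    Select-Object TimeCreated, Id, Message |
    Format-Table -Wrap
```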

Step 5: Resolve Configuration or Policy Issues

Armed with the information from the portal (Step 2), settings review (Step 3), and device logs (Step 4), you can now take targeted actions to fix the issue.

Depending on what you found, apply the relevant resolution below:

  • If Real-Time Protection Was Off: Re-enable it. In the Defender portal, ensure that your Next-generation protection policy has Real-time protection set to On. If a third-party antivirus is present and you want Defender active, consider uninstalling the third-party AV or check if it’s possible to run them side by side. Microsoft recommends using Defender AV alongside Defender for Business for optimal protection[1]. Once real-time protection is on, the portal should update and the “not protected” alert will clear.
  • If Devices Weren’t Onboarded Successfully: Re-initiate the onboarding:
    • For devices managed by Intune, you can trigger a re-enrollment or use the onboarding package again via a script/live response.
    • If using local scripts, run the onboarding script as Administrator on the PC. After running, check Event Viewer again for Event ID 11 (“Onboarding completed”)[3].
    • For any devices still not appearing, consider running the Microsoft Defender for Endpoint Client Analyzer on those machines – it’s a diagnostic tool that can identify issues (discussed in Advanced section).
  • If Event Logs Show Connectivity Errors (ID 5, 15): Ensure the device has internet access to Defender endpoints. Make sure no firewall is blocking:
    • URLs such as *.security.microsoft.com and *.windows.com that the Defender cloud service uses. Proxy settings may need to allow Defender service traffic through. See Microsoft’s documentation on Defender for Endpoint network connections for the full list of required URLs. (A quick connectivity-check sketch follows this step.)
    • After adjusting network settings, force the device to check in (you can reboot the device or restart the Sense service and watch Event Viewer to see if it connects successfully).
  • If Policy Conflicts are Detected: Decide on one policy source:
    • Option 1: Use Defender for Business’s simplified configuration exclusively. This means removing or disabling parallel Intune endpoint security policies that configure AV or Firewall or Device Security, to avoid overlap[1].
    • Option 2: Use Intune (Endpoint Manager) for all device security policies and avoid using the simplified settings in Defender for Business. In this case, go to the Defender portal settings and turn off the features you are managing elsewhere.
    • In practice, if you saw conflicts, a quick remedy is to delete duplicate policies. For example, if Intune had an Antivirus policy and Defender for Business also has one, pick one to keep. Microsoft’s guidance for a situation where an admin uses both was to delete existing Intune policies to resolve conflicts[1].
    • After aligning policies, give it some time for devices to update their policy and then check if the conflict alerts disappear.
  • If Integration with Intune Failed (Setup Error): Follow Microsoft’s recommended fix, which involves three steps[1]:
    1. In the Defender for Business portal, go to Settings > Endpoints > Advanced Features and ensure Microsoft Intune connection is toggled On[1].
    2. Still under Settings > Endpoints, find Configuration management > Enforcement scope. Make sure Windows devices are selected to be managed by Defender for Endpoint (Defender for Business)[1]. This allows Defender to actually enforce policies on Windows clients.
    3. In the Intune (Microsoft Endpoint Manager) portal, navigate to Endpoint security > Microsoft Defender for Endpoint and set “Allow Microsoft Defender for Endpoint to enforce Endpoint Security Configurations” to On[1]. This lets Intune hand off certain security configuration enforcement to Defender for Business. Together, these three steps establish the channels Defender for Business and Intune need to work in harmony; after completing them, retry the setup or onboarding that failed, and the earlier error about the configuration channel should not recur.
  • If Onboarding Still Fails or Device Shows Errors: If after trying to onboard, the device still logs errors like Event 7 or 15 indicating issues, consider these:
    • Run the onboarding with local admin rights (ensure no permission issues).
    • Update the device’s Windows to latest patches (sometimes older Windows builds have known issues resolved in updates).
    • As a last resort, you can try an alternate onboarding method (e.g., if script fails, try via Group Policy or vice versa).
    • Microsoft also suggests if Security Management (the feature that allows Defender for Business to manage devices without full Intune enrollment) is causing trouble, you can temporarily manually onboard the device to the full Defender for Endpoint service using a local script as a workaround[1]. Then offboard and try again once conditions are corrected.
  • If a Threat Was Detected (Malware Incident): Ensure it’s fully remediated:
    • In the portal, check the Action center (in the Defender portal under Actions & submissions) to see whether there are pending remediation actions (for example, quarantine or removal actions awaiting approval) or completed actions you may need to review or undo.
    • Run a full scan on the device through the portal or locally.
    • Once threats are removed, verify if any residual impact remains (e.g., sometimes malware can turn off services – ensure the Windows Security app shows all green).

Perform the relevant fixes and monitor the outcome. Many changes (policy changes, enabling features) may take effect within minutes, but some might take an hour or more to propagate to all devices. You can speed up policy application by instructing devices to sync with Intune (if managed) or simply rebooting them.
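
As a rough companion to the connectivity and real-time protection fixes above, the sketch below tests outbound HTTPS reachability for a couple of hostnames mentioned in this guide and then re-applies the basic local settings. The hostnames are illustrative only – the authoritative list is in Microsoft’s Defender for Endpoint network-connection documentation – and tamper protection or policy may override the local changes.

```powershell
# Connectivity and quick-fix helpers for the resolutions above. The hostnames are
# examples taken from this guide; consult Microsoft's documented URL list for the
# full set your tenant and region actually require.

$endpoints = 'security.microsoft.com', 'login.microsoftonline.com'
foreach ($hostname in $endpoints) {
    # TcpTestSucceeded = True means outbound 443 is open. Note: this is a raw TCP test
    # and does not go through a proxy, so a failure here may still work via your proxy.
    Test-NetConnection -ComputerName $hostname -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}

# Re-enable real-time protection locally (tamper protection or policy may override this).
Set-MpPreference -DisableRealtimeMonitoring $false

# Trigger a signature update, which also exercises connectivity to Microsoft's update service.
& "$env:ProgramFiles\Windows Defender\MpCmdRun.exe" -SignatureUpdate
```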

Step 6: Verify Issue Resolution

After applying fixes, confirm that the issue is resolved:

  • Check the portal again: Go back to the Microsoft 365 Defender portal’s Incidents & alerts and Devices pages.
    • If there was an alert (e.g., device not protected), it should now clear or show as Resolved. Many alerts auto-resolve once the condition is fixed (for instance, turning real-time protection on will clear that alert after the next device check-in).
    • If you removed conflicts or fixed onboarding, any incident or alert about those should disappear. The device should now appear in the Devices list if it was missing, and its status should be healthy (no warnings).
    • If a malware incident was being shown, ensure it’s marked Remediated or Mitigated. You might need to mark it as resolved manually if it doesn’t resolve automatically.
  • Confirm on the device: For device-specific issues, physically check the device:
    • Open Windows Security and verify no warning icons are present.
    • In Event Viewer, check that the new events are positive. For example, Event ID 11 in the SENSE log (“Onboarding completed”) confirms success[3], and Event ID 1117 in the Windows Defender log confirms that an action was taken on a detected threat.
    • If you restarted services or the system, ensure they stay running (the Sense service should be running and set to automatic).
  • Test functionality: Perform a quick test relevant to the issue:
    • If mobile devices couldn’t onboard, try onboarding one now that provisioning is fixed.
    • If real-time protection was off, intentionally place an EICAR anti-malware test file on the machine to see whether Defender catches it – it should, if real-time protection is truly working (see the sketch after this list).
    • If devices were not reporting, trigger activity on one of them – for example, run MpCmdRun.exe -SignatureUpdate, which also exercises cloud connectivity – and confirm the device’s status updates in the portal.
    • These tests confirm that not only is the specific symptom gone, but the underlying protection is functioning as expected.
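
For the detection test above (see the EICAR bullet), the following sketch writes the standard, harmless EICAR test string to a temporary file and then checks Defender’s detection history. With real-time protection working, the file should be blocked or quarantined within seconds; the 10-second pause is an arbitrary allowance, not a documented interval.

```powershell
# Drop the standard EICAR test string and confirm Defender reacts to it.
# Single quotes keep PowerShell from expanding the $ characters inside the string.
$eicar = 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*'
$path  = Join-Path $env:TEMP 'eicar-test.txt'

# With real-time protection on, the write or the resulting file should be blocked/quarantined.
Set-Content -Path $path -Value $eicar -ErrorAction SilentlyContinue

Start-Sleep -Seconds 10   # arbitrary grace period for the detection to register

# Recent detections recorded by Defender Antivirus; the EICAR test should appear here.
Get-MpThreatDetection |
    Sort-Object InitialDetectionTime -Descending |
    Select-Object -Property InitialDetectionTime, Resources, ActionSuccess -First 5
```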

If everything looks good, congratulations – the immediate issue is resolved. Make sure to document what the cause was and how it was fixed, for future reference.

Step 7: Escalate to Advanced Troubleshooting if Needed

If the problem persists despite the above steps, or if logs are pointing to something unclear, it may require advanced troubleshooting:

  • Multiple attempts failed? For example, if a device still won’t onboard after trying everything, or an alert keeps returning with no obvious cause, then it’s time to dig deeper.
  • Use the Microsoft Defender Client Analyzer: Microsoft provides a Client Analyzer tool for Defender for Endpoint that collects extensive logs and configuration details. In a Defender for Business context, you can run this tool via a Live Response session. Live Response is a feature that lets you run commands on a remote device from the Defender portal (available if the device is onboarded). You can upload the Client Analyzer scripts and execute them to gather a diagnostic package[6]. This package can highlight misconfigurations or environmental issues. For Windows, the script MDELiveAnalyzer.ps1 (and related modules like MDELiveAnalyzerAV.ps1 for AV-specific logs) will produce a zip file with results[6]. Review its findings for any errors (or provide it to Microsoft support). A sketch for running the analyzer locally follows this step.
  • Enable Troubleshooting Mode (if performance issue): If the issue is performance-related (e.g., you suspect Defender’s antivirus is causing an application to crash or high CPU), Microsoft Defender for Endpoint has a Troubleshooting mode that can temporarily relax certain protections for testing. This is more applicable to Defender for Endpoint P2, but if accessible, enabling troubleshooting mode on a device allows you to see if the problem still occurs without certain protections, thereby identifying if Defender was the culprit. Remember to turn it off afterwards.
  • Consult Microsoft Documentation: Sometimes a specific error or event ID might be documented in Microsoft’s knowledge base. For instance, Microsoft has a page listing Defender Antivirus event IDs and common error codes – check those if you have a particular code.
  • Community and Support Forums: It can be useful to see if others have hit the same issue. The Microsoft Tech Community forums or sites like Reddit (e.g., r/DefenderATP) might have threads. (For example, missing incidents/alerts were discussed on forums and might simply be a UI issue or permission issue in some cases.)
  • Open a Support Case: When all else fails, engage Microsoft Support. Defender for Business is a paid service; you can open a ticket through your Microsoft 365 admin portal. Provide them with:
    • A description of the issue and steps you’ve taken.
    • Logs (Event Viewer exports, the Client Analyzer output).
    • Tenant ID and device details, if requested. Microsoft’s support can analyze backend data and guide further. They may identify if it’s a known bug or something environment-specific.

Escalating ensures that more complex or rare issues (like a service bug, or a weird compatibility issue) are handled by those with deeper insight or patching ability.
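
If you prefer to run the Client Analyzer locally rather than through Live Response, the general pattern looks like the sketch below. The download short link and the extracted file layout are assumptions based on Microsoft’s published guidance at the time of writing – verify both against the current Microsoft Learn page before relying on them.

```powershell
# Download and run the Microsoft Defender for Endpoint Client Analyzer locally.
# Verify the download link and extracted layout against Microsoft Learn before use.
$zip = Join-Path $env:TEMP 'MDEClientAnalyzer.zip'
$dir = Join-Path $env:TEMP 'MDEClientAnalyzer'

Invoke-WebRequest -Uri 'https://aka.ms/MDEAnalyzer' -OutFile $zip
Expand-Archive -Path $zip -DestinationPath $dir -Force

# The analyzer runs as a .cmd wrapper and produces a results package summarizing
# onboarding, connectivity, and configuration findings. Adjust the path if the
# archive extracts into a subfolder.
& (Join-Path $dir 'MDEClientAnalyzer.cmd')
```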


Advanced Troubleshooting Techniques

For administrators comfortable with deeper analysis, here are a few advanced techniques and tools to troubleshoot Defender for Business issues:

Advanced Hunting: This is a query-based hunting tool available in Microsoft 365 Defender. If your tenant has it, you can run Kusto-style queries to search for events. For example, to find devices that haven’t onboarded correctly, you could query the DeviceInfo table (which includes an onboarding status column), or search DeviceEvents for specific error patterns across machines. It’s powerful for finding hidden patterns (for example, whether a certain update caused multiple devices to onboard late, or whether a specific error code appears on many machines).

Audit Logs: Especially useful if the issue might be due to an admin change. The audit log will show events like policy changes, onboarding package generated, settings toggled, who did it and when. It helps answer “did anything change right before this issue?” For instance, if an admin offboarded devices by mistake, the audit log would show that.

Integrations and Log Forwarding: Many enterprises use a SIEM for unified logging. While Defender for Business is a more streamlined product, its data can be integrated into solutions like Sentinel (with some licensing caveats)[7]. Even without Sentinel, you could use Windows Event Forwarding to send important Defender events to a central server. That way, you can spot if all devices are throwing error X in their logs. This is beyond immediate troubleshooting, but helps in ongoing monitoring and advanced analysis.

Deep Configuration Checks: Sometimes group policies or registry values can interfere. Ensure no Group Policy is disabling Windows Defender (check Turn off Windows Defender Antivirus policy). Verify that the device’s time and region settings are correct (an odd one, but significant time skew can cause cloud communication issues).
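
A quick way to perform these checks is sketched below: it reads the standard Group Policy registry locations that can force Defender Antivirus off and then queries the Windows time service. Treat it as a starting point; your environment may configure these settings elsewhere.

```powershell
# Check whether Group Policy (or a script) is forcing Defender Antivirus off.
# A value of 1 for DisableAntiSpyware or DisableRealtimeMonitoring indicates a policy override.
$policyKeys = @(
    'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender',
    'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender\Real-Time Protection'
)
foreach ($key in $policyKeys) {
    if (Test-Path $key) {
        Get-ItemProperty -Path $key |
            Select-Object -Property DisableAntiSpyware, DisableRealtimeMonitoring
    }
}

# Significant clock skew can break cloud communication; confirm time sync is healthy.
w32tm /query /status
```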

Use Troubleshooting Mode: Microsoft introduced a troubleshooting mode for Defender which, when enabled on a device, disables certain protections for a short window so you can, for example, install software that was being blocked or see if performance improves. After testing, it auto-resets. This is advanced and should be used carefully, but it’s another tool in the toolbox.

Using these advanced techniques can provide deeper insights or confirm whether the issue lies within Defender for Business or outside of it (for example, a network device blocking traffic). Always ensure that after advanced troubleshooting, you return the system to a fully secured state (re-enable anything you turned off, etc.).


Best Practices to Prevent Future Issues

Prevention and proper management can reduce the likelihood of Defender for Business issues:

  • Keep Defender Components Updated: Microsoft Defender AV updates its engine and security intelligence regularly (definition updates ship multiple times a day). Ensure your devices are getting these updates automatically (they usually do via Windows Update or the Microsoft Update service), and keep the OS patched so that the Defender for Endpoint agent (built into Windows 10/11) stays current. New updates often fix known bugs or improve stability. (A quick spot check is sketched after this list.)
  • Use a Single Source for Policy: Avoid mixing multiple security management platforms for the same settings. If you’re using Defender for Business’s built-in policies, try not to also set those via Intune or Group Policy. Conversely, if you require the advanced control of Intune, consider using Microsoft Defender for Endpoint Plan 1 or 2 with Intune instead of Defender for Business’s simplified model. Consistency prevents conflicts.
  • Monitor the Portal Regularly: Make it a routine to check the Defender portal’s dashboard or set up email notifications for high-severity alerts. Early detection of an issue (like devices being marked unhealthy or a series of failed updates) can let you address it before it becomes a larger problem.
  • Educate Users on Defender Apps: If your users install the Defender app on mobile, ensure they know how to keep it updated and what it should do. Sometimes user confusion (like ignoring the onboarding prompt or not granting the app permissions) can look like a “technical issue”. Provide a simple guide for them if needed.
  • Test Changes in a Pilot: If you plan to change configurations (e.g., enable a new attack surface reduction rule, or integrate with a new management tool), test with a small set of devices/users first. Make sure those pilot devices don’t report new issues before rolling out more broadly.
  • Use “Better Together” Features: Microsoft often touts “better together” benefits – for example, using Defender Antivirus with Defender for Business for coordinated protection[1]. Embrace these recommendations. Features like Automatic Attack Disruption will contain devices during a detected attack[2], but only if all parts of the stack are active. Understand what features are available in your SKU and use them; missing out on a feature could mean missing a warning sign that something’s wrong.
  • Maintain Proper Licensing: Defender for Business is targeted for up to 300 users. If your org grows or needs more advanced features, consider upgrading to Microsoft Defender for Endpoint plans. This ensures you’re not hitting any platform limits and you get features like advanced hunting, threat analytics, etc., which can actually make troubleshooting easier by providing more data.
  • Document and Share Knowledge: Keep an internal wiki or document for your IT team about past issues and fixes. For example, note down “In Aug 2025, devices had conflict because both Intune and Defender portal policies were applied – resolved by turning off Intune policy X.” This way, if something similar recurs or a new team member encounters it, the solution is readily available.
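
As a quick spot check for the first item in this list, the sketch below reports how old a device’s security intelligence is and refreshes it if it looks stale. The one-day threshold is an arbitrary example, not a Microsoft recommendation.

```powershell
# Report the age of Defender's security intelligence and refresh it if it looks stale.
# The 1-day threshold is an arbitrary example; choose what fits your environment.
$status = Get-MpComputerStatus
$age    = (Get-Date) - $status.AntivirusSignatureLastUpdated

"Signature version $($status.AntivirusSignatureVersion), updated $([math]::Round($age.TotalHours)) hours ago"

if ($age.TotalDays -gt 1) {
    Update-MpSignature   # pulls the latest security intelligence updates
}
```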

By following best practices, you reduce misconfigurations and are quicker to catch problems, making the overall experience with Microsoft Defender for Business smoother and more reliable.


Additional Resources and Support

For further information and help on Microsoft Defender for Business:

  • Official Microsoft Learn Documentation: Microsoft’s documentation is the most thorough reference. The page “Microsoft Defender for Business troubleshooting” on Microsoft Learn covers many of the issues discussed here (setup failures, device protection, mobile onboarding, policy conflicts) with step-by-step guidance[1]. The “View and manage incidents in Defender for Business” page explains how to use the portal to handle alerts and incidents[2]. These should be your first reference for any new or unclear issues.
  • Microsoft Tech Community & Forums: The Defender for Business community forum is a great place to see if others have similar questions. Microsoft MVPs and engineers often post walkthroughs and answer questions. For example, blogs like Jeffrey Appel’s have detailed guides on Defender for Endpoint/Business features and troubleshooting (common deployment mistakes, troubleshooting modes, etc.)[8].
  • Support Tickets: As mentioned, don’t hesitate to use your support contract. Through the Microsoft 365 admin center, you can start a service request. Provide detailed info and severity (e.g., if a security feature is non-functional, treat it with high importance).
  • Training and Workshops: Microsoft occasionally offers workshops or webinars on their security products. These can provide deeper insight into using the product effectively (e.g., a session on “Managing alerts and incidents” or “Endpoint protection best practices”). Keep an eye on the Microsoft Security community for such opportunities.
  • Up-to-date Security Blog: Microsoft’s Security blog and announcements (for example, on the TechCommunity) can have news of new features or known issues. A recent blog might announce a new logging improvement or a known issue being fixed in the next update – which could be directly relevant to troubleshooting.

In summary, Microsoft Defender for Business is a powerful solution, and with the step-by-step approach above, you can systematically troubleshoot issues that come up. Starting from the portal’s alerts, verifying configurations, checking device logs, and then applying fixes will resolve most common problems. And for more complex cases, Microsoft’s support and documentation ecosystem is there to assist. By understanding where to find information (both in the product and in documentation), you’ll be well-equipped to keep your business devices secure and healthy.

References

[1] Microsoft Defender for Business troubleshooting

[2] View and manage incidents in Microsoft Defender for Business

[3] Review events and errors using Event Viewer

[4] Windows 10 – How to find specifics of what Defender detected in real …

[5] Troubleshoot Microsoft Defender for Endpoint onboarding issues

[6] Collect support logs in Microsoft Defender for Endpoint using live …

[7] Microsoft 365 Defender for Business logs into Microsoft Sentinel

[8] Common mistakes during Microsoft Defender for Endpoint deployments