Blocking Emails by Region and Language in Exchange Online Anti-Spam Policies

Exchange Online’s anti-spam policies include international spam filters that let you block unwanted emails based on the sender’s region and the language of the message. By using Region Block Lists and Language Block Lists, administrators can automatically mark certain incoming emails as spam – for example, emails sent from countries your organization doesn’t do business with, or messages written in languages your users don’t speak. This helps prevent email not intended for the user (such as foreign spam or phishing attempts) from ever reaching their inbox.

Exchange Online Anti-Spam Overview

Exchange Online Protection (EOP) applies a default spam filter (also known as a Hosted Content Filter Policy) to all incoming mail[1]. Admins can customize this policy or create new ones to tighten spam filtering. Among many settings (blocking specific senders, domains, etc.), EOP provides International Spam settings to filter messages by country/region of origin and language[1][2]. These filters are optional and disabled by default – but when enabled, they instruct EOP to treat certain emails as spam purely due to their origin or language.

How it works: Exchange Online analyzes each incoming message’s metadata and content. It determines the source country (using the sender’s IP address geolocation) and attempts to detect the language the message is written in. If the message matches a blocked region or language that you’ve specified and you have turned on these filters, Exchange Online will increase the message’s spam score or outright flag it as spam[3][4]. Such messages will then be handled according to your spam policy (usually delivered to the Junk Email folder or quarantined, rather than reaching the inbox).



Why Use Region and Language Filters?

By leveraging these block lists, organizations can reduce spam and phishing that users are unlikely to find legitimate. For example, a company operating only in North America might block all emails originating from regions it never corresponds with and that are frequent sources of spam. Similarly, if your users only speak English and French, you might block emails written in Russian or Chinese to stop foreign-language scams. International spam filtering is a coarse filter – it’s based not on content quality but on origin characteristics – yet it can significantly cut down unwanted mail that standard content filters might miss. (Keep in mind that determined attackers can evade these filters by relaying mail through servers in “allowed” countries or by writing spam in your users’ languages, so treat them as one layer of defense, not a silver bullet.)

Default behavior: Out of the box, Exchange Online’s international filters are off (no regions or languages are blocked)[4]. If you enable them without specifying any entries, they won’t have effect. Once you enable a Region or Language block list and add entries to it, any incoming message matching those conditions gets stamped with a high spam confidence level (SCL). By default, EOP will send such spam to the recipient’s Junk Email folder (or quarantine it if it’s detected as high-confidence phishing)[3]. This means the user is protected from seeing it in their inbox, though they can still review junk/quarantine if needed.

Note: The Region and Language block lists simply mark messages as spam – they don’t outright reject the message. The messages will still arrive to your tenant and be deliverable to Junk Email or Quarantine based on your spam policy actions. Ensure your anti-spam policy’s actions for spam are configured (the default is to send to Junk) so that these flagged emails don’t reach the inbox.
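Because these lists only stamp the message as spam, it’s worth confirming what your policy actually does with spam before enabling them. A quick check against the default policy might look like this (a sketch – the optional change at the end is only an example, not a recommendation):

```powershell
# Inspect the actions the default anti-spam policy applies to detected spam.
# MoveToJmf = deliver to the Junk Email folder; Quarantine = hold in quarantine.
Get-HostedContentFilterPolicy -Identity "Default" |
    Format-List SpamAction, HighConfidenceSpamAction

# If you prefer quarantine over the Junk folder, you could optionally set:
# Set-HostedContentFilterPolicy -Identity "Default" -SpamAction Quarantine
```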

Configuring Region and Language Block Lists via PowerShell

You can configure these international spam settings easily using Exchange Online PowerShell. Below is a step-by-step guide to enable and customize the Region and Language block lists:

Below are the detailed instructions and PowerShell commands for each step:

  1. Connect to Exchange Online PowerShell – Open a PowerShell console and connect to your Exchange Online environment. With the ExchangeOnlineManagement module installed (Install-Module ExchangeOnlineManagement), run:
     Connect-ExchangeOnline -UserPrincipalName admin@yourdomain.com
     This will prompt for your admin credentials (including MFA, if enabled) and establish the session. (The older basic-authentication remoting methods via New-PSSession have been retired – use the ExchangeOnlineManagement module.)
  2. View current policy settings (optional) – It’s good practice to inspect the current spam filter policy before changing it. By default, the built-in policy is named “Default”. Run the following to inspect the international block list settings:
     Get-HostedContentFilterPolicy -Identity "Default" | Format-List Name, EnableRegionBlockList, RegionBlockList, EnableLanguageBlockList, LanguageBlockList
     This shows whether the region and language filters are enabled (False by default) and any listed codes (likely empty). For example, you might see:
     EnableRegionBlockList : False
     RegionBlockList : {}
     EnableLanguageBlockList : False
     LanguageBlockList : {}
     indicating the filters are currently off.
  3. Enable and configure the Region Block List – Decide which countries or regions you want to block, using their two-letter ISO 3166-1 alpha-2 country codes[3] – for instance, “CN” (China), “RU” (Russia), “IR” (Iran), “BR” (Brazil). Then run:
     Set-HostedContentFilterPolicy -Identity "Default" -EnableRegionBlockList $true -RegionBlockList "CN","RU","IR"
     In this example, we enable the region filter and block China, Russia, and Iran. From now on, any incoming email originating from servers in those countries will be marked as spam[3]. (Use the country codes that make sense for your organization – typically countries where you have no clients or colleagues. You can list one code or dozens, as needed.) Tip: You can find the full list of supported country codes in Microsoft’s documentation[3] or any ISO 3166-1 list. Common examples include US (United States), GB (United Kingdom), CN (China), DE (Germany), and IN (India). Only block countries you truly want to exclude – blocking major email source countries could filter out legitimate mail if, for example, a partner’s email routes through that region.
  4. Enable and configure the Language Block List – Choose the languages you want to block, using ISO 639-1 two-letter language codes[4] (often, but not always, the first two letters of the language’s English name). Common codes include “ZH” (Chinese), “RU” (Russian), “AR” (Arabic), “KO” (Korean), and “JA” (Japanese). Then run:
     Set-HostedContentFilterPolicy -Identity "Default" -EnableLanguageBlockList $true -LanguageBlockList "ZH","RU","AR"
     This turns on language-based filtering and blocks Chinese, Russian, and Arabic content. Now, if an inbound message is detected as written in one of those languages, it will be marked as spam[4]. Note: Double-check the codes – “EN” is English, “ES” is Spanish, “FR” is French, “DE” is German, “JA” is Japanese, and “ZH” is Chinese. Microsoft supports a wide range of language codes; the supported list is in the documentation[4]. Only block languages your users do not understand or correspond in – you wouldn’t want to block a language that legitimate communication might use. In our example, we assumed the organization doesn’t correspond in Chinese, Russian, or Arabic, so blocking those helps catch spam in those scripts.
  5. Verify the new settings – Run the Get-HostedContentFilterPolicy command from step 2 again to confirm that EnableRegionBlockList and EnableLanguageBlockList now show True, and that RegionBlockList and LanguageBlockList contain the codes you set. For example, it might now display:
     EnableRegionBlockList : True
     RegionBlockList : {CN, RU, IR}
     EnableLanguageBlockList : True
     LanguageBlockList : {ZH, RU, AR}
     This means your policy is active with those filters. Changes take effect quickly (usually within minutes) for new incoming emails. Monitoring: After enabling these, keep an eye on your Quarantine and users’ Junk folders to gauge impact. You could, for instance, ask a contact in a blocked country to email you and verify the message lands in Junk. In the Microsoft Defender portal, Threat Explorer can show messages flagged as spam. Each caught email’s X-Forefront-Antispam-Report header includes indicators such as the source country (CTRY) and detected language (LANG) alongside the spam verdict, which helps confirm the filter is working.
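Putting steps 1–5 together, a minimal end-to-end sketch looks like this (the admin UPN and the country/language codes are placeholders – substitute your own values):

```powershell
# Assumes the ExchangeOnlineManagement module is installed.
Connect-ExchangeOnline -UserPrincipalName admin@yourdomain.com

# Enable both international filters on the default policy in one call
Set-HostedContentFilterPolicy -Identity "Default" `
    -EnableRegionBlockList $true -RegionBlockList "CN","RU","IR" `
    -EnableLanguageBlockList $true -LanguageBlockList "ZH","RU","AR"

# Confirm the settings took effect
Get-HostedContentFilterPolicy -Identity "Default" |
    Format-List EnableRegionBlockList, RegionBlockList,
                EnableLanguageBlockList, LanguageBlockList

Disconnect-ExchangeOnline -Confirm:$false
```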

Management and Tweaks: You can update the lists at any time. For example, to add or remove entries without affecting others, use the Add/Remove syntax. Suppose you want to add Nigeria (NG) to the region block list without retyping everything:

Set-HostedContentFilterPolicy -Identity "Default" -RegionBlockList @{Add="NG"}

Similarly, to remove a language, say you decide to stop blocking Arabic:

Set-HostedContentFilterPolicy -Identity "Default" -LanguageBlockList @{Remove="AR"}

Always double-check with Get-HostedContentFilterPolicy after changes. Keep your block lists maintained as your business needs evolve (for instance, if you start dealing with a new country, remove it from the blocked list!).

Finally, remember that these settings apply tenant-wide by default (since the default policy covers all recipients). If needed, you can create custom anti-spam policies with their own Region/Language settings and scope them to specific users or groups – for example, not blocking Spanish for your Latin America team but blocking it for others. This can be done by creating a new policy via PowerShell (New-HostedContentFilterPolicy and a corresponding New-HostedContentFilterRule to assign it to certain recipients)[1]. In most cases, however, a single global setting is sufficient.
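As a sketch of that scoped-policy approach (the policy name and group address below are hypothetical), you might create a separate policy that blocks Spanish for everyone except the Latin America team:

```powershell
# Create a custom spam filter policy that blocks Spanish-language mail
New-HostedContentFilterPolicy -Name "Block Spanish Outside LatAm" `
    -EnableLanguageBlockList $true -LanguageBlockList "ES"

# Scope it to a recipient group (hypothetical group address); recipients
# not matched by a custom rule fall back to the default policy
New-HostedContentFilterRule -Name "Block Spanish Outside LatAm" `
    -HostedContentFilterPolicy "Block Spanish Outside LatAm" `
    -SentToMemberOf "AllStaffExceptLatAm@yourdomain.com"
```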

Conclusion

By using Exchange Online’s region and language block lists, you add a focused layer of defense against unsolicited emails. Region-based filtering blocks emails coming from countries that you know send you no legitimate mail (often catching spam campaigns from those areas)[3]. Language-based filtering blocks emails in languages your users don’t read – which are often spam or phishing lures in practice[4]. These features are easy to turn on with a few PowerShell commands and can dramatically reduce “noise” in user mailboxes.

Do note that legitimate communication can occasionally be caught (for example, an English-language email sent via a server in a blocked country, or a multilingual email with a few words triggering language detection). Therefore, use these filters judiciously and inform your helpdesk, so they know a possible reason if an expected message doesn’t arrive. Overall, when configured thoughtfully, region and language block lists are powerful tools to prevent emails not intended for your users, keeping your organization’s inboxes more focused and secure.

References

[1] Content filtering procedures | Microsoft Learn

[2] How to Block Emails from Foreign Countries in Office 365

[3] Set-HostedContentFilterPolicy (ExchangePowerShell) | Microsoft Learn

[4] Set-HostedContentFilterPolicy (ExchangePowerShell) | Microsoft Learn

CIA Brief 20250803


Protection against multi-modal attacks with Microsoft Defender –

https://techcommunity.microsoft.com/blog/microsoftdefenderforoffice365blog/protection-against-multi-modal-attacks-with-microsoft-defender/4438786

Copilot Search: Acronyms and bookmarks –

https://www.youtube.com/watch?v=nftEC73Cjxo

Modernize your identity defense with Microsoft Identity Threat Detection and Response –

https://www.microsoft.com/en-us/security/blog/2025/07/31/modernize-your-identity-defense-with-microsoft-identity-threat-detection-and-response/

Frozen in transit: Secret Blizzard’s AiTM campaign against diplomats –

https://www.microsoft.com/en-us/security/blog/2025/07/31/frozen-in-transit-secret-blizzards-aitm-campaign-against-diplomats/

AI agents and the future of identity: What’s on the minds of your peers? –

https://techcommunity.microsoft.com/blog/microsoft-entra-blog/ai-agents-and-the-future-of-identity-what%E2%80%99s-on-the-minds-of-your-peers/4436815

Enhance your LMS with the power of Microsoft 365 –

https://techcommunity.microsoft.com/blog/educationblog/preview-the-new-microsoft-365-lti%C2%AE-for-your-lms/4434606

Earnings Release FY25 Q4 –

https://www.microsoft.com/en-us/Investor/earnings/FY-2025-Q4/press-release-webcast

Use Copilot without recording a Teams meeting –

https://support.microsoft.com/en-us/office/use-copilot-without-recording-a-teams-meeting-a59cb88c-0f6b-4a20-a47a-3a1c9a818bd9

How Microsoft’s customers and partners accelerated AI Transformation in FY25 to innovate with purpose and shape their future success –

https://blogs.microsoft.com/blog/2025/07/28/how-microsofts-customers-and-partners-accelerated-ai-transformation-in-fy25-to-innovate-with-purpose-and-shape-their-future-success/

AI Security Essentials: What Companies Worry About and How Microsoft Helps –

https://techcommunity.microsoft.com/blog/microsoft-security-blog/ai-security-essentials-what-companies-worry-about-and-how-microsoft-helps/4436639

Sploitlight: Analyzing a Spotlight-based macOS TCC vulnerability –

https://www.microsoft.com/en-us/security/blog/2025/07/28/sploitlight-analyzing-a-spotlight-based-macos-tcc-vulnerability/

Security for AI Assessment | Microsoft 365 Copilot –

https://security-for-ai-assessment.microsoft.com/

After hours

Team Water – https://www.youtube.com/watch?v=KRhofr57Na8

Editorial

If you found this valuable, then I’d appreciate a ‘like’ or perhaps a donation at https://ko-fi.com/ciaops. This helps me know that people enjoy what I have created and provides resources to allow me to create more content. If you have any feedback or suggestions around this, I’m all ears. You can also find me via email director@ciaops.com and on X (Twitter) at https://www.twitter.com/directorcia.

If you want to be part of a dedicated Microsoft Cloud community with information and interactions daily, then consider becoming a CIAOPS Patron – www.ciaopspatron.com.

Watch out for the next CIA Brief next week

Updated PowerShell PnP connection script


One of the challenging things about manipulating SharePoint items with PowerShell is that you need to use PnP PowerShell. I have always found this tricky to get working and connected, and it seems things have changed again.

Now when you want to use PnP PowerShell, you can only do so using an Azure AD app! This is additionally challenging if you want to do it manually, so I modified my free connection script:

https://github.com/directorcia/Office365/blob/master/o365-connect-pnp.ps1

to allow the creation of the Azure AD app as part of the process, and also to let you specify an existing Azure AD app that was already created by the process as the way to connect. The documentation is here:

https://github.com/directorcia/Office365/wiki/SharePoint-Online-PnP-Connection-Script

but in essence, you run the script the first time and it will create an Azure AD app for you.


Subsequent times, you can simply reuse that app by specifying its ClientID GUID on the command line to make the connection. If you don’t, the script will create a new Azure AD app – so I suggest creating an Azure AD app the first time and using that same app going forward. Of course, consult the full online documentation for all the details.
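Once the app exists, a typical connection looks something like the sketch below (the tenant URL and ClientId GUID are placeholders for your own values):

```powershell
# Interactive connection using a previously created Azure AD (Entra ID) app.
# Replace the URL and ClientId with your own tenant and app values.
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com" `
    -ClientId "00000000-0000-0000-0000-000000000000" -Interactive

# Quick sanity check that the connection works
Get-PnPWeb | Select-Object Title, Url
```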

Hopefully, this makes it a little easier to use PnP PowerShell in your environment.

Lifecycle of a Microsoft 365 Business Premium Tenant After License Expiry

When a Microsoft 365 Business Premium subscription is not renewed, the tenant doesn’t shut down instantly. Instead, it transitions through several stages (Expired, Disabled, and Deleted) over a defined timeline. During each stage, different levels of access are available and the status of your data changes. Understanding this lifecycle is crucial for administrators to prevent data loss and plan accordingly[1]. This report details each stage step-by-step, who can access the tenant and its data at each point, what happens to user data (including retention and recovery options), and the timelines and best practices associated with each phase. We’ll focus on Microsoft 365 Business Premium (as a representative Microsoft 365 for Business plan), which follows the standard subscription lifecycle for most business plans.

Overview of Post-Expiration Stages

Once a Business Premium subscription reaches its end date without renewal, it goes through three stages before final shutdown[1]:

  • Expired (Grace Period) – Immediately after the subscription’s end date, a grace period begins (generally 30 days for most Business subscriptions)[1]. During this stage, services continue to operate normally for end users, and all data remains accessible as usual[1]. This is essentially a buffer period to allow for renewal or data backup before any service disruption occurs.
  • Disabled (Suspended Access) – If the subscription is not renewed by the end of the grace period, it moves into the disabled stage (typically lasting 90 days after the grace period)[1]. In this phase, user access is suspended – users can no longer log in to Microsoft 365 services or apps[2]. However, administrators retain access to content and the admin portal, allowing them to retrieve or back up data and to reactivate the subscription if desired[1]. The data is still preserved in Microsoft’s data centers during this stage.
  • Deleted (Tenant Deletion) – After the disabled period (~120 days after initial expiration, in total), the subscription enters the deleted state[2]. At this final stage, all customer data is permanently erased, and the Microsoft 365 tenant (including its Microsoft Entra ID/Azure AD instance) is removed (if it’s not being used for other services)[1]. At this point, no recovery is possible – the data and services are irretrievable.

Each stage comes with changes in who can access the tenant’s services and what happens to the stored data. The table below summarizes the key aspects of each stage:

| Aspect | Expired Stage (Grace Period) | Disabled Stage (Suspension) | Deleted Stage (Termination) |
| --- | --- | --- | --- |
| Duration | ~30 days after end of term (grace period)[1] | ~90 days after the grace period ends[1] | After ~120 days total (post-Disabled)[2] – data is purged |
| User Access | Full access to all services and data. Users continue to use email, OneDrive, Teams, Office apps, etc., normally.[1] | No user access to Microsoft 365 services. Users are blocked from email, OneDrive, Teams, etc. Office applications enter a read-only “unlicensed” mode[1] (no editing or new content). | No access – user accounts and licenses are terminated. (Users effectively no longer exist in the tenant once deleted.) |
| Admin Access | Admin has full access. Administrators can use the Microsoft 365 admin center and all admin functions normally. They receive expiration warnings and can still renew/reactivate the subscription during this period[1]. | Admin access only. Administrators can log in to the admin center and view or export data (e.g., using eDiscovery or content search). However, admins cannot assign new licenses to users while in this state[1]; admin use of services is mostly limited to data retrieval. | Limited/no admin access to data. Global admins can still sign in to the admin portal to manage billing or purchase other subscriptions[1], but all customer data is permanently inaccessible and the subscription cannot be reactivated[1]. If the Azure AD (Entra ID) tenant isn’t used by other services, it is removed along with all user accounts[1]. |
| Data Status | All data retained. Customer data (emails, files, chat history, etc.) remains intact and fully accessible to users and admins[1]. No data deletion occurs in this stage. | Data retained (read-only). All data is still stored in the tenant (Exchange mailboxes, SharePoint/OneDrive files, Teams messages), but only admins can access it directly[1] (e.g., exporting mailbox contents or files). | Data deleted. All user and organization data is permanently deleted from Microsoft’s servers[2] – Exchange mailboxes, SharePoint sites, OneDrive files, Teams chat history, Planner data, etc. It cannot be recovered once this stage is reached. |
| Email & Communications | Email fully functional. Users can send/receive email as normal; mailboxes are active. Teams chats and calls continue normally. | Email disabled. Mailboxes remain in place but are inaccessible to users, and mail delivery stops (messages may bounce)[2]. Teams is also suspended – users cannot log in and messages aren’t delivered – though mailbox and chat data is preserved on the back end. | No email/Teams. Mailboxes are gone; inbound email will not find the recipient (the tenant and users don’t exist). Teams data and channels are removed along with the SharePoint/OneDrive data that stored them. |
| Reactivation Options | Can renew/revive. The subscription can be reactivated instantly by administrators at any time in this stage with no loss of data or functionality[1]. | Can still renew. Administrators can reactivate the subscription during the 90-day disabled window[1], restoring user access with no data loss. If not renewing, use this time to back up needed data. | Cannot reactivate. Once in Deleted status it’s too late to renew – the subscription is terminated, recovery is not possible, and a new subscription would be a fresh start without the old data[1]. |

Note: The timeline above (30 days grace + 90 days disabled) applies to most Microsoft 365 Business subscriptions in most regions[1]. If your subscription was obtained via certain volume licensing programs or a Cloud Solution Provider (CSP), the durations might vary slightly. For example, enterprise volume licensing agreements often have a 90-day grace period and a shorter disabled period, or vice versa[1]. However, for Microsoft 365 Business Premium (direct or CSP purchase), the 30-day grace and 90-day disabled schedule is the standard sequence.
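To translate those durations into concrete dates for planning, a quick back-of-the-envelope calculation (a sketch assuming the standard 30-day grace plus 90-day disabled schedule, with an example end date) is:

```powershell
# Assuming the standard 30-day grace + 90-day disabled schedule.
# Substitute your own subscription end date.
$expiry        = Get-Date "2025-08-01"
$disabledStart = $expiry.AddDays(30)    # grace period ends, Disabled begins
$deleteDate    = $expiry.AddDays(120)   # Disabled ends, data is purged

"Grace period:   {0:d} to {1:d}" -f $expiry, $disabledStart
"Disabled stage: {0:d} to {1:d}" -f $disabledStart, $deleteDate
"Deletion after: {0:d}" -f $deleteDate
```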


Stage 1: Expired (Grace Period – Full Access Maintained)

When it starts: Immediately after the subscription’s end date, if you did not renew or if auto-renewal was turned off, the subscription enters the Expired status[1]. All previously assigned licenses remain in place during this stage, and the service continues uninterrupted for a limited time.

Duration: Approximately 30 days (for most Business Premium subscriptions)[1] after the license term ends. This 30-day window is often called a grace period.

Access for Users: During the expired stage, end users experience no change in service. All users can still log in and use Microsoft 365 apps and services normally, including Outlook email, Teams, SharePoint, OneDrive, Office applications, etc.[1]. Essentially, full functionality continues as if the subscription were active. Users are typically unaware that the subscription has technically expired – there are no immediate pop-ups or lockouts at this stage beyond possible subtle “license expired” notices in account settings.

Example: If your Business Premium expired yesterday, your employees can still send and receive emails, access their OneDrive files, and use Office apps today without interruption. The experience is unchanged in this grace period.

Access for Admins: Administrators retain full admin capabilities during the expired phase. You can still access the Microsoft 365 Admin Center and all admin portals (Exchange Admin, SharePoint Admin, etc.) normally[1]. In fact, Microsoft actively alerts admins about the situation: admins receive notifications in the admin center and via email as the expiration date approaches and passes[1]. These warnings typically inform you that the subscription has expired and remind you to act (renew or back up data) before further consequences.

  • Initial Notifications: Prior to expiration, Microsoft sends a series of warnings to the global and billing administrators of the tenant, often starting a few weeks before the due date[1]. For example, admins may get emails at intervals like 30 days, 14 days, 7 days before the subscription ends (exact timing can vary) reminding them to renew. In the Admin Center dashboard, alerts will also indicate the upcoming subscription end. This heads-up is meant to prevent accidental lapses.
  • Admin Options during Grace: During the 30-day expired stage, admins have two primary options:
    1. Renew / Reactivate the Subscription: At any point in the grace period, the admin can renew the subscription (or turn recurring billing back on) to return the status to “Active”[1]. This is a seamless process – once payment is made or the subscription is reactivated, service continues normally without any data loss or further action needed. (If auto-renew was enabled, this happens automatically and the subscription never enters expired status at all.)
    2. Let it Lapse / Prepare to Exit: If the organization intends not to continue with Microsoft 365, the admin can choose to let the subscription run its course. No immediate action is required to “cancel” at this point because turning off renewal ensures it will expire. During the grace period, it’s wise to begin data backup efforts if you plan to leave the service[3]. Microsoft specifically recommends backing up your data during the Expired stage if you are planning not to renew[3], since this is a window where everything is still fully accessible. (We will discuss data backup and export options in a later section.)

Data Status: All your data remains intact and fully accessible during Expired status. There is no deletion or removal of any data at this stage. This means:

  • Exchange Online mailboxes: All emails, calendars, contacts are retained and functional. Users can continue to send/receive mail normally.
  • SharePoint Online sites and OneDrive: All files and SharePoint site content remain unchanged. Users can add, edit, and delete files as usual; synchronization with local devices continues.
  • Teams: All chat histories, team channels, and files shared in Teams remain available. Teams meetings can be scheduled and attended normally.
  • Other services: Planner tasks, OneNote notebooks, Azure AD user accounts, etc., are all unaffected and continue to operate.

In summary, the Expired stage is a safety net – a 30-day full functionality extension past the subscription end date. It exists to ensure that a lapse in payment or decision doesn’t immediately grind business productivity to a halt, and to give administrators time to evaluate next steps (renew or plan for shutdown)[1]. Users have no loss of service in this period, and only admins are aware of the ticking clock via the notifications.

Administrator Tip: Use the grace period wisely. If renewal is intended, it’s best to reactivate before the 30 days are up to avoid any service disruption. If you do not intend to renew, start communicating with users and begin backing up critical data now, while everything is accessible. This might include exporting mailbox PST files, downloading files from OneDrive/SharePoint, and capturing any Teams data you need to retain.


Stage 2: Disabled (Suspended Access – Admin Only)

When it starts: If the subscription is still not renewed once the ~30-day grace period ends, the tenant status automatically changes from Expired to Disabled (sometimes also referred to as the suspended or inactive stage). For most Business Premium subscriptions, this transition happens on Day 31 after expiration (i.e., one month after the subscription’s official end date).

Duration: Typically 90 days in the Disabled state[1] for standard Microsoft 365 business subscriptions. This 90-day disabled period starts immediately after the grace period. In many scenarios, this means from day 31 through day 120 after your subscription term ended, the tenant is in Disabled status. (Some enterprise agreements might use slightly different timings, but 90 days is the norm for Business Premium.) This 90-day window is critical: it’s the final period during which data is retained and the subscription can be reactivated before permanent deletion.

Access for Users: During the disabled stage, all end-user access is cut off:

  • User Login and Apps: Users can no longer log in to Microsoft 365 services (their licenses are now considered “inactive”). If a user tries to sign in to Outlook, Teams, or any Office 365 app, it will fail or indicate that the subscription is inactive. Office desktop apps (like Word and Excel installed via Microsoft 365) will detect that the license is expired and eventually go into a reduced-functionality mode[1] – essentially read-only mode. They will start showing “Unlicensed Product” notifications, meaning editing and creating new documents is disabled[1].
  • Email: Email functionality stops. Users cannot send or receive emails once the tenant is disabled[2]. Exchange Online will stop delivering messages to user mailboxes. External people who send email to your users may receive bounce-back errors (since the system treats the mailboxes as inactive). The emails that already exist in mailboxes remain stored, but users can’t access them.
  • OneDrive and SharePoint: Users lose access to their OneDrive and SharePoint content. If they try to access SharePoint sites or OneDrive files via web or sync clients, they will be denied. The data is still present on Microsoft’s servers, but not accessible to the user. Essentially, the SharePoint sites and OneDrive accounts are frozen in place during disabled status.
  • Teams: Teams becomes non-functional for users. They cannot log into Teams clients, join meetings, or post messages. Messages sent to them will not be delivered. The Teams data (chat history, channel conversations, etc.) remains stored (since it’s part of Exchange mailboxes and SharePoint) but is inactive.
  • Other Services: Any other Microsoft 365 services (Microsoft 365 apps, Power BI if included, Planner, etc.) will be inaccessible to users. For example, OneNote notebooks stored in SharePoint/OneDrive remain but can’t be edited by users. If a user had mobile apps logged in, they would stop syncing or show an error.

In short, regular users are effectively locked out of all Microsoft 365 resources during the Disabled stage. The tenant’s services are in a suspended state, awaiting either reactivation or deletion. For end users, the experience is that everything has stopped working – this is the stage where they will notice the lapse (if they hadn’t during the grace period).

Access for Admins: Administrators still retain access to the system in this stage, though in a more limited capacity:

  • Admin Center: Global and Billing Administrators can continue to sign in to the Microsoft 365 Admin Center and view the subscription status[1]. From here, an admin can initiate renewal/reactivation of the subscription if desired (more on that below). Admins can also navigate to the various admin portals (Exchange Admin, SharePoint Admin, etc.). However, their ability to make changes is limited because the subscription is in a suspended state.
  • Data Access for Admin: Critically, customer data is still available to admins even though users can’t access it[1]. For example:
    • An Exchange Online admin (or a global admin with eDiscovery roles) could use Content Search (eDiscovery) to export mailbox data for a user account. This allows retrieval of emails, contacts, etc., even though the user can’t log in.
  • A SharePoint admin can access SharePoint site collections (e.g., via PowerShell or admin interfaces) and could retrieve documents or site data if needed. Additionally, OneDrive files might be accessible to a SharePoint admin, because OneDrive is essentially a SharePoint site under the hood.
    • If third-party backup solutions were in place, they might still be able to connect via admin credentials to pull data during this stage.
  • License Management: One notable restriction is that, in the disabled stage, admins cannot assign or add new licenses to users[1]. The subscription is essentially frozen: you can’t onboard new users under it or extend more licenses. The admin’s role here is mostly to either recover data or restore the subscription, not to operate business-as-usual changes.

Admins do not have normal end-user functionality (for example, if the global admin also had a mailbox on this tenant, they cannot use email normally for that mailbox, since it's now unlicensed). But through backend admin tools, they can access content and, importantly, they can still purchase/renew services.

Data Status: The good news in the disabled stage is that all your data is still being retained by Microsoft; nothing has been deleted yet. The data is essentially in stasis:

  • Exchange data: All user mailboxes and emails are preserved. Although email flow is halted, the emails and calendar items that were in the mailboxes remain stored on the server. If the subscription is reactivated, users will regain access to their full mailboxes as they were.
  • SharePoint/OneDrive data: All site contents and OneDrive files are still present in the SharePoint Online backend. Users are just blocked from viewing/editing them. No files are removed during this stage; storage remains allocated as-is.
  • Teams data: Since Teams conversations are stored in user mailboxes (for chat) and SharePoint (for channel files), that data is also intact. Meeting recordings in OneDrive/SharePoint remain as files. Teams channel chats (which are journaled into group mailboxes) remain as well.
  • Azure AD (Entra ID): Your Azure AD tenant (which contains user accounts, groups, etc.) is still intact during the disabled stage. No accounts are deleted automatically at this point; all user accounts still exist (though they lack active licenses). This is why an admin can still recover data – all the identities and their associated content are present.
  • Retention Policies / Legal Hold: If you had any retention or legal hold policies applied to data (for compliance), the data is still there under hold. However, it’s worth noting that these policies do not override the ultimate deletion that will occur if the subscription isn’t renewed by the end of disabled stage. In other words, a legal hold will keep data from user-driven deletion during an active subscription, but once the tenant is shutting down, Microsoft will eventually remove that data after the retention period regardless of hold, because the entire tenant is being decommissioned. We’ll discuss compliance considerations later, but during disabled stage the data on hold is still safe (since nothing is deleted yet).

In summary, Disabled stage = data frozen, users locked out, admins in read-only mode. The business impact here is significant because users can’t work, so this stage is effectively a service suspension. It’s meant to be a final warning period; Microsoft keeps your data around for a bit longer (90 days) in case you realize the mistake or change your mind, but normal operations are halted to incentivize a resolution.

Admin Options during Disabled stage:

  • Reactivation: You can still renew or reactivate your Business Premium subscription during the disabled stage[1]. In fact, this is the last chance to do so. Reactivating during this period will immediately restore user access. As soon as you pay for a new subscription term (or otherwise renew), the tenant returns to Active status and all users can use their services again, picking up right where they left off (emails start flowing, files accessible, etc.). No data was lost, so it’s a smooth restoration. From Microsoft’s perspective, this is simply a late payment. In the admin center, a global or billing admin can select the expired subscription and proceed to “Reactivate” or renew[2]; once processed, the status goes back to Active.
  • Backup/Data Export: If you do not plan to renew, this 90-day window is your final opportunity to retrieve any remaining data. Admins should use this time to export emails, documents, and other content that the organization needs to retain. For example, export user mailboxes to PST files via eDiscovery, download SharePoint libraries, and save important OneDrive files. After the disabled stage ends, these will be gone forever, so treat this as a countdown to permanent data loss. Microsoft’s guidance is to back up your data while it’s in the Disabled state if you’re canceling the subscription[1].
  • No New Data Creation: Obviously, since the services are disabled, you generally won’t be creating new data in this stage via normal use. But be cautious: do not assume Microsoft is backing up your data for you during this time. They are simply retaining it. It’s still the admin’s responsibility to extract and safeguard any information needed.

One more nuance: Microsoft’s policy notes that any customer data left in a canceled subscription might be deleted after 90 days and will be deleted no later than 180 days after cancellation[1]. The standard is 90 days, but this leaves room for some systems to hold data slightly longer. Do not count on any margin beyond 90 days; treat 90 days as the deadline, with 180 days as an absolute upper bound in some cases. In practice, for most Business Premium scenarios, on the 91st day of disabled status the tenant moves to deleted status (next stage).
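As a rough planning aid, the milestone dates above can be computed from the expiration date. The sketch below is illustrative only: it assumes the typical 30-day grace plus 90-day disabled schedule described in this report, and treats the 180-day figure as counted from the start of the disabled period – actual boundaries can vary by agreement.

```python
from datetime import date, timedelta

# Illustrative sketch of the typical Business Premium lapse timeline:
# 30-day grace (Expired), then up to 90 days Disabled, after which data
# may be deleted; the stated absolute ceiling is 180 days.
GRACE_DAYS = 30
DISABLED_DAYS = 90
HARD_CEILING_DAYS = 180  # "no later than 180 days" per Microsoft's policy

def lapse_timeline(expiry: date) -> dict:
    """Return the key milestone dates following a subscription expiry."""
    disabled_start = expiry + timedelta(days=GRACE_DAYS)
    earliest_deletion = disabled_start + timedelta(days=DISABLED_DAYS)
    latest_deletion = disabled_start + timedelta(days=HARD_CEILING_DAYS)
    return {
        "disabled_start": disabled_start,        # ~day 31 after expiry
        "earliest_deletion": earliest_deletion,  # ~day 120 after expiry
        "latest_deletion": latest_deletion,      # absolute upper bound
    }
```

For example, a subscription expiring on 1 January 2024 would enter the disabled stage on 31 January and could see its data deleted from 30 April onward.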

Impact on shared resources: It’s important to note how shared/company-wide data is affected in the disabled stage:

  • SharePoint Online sites (such as team sites and communication sites) become inaccessible to members, though an admin could still access or export their data. If someone from outside (a guest or an external sharing link) tries to access content, it will fail because the site is effectively locked along with the tenant.
  • Shared mailboxes (if any) and public folders in Exchange are also inaccessible to users. An admin with eDiscovery could export them though.
  • Teams shared channels or group chats are inaccessible because no user accounts can sign in.
  • OneDrive for Business accounts tied to each user are inaccessible to those users. If an admin needs to, they could use a SharePoint admin take-over of a OneDrive site to retrieve files.
  • Applications and Integrations: Any third-party applications integrated via API might stop working if they rely on user credentials or active licenses. If they use app permissions and Graph API, an admin might still retrieve data via API (with app credentials) in disabled stage, since admin consented apps could read data that’s still stored.
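The app-only access path mentioned above relies on the OAuth2 client-credentials flow against the Microsoft identity platform's v2.0 token endpoint, which does not depend on any licensed user signing in. The sketch below only builds the token request (it does not send it); the tenant ID, client ID, and secret are placeholders, and real use requires a registered app with admin-consented application permissions.

```python
from urllib.parse import urlencode

# Sketch only: builds (but does not send) the app-only token request that a
# backup/integration app with application permissions would use. During the
# Disabled stage this flow does not require a licensed user to sign in.
TOKEN_ENDPOINT = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

def build_app_only_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Return (url, form_body) for a client-credentials token request to Microsoft Graph."""
    url = TOKEN_ENDPOINT.format(tenant=tenant_id)
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        # ".default" requests all application permissions consented to the app
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body
```

Whether such an app can still read tenant data during the disabled stage depends on the service's suspension behavior, so treat this as a possibility to test, not a guarantee.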

User Communication: If you haven’t already, this is the time to let your users know what’s happening. In a planned non-renewal, you likely would have informed users that services would be cut off at a certain date. If the disabled stage comes as a surprise (e.g., an unexpected lapse), you will likely be getting many helpdesk tickets now – “I can’t access email or Teams.” The admin should be prepared to respond (either “we’re working on renewing” or “the service has been suspended and we’re transitioning off of it”).


Stage 3: Deleted (Permanent Deletion of Tenant Data)

When it starts: If no action is taken to renew/reactivate during the 90-day disabled period, the subscription will progress to the Deleted stage. In typical cases, this occurs at or shortly after day 91 of the Disabled stage – which is roughly 120 days (4 months) after the original subscription expiration date. At this point, Microsoft will fully deactivate and remove the tenant.

Duration: The Deleted stage is a terminal state – it’s not a timed phase but rather the end point. The subscription is considered fully terminated and remains in a deleted/non-recoverable state thereafter. (Microsoft does not keep the environment data beyond this in a retrievable way.)

Access for Users: No user access whatsoever. In fact, user accounts themselves are typically purged as part of the tenant deletion (unless your Azure AD is kept alive by another subscription). From the end-user perspective, the Microsoft 365 organization ceases to exist:

  • If users try to log in via the Office 365 portal or any apps, their login will fail (the account is gone or the domain is no longer recognized).
  • Emails sent to user addresses will bounce with non-delivery reports indicating the recipient was not found, since Exchange Online has removed those mailboxes.
  • OneDrive URLs or SharePoint site links will no longer function at all (they’ll likely show an error that the site can’t be found).
  • Essentially, by the time of deletion, end users should already have been off the service, as there is nothing to access anymore.

Access for Admins: Administrators have no access to user data once the tenant is deleted. However, there is a small caveat: the admin might still be able to log into the admin portal if the Azure Active Directory is still partially available (for example, if you had other Microsoft services or Azure subscriptions on the same Azure AD, the tenant’s Azure AD might not be deleted). But in terms of the Microsoft 365 subscription:

  • The subscription will show as deleted and cannot be reactivated[1].
  • Admin Center functionality is minimal: you might only be able to use the admin center to manage other subscriptions or purchase a new one. If your entire tenant was solely for Microsoft 365 and it’s deleted, even the admin portal login might not work anymore once Entra ID (Azure AD) is removed.
  • Any attempt to recover data at this stage is fruitless – Microsoft has already begun permanently removing it from their systems.

Data Status: All customer data is permanently deleted once the subscription hits the Deleted stage[2]. This is irreversible data destruction intended to free up storage and maintain compliance with data handling policies (since you’re no longer a customer, they won’t keep your data indefinitely).

Here’s what that means in concrete terms:

  • Exchange Online: Mailboxes and their contents are purged from the Exchange databases. The mailbox objects are removed from Exchange Online and the associated data is wiped. Microsoft may retain backups for a short additional buffer (for their own disaster recovery), but not in any way accessible to you. Practically, your emails are gone.
  • SharePoint/OneDrive: Site collections for SharePoint and individual OneDrive sites are deleted. The files and list data within them are destroyed. Microsoft might retain fragments or backups for a short time internally, but again, not accessible and eventually wiped as per their data retention disposal policies.
  • Teams: Teams data (chat messages, channel content) which lived in Exchange and SharePoint is gone because its underlying storage is gone. Meeting recordings that were in OneDrive/SharePoint are gone. The Teams service itself forgets your tenant.
  • Azure Active Directory (Microsoft Entra ID): The Azure AD tenant is deleted (provided it’s not used by any other active subscriptions or services)[1]. This means all user accounts, groups, and other Azure AD objects are removed. If your company had only this one Microsoft 365 subscription in that Azure AD, the directory is now gone. (If you had, say, an Azure subscription or another Microsoft 365 subscription still active on the same directory, the Azure AD remains for that, but the Microsoft 365 service data is still wiped.)
  • Backups & Redundancy: Microsoft 365 has geo-redundant backups and such during active subscription, but once the retention period is over, those too are disposed of. By policy, Microsoft will not retain your content beyond the specified period once you’re no longer paying for the service. There is no rollback from the Deleted stage.

In essence, the Deleted stage marks the end-of-life for your tenancy’s data. Think of it as Microsoft performing a complete data deletion and tenant teardown in their cloud.

Recovery Options: At this stage, recovery is not possible through conventional means. Even if you immediately buy a new subscription with the same name or details, it will be a fresh tenant with none of the old data[1]. (Microsoft explicitly notes that if a subscription is deleted, adding a new subscription of the same type does not restore the old data[1].) The only “recovery” would have been to restore from your own backups that you hopefully took during earlier stages. Microsoft Support cannot restore a fully deleted tenant’s content once it’s beyond the retention window.

There is a nuance from Partner Center documentation: if a partner renews the same SKU within 90 days after cancellation, data can sometimes be restored automatically[4]. But that is essentially the same as reactivating within the disabled stage. After the ~90 days of disabled status, those options expire. Post-deletion, even if you contact Microsoft, they can only confirm that the data is gone.

Impact on shared resources: By now everything is gone:

  • SharePoint site URLs might eventually become available for reuse by other tenants (after a certain period).
  • Exchange email addresses might become reusable by others after the domain is removed or reused.
  • The custom domain you had on Microsoft 365 (e.g., yourcompany.com for email) is freed up in the Microsoft cloud. (You could take that domain and apply it to a different tenant if you wanted, once the original tenant is deleted or once you deliberately remove it prior to deletion.)
  • Microsoft Entra ID domain (the onmicrosoft.com domain) is permanently gone.

Final state: The tenant is now closed. Microsoft will have fulfilled any contractual data retention requirements and ensured customer data is wiped. If you attempt to sign in to the account after this, it will behave as if the account does not exist.

Important: If there is any chance you need something from the tenant (a file, an email, anything) after this point, it’s too late. The only recourse would be if you had an offline backup or if perhaps some email was also stored in a user’s Outlook cache or a file was on a user’s local PC. But server-side, Microsoft has cleared it.


Data Retention and Recovery Considerations

Throughout the above stages, a key theme is data retention: Microsoft holds onto your data for a period (grace + disabled) before deletion. Let’s address specific questions about data retention and recovery:

What are the options for data recovery after the grace period?
After the initial 30-day grace (Expired stage) passes, the tenant goes into disabled. During the Disabled stage (days 31–120), you still have two recovery options:

  1. Reactivate the Subscription: This is the preferred way if you want everything back to normal. As a global or billing admin, you can simply pay for the subscription again (renew for another term) and Microsoft will restore the subscription to active status immediately[1]. All user accounts and data are still there (since they weren’t deleted yet), so this effectively “unpauses” the service.
  2. Manually Export/Backup Data: If you don’t want to continue the service, the only way to “recover” data for yourself is to manually extract it while the tenant is disabled. That means using admin tools to backup Exchange mailboxes, SharePoint data, etc., to your own storage. Microsoft provides eDiscovery and content search tools that can export data out of Exchange Online and SharePoint Online. Third-party backup solutions (if they were configured earlier) could also be utilized to pull data. But after the grace period, users themselves can’t get their data – it’s on the admin to retrieve it.
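The two recovery windows above can be condensed into a simple decision helper. This is an illustrative sketch assuming the typical 30-day grace plus 90-day disabled schedule; the actual stage boundaries can vary by agreement.

```python
# Illustrative sketch: which recovery paths remain at a given point after
# expiry, assuming the typical 30-day grace + 90-day disabled schedule.
def recovery_options(days_since_expiry: int) -> list[str]:
    if days_since_expiry <= 30:
        # Expired (grace): everything still works, so either path is easy
        return ["renew (no disruption)", "export data (users and admins)"]
    if days_since_expiry <= 120:
        # Disabled: admins only; last chance to reactivate or pull data out
        return ["reactivate subscription", "admin export (eDiscovery/backup)"]
    # Deleted: nothing Microsoft can restore
    return []
```

Once the function returns an empty list, the only remaining source of data is whatever backups were taken earlier.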

Once the disabled period ends and the data is in Deleted status, no recovery method is available via Microsoft. The phrase “subscription can’t be reactivated” at the deleted stage is crucial[1]. Microsoft will have already deleted the data at that point[2].

Is there a final stage before permanent deletion?
Effectively, the Disabled stage is the final stage before deletion. There is no additional “warning stage” beyond disabled; deleted is the point of no return. One could argue that the very end of the disabled period is the last moment. Microsoft does not always send a specific notification right before deletion (you are already warned plenty that the subscription is disabled and needs action). As an admin, you should treat the end of the disabled timeline as the deadline to save anything or renew. Some admins set personal reminders for roughly 120 days after the subscription expired (the end of the 30-day grace plus the 90-day disabled window) as the last-ditch date.

Can administrators recover data just before it’s permanently deleted?
During the disabled stage (before deletion), yes – admins can recover by reactivating the subscription or by exporting data. Just before deletion, an admin might attempt to call Microsoft Support and request an extension of the disabled period. Occasionally, Microsoft Support might offer a slight grace if you are only a few days past (especially for enterprise accounts). However, this is not guaranteed and not an official policy for Business subscriptions. By policy, once data is deleted, support cannot restore it, as backups are also gone or irretrievable post-180 days. The best practice is to never rely on last-minute support; instead, take proactive steps well in advance of the deletion date.

Are there differences in how different data types are handled?
All data in Microsoft 365 falls under the same overarching lifecycle when a subscription lapses (with the exception of specialized scenarios such as standalone Exchange Online Archiving, which doesn’t apply to Business Premium since it’s a bundled suite). In general:

  • Exchange Online (mailboxes) – retained through grace and disabled, then permanently deleted at the Deleted stage.
  • SharePoint Online and OneDrive (sites and files) – retained through grace and disabled, then permanently deleted.
  • Teams (chats, channel content) – retained through grace and disabled (stored in Exchange and SharePoint), then deleted along with its underlying storage.
  • Azure AD / Entra ID (accounts, groups) – retained through grace and disabled, then deleted with the tenant (unless the directory is kept alive by other active services).

Lifecycle of a Microsoft 365 Business Premium Tenant After License Non-Renewal

When a Microsoft 365 Business Premium subscription is not renewed at the end of its term, the tenant and its data progress through several lifecycle stages before final termination. Throughout these stages, the level of access for users and admins, as well as the status of stored data, changes in defined ways. This report details each stage – Expired (Grace Period), Disabled (Suspension), and Deleted (Termination) – including who can access services, what happens to data, the timelines involved, and recommended actions for administrators at each phase. We also address special considerations such as user notifications, data recovery options, and compliance (legal holds).

Overview: Stages After a Business Premium Subscription Expires

When a Business Premium subscription ends (e.g. you reach the renewal date without payment or you turn off auto-renewal), the subscription moves through three main stages before the tenant is fully shut down[2][1]:

  • Expired (Grace Period) – Immediately after the subscription’s end date, a grace period begins (typically 30 days for direct-purchase business subscriptions)[2][1]. During this stage, services remain fully accessible to users and admins as normal, allowing a last chance to renew or backup data without disruption[1].
  • Disabled (Suspended) – If the subscription is not renewed during the grace period, it moves to a disabled state (lasting roughly 90 days for most business subscriptions)[1]. In this stage, user access is turned off – users can no longer use Microsoft 365 services or apps – but administrators still have access to the tenant’s admin portal and data for backup or reactivation purposes[1].
  • Deleted (Terminated) – Finally, if no action is taken during the Disabled period, the subscription enters the deleted state (around 120 days after expiration, i.e. after 30+90 days)[2]. At this point all customer data is permanently deleted from Microsoft’s servers and no further recovery is possible[2][1]. The Microsoft Entra ID (Azure AD tenant) is also removed (if it’s not being used by other services)[1].

Each stage brings progressively more restrictions. Table 1 below summarizes the key characteristics of each post-expiry stage in terms of duration, access, and data status:

Table 1: Subscription Lifecycle Stages and Access/Data Status

| Aspect | Expired Stage (Grace Period) | Disabled Stage (Suspension) | Deleted Stage (Termination) |
| --- | --- | --- | --- |
| Approx. Duration | ~30 days after end-date (typical)[1] | ~90 days after grace period[1] | Begins ~120 days post-expiry (after Disabled)[2] |
| User Access to Services | Fully available. Users have normal access to all Microsoft 365 apps, email, OneDrive, Teams, etc. (no immediate impact)[1][2]. | No user access. Users are blocked from signing in to Microsoft 365 services. Office applications enter a read-only (“unlicensed”) mode, and users cannot send/receive email or use Teams[1][2]. | No access. The subscription is closed. User accounts and licenses are no longer valid in Microsoft 365; all services are inaccessible and user data is gone[2]. |
| Administrator Access | Full admin access. Admins retain normal access to the admin center and all data. They can manage settings and initiate renewal/reactivation during this period[1]. | Limited admin access. Admins can still sign in to the Microsoft 365 admin center and view or export data, but they cannot assign licenses to users (since the subscription is suspended)[1]. Admins can still purchase or reactivate a subscription during this stage to restore service. | Admin center only (if applicable). After deletion, admins generally lose access to the tenant’s data entirely. The admin portal may only be used to manage other subscriptions or start a new subscription for the organization[1]. If the Azure AD tenant itself is deleted, even admin sign-in is no longer possible. |
| Data State & Retention | Data intact. All customer data (emails, files, SharePoint/OneDrive content, Teams data, etc.) remains fully retained and unchanged in this stage[1]. No data is deleted while in the 30-day grace period. | Data retained (admin-only). All data is still retained in the backend without deletion; only admins have access to it during the Disabled stage[1]. For example, SharePoint and OneDrive files remain stored and can be accessed by an admin (or exported via eDiscovery tools), but end users cannot reach them[2]. Exchange mailboxes are preserved, but email stops flowing to users’ inboxes (messages may queue or bounce)[2]. | Data permanently deleted. All customer data stored in the Microsoft 365 tenant is irreversibly purged by Microsoft[2]. This includes Exchange mailboxes, SharePoint sites, OneDrive files, Teams chat history, and any other content. The Azure AD (Entra ID) tenant is also deleted (unless it’s linked to other active services)[1]. No data can be recovered once this stage is reached. |
| Reactivation Options | Subscription can be reactivated by admins at any time during this stage. A global or billing administrator can renew or purchase licenses to return the subscription to Active status with no loss of data[1]. | Subscription can still be reactivated during this stage. Admins can pay for the subscription and restore full functionality for users; once reactivated during the Disabled period, all users regain access and data is again fully accessible[2]. | Cannot be reactivated. After deletion, the subscription and its data cannot be restored by renewing. If you later re-purchase Microsoft 365, it will be a fresh tenant without the old data[1]. |

Table 1: The progression of a lapsed Microsoft 365 Business subscription through Expired, Disabled, and Deleted states, with access permissions and data status at each stage.[1]

As shown above, a Business Premium tenant that is not renewed has about 120 days (4 months) from expiration until data is permanently lost, under the typical schedule (30 days Expired + 90 days Disabled)[1]. This timeline can vary slightly based on how the subscription was purchased (for instance, enterprise volume licensing agreements may have different grace periods)[1], but for direct and cloud subscriptions of Business Premium, the 30/90 day pattern holds in most cases.

Below, we detail each stage step-by-step, including the access level for users vs. admins, what happens to data and services, and what actions should be taken during that stage. We also cover the notifications admins receive as the subscription nears expiry and discuss special considerations (like legal compliance holds and data recovery).


Stage 0: Before Expiration – Warnings and Renewal Options

Before diving into the post-expiration stages, it’s important to note what happens leading up to the subscription’s end-date. Admins are not caught by surprise when a Business Premium subscription is about to expire:

  • Advance Notifications: Microsoft sends multiple warnings to administrators as the renewal date approaches[1]. These notifications appear in the Microsoft 365 admin center and are sent via email to billing administrators. They typically start some weeks before expiration and increase in frequency as the date nears. (For example, an admin might see reminders a month out, then 1-2 weeks out, and a final reminder a few days before expiry, ensuring they are aware of the pending license lapse.)
  • Admin Center Alerts: In the Microsoft 365 Admin Center dashboard, alerts will indicate an upcoming subscription renewal deadline. Global and billing administrators are informed that the Business Premium subscription will expire on a given date if no action is taken.
  • End-User Notices: Generally, end-users do not receive expiration notices at this stage. The warnings are directed to admins. Users continue to work normally and will only see impact if the subscription actually lapses. (End-users might eventually see “Your license has expired” messages in Office applications after the grace period, but not before that point[1].)
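Admins who want their own safety net can schedule parallel reminders. Microsoft's exact notification cadence is not specified here, so the 30/14/7/2-day intervals in the sketch below are illustrative assumptions modeled on the pattern described above, not a documented schedule.

```python
from datetime import date, timedelta

# Illustrative only: the 30/14/7/2-day cadence is an assumption modeled on
# the reminder pattern described above, not Microsoft's documented schedule.
REMINDER_DAYS_BEFORE = (30, 14, 7, 2)

def admin_reminder_dates(expiry: date) -> list[date]:
    """Dates on which an admin could schedule their own expiry reminders."""
    return [expiry - timedelta(days=d) for d in REMINDER_DAYS_BEFORE]
```

Feeding these dates into a calendar or ticketing system gives an independent backstop in case the built-in notifications are missed.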

Administrators have options before expiration:

  1. Renew or Extend – The admin can renew the subscription (manually or via auto-renewal if enabled) before the expiration date to avoid any service interruption[1]. This could involve confirming payment for the next term or increasing seat counts if needed. If auto-renew was turned off intentionally (perhaps to allow it to lapse), the admin can still re-enable recurring billing prior to expiry to keep the tenant active[1].
  2. Let it Expire – If the organization decides not to continue with Microsoft 365, the admin can simply let the subscription run its course. Turning off recurring billing ensures it ends on the expiration date and does not charge again[1]. In this case, the stages described below will begin once the term expires. (Microsoft recommends performing data backups of critical information before the subscription ends if you plan not to renew[1].)

Once the expiration date arrives without renewal, the tenant immediately enters the Expired (grace period) stage. The sections below describe each subsequent phase in detail.


Stage 1: Expired (Grace Period – Days 1 to ~30 after Expiry)

Description: The Expired stage is a grace period of approximately 30 days that begins immediately after the subscription’s end date (Day 0 of non-renewal)[1]. During this time, the service is still essentially “up” and running normally. Microsoft provides this grace period to allow organizations a final opportunity to correct a lapsed payment or decide on renewal without cutting off access right away[1].

Duration: For Business Premium (and most Microsoft 365 business plans), the Expired status lasts 30 days from the expiration date[2]. (Some enterprise agreements might have a longer grace by contract, but 30 days is standard for cloud subscriptions[1].)

Access for Users: During the Expired stage, end users experience no change in service[1]. All users can continue to log in and use Microsoft 365 apps and services as if nothing happened:

  • Users can send and receive emails via Exchange Online, and their Outlook continues to function normally[2].
  • OneDrive and SharePoint Online files remain accessible; users can view, edit, upload, and share documents during this period.
  • Teams chat, calls, and meetings continue to work as usual.
  • Desktop Office applications (Word, Excel, etc.) remain fully functional – no “unlicensed” warnings yet.
  • Any other services included in Business Premium (such as Microsoft Defender for Office 365, Intune, etc.) remain operational during grace.

In short, the grace period means business continuity: your staff likely won’t even realize the subscription has formally expired, provided the admin resolves it before the grace ends.

Access for Admins: Administrators still have full administrative control during the Expired stage:

  • Admins can sign in to the Microsoft 365 admin center and use all admin functionalities normally[1].
  • Admins can add or remove users, manage settings, and view all data; however, since the subscription is technically expired, they should avoid removing any licenses that are still in use.
  • However, no new licenses can be assigned beyond what was already there at expiry[1]. (If an admin tries to assign a license to a new user under an expired subscription, the portal will not allow it, since the plan isn’t active for additional seats.)
  • Importantly, admins are the ones who can take action to end the Expired stage: by reactivating the subscription (i.e., processing payment). We cover this under “Actions” below.

Data Status: All customer data remains intact and fully accessible during the Expired stage[1]. Microsoft does not delete or restrict any data at this point, because the assumption is that you may renew and continue using the service. Key points:

  • Exchange Online mailboxes: All email messages, contacts, calendars, etc., are retained with no loss. Users can continue to use mail normally. New emails are delivered and nothing is queued or bounced at this stage.
  • SharePoint Online sites and OneDrive: All files and site contents remain exactly as they were. Users can add new files or modifications, which are saved normally within the tenant.
  • Teams data: Chat history, team channel content, calendars, etc., remain available and continue accumulating normally.
  • Azure AD (Entra ID): The directory of user accounts remains fully in place. User accounts are still active and tied to their licenses as before. No accounts are deleted during grace.

No special data retention policy kicks in yet – effectively, the tenant is in a state of full functionality, just with a clock ticking in the background. If the admin renews within this 30-day window, the subscription returns to Active status and everything continues uninterrupted, with no data loss or changes needed[2].

Administrator Notifications and Actions in Expired stage:

  • Ongoing Warnings: The admin center will display alerts like “Your subscription has expired – reactivate to avoid suspension” (or similar wording). Microsoft will continue sending emails to admins during the grace period as reminders that the subscription needs attention.
  • Reactivation: Admins can reactivate/renew the subscription at any point in the Expired stage by initiating payment (turning the subscription back to Active)[1]. This is typically done in the Billing section of the admin portal by selecting the expired Business Premium subscription and paying the renewal invoice or re-enabling a payment method. Once reactivated, the “Expired” status is lifted immediately – no data or access was lost, and users experience no downtime[2].
  • Backup Plans: If the organization decides not to renew (i.e. intends to let the subscription lapse permanently), the Expired stage is a good time to begin data backup and transition efforts. Microsoft specifically recommends backing up your data before it gets deleted if you plan to leave the service[1]. During the 30-day grace period, while everything is still accessible, admins can use content export tools (such as eDiscovery to export mailboxes to PST, or the SharePoint Migration Tool or manual downloads to save document libraries) to capture important information. Third-party backup utilities can also be run at this stage to archive data while all accounts are active.
  • No Immediate User Impact: Because users have full access, an admin might choose to notify users (internally) that the subscription will not be renewed and advise them to save any personal files from OneDrive if needed. However, from a service perspective, users won’t see any difference during these 30 days.

Summary: The Expired (grace) stage is essentially a safety net period. All functionality is retained for ~30 days after a Business Premium subscription lapses[2]. This stage exists to prevent accidental loss of service due to a missed payment or oversight. Administrators should use this period to either renew the subscription or prepare for the next stage (suspension) by backing up data or informing users, depending on whether the plan is to continue or discontinue the service.
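The ~30/90/120-day timeline described above can be sketched as a simple date calculation. This is a hedged planning aid only: the exact transition dates are controlled by Microsoft and may vary, so the stage lengths below are assumptions taken from the stages as described in this document.

```python
from datetime import date, timedelta

# Approximate lifecycle milestones for a lapsed Business Premium
# subscription, based on the ~30-day grace and ~90-day disabled
# periods described above. Treat these as planning estimates,
# not guarantees of when Microsoft acts.
GRACE_DAYS = 30      # Expired (grace) stage length (assumed)
DISABLED_DAYS = 90   # Disabled (suspension) stage length (assumed)

def lifecycle_milestones(expiry: date) -> dict[str, date]:
    """Return estimated stage-transition dates for a given expiry date."""
    grace_end = expiry + timedelta(days=GRACE_DAYS)
    disabled_end = grace_end + timedelta(days=DISABLED_DAYS)
    return {
        "expired_starts": expiry,              # day 0: grace begins
        "disabled_starts": grace_end,          # ~day 30: users lose access
        "deletion_risk_starts": disabled_end,  # ~day 120: data may be purged
    }

milestones = lifecycle_milestones(date(2024, 1, 1))
print(milestones["deletion_risk_starts"])  # 2024-04-30
```

A calendar reminder set well before `disabled_starts` gives the renewal decision room to be made while users are still unaffected.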


Stage 2: Disabled (Suspension Period – ~Day 31 to Day 120)

If no renewal action is taken during the 30-day grace, the grace period ends and the subscription status automatically changes from Expired to Disabled. This marks the beginning of the service suspension phase, where user access is cut off but data is still held for a limited time.

Description: The Disabled stage is a period of service suspension that lasts for up to 90 days after the end of the grace period[1]. In this stage, the subscription is not active, and thus normal functionality stops for end users. However, the tenant’s data is not yet deleted – Microsoft keeps it in storage for this period, giving a final window for recovery or renewal.

Duration: Approximately 90 days (three months) after the Expired stage. For most Business subscriptions, the Disabled status extends from day 31 through day 120 after subscription expiry[1]. (In total, Expired plus Disabled equals roughly 120 days post-expiration; some Microsoft documents refer to this as the 90-day retention period.) In practice, Microsoft assures at least 90 days of Disabled status for data retention; in some cases data might be kept slightly longer (up to 180 days maximum after cancellation, per policy), but 90 days is the standard to count on[1].

Access for Users: During the Disabled stage, end users lose access to all Microsoft 365 services under that subscription:

  • User Login and Apps: Users who try to sign in to any Microsoft 365 service (Outlook, Teams, SharePoint, etc.) will no longer be able to authenticate under this tenant’s credentials, because their licenses are now in a suspended state. Essentially, the licenses are not valid during Disabled status, so users are blocked from using cloud services.
  • Office Applications: If users have the Office desktop apps installed (via their Business Premium license), those apps will detect the subscription is expired/disabled. They will eventually go into “reduced functionality mode,” which means view-only or read-only access. In Office, a banner may appear saying “Unlicensed Product”[1]. Users can still open and read documents, but editing or creating new documents is disabled while the product is unlicensed.
  • Exchange Email: Email services become inactive. Users will not be able to send or receive email with their Exchange Online accounts once disabled. If someone emails a user, the message will not be delivered (the sender will likely receive a non-delivery report indicating the mailbox is unavailable). The user cannot log into Outlook or OWA at this stage. The existing mailbox contents still exist on the server, but they are inaccessible to the user and essentially “frozen” in place until potential reactivation.
  • SharePoint and OneDrive: Users cannot access SharePoint sites or their OneDrive files via the usual interfaces. If they attempt to visit SharePoint or OneDrive links, they will likely get an access denied or a notice that the account is inactive. In effect, SharePoint Online sites and OneDrive accounts are inaccessible to the users, though the content still exists in the backend.
  • Teams: Microsoft Teams functionality is also disabled for users. They cannot log into Teams app or join meetings with their M365 account. Messages sent to them in Teams chats during this period will not reach them (the account is inactive). Any scheduled meetings created by that user might fail or appear orphaned.
  • Other Services: Any service that required an active user license (e.g., Microsoft Intune device management, or Office mobile apps tied to account) will not be usable by the user during the Disabled stage.

In summary, from the user’s perspective the account is effectively “locked out”. They have no access to email, files, or any Microsoft 365 app. It’s as if their license was removed entirely. This typically causes immediate impact in the organization – for example, employees will notice they can’t log in one morning, which usually prompts urgent action if the lapse was unintentional.

Access for Admins: Even though end users are locked out, administrators still have limited access to the environment during the Disabled stage:

  • Admin Center Access: Global and Billing Admins can continue to log in to the Microsoft 365 Admin Center and view the tenant’s settings[1]. The Admin Center will clearly indicate the subscription is disabled due to non-payment. Admins can navigate the interface to gather information or perform certain tasks (with some restrictions).
  • Data Access for Admins: Crucially, admins can still access or extract data during this stage, even though users cannot. The Microsoft documentation states “data is accessible to admins only” in the Disabled state[1]. This means:
    • An admin can use content search/eDiscovery tools to open mailbox content and export emails. For instance, a compliance admin could search the user’s mailbox and export items to a PST file. (Admins might not be able to simply log in to the user’s mailbox via Outlook, since the user license is off, but using admin tools or converting the mailbox to a shared mailbox temporarily could allow access. Additionally, third-party backup tools with admin credentials can retrieve the data.)
    • For SharePoint/OneDrive, a SharePoint administrator can likely still access SharePoint Online Admin Center and use features like the SharePoint Management Shell or OneDrive admin retention tools to recover files. Also, files might be accessible if the admin assigns themselves as site collection admin to the user’s OneDrive site and then downloads content.
    • Any data in Microsoft Teams (which actually stores channel files in SharePoint and chat in Exchange mailboxes) can be retrieved via those underlying storage mechanisms if needed by an admin.
  • License Management: In the admin portal, the subscription will show as disabled. Admins cannot assign any of the Business Premium licenses to users during this period[1] (the system won’t allow changes because the subscription isn’t active). The admin also cannot add new users with that license. Essentially, the ability to manage user licensing is frozen.
  • Other Admin Functions: Admins can still perform tasks not related to that subscription’s licenses. For example, if the tenant had other active subscriptions (like perhaps Azure services or a different M365 subscription), they can still manage those. They can also manage domain settings, view reports, or use the admin center for things that don’t require modifying the disabled subscription.

It’s important to note that while admins have access to data, this doesn’t mean they can use the services in a traditional sense. For example, an admin’s own mailbox (if their user account was also under the now-disabled subscription) would also be inaccessible via normal means. The admin may need to use specialized admin tools to extract their own mailbox data too. The admin advantage is that they can go into the backend and get data, not that they can fully use the apps.
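One way to monitor where a tenant sits in this lifecycle is to query Microsoft Graph’s `subscribedSkus` endpoint, which reports a `capabilityStatus` for each SKU (documented values include Enabled, Warning, Suspended, LockedOut, and Deleted). The mapping of those values onto the stage names used in this document is my interpretation rather than official terminology, and the `SPB` SKU part number in the sample record is illustrative:

```python
import json
from urllib import request

# Hedged sketch: map the capabilityStatus reported by Microsoft Graph's
# /subscribedSkus endpoint onto the lifecycle stages described above.
# The status-to-stage mapping is an approximation, not official wording.
STATUS_TO_STAGE = {
    "Enabled": "Active",
    "Warning": "Expired (grace period)",
    "Suspended": "Disabled (suspension)",
    "LockedOut": "Disabled (suspension)",
    "Deleted": "Deleted",
}

def classify_sku(sku: dict) -> str:
    """Translate one subscribedSku record into a lifecycle stage label."""
    return STATUS_TO_STAGE.get(sku.get("capabilityStatus"), "Unknown")

def fetch_skus(access_token: str) -> list[dict]:
    """Fetch all subscribed SKUs for the tenant. Requires an admin token
    with suitable Graph permissions; network call, not exercised below."""
    req = request.Request(
        "https://graph.microsoft.com/v1.0/subscribedSkus",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["value"]

# Example record shaped like a Graph response (SKU name illustrative):
sample = {"skuPartNumber": "SPB", "capabilityStatus": "Suspended"}
print(classify_sku(sample))  # Disabled (suspension)
```

Running such a check on a schedule gives an early, automated signal that a subscription has slipped from Active into the grace period.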

Data Status: All customer data remains preserved during the Disabled stage; however, it is in a read-only, dormant state:

  • No Data Deletion Yet: Microsoft does not delete anything during the Disabled period. Your users’ emails, files, and other content are all still stored safely in the cloud. The difference is just that users can’t reach it. Think of it as the data being in a vault that only admins can unlock at this point.
  • OneDrive/SharePoint Content: All documents and sites remain in place. If an admin were to reactivate the subscription, users would find their OneDrive and SharePoint files exactly as they left them. If the organization is not renewing, admins should take this time to extract any files needed. For example, the admin could access each user’s OneDrive (with admin privileges) and copy data to local storage or an alternate account. Similarly, SharePoint site contents can be exported (via the SharePoint Migration Tool or by saving libraries to disk).
  • Exchange Online Mailboxes: Mailboxes remain stored with all their email and calendar content. New incoming emails during Disabled stage may not be delivered to these mailboxes (senders might get an NDR message after a certain time). However, the content up to the point of entering Disabled stage is still there. Admins can use eDiscovery or content search to get the mailbox data. If the plan is to migrate away from M365, this stage is the time to export user mailboxes to PST files or another mail system. (If a mailbox was placed on Litigation Hold or had a retention policy, its data is still preserved here as well – more on compliance later.)
  • Teams Data: Teams chats and channel messages from before the Disabled stage remain stored (in user mailboxes or group mailboxes for channels). While users can’t use Teams now, an admin could retrieve chat content via Compliance Content Search if needed. Files shared in Teams are either in SharePoint (still accessible to admin) or OneDrive (accessible via admin).
  • Public Folders / Other Services: If any other data (like public folders in Exchange, or Planner tasks, etc.) existed, they also remain intact in the backend but inaccessible to users.

In essence, the Disabled period is your “last chance” to either restore service or save your data. Microsoft has put a hold on deleting anything, but the clock is ticking.
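For the admin-driven OneDrive extraction described above, a script can walk a drive’s folder tree via Microsoft Graph and download each file. The sketch below keeps the tree-walking logic pure so it can be demonstrated on a mock driveItem-shaped structure (folders carry a `folder` facet, files a `file` facet); the actual Graph calls, which need appropriate admin permissions and a still-provisioned drive, are only indicated in comments:

```python
# Hedged sketch: enumerate a user's OneDrive contents so files can be
# downloaded while only admins retain access during the Disabled stage.
# Item dictionaries mimic Microsoft Graph driveItem resources.

def flatten_items(items: list[dict], prefix: str = "") -> list[str]:
    """Turn a nested driveItem-like tree into relative file paths."""
    paths = []
    for item in items:
        path = f"{prefix}{item['name']}"
        if "folder" in item:
            # Real code would page through
            # GET /users/{id}/drive/items/{item-id}/children here.
            paths.extend(flatten_items(item.get("children", []), path + "/"))
        else:
            # Real code would download each file via the item's
            # @microsoft.graph.downloadUrl property.
            paths.append(path)
    return paths

# Mock tree shaped like Graph output:
tree = [
    {"name": "Reports", "folder": {}, "children": [
        {"name": "q1.xlsx", "file": {}},
    ]},
    {"name": "notes.txt", "file": {}},
]
print(flatten_items(tree))  # ['Reports/q1.xlsx', 'notes.txt']
```

The flattened path list doubles as an inventory to reconcile against the downloaded backup afterwards.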

Administrator Options and Actions in Disabled stage:

  • Reactivating the Subscription: The most straightforward way to exit the Disabled stage is to reactivate the subscription by renewing payment within this 90-day window[1]. The global admin or billing admin can go into the Admin Center’s billing section and pay for the Business Premium subscription (or purchase a new subscription of equal or greater value and assign licenses to users). Once the payment is processed and the subscription returns to Active, all user access is restored immediately. Users will be able to log in again, emails will resume delivery, and the “unlicensed” notices on Office apps will disappear. Essentially, it will be as if the lapse never happened – no data was lost and everything resumes from where it left off[2]. This is the ideal outcome if the lapse was unintended or circumstances changed to allow renewal.
    • Note: Reactivating after a lapse may require paying for the period that was missed or starting a new term. Microsoft allows reactivation in-place during Disabled stage, so you generally keep the same tenant and just resume billing going forward.
  • Backing Up Data: If the decision is to not renew at all, the Disabled stage is the final opportunity to back up any remaining data from the Microsoft 365 tenant:
    • Admins should ensure they have exported all user mailboxes (using eDiscovery PST export, or a third-party backup tool). As a best practice, do this early in the Disabled phase rather than waiting till the last minute, to avoid any accidental data loss or issues.
    • All SharePoint sites and OneDrives that contain needed files should be backed up (download documents, or use a script to fetch all files).
    • If specialized data exists (like Project data, forms, or Power BI content), those should also be retrieved via available export options.
    • Microsoft’s stated policy is that any customer data left after the Disabled period “might be deleted after 90 days and will be deleted no later than 180 days” following the subscription cancellation[1]. So administrators should act under the assumption that once the standard 90 days are up, data could be purged at any time. Waiting beyond this point is extremely risky.
  • User Communication: If not renewing, it’s likely users are already aware (since they lost access). Admins should communicate with users that the service has been suspended. If the org is transitioning to another platform (like a different email system), this is when users need instructions on how to proceed (for example, accessing a new email account elsewhere). If the loss of service was unintentional, admins would by now be working to get it reactivated – and users should be informed that IT is addressing the downtime.
  • Grace in Disabled? It’s worth noting that while we say ~90 days, admins should not rely on any extra hidden grace beyond that. Microsoft’s policy is clear that data will be deleted after the Disabled period, and sometimes they cite 90 days explicitly, other times “no later than 180 days” to cover edge cases[1]. The safest interpretation: assume 90 days exactly. In many cases, tenants have reported data still being there up to 120 or even 150 days after expiration, but this is not guaranteed. The only guarantee is within 90 days.

In summary, the Disabled stage means the tenant is effectively offline for users but the data is frozen in place. Administrators can either renew the subscription to immediately restore functionality or finalize their data extraction and migration plans. If neither is done by the end of this stage, the tenant will move to the final stage and data will be permanently lost. This stage is critical for admins to manage carefully: it is the last buffer preventing permanent data loss.
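Exporting a suspended mailbox with admin credentials typically means draining a paged Graph collection such as `/users/{id}/messages`, following each page’s `@odata.nextLink` until it is absent. Below is a hedged sketch of that paging loop, written so the core logic can be exercised without a live tenant (any callable mapping a URL to a parsed page works as the fetcher):

```python
import json
from typing import Callable, Iterator
from urllib import request

# Hedged sketch: drain a paged Microsoft Graph collection, e.g. when
# exporting a suspended mailbox's messages during the Disabled stage.

def iter_pages(url: str, fetch: Callable[[str], dict]) -> Iterator[dict]:
    """Yield every item across all pages of a Graph collection."""
    while url:
        page = fetch(url)
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # absent on the last page

def http_fetch(access_token: str) -> Callable[[str], dict]:
    """Build a fetcher that calls Graph with a bearer token (not run here;
    requires an admin token with mailbox read permissions)."""
    def fetch(url: str) -> dict:
        req = request.Request(
            url, headers={"Authorization": f"Bearer {access_token}"}
        )
        with request.urlopen(req) as resp:
            return json.load(resp)
    return fetch

# Simulated two-page response, shaped like Graph output:
pages = {
    "p1": {"value": [{"subject": "a"}], "@odata.nextLink": "p2"},
    "p2": {"value": [{"subject": "b"}]},
}
print([m["subject"] for m in iter_pages("p1", pages.__getitem__)])  # ['a', 'b']
```

Separating the paging loop from the HTTP layer also makes it easy to add retry or throttling handling in one place when running against a real tenant.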


Stage 3: Deleted (Final Tenant Deletion – After ~120 Days)

The final stage in the lifecycle is the Deleted stage, which the subscription enters after the Disabled period runs its course with no reactivation. Once this stage is reached, the subscription and all associated data are considered fully terminated by Microsoft.

Description: The Deleted stage represents the point at which Microsoft 365 has permanently turned off the subscription and purged customer data. In other words, the tenant is deprovisioned from Microsoft’s services. This typically happens automatically at the end of the 90-day Disabled window (for Business Premium, roughly 120 days after the initial expiration, as depicted in the timeline)[2].

Duration: Deleted is a terminal state, not a time-limited stage. Once in the Deleted status, the subscription doesn’t transition further – the tenant remains off. At this point the subscription is considered “non-recoverable”[4]. There is no additional grace; the data is gone and the service will not come back unless starting from scratch.

Access for Users: There is no user access at all in the Deleted stage:

  • All user accounts from the former tenant no longer have any Microsoft 365 service tied to them. In fact, if the Azure Active Directory (Entra ID) for the tenant is deleted (as it typically is if no other services were using it), the user accounts themselves are deleted too[1].
  • If a user tries to log in, their account won’t be found. Their email addresses are no longer recognized by Microsoft 365. Essentially, from the cloud service perspective, those users do not exist anymore in that context.
  • Any attempt to access data (SharePoint sites, OneDrive URLs, etc.) will fail because those resources are no longer available in Microsoft’s cloud.

Access for Admins: Administrator access is also extremely limited:

  • Admin Center: In general, the deleted subscription will no longer appear in the Admin Center for that tenant. If the entire tenant (Azure AD) is deleted, the global admin account used for that tenant is also gone, so even the admin cannot sign in to that tenant’s portal anymore[1].
  • If the Azure AD is not deleted (for example, if the organization had other separate subscriptions like an Azure subscription or a different Microsoft 365 subscription still using that same directory), then the admin can still log in to the Azure AD and see that the Business Premium subscription object is in a deleted state. But none of the data from the subscription is accessible – the Exchange, SharePoint, etc. data has been wiped.
  • Essentially, admins can only use the admin center to manage other active subscriptions or to purchase a new subscription if they want to start over[1]. They cannot recover anything related to the deleted subscription. Microsoft’s documentation states that once deleted, the subscription cannot be reactivated or restored[1].

Data Status: All customer data is permanently deleted at this stage:

  • Microsoft purge operations will have been executed to remove Exchange mailboxes, SharePoint site collections, OneDrive content, Teams chat data, and any other stored information for the tenant[2]. The data is no longer available on Microsoft’s servers. It is irrecoverable by any means.
  • Additionally, the Microsoft Entra ID (Azure Active Directory) for the tenant is removed (if that directory isn’t being used by another subscription)[1]. This means the actual tenant identification is gone – all user objects, groups, and any Azure AD-integrated applications in that directory are deleted.
    • Note: If the Azure AD was shared with another service (like if you had an Azure subscription without M365, or if you activated some separate service on the same tenant), Microsoft might not delete the directory itself. Instead, they would just remove all Microsoft 365 service data and leave the bare directory. In that scenario, the global admin account might still exist as a user in Azure AD, but with no licenses. However, all data (mail, files) is still wiped.
  • Backups: Microsoft generally does not retain backups once a tenant is deleted beyond what might exist for disaster recovery on their side (and those are not accessible to customers). So effectively, anything not already saved by the admin before deletion is lost. Even support cannot bring back a tenant that has passed this point.
  • Domain Names: If the organization was using a custom domain with Microsoft 365 (e.g., companyname.com for email addresses), after deletion, that domain will eventually be released from the old tenant. Typically, within a few days of tenant deletion, the domain becomes free to use on another tenant. This could be relevant if you plan to set up a new M365 tenant and reuse the same email domain.

Administrator Actions at Deleted stage: Ideally, you do not want to reach this stage without preparation. Once in Deleted status, options are extremely limited:

  • New Subscription: The only path forward, if you want to use Microsoft 365 again, is to start a new subscription/tenant. This would be essentially starting from scratch – you’d get a new tenant ID (and could possibly re-register the old domain once it’s freed up) and manually import any data you saved. Microsoft explicitly notes that a deleted subscription cannot be reactivated or restored[1], so an organization that accidentally allowed a lapse has no recourse beyond this point.


Additional Considerations

Notifications and Pre-Expiration Warnings (Admin Perspective)

Administrators will receive several notifications as the renewal date approaches. In the Microsoft 365 admin center, warnings typically start appearing as the subscription nears its end. According to Microsoft, admins receive a series of email and in-portal notifications prior to expiration[1]. These might include messages like “Your subscription will expire on [date]. Please renew to avoid interruption.” While the exact cadence isn’t specified publicly, many admins report getting notices roughly 30 days out, 7 days out, and at expiration, among others. It’s crucial for admins to keep their contact info up to date in the tenant so these notices are received.

End users, on the other hand, do not typically get an “expiration” notification from Microsoft (unless an admin communicates it or if their Office apps show a small warning). Microsoft’s notifications about subscription status are directed to admins, not end-users. The first time an end-user might see an automated notice is if their Office apps go unlicensed in the Disabled stage, which results in a banner prompting for login/renewal. Therefore, it is the admin’s responsibility to communicate with users if a lapse is expected.

Impact on Different Services and Data Types

As outlined earlier, all major services are affected, but here’s a quick recap of how various data types/services behave through the stages:

  • Exchange Email: During Expired (grace), email is fully functional[2]. During Disabled, mailboxes are inaccessible to users and email flow is halted (messages to/from users will not be delivered)[2]. The data in the mailbox remains stored though, until deletion. At Deleted stage, mailbox data is gone permanently. If there were any special mail archiving or journaling in place, those too are gone unless handled externally.
  • OneDrive and SharePoint files: During Expired, all files and SharePoint content can be accessed and edited normally by users. During Disabled, the content is read-only and only accessible to admins (users can’t access their OneDrives or SharePoint sites at all)[2]. No data deletion happens until the final stage; then at Deleted, all files and site content are purged from SharePoint/OneDrive storage.
  • Microsoft Teams: Teams relies on other services (Exchange for chat storage, SharePoint for files). In Expired, Teams chats, calls, and file sharing work normally. In Disabled, Teams is non-functional for users – they cannot log in to the Teams app or attend meetings via their account. Messages sent to them will fail. The data (chat history, Team sites) is retained in the backend, but nobody in the organization can use Teams. By Deleted, all Teams data is removed (Team sites are SharePoint sites, which are deleted; chat data in mailboxes is deleted).
  • Other Office apps (Word, Excel, PowerPoint, etc.): In Expired, the desktop apps continue to work normally (since the user’s license is technically still considered valid during grace). In Disabled, if a user tries to use an Office desktop app, it will detect an inactive license and switch to read-only mode[1] (documents can be opened or printed, but not edited or saved). Web versions of Office apps won’t be usable at all because login is blocked. At Deleted, of course, the apps can’t be used through that account (the user would have to sign in with a different active license or use another means).
  • SharePoint Online site functionality: If your Business Premium tenant had any SharePoint Online intranet or site pages, those follow the same rule: accessible in Expired, no access in Disabled (effectively offline, though admins could pull data out via SharePoint admin), and deleted at the end. If external users had access to any content (via sharing links), those links would stop working once Disabled hits because the content is locked down, and obviously cease completely after deletion.
  • Azure AD data: While not “user content”, it’s worth noting the status of your Azure AD. In Expired and Disabled, the Azure AD (user accounts, groups) still exists. You could even perform some Azure AD tasks (like resetting passwords or adding guest users) in Disabled, but they won’t have effect on usage until a renewal. At deletion, if your Azure AD is not used by any other subscription, it gets deleted along with all the user accounts[1]. If your Azure AD was linked to other active services (like an Azure subscription, or if you had multiple Microsoft 365 subscriptions and only one expired), then the Azure AD itself may remain, but the accounts’ ties to the expired subscription are removed. In a pure single-subscription scenario, Azure AD goes away with the tenant deletion.
  • Licenses and add-ons: Any additional licenses (like add-on licenses or other service subscriptions attached to users) will also expire or become non-functional in line with the main subscription. For example, if you had a premium third-party app in Teams or an Azure Marketplace app that relies on the tenant, those would also cease when the main tenant is disabled/deleted.

There are generally no differences in the process for different data types – all customer data is treated the same in the retention and deletion timeline[5]. The key difference is just in how the user experiences the loss of access for each service. But ultimately, whether it’s an email or a file or a chat message, it will be preserved through the Disabled stage and wiped at the Deleted stage.

Best Practices for Administrators at Each Stage

Managing a subscription that’s expiring requires planning. Here are best practices and action items for admins:

  • Before Expiration (Active stage):
    • Keep an eye on renewal dates. Mark your calendar well in advance of your renewal deadline, especially if recurring billing is turned off.
    • Enable auto-renewal if appropriate, to avoid accidental lapses[2]. If you intentionally don’t want to renew, plan for that decision rather than letting it catch you off guard.
    • Notify finance or decision-makers in your organization as the date approaches so that the renewal can be approved or alternative plans made.
    • If you know you will not renew, formulate a data migration plan ahead of time (e.g., moving to another platform or archiving data).
  • Expired Stage (0–30 days after end):
    • Renew promptly if you intend to continue. There’s no benefit to waiting, and renewing will remove the “expired” status and keep users from ever seeing any disruption[1].
    • If not renewing, begin data backup tasks immediately (don’t wait until day 29). Copy critical files, export mailboxes, etc., while everything is easily accessible. This 30-day window is the most convenient time to get data out.
    • Monitor the grace period timeline. Know when that 30 days is up. Microsoft may show a countdown in the admin center. You don’t want to accidentally slip into Disabled if you didn’t mean to.
    • Inform key staff: if not renewing, leadership and IT staff should know the exact date when users will lose access (day 30). You might hold off telling all end-users until closer to the Disabled date to avoid confusion, but your IT helpdesk should be prepared.
  • Disabled Stage (30–120 days after):
    • If you haven’t yet renewed but still want to, this is the last chance: reactivate the subscription as soon as possible to restore service[1].
    • If you’re in this stage intentionally (to finish migration or because of finances), accelerate your backup/export efforts. You have up to 90 days, but it’s wise to complete backups well before the final deadline in case of any issues or large data volumes to export.
    • Manage communications: At the start of the Disabled stage, you should communicate with end-users that the service is now suspended. Likely they will already be alerting you since they can’t access email or Teams. Provide them guidance if they need any data (though they themselves can’t access it now, you might fulfill requests by retrieving data for them).
    • Security consideration: Even though users can’t access any services, their accounts still exist in Azure AD. It might be prudent to ensure MFA is enabled or accounts are otherwise protected in case someone tries to misuse the situation. Generally, though, since login won’t grant access to data, this is a minor concern.
    • Consider alternate solutions: If your organization only needs some parts of M365, consider whether you can purchase a smaller plan to maintain minimal access. For example, if email data retention is legally required, buying a few Exchange Online Plan 1 licenses for key mailboxes and reactivating the tenant under that could be a strategy. This must be done before deletion.
  • Approaching Deletion (~120 days):
    • Double-check that all required data is backed up. Ensure you have downloaded everything vital – you won’t get another chance.
    • If you are on the fence about needing something, it’s better to back it up now. Even if it’s large (like a SharePoint document library), export it.
    • Verify backups: Open some PST files, try restoring a document from backup to make sure your backups are not corrupted.
    • Remind decision-makers that the drop-dead date is coming. Sometimes seeing “your data will be unrecoverable after X date” motivates a final decision to either renew or accept the loss.
  • Post-Deletion:
    • If you’ve moved away from Microsoft 365, ensure you have a secure storage for the data you exported (since it may contain sensitive emails, etc., outside of Microsoft’s protected cloud).
    • If you are starting a new platform, begin importing that data as needed.
    • Clean up any decommissioning tasks (like uninstalling Office software from devices if you’re no longer licensed).
    • Reflect on the process and ensure any future critical cloud subscriptions are tracked so that expirations are handled more smoothly.

In general, the best practice is to avoid reaching the Disabled/Deleted stages unintentionally. If you plan to keep using Microsoft 365, renewing before day 30 is ideal to prevent any user impact. If you plan to leave, use the provided time to cleanly extract your data. Communication and planning are key to avoid panic when users lose access.
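The “verify backups” advice above can be partly automated. A hedged sketch of one approach: record SHA-256 checksums when files are exported and re-check them later, which catches silent corruption (though it does not replace actually opening a sampled PST or document):

```python
import hashlib
import tempfile
from pathlib import Path

# Hedged sketch: checksum-based verification of exported backup files.

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large exports fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Return relative paths that are missing or whose current checksum
    no longer matches the manifest recorded at export time."""
    return [
        rel for rel, digest in manifest.items()
        if not (root / rel).exists() or checksum(root / rel) != digest
    ]

# Quick self-check on a throwaway directory:
root = Path(tempfile.mkdtemp())
(root / "backup.pst").write_bytes(b"exported mailbox bytes")
manifest = {"backup.pst": checksum(root / "backup.pst")}
print(verify(manifest, root))  # []
```

Storing the manifest alongside (and ideally separate from) the exported data makes a later spot-check a one-command operation.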

Compliance and Legal Hold Considerations

One might wonder: What if our organization has placed certain mailboxes or data on Litigation Hold or uses retention policies? Will that data still be deleted after the 120 days? The answer is yes – the subscription lifecycle overrides individual data holds. Once the tenant is deleted, any and all data in it is gone, regardless of legal hold. Legal hold and retention settings keep data from user deletion during an active subscription, but they do not keep data indefinitely if the entire subscription is terminated. Microsoft’s policy for subscription termination is that after the retention period, all customer content is deleted from the cloud[5]. There is no built-in mechanism to extend that on a per-tenant basis for hold reasons without a valid subscription.

Therefore, if you have compliance obligations (e.g., emails that must be retained for X years), you must plan for that before the subscription is lost. Options include:

  • Maintain at least an Exchange Online subscription for those mailboxes (i.e., don’t let the tenant fully expire; keep a minimal plan active so that holds remain in effect).
  • Export and archive data externally according to your compliance requirements. For example, if you must keep certain emails for 7 years, you should export those mailboxes to a secure archive (on-premises or another service) before Microsoft deletes them.
  • Use a third-party backup or archive service that can take ownership of the data. Some companies will, for example, export all Office 365 data to an eDiscovery archive or to an offline backup appliance prior to letting a subscription lapse.

It’s also wise to document the chain of custody for data if legal compliance is involved. Microsoft provides audit logs and reports that could show when data was deleted (which would indicate the subscription deletion date). You might save those reports to demonstrate that data was held for the required period and then deleted as part of system decommissioning.

Finally, be mindful of any user personal data (GDPR considerations, etc.). If an employee asks for their data or wants to ensure it’s deleted, the lifecycle will indeed delete it, but before deletion you still have control to fulfil data subject requests by exporting or removing content. Once it’s deleted by Microsoft, you can consider that a final deletion event.


Conclusion

A Microsoft 365 Business Premium tenant that isn’t renewed goes through a structured wind-down process over roughly 120 days, giving administrators opportunities to save the subscription or salvage the data. In summary:

  • Expiration Day 0: The subscription enters Expired (grace period) for 30 days. Everything remains fully functional for users and admins during this time[1]. Admins should use this time to renew or plan next steps.
  • Day 30: If not renewed, it moves to Disabled. The next 90 days involve suspended service – users lose all access, but data is still held intact in the backend[2]. Only admins can access the environment (for recovery or reactivation)[1]. This is the final window to act: either renew the subscription to promptly restore functionality, or export all necessary data if the decision is to discontinue the service[1].
  • Around Day 120: The tenant enters Deleted status. Microsoft permanently deletes all data in the tenant and releases the associated Azure AD domain[2][1]. At this point, nothing can be recovered and the subscription cannot be brought back.
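
Sketched as code, the timeline above maps days since expiration to a lifecycle stage. This is a simplified illustration only – the day boundaries are approximate and the function name is invented for the example; Microsoft does not expose the lifecycle as an API:

```python
def lifecycle_stage(days_since_expiration: int) -> str:
    """Map days past the expiration date to the subscription stage.

    Boundaries follow the approximate 0/30/120-day timeline described above.
    """
    if days_since_expiration < 0:
        return "Active"
    if days_since_expiration < 30:
        return "Expired"    # grace period: fully functional, renew now
    if days_since_expiration < 120:
        return "Disabled"   # users blocked; admins may still renew or export data
    return "Deleted"        # data permanently removed; unrecoverable
```

The key takeaway the sketch encodes: day 30 and day 120 are hard cutoffs, so any renewal or export plan must complete before those boundaries.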

Throughout these stages, Microsoft provides clear warnings to admins and maintains data for a reasonable period, but it is ultimately the administrator’s responsibility to take action to avoid data loss. By understanding the stages and proactively managing each step – whether that means timely renewal, data backup, or communications – an organization can handle a subscription non-renewal in a controlled, safe manner without unexpected surprises.

Remember: if you ever find yourself unsure, refer to Microsoft’s documentation and reach out to Microsoft Support during the grace or disabled period. Once the data is deleted, even Microsoft cannot assist in recovery[1]. Planning and prompt action are your best tools to protect your digital assets when a Business Premium subscription lapses.

References

[1] What happens to my data and access when my Microsoft 365 for business …

[2] What happens if my subscription to Microsoft 365 Business Standard expires?

[3] Here’s What Happens When Your Office 365 Subscription Expires – SysTools

[4] Subscription Lifecycle States – Partner Center | Microsoft Learn

[5] Data retention, deletion, and destruction in Microsoft 365

Analysis of Intune Android Compliance Policy Settings for Strong Security

This report reviews each setting in the provided Android Intune Compliance Policy JSON and evaluates whether it aligns with best practices for strong device security. For each setting, we explain its purpose, available configuration options, and why the chosen value is configured to maximize security. Overall, the policy enforces a defense-in-depth approach – requiring a strong unlock password, up-to-date system software, device encryption, and other controls – which closely follows industry security benchmarks[1]. The analysis below confirms that every configured setting reflects accepted best practices to protect Android devices and the sensitive data on them.

Password Security Requirements

Requiring a strong device PIN/password is fundamental to mobile security. This policy’s System Security section mandates a lock screen password with specific complexity rules. These settings are all considered best practice, as they greatly reduce the risk of unauthorized device access[2][3]:

  • Require Password to Unlock Device: Enabled (Require). This forces users to set a lock screen PIN/password. It is a baseline security best practice so that no device can be accessed without authentication[2]. Purpose: Ensures the device isn’t left unprotected. Options: “Not configured” (no requirement) or “Require” a password. Rationale: Marking this as “Require” is essential – devices must be password-protected to be considered compliant[2], which prevents unauthorized access to corporate data.
  • Required Password Type: Alphanumeric. This setting specifies the complexity of the password. Options range from numeric PINs to alphanumeric with symbols[4][5]. Requiring alphanumeric means the password must include letters (and usually numbers), not just digits, which significantly increases its strength[3]. Purpose: Enforce a complex password (as opposed to a simple PIN). Options: Numeric (digits only), Numeric complex (no simple patterns like 1234), Alphabetic (letters only), Alphanumeric (letters + numbers), or Alphanumeric with symbols[4]. Rationale: Alphanumeric passwords are far harder to crack than 4-digit PINs. Best practice from security audits is to require at least alphanumeric complexity[3], which this policy does. This ensures the device lock is not easily guessable.
  • Minimum Password Length: 6 characters. This sets the shortest allowed length for the PIN/password. Longer passwords are more secure. Intune allows 4–16; industry guidance recommends at least 5 or more characters[6]. The policy’s value of 6 exceeds the minimum recommendation, which is good for security (e.g. a 6-digit PIN has 1 million combinations versus 10,000 for 4-digit). Purpose: Prevent very short, trivial PINs. Options: 4–16. Rationale: A minimum length of 6 is aligned with best practices (Tenable recommends 5 or more for compliance)[6]. This length increases resistance to brute-force guessing while still being reasonable for users to remember.
  • Maximum Minutes of Inactivity Before Password is Required: 5 minutes. This setting (often called device auto-lock timeout) controls how quickly the device locks itself when idle. A low value means the device will require re-authentication sooner. Here it’s set to 5 minutes, which is in line with strict security guidelines (Tenable suggests 5 minutes or less)[7]. Purpose: Limit how long an unattended device stays unlocked. Options: Various minute values (1, 5, 15, etc.) or not configured. Rationale: 5 minutes of inactivity before auto-lock is a best practice balance between security and usability[7]. It ensures a lost or idle device will secure itself quickly, minimizing the window for an attacker to pick it up and access data. Short timeouts greatly reduce risk if a user forgets to lock their phone.
  • Password Expiration (Days): 90 days. This defines how often the user must change their device password. The policy requires a password change after 90 days (about 3 months). Regular rotation of passwords is a traditional security practice to limit exposure from any one credential. Purpose: Prevent use of the same password indefinitely. Options: 1–255 days, or not configured. Rationale: 90 days is a commonly recommended maximum password age in many security standards[8]. Tenable’s best-practice audit recommends 90 days or fewer for mobile devices[8]. For strong security, forcing periodic changes can mitigate the impact if a password was unknowingly compromised – the window of misuse is limited. (Note: Some modern guidelines put less emphasis on frequent expiration in favor of complexity, but 90-day expiry is still widely used in compliance policies and thus is reasonable here.)
  • Password History (Prevent Reuse): Last 5 passwords. This ensures the user cannot cycle back to recently used passwords when changing it. The policy prevents reuse of at least the previous 5 passwords (meaning the user must set five new passwords before an old one can be used again). Purpose: Enforce password uniqueness across changes. Options: 1–24 previous passwords remembered (Intune allows up to 24). Rationale: Reusing old passwords defeats the purpose of expiration. Requiring a history of 5 or more past passwords not to be reused is recommended so users don’t just alternate between two favorites[4]. This policy’s setting aligns with that guidance. It forces truly new passwords at each reset, maintaining effective security over time.

Together, these password policies ensure the device has a robust lock screen defense: a nontrivial PIN/passcode that must be changed regularly and cannot be easily bypassed or guessed. This complies with industry best practices (for example, CIS Benchmarks and security auditors require a device lock PIN of sufficient length and complexity and short idle lock time)[1]. Enforcing these settings makes it far less likely for an unauthorized person to unlock a lost or stolen device and thereby protects the enterprise data on it.
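
As an illustration, the password checks above can be expressed as a small validation routine. The field names below are invented for this example (they are not Intune or Graph API property names), and the thresholds are taken directly from the policy:

```python
# Thresholds from the policy discussed above.
POLICY = {
    "required_type": "alphanumeric",
    "min_length": 6,
    "max_inactivity_minutes": 5,
    "expiration_days": 90,
    "history_block_count": 5,
}

def password_compliant(device: dict) -> bool:
    """Return True if the device's reported lock-screen state meets the policy."""
    return (
        device.get("password_set", False)
        and device.get("password_type") == POLICY["required_type"]
        and device.get("password_length", 0) >= POLICY["min_length"]
        and device.get("inactivity_minutes", 9999) <= POLICY["max_inactivity_minutes"]
        and device.get("password_age_days", 0) < POLICY["expiration_days"]
    )

good = {"password_set": True, "password_type": "alphanumeric",
        "password_length": 8, "inactivity_minutes": 5, "password_age_days": 10}
bad = dict(good, password_type="numeric")  # digits-only PIN fails the type check
```

Every check must pass for the device to be compliant, which mirrors how a single failed setting marks the whole device noncompliant in Intune.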

Device Encryption

Requiring encryption of the device storage is another cornerstone of mobile security. This policy mandates encryption, meaning the data on the phone cannot be read without the device being unlocked. This is unequivocally a best practice for strong security:

  • Encryption of Data Storage on Device: Require. The compliance rule is set so that the device must be encrypted (usually, Android devices automatically encrypt when a PIN/password is set, so this goes hand-in-hand with the password requirement). Purpose: Protect data at rest by encryption, so that even if the device is stolen and its storage is removed, the data remains scrambled without the encryption key. Options: “Require” or “Not configured”. Rationale: Marking encryption as Required is considered an essential security baseline. Tenable’s audit specifies that “Encryption of data storage on device” should be set to Require[9]. This ensures that all sensitive information on the phone (emails, files, app data) is encrypted by the OS. In practice, this means an attacker can’t simply connect the device to a computer or remove its SD card to extract data – they would need the user’s passcode to decrypt it. Requiring encryption is a standard best practice and is enabled by default in this policy[9].

In summary, the policy’s encryption setting ensures data confidentiality even if physical device security fails. It aligns with strong security principles and most regulatory requirements (many frameworks mandate full-device encryption for mobile devices).

Device Security Settings (App Sources and Debugging)

The policy includes additional system security rules to prevent risky device configurations. These settings block the user from enabling sources or modes that could introduce malware or vulnerabilities, which is consistent with best practices for hardening Android devices:

  • Block Apps from Unknown Sources: Block (Enabled). This compliance check likely verifies that the device is not allowing app installations from outside the official app store. In other words, the user must not turn on the Android setting that permits installs from unknown sources. Purpose: Ensure only vetted apps (from Google Play or the managed Play Store) can be installed, reducing the risk of malware. Options: Not configured, or Block. Rationale: Blocking unknown sources is strongly recommended by security experts[10]. Sideloading apps (installing APK files from random websites or USB) bypasses app vetting and can lead to malware infections. The policy marks a device non-compliant if that setting is enabled, thus users are forced to keep it off (which is the secure state)[10]. This aligns with best practice to allow installs only from trusted app stores.
  • Block USB Debugging (Developer Mode): Block (Enabled). This setting ensures that the device is not in Developer mode with USB debugging enabled. USB debugging is a developer feature that could be exploited to bypass certain security controls or install apps via USB. Purpose: Prevent the device from running in a state that is meant for development/testing, which could expose it to abuse. Options: Not configured, or Block. Rationale: Blocking USB debugging is a known best practice for hardening managed devices: with debugging disabled, an attacker with brief physical access cannot use ADB to sideload apps, extract data, or tamper with device settings. The policy therefore marks any device with USB debugging turned on as noncompliant.
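
Taken together, the Android settings above likely correspond to the Microsoft Graph androidCompliancePolicy resource. The sketch below mirrors them as a Python dict; the property names follow the Graph documentation as best I can reconstruct them, so verify each against your tenant’s exported policy JSON before relying on them:

```python
# Approximate shape of the policy discussed above, expressed as Graph
# androidCompliancePolicy properties (names assumed from the Graph docs).
android_policy = {
    "@odata.type": "#microsoft.graph.androidCompliancePolicy",
    # Password section
    "passwordRequired": True,
    "passwordRequiredType": "alphanumeric",
    "passwordMinimumLength": 6,
    "passwordMinutesOfInactivityBeforeLock": 5,
    "passwordExpirationDays": 90,
    "passwordPreviousPasswordBlockCount": 5,
    # Encryption
    "storageRequireEncryption": True,
    # Device security hardening
    "securityPreventInstallAppsFromUnknownSources": True,
    "securityDisableUsbDebugging": True,
}

# Sanity check: every boolean hardening toggle is enabled in this baseline.
toggles = [k for k, v in android_policy.items() if isinstance(v, bool)]
assert all(android_policy[k] for k in toggles)
```

Expressing the policy this way makes it easy to diff an exported tenant policy against the intended baseline in version control.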

References

[1] Tenable Best Practices for Microsoft Intune Android v1.0

[2] Android Compliance Policy – Require a password to unlock mobil …

[3] Android Compliance Policy – Required password type – Tenable

[4] Android Compliance Policy – Number of previous passwords to pr …

[5] IntuneDeviceCompliancePolicyAndroidDeviceOwner – Microsoft365DSC

[6] Android Compliance Policy – Minimum password length – Tenable

[7] Android Compliance Policy – Maximum minutes of inactivity befo …

[8] Android Compliance Policy – Password expiration (days)

[9] Android Compliance Policy – Encryption of data storage on device

[10] Android Compliance Policy – Block apps from unknown sources

Robert.Agent now recommends improved questions

I continue to work on my autonomous email agent created with Copilot Studio. A recent addition is that you might now get a response that includes something like this at the end of the information returned:

[image: the agent’s suggested improved prompt appended to its answer]

It is a suggestion for an improved prompt to generate better answers based on the original question.

The reason I created this was that I noticed many submissions were not written as ‘good’ prompts. In fact, most submissions seem better suited to search engines than to AI. The easy solution was to get Copilot to suggest how to ask better questions.

Give it a go and let me know what you think.

Analysis of iOS Intune Compliance Policy for Strong Security

Modern enterprises use Intune compliance policies to enforce best practice security settings on iPhones and iPads. The provided JSON defines an iOS compliance policy intended to ensure devices meet strong security standards. Below, we evaluate each setting in this policy, explain its purpose and options, and verify that it aligns with best practices for maximum security. We also discuss how these settings map to industry guidelines (like CIS benchmarks and Microsoft’s Zero Trust model) and the implications of deviating from them. Finally, we consider integration with other security measures and recommendations for maintaining the policy over time.

Key Security Controls in the Compliance Policy

The following sections break down each policy setting in detail, describing what it does, the available options, and why its configured value is considered a security best practice.

1. Managed Email Profile Requirement

Setting: Require managed email profile on the device.
Policy Value: Required (rather than the default Not configured).
Purpose & Options: This setting ensures that only an Intune-managed email account/profile is present on the device. If set to “Require”, the device is noncompliant unless the email account is deployed via Intune’s managed configuration[1]. The default Not configured option means any email setup is allowed (no compliance enforcement)[1]. By requiring a managed email profile, Intune can verify the corporate email account is set up with the proper security (enforced encryption, sync settings, etc.) and not tampered with by the user. If a user already added the email account manually, they must remove it and let Intune deploy it; otherwise the device is marked noncompliant[1].

Why it’s a Best Practice: Requiring a managed email profile protects corporate email data on the device. It prevents scenarios where a user might have a work email account configured outside of Intune’s control (which could bypass policies for encryption or remote wipe). With this requirement, IT can ensure the email account uses approved settings and can be wiped if the device is lost or compromised[1]. In short, it enforces secure configuration of the email app in line with company policy. Not using this setting (allowing unmanaged email) could lead to insecure email storage or difficulty revoking access in a breach. Making it required aligns with strong security practices, especially if email contains sensitive data.

Trade-offs: One consideration is user experience: if a user sets up email on their own before enrollment, Intune will flag the device until that profile is removed[1]. IT should educate users to let Intune handle email setup. In BYOD scenarios where employees prefer using native Mail app with personal settings, this requirement might seem intrusive. However, for maximum security of corporate email, this best practice is recommended. It follows the Zero Trust principle of only permitting managed, compliant apps for corporate data.

2. Device Health: Jailbreak Detection

Setting: Mark jailbroken (rooted) devices as compliant or not.
Policy Value: Block (mark as not compliant if device is jailbroken)[1].
Purpose & Options: This control checks if the iOS device is jailbroken (i.e., has been modified to remove Apple’s security restrictions). Options are Not configured (ignore jailbreak status) or Block (flag jailbroken devices as noncompliant)[1]. By blocking, Intune will consider any jailbroken device as noncompliant, preventing it from accessing company resources through Conditional Access. There’s no “allow” option – the default is simply not to evaluate, but best practice is to evaluate and block.

Why it’s a Best Practice: Jailbroken devices are high risk and should never be allowed in a secure environment[2]. Jailbreaking bypasses many of Apple’s built-in security controls (code signing, sandboxing, etc.), making the device more vulnerable to malware, data theft, and unauthorized access[2][2]. An attacker or the user could install apps from outside the App Store, escalate privileges, or disable security features on a jailbroken phone. By marking these devices noncompliant, Intune enforces a zero-tolerance policy for compromised devices – aligning with Zero Trust (“assume breach”) by treating them as untrusted[2]. Microsoft explicitly notes that jailbroken iOS devices “bypass built-in security controls, making them more vulnerable”[2]. This setting is easy to implement and has low user impact (legitimate users typically don’t jailbreak), but provides a big security payoff[2].

Allowing jailbroken devices (by not blocking) would be contrary to security best practices. Many security frameworks (CIS, NIST) recommend disallowing rooted/jailbroken devices on corporate networks. For example, the Microsoft 365 Government guidance includes ensuring no jailbroken devices can connect. In our policy, “Block” is absolutely a best practice, as it ensures compliance = device integrity. Any device that is detected as jailbroken will be stopped from accessing company data, protecting against threats that target weakened devices.

Additional Note: Intune’s detection is not foolproof against the latest jailbreak methods, but it catches common indicators. To improve detection (especially in iOS 16+), Location Services may be required (as noted by Microsoft Intune experts) – Intune can use location data to enhance jailbreak detection reliability. As part of maintaining this policy, ensure users have not disabled any phone settings that would hinder jailbreak checks (an Intune advisory suggests keeping certain system settings enabled for detection, though Intune prompts the user if needed).

3. Device Health: Threat Level (Mobile Threat Defense)

Setting: Maximum allowed device threat level, as evaluated by a Mobile Threat Defense (MTD) service.
Policy Value: Secured (No threats allowed) – if an MTD integration is in use.
Purpose & Options: This setting works in conjunction with a Mobile Threat Defense solution (like Microsoft Defender for Endpoint on iOS, or third-party MTD apps such as Lookout, MobileIron Threat Defense, etc.). It lets you choose the highest acceptable risk level reported by that threat detection service for the device to still be compliant[1]. The options typically are: Secured (no threats), Low, Medium, High, or Not configured[1]. For example, “Low” means the device can have only low-severity threats (as determined by MTD) and still be compliant, but anything medium or high would make it noncompliant[1]. “Secured” is the most stringent – it means any threat at all triggers noncompliance[1]. Not configured would ignore MTD signals entirely.

In the context of a strong security policy, setting this to Secured means even minor threats (low severity malware, suspicious apps, etc.) cause the device to be blocked[1]. This is indeed what our policy does, assuming an MTD is in place. (If no MTD service is connected to Intune, this setting wouldn’t apply; but the JSON likely has it set anticipating integration with something like Defender.)

Why it’s a Best Practice: Mobile Threat Defense adds dynamic security posture info that pure device settings can’t cover. By requiring a Secured threat level, the policy ensures that only devices with a completely clean bill of health (no detected threats) can access corporate data[1]. This is aligned with a high-security or “Level 3” compliance approach[3]. Microsoft’s High Security baseline for iOS specifically recommends requiring the device to be at the highest security threat level (Secured) if you have an MTD solution[3][3]. The rationale is that even “low” threats can represent footholds or unresolved issues that, in a highly targeted environment, could be exploited. For example, a sideloaded app flagged as low-risk adware might be harmless – or it might be a beachhead for a later attack. A Secured-only stance means any threat is unacceptable until remediated.

This stringent setting makes sense for organizations that prioritize security over convenience, especially those facing sophisticated threats. Users with malicious apps or malware must clean their device (usually the MTD app will instruct them to remove the threat) before they regain access. It’s a preventative control against mobile malware, man-in-the-middle attacks, OS exploits, etc., as identified by the MTD tool.

Options and Balance: Some organizations without an MTD solution leave this Not configured, which effectively ignores device threat level. While simpler, that misses an opportunity to enforce malware scanning compliance. Others might set it to Low or Medium to allow minor issues without disruption. However, for maximum security, “Secured” is ideal – it is explicitly called out in Microsoft’s level 3 (high security) recommendations[3]. It’s worth noting that using this setting requires deploying an MTD app on the devices (such as the Microsoft Defender app for Endpoint on iOS or a partner app). For our strong security baseline, it’s implied that such a solution is in place or planned, which is why Secured is chosen.

If not implemented: If your organization does not use any MTD/Defender for mobile, this setting would typically be left not configured in the policy (since there’s no data to evaluate). In that case, you rely on the other controls (like jailbreak detection, OS version, etc.) alone. But to truly maximize security, incorporating threat defense is recommended. Should you decide to integrate it later, this policy value can be enforced to immediately leverage it.
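
The threat-level comparison can be illustrated with a simple severity ordering. This is a sketch only – Intune evaluates this server-side from the MTD connector’s signal, and the function name is invented for the example:

```python
# Severity order, lowest to highest.
THREAT_LEVELS = ("secured", "low", "medium", "high")

def within_threat_limit(reported: str, max_allowed: str) -> bool:
    """Compliant when the MTD-reported level does not exceed the policy's maximum."""
    return THREAT_LEVELS.index(reported) <= THREAT_LEVELS.index(max_allowed)

# With this policy's "Secured" maximum, any reported threat fails:
within_threat_limit("low", "secured")      # → False
within_threat_limit("secured", "secured")  # → True
```

A policy set to “Low” would instead pass devices reporting "secured" or "low" and fail "medium" and "high", which is why “Secured” is the strictest choice.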

4. Device Properties: Minimum OS Version

Setting: Minimum iOS operating system version allowed.
Policy Value: iOS 16.0 (for example) – i.e., devices must be on iOS 16.0 or above.
Purpose & Options: This compliance rule sets the oldest OS version that is considered compliant. Any device running an iOS version lower than this minimum will be flagged as noncompliant[1]. The admin specifies a version string (e.g. “16.0”). Available options: you provide a version – or leave Not configured to not enforce a minimum[1]. When enforced, if a device is below the required version, Intune will prompt the user with instructions to update iOS and will block corporate access until they do[1]. This ensures devices aren’t running outdated iOS releases that may lack important security fixes.

Why it’s a Best Practice: Requiring a minimum OS version is crucial because older iOS versions can have known vulnerabilities. Apple regularly releases security updates for iOS; attackers often target issues that have been patched in newer releases. By setting (and updating) a minimum version, the organization essentially says “we don’t allow devices that haven’t applied critical updates from the last X months/year.” This particular policy uses iOS 16.0 as the baseline (assuming iOS 17 is current, this corresponds to “N-1”, one major version behind the latest)[3]. Microsoft’s guidance is to match the minimum to the earliest supported iOS version for Microsoft 365 apps, typically the last major version minus one[3]. For example, if iOS 17 is current, Microsoft 365 apps might support iOS 16 and above – so requiring at least 16.x is sensible[3]. In the JSON provided, the exact version might differ depending on when it was authored (e.g., if created when iOS 15 was current, it might require >= iOS 14). The principle remains: enforce updates.

This is absolutely a best practice for strong security. It’s reflected in frameworks like the CIS iOS Benchmark, which suggests devices should run the latest iOS or within one version of it (and definitely not run deprecated versions). By enforcing a minimum OS, devices with obsolete software (and thus unpatched vulnerabilities) are barred from corporate access. Users will have to upgrade their OS, which improves overall security posture across all devices.

Management Considerations: The admin should periodically raise this minimum as new iOS versions come out and older ones reach end-of-support or become insecure. For instance, if currently set to 16.0, once iOS 18 is released and proven stable, one might bump minimum to 17.0. Microsoft recommends tracking Apple’s security updates and adjusting the compliance rule accordingly[3][3]. Not doing so could eventually allow devices that are far behind on patches.

One challenge: older devices that cannot update to newer iOS will fall out of compliance. This is intended – such devices likely shouldn’t access sensitive data if they can’t be updated. However, it may require exceptions or phased enforcement if, say, some users have hardware stuck on an older version. In a maximum security mindset, those devices would ideally be replaced or not allowed for corporate use.

Maximum OS Version (Not Used): The policy JSON might also have fields for a Maximum OS Version, but in best-practice compliance this is often Not configured (or left empty) unless there’s a specific need to block newer versions. Maximum OS version is usually used to prevent devices from updating beyond a tested version—often for app compatibility reasons, not for security. It’s generally not a security best practice to block newer OS outright, since newer OS releases tend to improve security (except perhaps temporarily until your IT tests them). So likely, the JSON leaves osMaximumVersion unset (or uses it only in special scenarios). Our focus for strong security is on minimum version – ensuring updates are applied.
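
A minimum-version rule must compare versions numerically rather than as strings (plain string comparison would wrongly rank "9.3" above "16.0"). A minimal sketch of the comparison, with function names invented for the example:

```python
def parse_version(version: str) -> tuple:
    """Split a dotted version string like '16.5.1' into a tuple of integers."""
    return tuple(int(part) for part in version.split("."))

def meets_minimum_os(device_version: str, minimum: str) -> bool:
    """True when the device's iOS version is at or above the policy minimum."""
    return parse_version(device_version) >= parse_version(minimum)

meets_minimum_os("16.5.1", "16.0")  # → True
meets_minimum_os("15.7.9", "16.0")  # → False
```

Tuple comparison handles versions of different lengths correctly here because Python compares element by element, which is exactly the semantics a minimum-OS check needs.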

5. Device Properties: Minimum OS Build (Rapid Security Response)

Setting: Minimum allowed OS build number.
Policy Value: Possibly set to enforce Rapid Security Response patches (or Not configured).
Purpose & Options: This lesser-used setting specifies the minimum iOS build number a device must have[1]. Apple’s Rapid Security Response (RSR) updates increment the build without changing the major/minor iOS version (for example, iOS 16.5 with RSR might have a build like 20F74). By setting a minimum build, an organization can require that RSR (or other minor security patches) are applied. If a device’s build is lower (meaning it’s missing some security patch), it will be noncompliant[1]. Options are to set a specific build string or leave Not configured. The JSON may include a build requirement if it aims to enforce RSR updates.

Why it’s a Best Practice: Apple now provides critical security patches through RSR updates that don’t change the iOS version. For example, in iOS 16 and 17, RSR patches address urgent vulnerabilities. If your compliance policy only checks the iOS version (e.g., 16.0) and not the build, a device could technically be on 16.0 but missing many patches (if Apple released 16.0.1, 16.0.2, etc. or RSR patches). By specifying a minimum build that corresponds to the latest security patch, you tighten the update requirement further. This is definitely a security best practice for organizations that want to be extremely proactive on patching. Microsoft’s documentation suggests using this feature to ensure devices have applied supplemental security updates[1].

In practice, not all organizations use this, since it requires tracking the exact build numbers of patches. But since our scenario is “strong security”, if the JSON included a minimum build, it indicates they want to enforce even minor patches. For example, if Apple released an RSR to fix a WebKit zero-day, the policy could set the minimum build to the version after that patch. This would block devices that hadn’t applied the RSR (even if their iOS “version” number is technically compliant). This is above and beyond baseline – it aligns with high-security environments (perhaps those concerned with zero-day exploits).

Configuration: If the policy JSON doesn’t explicitly set this, that suggests using the OS version alone. But given best practices, we would recommend configuring it when feasible. The policy author might update it whenever a critical patch is out. By doing so, they compel users to install not just major iOS updates but also the latest security patches that Apple provides, achieving maximum security coverage.

Maximum OS Build: Similarly, an admin could set a maximum build if they wanted to freeze at a certain patch level, but again, that’s not common for security – more for controlling rollouts. Most likely, osMaximumBuildVersion is not set in a best-practice policy (unless temporarily used to delay adoption of a problematic update).
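
Build strings are not plain version numbers, so comparing them takes a little care. The sketch below assumes the conventional Apple build format (numeric major build, a train letter, a numeric build number, and an optional RSR suffix letter); real policies should rely on Intune’s own minimum-build evaluation rather than a hand-rolled parser like this:

```python
import re

def parse_build(build: str) -> tuple:
    """Split an Apple build string like '20F74' into comparable parts.

    Assumes the conventional format: major build number, train letter(s),
    build number, and an optional lowercase RSR suffix (e.g. '20F75a').
    """
    m = re.fullmatch(r"(\d+)([A-Z]+)(\d+)([a-z])?", build)
    if not m:
        raise ValueError(f"unrecognized build string: {build}")
    major, train, number, suffix = m.groups()
    return (int(major), train, int(number), suffix or "")

def build_meets_minimum(device_build: str, minimum: str) -> bool:
    return parse_build(device_build) >= parse_build(minimum)

build_meets_minimum("20F75", "20F74")  # → True: patch applied
```

This illustrates why the build-level check is stricter than the version-level one: two devices can both report iOS 16.5 yet carry different builds, and only the build reveals whether the latest patch is present.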

6. Microsoft Defender for Endpoint – Device Risk Score

Setting: Maximum allowed machine risk score (Defender for Endpoint integration).
Policy Value: Clear (only “Clear” risk is acceptable; anything higher is noncompliant).
Purpose & Options: This setting is similar in spirit to the MTD threat level, but specifically for organizations using Microsoft Defender for Endpoint (MDE) on iOS. MDE can assess a device’s security risk based on factors like OS vulnerabilities, compliance, and any detected threats (MDE on mobile can flag malicious websites, phishing attempts, or device vulnerabilities). The risk scores are typically Clear, Low, Medium, High (Clear meaning no known risks). In Intune, you can require the device’s MDE-reported risk to be at or below a certain level for compliance[1]. Our policy sets this to Clear, the strictest option, meaning the device must have zero risk findings by Defender to be compliant[3]. If Defender finds anything that raises the risk to Low, Medium, or High, the device will be marked noncompliant. The alternative options would be allowing Low or Medium risk, or Not configured (ignoring Defender’s risk signal).

Why it’s a Best Practice: Requiring a “Clear” risk score from MDE is indeed a high-security best practice, consistent with a zero-tolerance approach to potential threats. It ensures that any device with even a minor security issue flagged by Defender (perhaps an outdated OS, or a known vulnerable app, or malware) is not allowed until that issue is resolved. Microsoft’s Level 3 (High Security) guidance for iOS explicitly adds this requirement on top of the baseline Level 2 settings[3]. They note that this setting should be used if you have Defender for Endpoint, to enforce the highest device risk standard[3].

Defender for Endpoint might mark risk as Medium for something like “OS version is two updates behind” or “phishing site access attempt detected” – with this compliance policy, those events would push the device out of compliance immediately. This is a very security-conscious stance: it leverages Microsoft’s threat intelligence on the device’s state in real time. It’s analogous to having an agent that can say “this phone might be compromised or misconfigured” and acting on that instantly.

Combining MDE risk with the earlier MTD setting might sound redundant, but some organizations use one or the other, or even both for layered security. (Defender for Endpoint can serve as an MTD on iOS in many ways, though iOS’s version of MDE is somewhat limited compared to on Windows – it primarily focuses on network/phishing protection and compliance, since iOS sandboxing limits AV-style scanning.)

In summary, this policy’s choice of Clear means only perfectly healthy devices (as judged by Defender) pass the bar. This is the most secure option and is considered best practice when maximum security is the goal and Defender for Endpoint is part of the toolset[3]. Not configuring it or allowing higher risk might be chosen in lower-tier security configurations to reduce friction, but those introduce more risk.

Note: If an organization doesn’t use Defender for Endpoint on iOS, this setting would be left not configured (similar to the MTD case). But since this is a best practice profile, it likely assumes the use of Defender (or some MTD). Microsoft even states that you don’t have to deploy both an MTD and Defender – either can provide the signal[3]. In our context, either “Device Threat Level: Secured” (MTD), “MDE risk: Clear” (Defender), or both could be in play. Setting both is belt-and-suspenders (and requires both agents), but it would indeed leave no stone unturned for device threats.
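For reference, the two threat-signal requirements discussed above correspond roughly to the fragment below in the Microsoft Graph representation of an iOS compliance policy. Property names are from the `microsoft.graph.iosCompliancePolicy` resource (the Defender risk property appears in the beta schema); the mapping of the UI labels “Secured” and “Clear” onto the Graph enum values is our assumption and should be verified against current Graph documentation:

```json
{
  "@odata.type": "#microsoft.graph.iosCompliancePolicy",
  "displayName": "iOS - High Security Compliance",
  "deviceThreatProtectionEnabled": true,
  "deviceThreatProtectionRequiredSecurityLevel": "secured",
  "advancedThreatProtectionRequiredSecurityLevel": "secured"
}
```

Either property alone would enforce its respective signal; setting both requires both agents to report a clean device, as described above.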

7. System Security: Require a Device Passcode

Setting: Device must have a password/PIN to unlock.\ Policy Value: Require (device must be protected by a passcode)[1].\ Purpose & Options: This fundamental setting mandates that the user set a lock screen passcode (a PIN, password, or biometric with fallback to PIN). Options are Require or Not configured (which effectively means no compliance check on the passcode)[1]. By requiring a password, Intune ensures the device is not left unlocked or protected only by a swipe. On iOS, any device with a passcode automatically has full-device hardware encryption enabled[1], so this setting also ensures device encryption is active (iOS ties encryption to having a PIN/password). If a user has no passcode, Intune will repeatedly prompt them to set one (the docs note users are prompted every 15 minutes to create a PIN after this policy applies)[1].

Why it’s a Best Practice: It’s hard to overstate the importance of this: requiring a device passcode is one of the most basic and critical security practices for any mobile device. Without a PIN/password, if a device is lost or stolen, an attacker has immediate access to all data on it. With our policy, a device lacking a passcode is noncompliant and will be blocked from company resources; plus Intune will nag the user to secure their device[1]. This aligns with essentially every security framework (CIS, NIST, etc.): devices must use authentication for unlock. For instance, the CIS Apple iOS Benchmark requires that a passcode be set and complex[4], and the first step in Zero Trust device security is to ensure devices are not openly accessible.

By enforcing this, the policy also leverages iOS’s data encryption. Apple hardware encryption kicks in once a PIN is set, meaning data at rest on the phone is protected by strong encryption tied to the PIN (or biometric)[1]. Our policy thereby guarantees that any device with company data has that data encrypted (which might be an explicit compliance requirement under regulations like GDPR, etc., met implicitly through this control). Microsoft notes this in their docs: “iOS devices that use a password are encrypted”[1] – so requiring the password achieves encryption without a separate setting.

No Password = Not Allowed: The default without this enforcement would be to allow devices even if they had no lock. That is definitely not acceptable for strong security. Thus “Require” is absolutely best practice. This is reflected in Microsoft’s baseline (they configure “Require” for password in even the moderate level)[3]. An Intune compliance policy without this would be considered dangerously lax.

User Impact: Users will be forced to set a PIN if they didn’t have one, which is a minimal ask and now common practice. Some might wonder if Face ID/Touch ID counts – actually, biometrics on iOS still require a PIN as backup, so as long as a PIN is set (which it must be to enable Face/Touch ID), this compliance is satisfied. Therefore biometric users are fine – they won’t have to enter PIN often, but the device is still secure. There’s essentially no drawback, except perhaps initial setup inconvenience. Given the stakes (device access control), this is non-negotiable for any security-conscious org.

8. System Security: Disallow Simple Passcodes

Setting: Block the use of simple passcodes (like repeating or sequential numbers).\ Policy Value: Block (simple passwords are not allowed)[1].\ Purpose & Options: When this compliance rule is set to Block, Intune will treat the device as noncompliant if the user sets an overly simple passcode. “Simple” in iOS typically means patterns like 1111, 1234, 0000, 1212, or other trivial sequences/repeats[5]. If Not configured (the default), the user could potentially use such easy PINs[1]. By blocking simple values, the user must choose a PIN that is not a common pattern. iOS itself has a “Simple Passcode” concept in configuration profiles – disabling simple passcodes means iOS enforces that complexity when the user creates a PIN.

Why it’s a Best Practice: Simple PINs are easily guessable – they drastically reduce the security of the device. For example, an attacker who steals a phone can easily try “0000” or “1234” first. Many users unfortunately choose these because they’re easy to remember. According to CIS benchmarks, repeating or sequential characters should be disallowed for device PINs[5]. The rationale: “Simple passcodes include repeating, ascending, or descending sequences that are more easily guessed.”[5]. Our policy adheres to that guidance by blocking them.

This restriction significantly increases the effective strength of a 6-digit PIN. There are 1 million possible 6-digit combinations (000000–999999). If simple patterns were allowed, a large portion of users might use one of perhaps 20 very common patterns, which an attacker would certainly attempt first. Blocking those forces diversity. Apple’s own configuration documentation encourages disabling simple values for stronger security in managed deployments.
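To make the “simple pattern” idea concrete, here is a minimal sketch of the kind of check involved. Apple does not publish the exact rules its simple-passcode test applies, so the pattern families below (all-repeating digits, strictly sequential runs, short alternations) are illustrative assumptions, not the real implementation:

```python
def is_simple_pin(pin: str) -> bool:
    """Rough approximation of a 'simple passcode' test for numeric PINs.

    Flags all-repeating digits (e.g. 0000), strictly ascending or
    descending runs (e.g. 1234, 6543), and two-digit alternations
    (e.g. 1212, 121212). Illustrative only - iOS's actual rules
    are not publicly specified.
    """
    digits = [int(c) for c in pin]
    if len(set(digits)) == 1:                 # 0000, 1111, ...
        return True
    diffs = {b - a for a, b in zip(digits, digits[1:])}
    if diffs == {1} or diffs == {-1}:         # 1234 / 4321
        return True
    if len(pin) >= 4 and pin == pin[:2] * (len(pin) // 2):  # 1212, 121212
        return True
    return False
```

Blocking even just these few families removes exactly the guesses an attacker would attempt first.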

From a best-practice standpoint, this setting complements the minimum length: it’s not enough to require a PIN, you also require it to have some complexity. It aligns with the principle of using hard-to-guess passwords. In Microsoft’s recommended configuration, they set “simple passwords: Block” even at the enhanced (Level 2) security tier[3]. It’s essentially a baseline requirement when enforcing passcode policies.

User Impact: If a user attempts to set a passcode like 123456, the device (with the Intune policy applied) will not accept it. They’ll be required to choose a more complex PIN (e.g., 865309 or some other non-pattern). Generally this is a minor inconvenience for a major gain in security. Over time, users typically adapt and choose something memorable yet not a straight-line pattern. Admins might provide guidance or passcode creation rules as part of user education.

Bottom line: Blocking simple passcodes is definitely best practice for strong security, eliminating the weakest PIN choices and significantly improving resistance to brute-force guessing[5].

9. System Security: Minimum Passcode Length

Setting: The minimum number of characters/digits in the device passcode.\ Policy Value: 6 characters (minimum).\ Purpose & Options: This sets how long the PIN/password must be at minimum. Intune allows configuring any length, but common values are 4 (very weak), 6 (moderate), or higher for actual passwords. Microsoft supports 4 and up for PIN, but 6 is the recommended minimum for modern iOS devices[3]. The policy here uses 6, meaning a 4-digit PIN would be noncompliant – the user must use six or more digits/characters. Options: an admin could set 8, 10, etc., depending on desired security, or leave Not configured (no minimum beyond iOS’s default, which is 4). By enforcing 6, we go beyond the default low bar.

Why it’s a Best Practice: Historically, iPhones allowed a 4-digit PIN. But security research and standards (like CIS) have since moved to 6 as a minimum to provide better resistance to guessing. A 4-digit PIN has only 10,000 combinations; a 6-digit PIN has 1,000,000 – a hundredfold increase in the search space. Per the CIS iOS benchmark: “Ensure minimum passcode length is at least 6 or greater”[4]. Their rationale: six characters provides reasonable assurance against passcode attacks[4]. Many organizations choose 6 because it strikes a balance between security and usability on a mobile device. Our policy’s value of 6 is aligned with both CIS and Microsoft’s guidance (the Level 2 baseline uses 6 as a default example)[3].

For even stronger security, some high-security environments might require 8 or more characters (especially if using alphanumeric passcodes). But requiring more than 6 digits on a phone can significantly hurt usability – users might start writing down passcodes if they’re too long or complex. Six is considered a sweet spot: it’s now the default on modern iPhones (when you set a PIN on a new iPhone, Apple asks for 6 by default, indicating Apple’s own move toward better security). An attacker facing a 6-digit PIN gets only rate-limited on-device guesses – at most 10 if erase-after-failed-attempts is enabled separately via MDM – and offline brute force is impractical because iOS ties the encryption key to the device hardware.
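The arithmetic behind these claims is worth spelling out; a quick sketch (the 10-attempt cap assumes the erase-after-failed-attempts option is enabled via a separate profile):

```python
# Search-space comparison for 4- vs 6-digit PINs.
four_digit = 10 ** 4   # 10,000 possible 4-digit PINs
six_digit = 10 ** 6    # 1,000,000 possible 6-digit PINs
print(six_digit // four_digit)  # → 100 (a hundredfold larger space)

# If the device erases after 10 failed attempts, an attacker gets at
# most 10 on-device guesses against a randomly chosen 6-digit PIN.
p_success = 10 / six_digit
print(f"{p_success:.3%}")  # → 0.001%
```

The same 10-guess cap applied to a 4-digit PIN would give a 0.1% success chance – a hundred times worse, which is the whole argument for the longer minimum.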

Thus, setting 6 as minimum is best practice. It ensures no one can set a 4-digit code (which is too weak by today’s standards)[4]. Some orgs might even consider this the bare minimum and opt for more, but 6 is widely accepted as a baseline for strong mobile security.

Note: The policy says “Organizations should update this setting to match their password policy” in Microsoft’s template[3]. If an org’s policy says 8, they should use 8. But for most, 6 is likely the standard for mobile. The key is that we have a defined minimum greater than 0. Not setting a minimum (or setting it to 4) would not be best practice. Our profile’s choice of 6 shows it’s aiming for solid security while keeping user convenience somewhat in mind (since it didn’t jump to, say, 8).

User Impact: Users with a 4-digit PIN (if any exist nowadays) would be forced to change to 6 digits. Most users likely already use 6 due to OS nudges. If they use an alphanumeric password, it must be at least 6 characters. Generally acceptable for users – 6-digit PINs are now common and quick to enter (especially since many use Face ID/Touch ID primarily and only enter the PIN occasionally).

In summary, min length = 6 is a best practice baseline for strong security on iOS, aligning with known guidelines[4].

10. System Security: Required Passcode Type

Setting: Type/complexity of passcode required (numeric, alphanumeric, etc.).\ Policy Value: Numeric (PIN can be purely numeric digits)[3].\ Purpose & Options: Intune allows specifying what kind of characters the device password must contain. The typical options are Numeric (numbers only), Alphanumeric (must include both letters and numbers), or Device default/Not configured[1]. If set to Alphanumeric, the user must create a passcode with at least one letter and one number (symbols are optional). If Numeric (as in our policy), the user can use digits only – no letter required[1]. Apple’s default on iPhones is a 6-digit numeric PIN unless the user switches to a custom alphanumeric code. So our policy’s Numeric requirement means “we will accept the standard PIN format” – we’re not forcing letters. However, we are also blocking simple patterns and requiring length 6, so the result is effectively a complex numeric PIN.

Why it’s configured this way: You might wonder, wouldn’t Alphanumeric be more secure? In pure theory, yes – an alphanumeric password of the same length is stronger than numeric. However, forcing alphanumeric on mobile can impact usability significantly. Typing a complex alphanumeric password every unlock (or even occasionally) is burdensome for users, especially if Face/Touch ID fails or after reboots. Many organizations compromise by allowing a strong numeric PIN, which still provides good security given the other controls (length and device auto-wipe on excessive attempts, etc.). Microsoft’s Level 2 (enhanced) security guidance actually shows Numeric as the recommended setting, with a note “orgs should match their policy”[3]. At Level 3 (high security), Microsoft did not explicitly change it to Alphanumeric in the example (they kept focus on expiration)[3], which implies even high-security profiles might stick to numeric but compensate by other means (like requiring very long numeric or frequent changes).

Is Numeric a best practice? It is a reasonable best practice for most cases: a 6-digit random numeric PIN, especially with the simple-sequence restriction and limited attempts, is quite secure. Consider that iOS will erase or lock out after 10 failed tries (if that’s enabled via a separate device configuration profile, which often accompanies compliance). That means an attacker can’t even brute force all 1,000,000 possibilities – they get at most 10 guesses, which is a 0.001% chance if the PIN is random. In contrast, forcing an alphanumeric password might encourage users to use something shorter but with a letter, or they might write it down. The policy likely chose Numeric 6 to maximize adoption and compliance while still being strong. This is consistent with many corporate mobile security policies and the CIS benchmarks (which do not require alphanumeric for mobile, just a strong PIN).

However, for maximum security, an organization might opt for Alphanumeric with a higher minimum length (e.g., 8 or more). That would make unlocking even harder to brute force (though again, iOS has built-in brute force mitigations). Our analysis is that the provided policy is striking a balance: it’s implementing strong security that users will realistically follow. Numeric is called best practice in many guides because trying to impose full computer-style passwords on phones can backfire (users might not comply or might resort to insecure behaviors to cope).

Conclusion on Type: The chosen value Numeric with other constraints is a best practice for most secure deployments. It definitely improves on a scenario where you let device default (which might allow 4-digit numeric or weak patterns if not otherwise blocked). It also reflects real-world use: most users are used to a PIN on phones. For a security-maximal stance, one could argue Alphanumeric is better, but given that our policy already covers length, complexity, and other factors, numeric is justified. So yes, this setting as configured is consistent with a best-practice approach (and one endorsed by Microsoft’s own templates)[3].

If an organization’s policy says “all device passwords must have letters and numbers”, Intune can enforce that by switching this to Alphanumeric. That would be even stricter. But one must weigh usability. If after deployment it’s found that numeric PINs are being compromised (which is unlikely if other controls are in place), then revisiting this could be an enhancement. For now, our strong security policy uses numeric and relies on sufficient length and non-sequence to ensure strength.

11. System Security: Minimum Special Characters

Setting: Minimum number of non-alphanumeric characters required in the passcode.\ Policy Value: 0 (since the policy only requires numeric, this isn’t applicable).\ Purpose & Options: This setting only matters if Alphanumeric passwords are required. It lets you enforce that a certain number of characters like ! @ # $ % (symbols) be included[1]. For example, you could require at least 1 special character to avoid passwords that are just letters and numbers. In our policy, because passcode type is Numeric, any value here would be moot – a numeric PIN won’t have symbols or letters at all. It’s likely left at 0 or not configured. If the JSON has it, it’s probably 0. We mention it for completeness.

Why it’s configured this way: In a maximum security scenario with alphanumeric passwords, one might set this to 1 or more for complexity. But since the policy chose Numeric, there’s no expectation of symbols. Setting it to 0 simply means no additional symbol requirement (the default). That’s appropriate here.

If the organization later decided to move to alphanumeric passcodes, increasing this to 1 would then make sense (to avoid users picking simple alphabetic words or just letters+numbers without any symbol). But as things stand, this setting isn’t contributing to security in the numeric-PIN context, and it doesn’t detract either—it’s effectively neutral.

In summary, 0 is fine given numeric PINs. If Alphanumeric were enforced, best practice would be at least 1 special char to ensure complexity (especially if minimum length is not very high). But since we are not requiring letters at all, this is not a factor.

(It’s worth noting iOS on its own does not require special chars in PINs by default; this is purely an extra hardening option available through MDM for password-type codes.)

12. System Security: Maximum Inactivity Time (Auto-Lock)

Setting: Maximum minutes of inactivity before the device screen locks.\ Policy Value: 5 minutes.\ Purpose & Options: This compliance rule ensures that the device is set to auto-lock after no more than X minutes of user inactivity[1]. The policy value of 5 minutes means the user’s Auto-Lock (in iOS Settings) must be 5 minutes or less. If a user tried to set “Never” or something longer than 5, Intune would mark the device noncompliant. Options range from “Immediately” (which is essentially 0 minutes) up through various durations (1, 2, 3, 4, 5, 15 minutes, etc.)[1]. Not configured would not enforce any particular lock timeout.

Why it’s a Best Practice: Limiting the auto-lock timer reduces the window of opportunity for an unauthorized person to snatch an unlocked device or for someone to access it if the user leaves it unattended. 5 minutes of inactivity is a common security recommendation for maximum idle time on mobile devices. Many security standards suggest 5 minutes or less; some high-security environments even use 2 or 3 minutes. Microsoft’s enhanced security example uses 5 minutes for iOS[3]. This strikes a balance between security and usability: the phone will lock fairly quickly when not in use, but not so instantly that it frustrates the user while actively reading something. Without this, a user might set their phone to never lock or to a very long timeout (some users do this for convenience), which is risky because it means the phone could be picked up and used without any authentication if the user leaves it on a desk, etc.

By enforcing 5 minutes, the policy ensures devices lock themselves in a timely manner. That way, even if a user forgets to lock their screen, it won’t sit accessible for more than 5 minutes. Combined with requiring a passcode immediately on unlock (next setting), this means after those 5 minutes, the device will demand the PIN again. This is definitely best practice: both NIST and CIS guidelines emphasize automatic locking. For instance, older U.S. DoD STIGs for mobile mandated a 15-minute or shorter lock; many organizations choose 5 to be safer. It aligns with the concept of least privilege and time-based access — you only stay unlocked as long as needed, then secure the device.

User Impact: Users might notice their screen going dark sooner. But 5 minutes is usually not intrusive; many users have that as their default. (In fact, iOS itself often limits how long Auto-Lock can be set: on some devices, when certain features like managed email or Exchange policies are present, “Never” is not an option and the maximum selectable is often 5 minutes – partly an OS-level security behavior.) So in practice this likely doesn’t bother most users. If someone had it set to 10 minutes or “Never” before, Intune compliance will force it down to 5.

From security perspective, 5 minutes or even less is recommended. One could tighten to 1 or 2 minutes if ultra-secure, but that might annoy users who have to constantly wake their phone. So 5 is a solid compromise that’s considered a best practice in many mobile security benchmarks (some regulatory templates use 5 as a standard).

13. System Security: Grace Period to Require Passcode

Setting: Maximum time after screen lock before the password is required again.\ Policy Value: 5 minutes (set equal to the auto-lock time).\ Purpose & Options: This setting (often called “Require Password after X minutes”) defines how soon after the device locks the PIN must be entered to unlock it again[1]. iOS lets you require the passcode immediately or after a short delay – if you lock the phone and wake it again within, say, 1 minute, you might not need to re-enter the PIN. Security policies often mandate that the passcode be required immediately or very shortly after lock. Our policy sets 5 minutes: if the device locks (due to inactivity or the power button) and the user returns within the grace window, they may not need to re-enter the PIN, but beyond 5 minutes it will always ask. Options range from Immediately up to several minutes or hours[1]. The default of Not configured would allow whatever the user sets (which could be a 15-minute grace period, for example).

Why it’s a Best Practice: Ideally, you want the device to require the passcode as soon as it’s locked, or very soon after, to prevent someone from waking a recently locked device and getting in without the PIN. By setting 5 minutes, the policy still gives a small usability window – a user who locks and unlocks within 5 minutes may not need to re-enter the PIN – but beyond that it will always prompt. Many security pros recommend “Immediately” for maximum security, meaning the PIN is always required on unlock (biometrics count as entering it). Our policy uses 5 minutes, likely to align with the auto-lock setting. Note that the grace period counts from the moment the device locks rather than from the start of inactivity, so with both values at 5 minutes a device could in principle be reopened without a PIN shortly after locking; in practice the experience is generally governed by the shorter of the user’s own setting and the enforced maximum, and a user who has “Require Immediately” set will always be prompted.

In high-security configurations, it’s common to set this to Immediately[1]. CIS benchmark guidance for iOS similarly favors requiring the passcode immediately or after a very short delay. But 5 minutes is still within a reasonable security range. The key point is that the policy did not leave this open-ended – it explicitly capped it. This ensures a uniform security posture: you won’t have devices quietly lurking where the user set “require passcode after 15 minutes” (the maximum grace period Apple allows).

Because our policy aligns these 5-minute values, devices are guaranteed to demand the PIN within a bounded time after locking – at most 5 minutes – rather than the much longer grace periods a user could otherwise choose. One might tighten this to 1 minute or Immediately for more security, at the cost of convenience.

Conclusion: Having any requirement (not “Not configured”) is the main best practice. 5 minutes is a reasonable secure choice, matching common guidance (for instance, U.K. NCSC guidance suggests short lock times with immediate PIN on resume). For an ultra-secure mode, immediate would be even better – but what’s chosen here is still within best practice range. It certainly is far superior to letting a device sit unlocked or accessible without PIN for long periods. So it checks the box of strong security.

14. System Security: Password Expiration

Setting: Days until the device passcode must be changed.\ Policy Value: 365 days (1 year).\ Purpose & Options: This compliance setting forces the user to change their device PIN/password after a certain number of days[1]. In our policy it’s set to 365, meaning roughly once a year the user is required to pick a new passcode. Options can range from as low as 30 days up to, say, 730 days, or Not configured (no forced change). When configured, once the passcode age reaches the threshold, Intune marks the device noncompliant until the user updates the passcode to one they haven’t used recently. iOS doesn’t natively expire device PINs on its own, but Intune’s compliance checking can detect the age based on when the passcode was last set (which it can query on managed devices).

Why it’s a Best Practice: Password (or PIN) rotation requirements have long been part of security policies to mitigate the risk of compromised credentials. For mobile device PINs, it’s somewhat less common to enforce changes compared to network passwords, but in high-security contexts it is done. Microsoft’s Level 3 high-security recommendation for iOS adds a 365-day expiration whereas the lower level didn’t have any expiration[3]. This suggests that in Microsoft’s view, annual PIN change is a reasonable step for the highest security tier. The thinking is: if somehow a PIN was compromised or observed by someone, forcing a change periodically limits how long that knowledge is useful. It also ensures that users are not using the same device PIN indefinitely for many years (which could become stale or known to ex-employees, etc.).

Modern security guidance (like NIST SP 800-63 and others) has moved away from frequent password changes for user accounts, unless there’s evidence of compromise. However, device PINs are a slightly different story – they are shorter and could be considered less robust than an account password. Requiring a yearly change is a light-touch expiration policy (some orgs might do 90 days for devices, but that’s fairly aggressive). One year balances security and user burden. It’s essentially saying “refresh your device key annually”. That is considered acceptable in strong security environments, and not too onerous for users (once a year).

Why not more often? Changing too frequently (like every 30 or 90 days) might degrade security because users could choose weaker or very similar PINs when forced often. Once a year is enough that it could thwart an attacker who learned an old PIN, while not making users circumvent policies. Our policy’s 365-day expiry thus fits a best practice approach that’s also reflected in the high-security baseline by Microsoft[3].

Trade-offs: Some argue that if a PIN is strong and not compromised, forcing a change isn’t necessary and can even be counterproductive by encouraging patterns (like PIN ending in year, etc.). But given this is for maximum security, the conservative choice is to require changes periodically. The user impact is minimal (entering a new PIN once a year and remembering it). Intune will alert the user when their PIN is “expired” by compliance rules, guiding them to update it.

Conclusion: While not every company enforces device PIN expiration, as a strong security best practice it does add an extra layer. Our profile’s inclusion of 365-day expiration is consistent with an environment that doesn’t want any credential (even a device unlock code) to remain static forever[3]. It’s a best practice in the context of high security, and we agree with its use here.

15. System Security: Prevent Reuse of Previous Passcodes

Setting: Number of recent passcodes disallowed when setting a new one.\ Policy Value: 5 (cannot reuse any of the last 5 passcodes).\ Purpose & Options: This goes hand-in-hand with the expiration policy. It specifies how many of the user’s most recent passcodes are remembered and blocked from being reused[1]. With a value of 5, when the user is forced to change their PIN, they cannot cycle back to any of their last 5 previously used PINs. Options are any number, typically 1–24, or Not configured (no memory of old PINs, meaning user could alternate between two PINs). Our policy chooses 5, which is a common default for preventing trivial reuse.

Why it’s a Best Practice: If you require password changes, you must also prevent immediate reuse of the same password, otherwise users might just swap between two favorites (like “111111” to “222222” and back to “111111”). By remembering 5, the policy ensures the user can’t just flip between a small set of PINs[1]. They will have to come up with new ones for at least 5 cycles. This promotes better security because it increases the chance that an old compromised PIN isn’t reused. It also encourages users to not just recycle – hopefully each time they choose something unique (at least in a series of 6 or more unique PINs).

The number “5” is somewhat arbitrary but is a standard in many policies (Active Directory password policy often uses 5 or 24). Microsoft’s high-security iOS example uses 365 days expiry but did not explicitly list the history count – likely they do set something, and 5 is often a baseline. CIS benchmarks for mobile device management also suggest preventing at least last 5 passcodes on reuse to avoid alternating patterns.

In short, since our policy does expiration, having a history requirement is necessary to fulfill the intent of expiration. 5 is a reasonable balance (some might choose 3 or 5; some stricter orgs might say 10). Using 5 is consistent with best practices to ensure credential freshness.

User Impact: Minimal – it only matters when changing the PIN. The user just has to pick something they haven’t used recently. Given a year has passed between changes, many might not even remember their 5 PINs ago. If they try something too similar or the same as last time, Intune/iOS will reject it and they’ll choose another. It’s a minor inconvenience but an important piece of enforcing genuine password updates.

Therefore, this setting, as configured, is indeed part of the best practice approach to maintain passcode integrity over time. Without it, the expiration policy would be weaker (users could rotate among two favorites endlessly).
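Taken together, the passcode requirements in sections 7–15 correspond roughly to the following fragment of a Graph-style iOS compliance policy. Property names are from the `microsoft.graph.iosCompliancePolicy` resource as we understand it; treat this as an illustrative sketch rather than a deployable policy, and verify names and enum values against current Graph documentation:

```json
{
  "@odata.type": "#microsoft.graph.iosCompliancePolicy",
  "passcodeRequired": true,
  "passcodeBlockSimple": true,
  "passcodeMinimumLength": 6,
  "passcodeRequiredType": "numeric",
  "passcodeMinutesOfInactivityBeforeLock": 5,
  "passcodeExpirationDays": 365,
  "passcodePreviousPasscodeBlockCount": 5
}
```

Seeing the settings side by side makes the interplay clear: the length, complexity, timeout, expiration, and history rules only deliver their intended protection as a set, which is why the analysis above treats them as one passcode posture.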

16. Device Security: Restricted Apps

Setting: Block compliance if certain apps are installed (by bundle ID).\ Policy Value: Not configured (no specific restricted apps listed in baseline).\ Purpose & Options: This feature lets admins name particular iOS apps (by their unique bundle identifier) that are not allowed on devices. If a device has any of those apps installed, it’s marked noncompliant[1]. Typically, organizations use this to block known risky apps (e.g., apps that violate policy, known malware apps if any, or maybe unsanctioned third-party app stores, etc.). The JSON policy can include a list of bundle IDs under “restrictedApps”. In a general best-practice baseline, it’s often left empty because the choice of apps is very organization-specific.

Why it’s (not) configured here: Our policy is designed for broad strong security, and doesn’t enumerate any banned apps by default. This makes sense – there isn’t a one-size-fits-all list of iOS apps to block for compliance. However, an organization may decide to add apps to this list over time. For instance, if a certain VPN app or remote-control app is considered insecure, they might add its bundle ID. Or if an app is known to be a root/jailbreak tool, they could list it (though if the device was jailbroken the other control already catches it).

Is this a best practice? The best practice approach is to use this setting judiciously to mitigate specific risks. It’s not a required element of every compliance policy. Many high-security orgs do add a few disallowed apps (for example, banning the Tor Browser, or the Cydia store, which only appears on jailbroken devices) as an extra safety net. In our evaluation, since none are listed, we assume the default. That’s fine – it’s better to have no blanket restrictions than to accidentally restrict benign apps. We consider it neutral in terms of the policy’s strength.

However, we mention it because as an additional enhancement (Sub-question 10), an organization could identify and restrict certain apps for even stronger security. For example, if you deem that users should not have any unmanaged cloud storage apps or unapproved messaging apps that could leak data, you could list them here. Each added app tightens security but at the cost of user freedom. Best practice is to ban only those apps that pose a clear security threat or violate compliance (e.g., an antivirus app that conflicts with the corporate one, or a known malicious app). Given the evolving threat landscape, administrators should review whether any emerging malicious apps on iOS should be flagged.

Conclusion on apps: No specific app restrictions are in the base policy, which is fine as a starting point. It’s something to keep in mind as a customizable part of compliance. The policy as provided is still best practice without any entries here, since all other critical areas are covered.

If not used, this setting doesn’t affect compliance. If used, it can enhance security by targeting specific risks. In a max security regime, you might see it used to enforce that only managed apps are present or that certain blacklisted apps never exist. That would be an additional layer on top of our current policy.
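If an organization later decides to restrict apps, the entries would appear under “restrictedApps” as a list of bundle identifiers. A sketch using the Microsoft Graph `iosCompliancePolicy` schema (the app and bundle ID shown are illustrative examples, not a recommendation):

```json
{
  "@odata.type": "#microsoft.graph.iosCompliancePolicy",
  "restrictedApps": [
    {
      "@odata.type": "#microsoft.graph.appListItem",
      "name": "Cydia",
      "appId": "com.saurik.Cydia"
    }
  ]
}
```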


Comparison to Industry Best Practices and Additional Considerations

All the settings above align well with known industry standards for mobile security. Many of them map directly to controls in the CIS (Center for Internet Security) Apple iOS Benchmark or government mobility guidelines, as noted. For example, CIS iOS guidance calls for a mandatory passcode with minimum length 6 and no simple sequences[4][5], exactly what we see in this policy. The Australian Cyber Security Centre and others similarly advise requiring device PIN and up-to-date OS for BYOD scenarios – again reflected here.

Critically, these compliance rules implement the device-side of a Zero Trust model: only devices that are fully trusted (secured, managed, up-to-date) can access corporate data. They work in tandem with Conditional Access policies which would, for instance, block noncompliant devices from email or SharePoint. The combination ensures that even if a user’s credentials are stolen, an attacker still couldn’t use an old, insecure phone to get in, because the device would fail compliance checks.

Potential Drawbacks or Limitations: There are few downsides to these strong settings, but an organization should be aware of user impact and operational factors:

  • User Experience: Some users might initially face more prompts (e.g., to update iOS or change their PIN). Proper communication and IT support can mitigate frustration. Over time, users generally accept these as standard policy, especially as mobile security awareness grows.
  • Device Exclusions: Very strict OS version rules might exclude older devices. For instance, an employee with an iPhone that cannot upgrade to iOS 16 will be locked out. This is intentional for security, but the organization should have a plan (perhaps providing updated devices or carving out a temporary exception group if absolutely needed for certain users – though exceptions weaken security).
  • Biometric vs PIN: Our policy doesn’t explicitly mention biometrics; Intune doesn’t control whether Face ID/Touch ID is used – it just cares that a PIN is set. Some security frameworks require biometrics be enabled or disabled. Here we implicitly allow them (since iOS uses them as convenience on top of PIN). This is usually fine and even preferable (biometrics add another factor, though not explicitly checked by compliance). If an organization wanted to disallow Touch/Face ID (some high-security orgs do, fearing spoofing/legal issues), that would be a device configuration profile setting, not a compliance setting. As is, allowing biometrics is generally acceptable and helps usability without hurting security.
  • Reliance on Additional Tools: Two of our settings (device threat level, MDE risk) rely on having additional security apps (MTD/Defender) deployed. If those aren’t actually present, those settings do nothing (or we’d not configure them). If they are present, great – we get that extra protection. Organizations need the licensing (Defender for Endpoint or third-party) and deployment in place. For Business Premium (which the repository name hints at), Microsoft Defender for Endpoint is included, so it makes sense to use it. Without it, one could drop those settings and still have a solid compliance core.
  • Maintenance Effort: As mentioned, minimum OS version and build must be kept updated. This policy is not “set and forget” – admins should bump the minimum OS every so often. For example, when iOS 18 comes and is tested, require at least 17.0. And if major vulnerabilities hit, possibly use the build number rule to enforce rapid patch adoption. This requires tracking Apple’s release cycle and possibly editing the JSON or using Intune UI periodically. That is the price of staying secure: complacency can make a “best practice” policy become outdated. A device compliance policy from 2 years ago that still only requires iOS 14 would be behind the times now. So, regular reviews are needed (Recommendation: review quarterly or with each iOS release).
  • Conditional Access dependency: The compliance policy by itself just marks devices. To actually block access, one must have Azure AD Conditional Access policies that require a compliant device for certain apps/data. This may sound obvious, but it’s worth stating plainly: to realize the “best practice” outcome (no insecure device gets in), you must pair this policy with CA. That is presumably in place if the organization is using Intune compliance (since that’s how it enforces access). If CA is not properly configured, a noncompliant device might still access data – so ensure CA policies are set (e.g., “Require compliant device” for all cloud apps or at least email/O365 apps).
  • Monitoring and Response: IT should watch compliance reports. For example, if a device shows as noncompliant due to, say, “Jailbroken = true,” that’s a serious red flag – follow up with the user, as it could indicate a compromise or at least a policy violation. Similarly, devices not updating OS should be followed up on – perhaps the user clicked “later” on updates; a gentle nudge or help might be needed. The compliance policy can even be set to send a notification after X days of noncompliance (e.g., email user if after 1 week they still aren’t updated). Those actions for noncompliance are configured in Intune (outside the JSON’s main rule set) and are part of maintaining compliance. Best practice is to at least immediately mark noncompliant[3] (which we do) and possibly notify and eventually retire the device if prolonged.
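The Conditional Access pairing noted in the list above can be sketched as a grant control that requires device compliance. An illustrative fragment modeled on the Microsoft Graph `conditionalAccessPolicy` resource (the display name and the all-users/all-apps scope are assumptions for the example):

```json
{
  "displayName": "Require compliant device for cloud apps",
  "state": "enabled",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] }
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["compliantDevice"]
  }
}
```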

Other Additional Security Settings (if we wanted to enhance further):

  • Device Encryption: On iOS, as noted, encryption is automatic with a passcode. So we don’t need a separate compliance check for “encryption enabled” (unlike on Android, where that’s a setting). This is covered by requiring a PIN.
  • Device must be corporate-owned or supervised: Intune compliance policies don’t directly enforce device ownership type. But some orgs might only allow “Corporate” devices to enroll. Not applicable as a JSON setting here, but worth noting as a broader practice: supervised (DEP) iOS devices have more control. If this policy were for corporate-managed iPhones, they likely are supervised, which allows even stricter config (but that’s beyond compliance realm). For BYOD, this policy is about as good as you can do without going to app protection only.
  • Screen capture or backup restrictions: Those are more Mobile Device Configuration policies (not compliance). For example, one might disallow iCloud backups or require Managed Open-In to control data flow. Those are implemented via Configuration Profiles, not via compliance. So they’re out of scope for this JSON, but they would complement security. Our compliance policy is focusing on device health and basics.
  • Jailbreak enhanced detection: Ensure Intune’s device settings (like location services) are correctly set if needed, as mentioned, to improve jailbreak detection. Possibly communicate to users that for security, they shouldn’t disable certain settings.

Default iOS vs This Policy: By default, an iPhone imposes very few of these restrictions on its own. Out of the box: a passcode is optional (though encouraged), simple PINs are allowed (the default is a 6-digit PIN, but a trivial code like 111111 would be accepted), auto-lock could be set to Never, and there is obviously no concept of compliance. So compared to that, this Intune policy greatly elevates the security of any enrolled device. It essentially brings an unmanaged iPhone up to enterprise-grade security standards:

  • If a user never set a PIN, now they must.
  • If they chose a weak PIN, now they must strengthen it.
  • If they ignore OS updates, now they have to update.
  • If they somehow tampered (jailbroke) the device, now it gets quarantined.
  • All these improvements happen without significantly hindering normal use of the phone for legitimate tasks – it mostly works in the background or at setup time.

Recent Updates or Changes in Best Practices: The mobile threat landscape evolves, but as of this writing, these settings remain the gold-standard fundamentals. One newer element in iOS security is Rapid Security Response updates, which this policy can account for via the build-version check. Also, the emergence of advanced phishing on mobile has made tools like Defender for Endpoint on mobile more important – hence integrating compliance with device risk (which our policy does) is a newer best practice (a few years ago, few organizations enforced MTD risk in compliance; now it’s recommended for higher security). The policy reflects up-to-2025 thinking (for instance, including Defender integration[3], which is relatively new).

Apple iOS 17 and 18 haven’t introduced new compliance settings, but one might keep an eye on things like Lockdown Mode (extreme security mode in iOS) – not an Intune compliance check currently, but in the future perhaps there could be compliance checks for that for highest-risk users. For now, our policy covers the known critical areas.

Integration with Other Security Measures: Lastly, it’s worth noting how this compliance policy fits into the overall security puzzle:

  • It should be used alongside App Protection Policies (MAM) for scenarios where devices aren’t enrolled or to add additional protection inside managed apps (especially for BYOD, where you might want to protect data even if a compliance gap occurs).
  • It complements Conditional Access as discussed.
  • It relies on Intune device enrollment – which itself requires user buy-in (users must enroll their device in Intune Company Portal). Communicating the why (“we have these policies to keep company data safe and keep your device safe too”) can help with user acceptance.
  • These compliance settings also generate a posture that can be fed into a Zero Trust dashboard or risk-based access solutions.

Maintaining and Updating Over Time:
To ensure these settings remain effective, an organization should:

  • Update OS requirements regularly: As mentioned, keep track of iOS releases and set a schedule to bump the minimum version after verifying app compatibility. A good practice is to lag one major version behind current (N-1)[3], and possibly enforce minor updates within that via build numbers after major security fixes.
  • Monitor compliance reports: Use Intune’s reporting to identify devices frequently falling out of compliance. If a particular setting is commonly an issue (say many devices show as noncompliant due to pending OS update), consider if users need more time or if you need to adjust communication. But don’t drop the setting; rather, help users meet it.
  • Adjust to new threats: If new types of threats emerge, consider employing additional controls. For example, if a certain malicious app trend appears, use the Restricted Apps setting to block those by ID. Or if SIM swapping/ESIM vulnerabilities become a concern, maybe integrate carrier checks if available.
  • Train users: Make sure users know how to maintain compliance: e.g., how to update iOS, how to reset their PIN if they forget the new one after change, etc. Empower them to do these proactively.
  • Review password policy alignment: Ensure the mobile PIN requirements align with your overall corporate password policy framework. If the company moves to passwordless or other auth, device PIN is separate but analogous – keep it strong.
  • Consider feedback: If users have issues (for instance, some older device struggling after OS update), have a process for exceptions or support. Security is the priority, but occasionally a justified exception might be temporarily granted (with maybe extra monitoring). Intune allows scoping policies to groups, so you could have a separate compliance policy for a small group of legacy devices with slightly lower requirements, if absolutely needed, rather than weakening it for all.

In conclusion, each setting in the iOS Intune compliance JSON is indeed aligned with best practices for strong security on mobile devices. Together, they create a layered defense: device integrity, OS integrity, and user authentication security are all enforced. This significantly lowers the risk of data breaches via lost or compromised iPhones/iPads. By understanding and following these settings, the organization ensures that only secure, healthy devices are trusted – a cornerstone of modern enterprise security. [2][3]

References

[1] iOS/iPadOS device compliance settings in Microsoft Intune

[2] Jailbroken/Rooted Devices | Microsoft Zero Trust Workshop

[3] iOS/iPadOS device compliance security configurations – Microsoft Intune

[4] 2.4.3 Ensure ‘Minimum passcode length’ is set to a value of ‘6… – Tenable

[5] 2.4.1 Ensure ‘Allow simple value’ is set to ‘Disabled’ | Tenable®

Analysis of Intune Windows 10/11 Compliance Policy Settings for Strong Security

This report examines each setting in the provided Intune Windows 10/11 compliance policy JSON and evaluates whether it represents best practice for strong security on a Windows device. For each setting, we explain its purpose, configuration options, and why the chosen value helps ensure maximum security.


Device Health Requirements (Boot Security & Encryption)

Require BitLocker – BitLocker Drive Encryption is mandated on the OS drive (Require BitLocker: Yes). BitLocker uses the system’s TPM to encrypt all data on disk and locks encryption keys unless the system’s integrity is verified at boot[1]. The policy setting “Require BitLocker” ensures that data at rest is protected – if a laptop is lost or stolen, an unauthorized person cannot read the disk contents without proper authorization[1]. Options: Not configured (default, don’t check encryption) or Require (device must be encrypted with BitLocker)[1]. Setting this to “Require” is considered best practice for strong security, as unencrypted devices pose a high data breach risk[1]. In our policy JSON, BitLocker is indeed required[2], aligning with industry recommendations to encrypt all sensitive devices.

Require Secure Boot – This ensures the PC is using UEFI Secure Boot (Require Secure Boot: Yes). Secure Boot forces the system to boot only trusted, signed bootloaders. During startup, the UEFI firmware will verify that bootloader and critical kernel files are signed by a trusted authority and have not been modified[1]. If any boot file is tampered with (e.g. by a bootkit or rootkit malware), Secure Boot will prevent the OS from booting[1]. Options: Not configured (don’t enforce) or Require (must boot in secure mode)[1]. The policy requires Secure Boot[2], which is a best-practice security measure to maintain boot-time integrity. This setting helps ensure the device boots to a trusted state and is not running malicious firmware or bootloaders[1]. Requiring Secure Boot is recommended in frameworks like Microsoft’s security baselines and the CIS benchmarks for Windows, provided the hardware supports it (most modern PCs do)[1].

Require Code Integrity – Code integrity (a Device Health Attestation setting) validates the integrity of Windows system binaries and drivers each time they are loaded into memory. Enforcing this (Require code integrity: Yes) means that if any system file or driver is unsigned or has been altered by malware, the device will be reported as non-compliant[1]. Essentially, it helps detect kernel-level rootkits or unauthorized modifications to critical system components. Options: Not configured or Require (must enforce code integrity)[1]. The policy requires code integrity to be enabled[2], which is a strong security practice. This setting complements Secure Boot by continuously verifying system integrity at runtime, not just at boot. Together, Secure Boot and Code Integrity reduce the risk of persistent malware or unauthorized OS tweaks going undetected[1].

By enabling BitLocker, Secure Boot, and Code Integrity, the compliance policy ensures devices have a trusted startup environment and encrypted storage – foundational elements of a secure endpoint. These Device Health requirements align with best practices like Microsoft’s recommended security baselines (which also require BitLocker and Secure Boot) and are critical to protect against firmware malware, bootkits, and data theft[1][1]. Note: Devices that lack a TPM or do not support Secure Boot will be marked noncompliant, meaning this policy effectively excludes older, less secure hardware from the compliant device pool – which is intentional for a high-security stance.
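In the compliance JSON, these three device-health checks reduce to simple booleans. A sketch using Microsoft Graph `windows10CompliancePolicy` property names (assumed to match the policy described here):

```json
{
  "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
  "bitLockerEnabled": true,
  "secureBootEnabled": true,
  "codeIntegrityEnabled": true
}
```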

Device OS Version Requirements

Minimum OS version – This policy defines the oldest Windows OS build allowed on a device. In the JSON, the Minimum OS version is set to 10.0.19043.10000 (which corresponds roughly to Windows 10 21H1 with a certain patch level)[2]. Any Windows device reporting an OS version lower than this (e.g. 20H2 or an unpatched 21H1) will be marked non-compliant. The purpose is to block outdated Windows versions that lack recent security fixes. End users on older builds will be prompted to upgrade to regain compliance[1]. Options: admin can specify any version string; leaving it blank means no minimum enforcement[1]. Requiring a minimum OS version is a best practice to ensure devices have received important security patches and are not running end-of-life releases[1]. The chosen minimum (10.0.19043) suggests that Windows 10 versions older than 21H1 are not allowed, which is reasonable for strong security since Microsoft no longer supports very old builds. This helps reduce vulnerabilities – for example, a device stuck on an early 2019 build would miss years of defenses (like improved ransomware protection in later releases). The policy’s min OS requirement aligns with guidance to keep devices updated to at least the N-1 Windows version or newer.

Maximum OS version – In this policy, no maximum OS version is configured (set to “Not configured”)[2]. That means devices running newer OS versions than the admin initially tested are not automatically flagged noncompliant. This is usually best, because setting a max OS version is typically used only to temporarily block very new OS upgrades that might be unapproved. Leaving it not configured (no upper limit) is often a best practice unless there’s a known issue with a future Windows release[1]. In terms of strong security, not restricting the maximum OS allows devices to update to the latest Windows 10/11 feature releases, which usually improves security. (If an organization wanted to pause Windows 11 adoption, they might set a max version to 10.x temporarily, but that’s a business decision, not a security improvement.) So the policy’s approach – no max version limit – is fine and does align with security best practice in most cases, as it encourages up-to-date systems rather than preventing them.

Why enforce OS versions? Keeping OS versions current ensures known vulnerabilities are patched. For example, requiring at least build 19043 means any device on 19042 or earlier (which have known exposures fixed in 19043+) will be blocked until updated[1]. This reduces the attack surface. The compliance policy will show a noncompliant device “OS version too low” with guidance to upgrade[1], helping users self-remediate. Overall, the OS version rules in this policy push endpoints to stay on supported, secure Windows builds, which is a cornerstone of strong device security.

(The policy also lists “Minimum/Maximum OS version for mobile devices” with the same values (10.0.19043.10000 / Not configured)[2]. This likely refers to Windows 10 Mobile or Holographic devices. It’s largely moot since Windows 10 Mobile is deprecated, but having the same minimum for “mobile” ensures something like a HoloLens or Surface Hub also requires an up-to-date OS. In our case, both fields mirror the desktop OS requirement, which is fine.)
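The four OS-version fields above can be sketched as follows (property names from the Microsoft Graph `windows10CompliancePolicy` schema; “Not configured” is represented here as null):

```json
{
  "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
  "osMinimumVersion": "10.0.19043.10000",
  "osMaximumVersion": null,
  "mobileOsMinimumVersion": "10.0.19043.10000",
  "mobileOsMaximumVersion": null
}
```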

Configuration Manager Compliance (Co-Management)

Require device compliance from Configuration Manager – This setting is Not configured in the JSON (i.e. it’s left at default)[2]. It applies only if the Windows device is co-managed with Microsoft Endpoint Configuration Manager (ConfigMgr/SCCM) in addition to Intune. Options: Not configured (Intune ignores ConfigMgr’s compliance state) or Require (device must also meet all ConfigMgr compliance policies)[1].

In our policy, leaving it not configured means Intune will not check ConfigMgr status – effectively the device only has to satisfy the Intune rules to be marked compliant. Is this best practice? For purely Intune-managed environments, yes – if you aren’t using SCCM baselines, there’s no need to require this. If an organization is co-managed and has on-premises compliance settings in SCCM (like additional security baselines or antivirus status monitored by SCCM), a strong security stance might enable this to ensure those are met too[1]. However, enabling it without having ConfigMgr compliance policies could needlessly mark devices noncompliant as “not reporting” (Intune would wait for a ConfigMgr compliance signal that might not exist).

So, the best practice depends on context: In a cloud-only or lightly co-managed setup, leaving this off (Not Configured) is correct[1]. If the organization heavily uses Configuration Manager to enforce other critical security settings, then best practice would be to turn this on so Intune treats any SCCM failure as noncompliance. Since this policy likely assumes modern management primarily through Intune, Not configured is appropriate and not a security gap. (Admins should ensure that either Intune covers all needed checks, or if not, integrate ConfigMgr compliance by requiring it. Here Intune’s own checks are quite comprehensive.)
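In Graph terms, “Not configured” for this check typically surfaces as the boolean left at its default. An illustrative sketch (assuming the `windows10CompliancePolicy` schema):

```json
{
  "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
  "configurationManagerComplianceRequired": false
}
```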

System Security: Password Requirements

A very important part of device security is controlling access with strong credentials. This policy enforces a strict device password/PIN policy under the “System Security” category:

  • Require a password to unlock – Yes (Required). This means the device cannot be unlocked without a password or PIN. Users must authenticate on wake or login[1]. Options: Not configured (no compliance check on whether a device has a lock PIN/password set) or Require (device must have a lock screen password/PIN)[1]. Requiring a password is absolutely a baseline security requirement – a device with no lock screen PIN is extremely vulnerable (anyone with physical access could get in). The policy correctly sets this to Require[2]. Intune will flag any device without a password as noncompliant, likely forcing the user to set a Windows Hello PIN or password. This is undeniably best practice; all enterprise devices should be password/PIN protected.
  • Block simple passwords – Yes (Block). “Simple passwords” refers to very easy PINs like 0000 or 1234 or repeating characters. The setting is Simple passwords: Block[1]. When enabled, Intune will require that the user’s PIN/passcode is not one of those trivial patterns. Options: Not configured (allow any PIN) or Block (disallow common simple PINs)[1]. Best practice is to block simple PINs because those are easily guessable if someone steals the device. This policy does so[2], meaning a PIN like “1111” or “12345” would not be considered compliant. Instead, users must choose less predictable codes. This is a straightforward security best practice (also recommended by Microsoft’s baseline and many standards) to defeat casual guessing attacks.
  • Password type – Alphanumeric. This setting specifies what kinds of credentials are acceptable. “Alphanumeric” in Intune means the user must set a password or PIN that includes a mix of letters and numbers (not just digits)[1]. The other options are “Device default” (which on Windows typically allows a PIN of just numbers) or explicitly Numeric (only numbers allowed)[1]. Requiring Alphanumeric effectively forces a stronger Windows Hello PIN – it must include at least one letter or symbol in addition to digits. The policy sets this to Alphanumeric[2], which is a stronger stance than a simple numeric PIN. It expands the space of possible combinations, making it much harder for an attacker to brute-force or guess a PIN. This is aligned with best practice especially if using shorter PIN lengths – requiring letters and numbers significantly increases PIN entropy. (If a device only allows numeric PINs, a 6-digit PIN has a million possibilities; an alphanumeric 6-character PIN has far more.) By choosing Alphanumeric, the admin is opting for maximum complexity in credentials.
    • Note: When Alphanumeric is required, Intune enables additional complexity rules (next setting) like requiring symbols, etc. If instead it was set to “Numeric”, those complexity sub-settings would not apply. So this choice unlocks the strongest password policy options[1].
  • Password complexity requirementsRequire digits, lowercase, uppercase, and special characters. This policy is using the most stringent complexity rule available. Under Intune, for alphanumeric passwords/PINs you can require various combinations: the default is “digits & lowercase letters”; but here it’s set to “require digits, lowercase, uppercase, and special characters”[1]. That means the user’s password (or PIN, if using Windows Hello PIN as an alphanumeric PIN) must include at least one lowercase letter, one uppercase letter, one number, and one symbol. This is essentially a classic complex password policy. Options: a range from requiring just some character types up to all four categories[1]. Requiring all four types is generally seen as a strict best practice for high security (it aligns with many compliance standards that mandate a mix of character types in passwords). The idea is to prevent users from choosing, say, all letters or all numbers; a mix of character types increases password strength. Our policy indeed sets the highest complexity level[2]. This ensures credentials are harder to crack via brute force or dictionary attacks, albeit at the cost of memorability. It’s worth noting modern NIST guidance allows passphrases (which might not have all char types) as an alternative, but in many organizations, this “at least one of each” rule remains a common security practice for device passwords.
  • Minimum password length14 characters. This defines the shortest password or PIN allowed. The compliance policy requires the device’s unlock PIN/password to be 14 or more characters long[1]. Fourteen is a relatively high minimum; by comparison, many enterprise policies set min length 8 or 10. By enforcing 14, this policy is going for very strong password length, which is consistent with guidance for high-security environments (some standards suggest 12+ or 14+ characters for administrative or highly sensitive accounts). Options: 1–16 characters can be set (the admin chooses a number)[1]. Longer is stronger – increasing length exponentially strengthens resistance to brute-force cracking. At 14 characters with the complexity rules above, the space of possible passwords is enormous, making targeted cracking virtually infeasible. This is absolutely a best practice for strong security, though 14 might be considered slightly beyond typical user-friendly lengths. It aligns with guidance like using passphrases or very long PINs for device unlock. Our policy’s 14-char minimum[2] indicates a high level of security assurance (for context, the U.S. DoD STIGs often require 15 character passwords on Windows – 14 is on par with such strict standards).
  • Maximum minutes of inactivity before password is required15 minutes. This controls the device’s idle timeout, i.e. how long a device can sit idle before it auto-locks and requires re-authentication. The policy sets 15 minutes[2]. Options: The admin can define a number of minutes; when not set, Intune doesn’t enforce an inactivity lock (though Windows may have its own default)[1]. Requiring a password after 15 minutes of inactivity is a common security practice to balance security with usability. It means if a user steps away, at most 15 minutes can pass before the device locks itself and demands a password again. Shorter timers (5 or 10 min) are more secure (less window for an attacker to sit at a logged-in machine), whereas longer (30+ min) are more convenient but risk someone opportunistically using an unlocked machine. 15 minutes is a reasonable best-practice value for enterprises – it’s short enough to limit unauthorized access, yet not so short that it frustrates users excessively. Many security frameworks recommend 15 minutes or less for session locks. This policy’s 15-minute setting is in line with those recommendations and thus supports a strong security posture. It ensures a lost or unattended laptop will lock itself in a timely manner, reducing the chance for misuse.
  • Password expiration (days)365 days. This setting forces users to change their device password after a set period. Here it is one year[2]. Options: 1–730 days or not configured[1]. Requiring password change every 365 days is a moderate approach to password aging. Traditional policies often used 90 days, but that can lead to “password fatigue.” Modern NIST guidelines actually discourage frequent forced changes (unless there’s evidence of compromise) because overly frequent changes can cause users to choose weaker passwords or cycle old ones. However, annual expiration (365 days) is relatively relaxed and can be seen as a best practice in some environments to ensure stale credentials eventually get refreshed[1]. It’s basically saying “change your password once a year.” Many organizations still enforce yearly or biannual password changes as a precaution. In terms of strong security, this setting provides some safety net (in case a password was compromised without the user knowing, it won’t work indefinitely). It’s not as critical as the other settings; one could argue that with a 14-char complex password, forced expiration isn’t strictly necessary. But since it’s set, it reflects a security mindset of not letting any password live forever. Overall, 365 days is a reasonable compromise – it’s long enough that users can memorize a strong password, and short enough to ensure a refresh if by chance a password leaked over time. This is largely aligned with best practice, though some newer advice would allow no expiration if other controls (like multifactor auth) are in place. In a high-security context, annual changes remain common policy.
  • Number of previous passwords to prevent reuse: 5. When a password is changed (due to expiration or manually), the user cannot reuse any of their last 5 passwords[1]. Options: typically 1–50 previous passwords can be disallowed; the policy chose 5[2]. Preventing reuse of recent passwords is a standard part of password policy – it ensures that when users do change their password, they don’t just alternate between a couple of favorites. A history of 5 is typical in best practices (common ranges are 5–10) and enforces genuine password updates. This setting is definitely best practice in any environment with password expiration; otherwise users might simply swap back and forth between two passwords. By disallowing the last 5, it will take at least 6 cycles (in this case 6 years, given the 365-day expiry) before an old password could be reused, by which time it’s hoped that password would have lost any exposure or the user comes up with a new one entirely. The policy’s value of 5 is fine and commonly recommended.
  • Require password when device returns from idle state: Yes (Required). This particularly applies to mobile or Holographic devices, but effectively it means a password is required when the device wakes from an idle or sleep state[1]. On Windows PCs, this corresponds to the “require sign-in on wake” setting. Since our idle timeout is 15 minutes, this ensures that when the device resumes (after sleeping or idling past that threshold), the user must sign in again. Options: Not configured or Require[1]. The policy sets it to Require[2], which is certainly what we want – it would be nonsensical to enforce all the above password rules but then not actually lock on wake! In short, this enforces that the password/PIN prompt appears after the idle period or sleep, which is absolutely best practice. (Without this, a device could potentially wake up without a login prompt, undermining the idle timeout.) Windows desktop devices are indeed affected on the next sign-in after an idle period, as noted in the docs[1]. This setting closes the loop on the secure password policy: not only must devices have strong credentials, but those credentials must be re-entered after a period of inactivity, ensuring continuous protection.
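
Viewed through the Microsoft Graph API, the password rules above correspond to properties of the `windows10CompliancePolicy` resource type. A hedged sketch of how this section of the policy JSON might look – the property names come from the Graph schema, the values from the settings discussed above, and the blueprint’s actual export format may differ:

```json
{
  "@odata.type": "#microsoft.graph.windows10CompliancePolicy",
  "passwordRequired": true,
  "passwordBlockSimple": true,
  "passwordRequiredType": "alphanumeric",
  "passwordMinimumLength": 14,
  "passwordMinimumCharacterSetCount": 4,
  "passwordMinutesOfInactivityBeforeLock": 15,
  "passwordExpirationDays": 365,
  "passwordPreviousPasswordBlockCount": 5,
  "passwordRequiredToUnlockFromIdle": true
}
```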

Summary of Password Policy: The compliance policy highly prioritizes strong access control. It mandates a login on every device (no password = noncompliant), and that login must be complex (not guessable, not short, contains diverse characters). The combination of Alphanumeric, 14+ chars, all character types, no simple PINs is about as strict as Windows Intune allows for user sign-in credentials[1][2]. This definitely meets the definition of best practice for strong security – it aligns with standards like CIS benchmarks which also suggest enforcing password complexity and length. Users might need to use passphrases or a mix of PIN with letters to meet this, but that is intended. The idle lock at 15 minutes and requirement to re-authenticate on wake ensure that even an authorized session can’t be casually accessed if left alone for long. The annual expiration and password history add an extra layer to prevent long-term use of any single password or recycling of old credentials, which is a common corporate security requirement.

One could consider slight adjustments: e.g., some security frameworks (like NIST SP 800-63) would possibly allow no expiration if the password is sufficiently long and unique (to avoid users writing it down or making minor changes). However, given this is a “strong security” profile, the chosen settings err on the side of caution, which is acceptable. Another improvement for extreme security could be shorter idle time (like 5 minutes) to lock down faster, but 15 minutes is generally acceptable and strikes a balance. Overall, these password settings significantly harden the device against unauthorized access and are consistent with best practices.

Encryption of Data Storage on Device

Require encryption of data storage on device: Yes (Required). Separate from the BitLocker requirement in Device Health, Intune also has a general encryption compliance rule. Enabling this means the device’s drives must be encrypted (with BitLocker, in the case of Windows) or the device is noncompliant[1]. In our policy, “Encryption: Require” is set[2]. Options: Not configured or Require[1]. This is effectively a redundant safety net given that BitLocker is also specifically required. According to Microsoft, the “Encryption of data storage” check looks for any encryption present on the OS drive; specifically on Windows it checks BitLocker status via a device report[1]. It’s slightly less robust than the Device Health attestation for BitLocker (which needs a reboot to register, etc.), but it covers the scenario generally[1].

From a security perspective, requiring device encryption is unquestionably best practice. It ensures that if a device’s drive isn’t encrypted (for example, BitLocker not enabled or turned off), the device will be flagged. This duplicates the BitLocker rule; having both doesn’t hurt – in fact, Microsoft documentation suggests the simpler encryption compliance might catch the state even if attestation hasn’t updated (though the BitLocker attestation is more reliable for TPM verification of encryption)[1].

In practice, an admin could use one or the other. This policy enables both, which indicates a belt-and-suspenders approach: either way, an unencrypted device will not slip through. This is absolutely aligned with strong security – all endpoints must have storage encryption, mitigating the risk of data exposure from lost or stolen hardware. Modern best practices (e.g. CIS, regulatory requirements like GDPR for laptops with personal data) often mandate full-disk encryption; here it’s enforced twice. The documentation even notes that relying on the BitLocker-specific attestation is more robust (it checks at the TPM level and knows the device booted with BitLocker enabled)[1]. The generic encryption check is a bit broader, but for Windows it equates to BitLocker anyway. The key point is that the policy requires encryption, which we already confirmed is a must-have security control. If BitLocker were somehow not supported on a device (very rare on Windows 10/11, since even Home edition has device encryption now), that device would simply fail compliance – again, meaning only devices capable of encryption and actually encrypted are allowed, which is appropriate for a secure environment.

(Note: Since both “Require BitLocker” and “Require encryption” are turned on, an Intune admin should be aware that a device might show two noncompliance messages for essentially the same issue if BitLocker is off. Users would see that they need to turn on encryption to comply. Once BitLocker is enabled and the device rebooted, both checks will pass[1]. The rationale for using both might be to ensure that even if the more advanced attestation didn’t report, the simpler check would catch it.)

Device Security Settings (Firewall, TPM, AV, Anti-spyware)

This section of the policy ensures that essential security features of Windows are active:

  • Firewall: Require. The policy mandates that Windows Defender Firewall is enabled on the device (Firewall: Require)[1]. This means Intune will mark the device noncompliant if the firewall is turned off or if a user or app disables it. Options: Not configured (do not check firewall status) or Require (firewall must be on)[1]. Requiring the firewall is definitely best practice – a host-based firewall is a critical first line of defense against network-based attacks. Windows Firewall helps block unwanted inbound connections and can enforce outbound rules as well. By ensuring it’s always on (and preventing users from turning it off), the policy guards against scenarios where an employee might disable the firewall and expose the machine to threats[1]. This setting aligns with Microsoft recommendations and the CIS Benchmarks, which also advise that Windows Firewall be enabled on all profiles. Our policy sets it to Require[2], which is correct for strong security. (One thing to note: if a conflicting GPO or configuration turns the firewall off or allows all traffic, Intune would consider that noncompliant even if Intune’s own configuration profile tries to enable it[1] – essentially, Intune checks the effective state. Best practice is to avoid conflicts and keep the firewall defaults to block inbound unless necessary[1].)
  • Trusted Platform Module (TPM): Require. This check ensures the device has a TPM chip present and enabled (TPM: Require)[1]. Intune will look for a TPM security chip and mark the device noncompliant if none is found or it’s not active. Options: Not configured (don’t verify TPM) or Require (TPM must exist)[1]. The TPM is a hardware security module used for storing cryptographic keys (such as BitLocker keys) and for platform integrity (measured boot). Requiring a TPM is a strong security stance because it effectively disallows devices that lack modern hardware security support. Most Windows 10/11 PCs do have TPM 2.0 (Windows 11 even requires it), so this is feasible and aligns with best practices. It ensures features like BitLocker are using TPM protection and that the device can perform hardware attestation. The policy sets TPM to required[2], consistent with Microsoft’s own baseline (which recommends excluding non-TPM machines, as those are typically older or less secure). By enforcing this, you guarantee that keys and sensitive operations can be hardware-isolated. A device without a TPM could potentially store BitLocker keys in software (less secure) or lack support for advanced security like Windows Hello with hardware-backed credentials, so from a security viewpoint this is the right call. Any device without a TPM (or with it disabled) will need remediation or replacement, which is acceptable in a high-security environment. This reflects a zero-trust hardware approach: only modern, TPM-equipped devices can be fully trusted[1].
  • Antivirus: Require. The compliance policy requires that antivirus protection is active and up to date on the device (Antivirus: Require)[1]. Intune checks the Windows Security Center status for antivirus: if no antivirus is registered, or if an AV is present but disabled or out of date, the device is noncompliant[1]. Options: Not configured (don’t check AV) or Require (must have AV on and updated)[1]. It’s hard to overstate the importance of this: running a reputable, active antivirus/antimalware product is absolutely best practice on Windows. The policy’s requirement means every device must have an antivirus engine running that does not report an “at risk” state. Microsoft Defender Antivirus, or a third-party AV that registers with Security Center, will satisfy this. If a user has accidentally turned off real-time protection or the AV signatures are old, Intune will flag it[1]. Enforcing AV is a no-brainer for strong security and matches all industry guidance (e.g., the CIS Controls highlight the need for anti-malware on all endpoints). Our policy does enforce it[2].
  • Antispyware: Require. Similar to antivirus, this ensures anti-spyware protection is on and healthy (Antispyware: Require)[1]. In modern Windows terms, “antispyware” is essentially covered by Microsoft Defender Antivirus as well (Defender handles viruses, spyware, and other malware), but Intune treats it as a separate compliance item checked in Security Center. Requiring it means the anti-malware software’s spyware detection component (such as Defender’s real-time protection for spyware/PUPs) must also be enabled and not outdated[1]. Options: Not configured or Require, analogous to antivirus[1]. The policy sets it to Require[2]. This is again best practice – it ensures comprehensive malware protection is in place. In effect, requiring both AV and antispyware double-checks that the endpoint’s security suite is fully active. If using Defender, it covers both; if using a third-party suite, it counts as long as it reports both AV and antispyware status to Windows Security Center. This redundancy helps catch any scenario where virus scanning is on but spyware definitions are off (though that’s rare with unified products). For our purposes, requiring antispyware simply reinforces the “must have anti-malware” rule – clearly aligned with strong security standards.
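
In the Graph schema, the encryption and device-security checks discussed above map onto boolean properties of `windows10CompliancePolicy`. A hedged sketch of the relevant fragment (property names from the Graph schema; the blueprint’s export format may differ):

```json
{
  "storageRequireEncryption": true,
  "activeFirewallRequired": true,
  "tpmRequired": true,
  "antivirusRequired": true,
  "antiSpywareRequired": true
}
```

The BitLocker requirement discussed earlier under Device Health is a distinct property in the same schema (`bitLockerEnabled`), which is why an unencrypted device can trip two separate checks.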

Collectively, these Device Security settings (Firewall, TPM, AV, antispyware) ensure that critical protective technologies are in place on every device:

  • The firewall requirement guards against network attacks and unauthorized connections[1].
  • The TPM requirement ensures hardware-based security for encryption and identity[1].
  • The AV/antispyware requirements ensure continuous malware defense and that no device is left unprotected against viruses or spyware[1].

All are definitely considered best practices. In fact, running without any of these (no firewall, no AV, etc.) would be considered a serious security misconfiguration. This policy wisely enforces all of them. Any device not meeting these (e.g., someone attempts to disable Defender Firewall or uninstall AV) will get swiftly flagged, which is exactly what we want in a secure environment.

*(Side note: The policy’s reliance on Windows Security Center means it’s vendor-agnostic; e.g., if an organization uses Symantec or another AV, as long as that product reports a good status to Security Center, Intune will see the device as compliant for AV/antispyware. If a third-party AV is used that disables Windows Defender, that’s fine because Security Center will show another AV is active; the compliance rule still requires that one of them is active. So this is a flexible but strict enforcement of “you must have one”.)*

Microsoft Defender Anti-malware Requirements

The policy further specifies settings under Defender (Microsoft Defender Antivirus) to tighten control of the built-in anti-malware solution:

  • Microsoft Defender Antimalware: Require. This means the Microsoft Defender Antivirus service must be running and cannot be turned off by the user[1]. If the device’s primary AV is Defender (the default on Windows 10/11 when no other AV is installed), this ensures it stays on. Options: Not configured (Intune doesn’t ensure Defender is on) or Require (Defender AV must be enabled)[1]. Our policy sets it to Require[2], which is a strong choice. How does this behave with a third-party AV? Typically, when a third-party AV is active, Defender goes into passive mode but is not reported as “disabled” in Security Center terms. This setting primarily aims to prevent someone from turning off Defender without another AV in place. Requiring Defender antivirus to be on is best practice if your organization relies on Defender as the standard AV, ensuring no one (intentionally or accidentally) shuts off Windows’ built-in protection[1]. It essentially overlaps with the “Antivirus: Require” setting, but is more specific; the fact that both are set implies this environment expects to use Microsoft Defender on all machines (common for many businesses). If a user installed a third-party AV that doesn’t properly report to Security Center, this requirement could conflict: Security Center usually shows “Another AV is active,” so the generic AV check would pass, but the Defender-specific check might see Defender as not the active engine and flag the device. In any case, for strong security the ideal is a consistent AV (Defender) across all devices, so requiring Defender is a sound best practice, and our policy reflects that intention.
It aligns with Microsoft’s own baseline for Intune when organizations standardize on Defender. If you weren’t standardized on Defender, you might leave this not configured and just rely on the generic AV requirement. Here it’s set, indicating a Defender-first strategy for antimalware.
  • Microsoft Defender Antimalware minimum version: 4.18.0.0. This setting specifies the lowest acceptable version of the Defender anti-malware client; the policy defines 4.18.0.0 as the minimum[2]. Effect: if a device has a Defender engine older than that version, it’s noncompliant. Version 4.18.x is basically the Defender client that ships with Windows 10 and above (Defender’s engine is updated periodically through Windows Update, but the major/minor version has been 4.18 for a long time), so essentially any Windows 10/11 device with Defender should meet it. This catches only truly outdated installations – for example, a machine whose Defender platform hasn’t updated in a very long time, or Windows 8.1/7 with older Defender versions (though those OSes wouldn’t fall under a Windows 10 policy anyway). Options: the admin can input a specific version string, or leave it blank (no version enforcement)[1]. The policy chose 4.18.0.0, presumably because that covers all modern Windows builds (for example, Windows 10 21H2 uses Defender engine 4.18.x). Requiring a minimum Defender version is good practice to ensure the anti-malware engine itself isn’t outdated: Microsoft occasionally releases new engine versions with improved capabilities, and a machine that fell far behind (e.g., an offline machine that missed engine updates) could have known issues or be missing detection techniques. By enforcing a minimum, you compel those devices to update their Defender platform. Version 4.18.0.0 is effectively the baseline for Windows 10, so this is a reasonable choice, and it’s likely every device will already have a later version (like 4.18.210 or similar). Some organizations might set an even more recent build number to ensure a particular monthly platform update is installed.
In any case, including this setting in the policy shows thoroughness – it’s making sure Defender isn’t an old build. This contributes to security by catching devices that might have the Defender service but not the latest engine improvements. Since the policy’s value is low (4.18.0.0), practically all supported Windows 10/11 devices comply, but it sets a floor that excludes any unsupported OS or really old install. This aligns with best practice: keep security software up-to-date, both signatures and the engine. (The admin should update this minimum version over time if needed – e.g., if Microsoft releases Defender 4.19 or 5.x in the future, they might raise the bar.)
  • Microsoft Defender security intelligence up-to-date: Require. This is basically ensuring Defender’s virus definitions (security intelligence) are current (Security intelligence up-to-date: Yes)[1]. If Defender’s definitions are out of date, Intune will mark the device noncompliant. “Up-to-date” typically means the signatures are no older than a certain threshold (usually a few days, defined by Windows Security Center’s criteria). Options: Not configured (don’t check definition currency) or Require (must have the latest definitions)[1]. It’s set to Require in our policy[2]. This is clearly best practice – an antivirus is only as good as its latest definitions, so ensuring the AV has current threat intelligence is critical. This setting will catch devices that, for instance, haven’t been online in a while or are failing to update Defender signatures; those devices would be at risk from newer malware until they update. Marking them noncompliant forces an admin or user to take action (e.g. connect to the internet to get updates)[1]. This contributes directly to security, keeping anti-malware defenses sharp, and aligns with common security guidelines that AV should be kept current. Since Windows usually updates Defender signatures daily (or more often), this compliance rule likely treats a device as noncompliant if signatures are older than roughly 3 days (the Security Center flag). This policy absolutely should have this on, and it does – another check in the box for strong security practice.
  • Real-time protection: Require. This ensures that Defender’s real-time protection is enabled (Real-time protection: Require)[1]. Real-time protection means the antivirus actively scans files and processes as they are accessed, rather than only running periodic scans. If a user manually turned off real-time protection (which Windows allows for troubleshooting, and which malware sometimes attempts), this compliance rule would flag the device. Options: Not configured or Require[1]. Our policy requires it[2]. This is a crucial setting: real-time protection is a must for proactive malware defense. Without it, viruses or spyware could execute without immediate detection, and you’d only catch them on the next scan (if at all). Best practice is never to leave real-time protection off except perhaps briefly to install certain software – and even then, compliance would catch that and mark the device noncompliant. So turning this on is definitely part of a strong security posture, and the policy correctly enforces it. It matches Microsoft’s baseline and any sane security policy – you want continuous scanning for threats in real time. The Intune check ensures the “Real-time protection” toggle in Windows Security stays on[1]. Even if a user is a local admin, turning it off will flip the device to noncompliant (and possibly trigger Conditional Access to cut off corporate resource access), strongly incentivizing them not to do that. Good move.
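
The four Defender checks above also map onto `windows10CompliancePolicy` properties. A hedged sketch follows; note that the mapping of the “security intelligence up-to-date” requirement to the somewhat confusingly named `signatureOutOfDate` flag (and its polarity in exports) is an assumption worth verifying against an actual policy export:

```json
{
  "defenderEnabled": true,
  "defenderVersion": "4.18.0.0",
  "signatureOutOfDate": true,
  "rtpEnabled": true
}
```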

In summary, the Defender-specific settings in this policy double-down on malware protection:

  • The Defender AV engine must be active (and presumably they expect to use Defender on all devices)[1].
  • Defender must stay updated – both engine version and malware definitions[1].
  • Real-time scanning must be on at all times[1].

These are all clearly best practices for endpoint security. They ensure the built-in Windows security is fully utilized. The overlap with the general Antivirus/Antispyware checks means there’s comprehensive coverage. Essentially, if a device doesn’t have Defender, the general AV requirement would catch it; if it does have Defender, these specific settings enforce its quality and operation. No device should be running with outdated or disabled Defender in a secure environment, and this compliance policy guarantees that.

(If an organization did use a third-party AV instead of Defender, they might not use these Defender-specific settings. The presence of these in the JSON indicates alignment with using Microsoft Defender as the standard. That is indeed a good practice nowadays, as Defender has top-tier ratings and seamless integration. Many “best practice” guides, including government blueprints, now assume Defender is the AV to use, due to its strong performance and integration with Defender for Endpoint.)

Microsoft Defender for Endpoint (MDE) – Device Threat Risk Level

Finally, the policy integrates with Microsoft Defender for Endpoint (MDE) by using the setting:

  • Require the device to be at or under the machine risk score: Medium. This ties into MDE’s threat intelligence, which assesses each managed device’s risk level based on threats detected on that endpoint. The compliance policy requires a device’s risk level to be Medium or lower to be considered compliant[1]. If MDE flags a device as High risk, Intune will mark it noncompliant and can trigger protections (such as Conditional Access blocking that device). Options: Not configured (don’t use MDE risk in compliance) or one of Clear, Low, Medium, or High as the maximum allowed threat level[1]. The chosen value, Medium, means any device with a threat rated High is noncompliant, while devices with Low or Medium threats remain compliant[1]. (Clear would be the most strict – requiring absolutely no threats; High would be the least strict – tolerating even high threats.)[1]

Setting this to Medium is a somewhat balanced security stance. Let’s interpret it: MDE categorizes threats on devices (malware, suspicious activity) into risk levels. By allowing up to Medium, the policy is saying if a device has only low or medium-level threats, we still consider it compliant; but if it has any high-level threat, that’s unacceptable. High usually indicates serious malware outbreaks or multiple alerts, whereas low may indicate minimal or contained threats. From a security best-practice perspective, using MDE’s risk as a compliance criterion is definitely recommended – it adds an active threat-aware dimension to compliance. The choice of Medium as the cutoff is probably to avoid overly frequent lockouts for minor issues, while still reacting to major incidents.

Many security experts would advocate for even stricter: e.g. require Low or Clear (meaning even medium threats would cause noncompliance), especially in highly secure environments where any malware is concerning. In fact, Microsoft’s documentation notes “Clear is the most secure, as the device can’t have any threats”[1]. Medium is a reasonable compromise – it will catch machines with serious infections but not penalize ones that had a low-severity event that might have already been remediated. For example, if a single low-level adware was detected and quarantined, risk might be low and the device remains compliant; but if ransomware or multiple high-severity alerts are active, risk goes high and the device is blocked until cleaned[1].

In our policy JSON, it’s set to Medium[2], which is in line with many best practice guides (some Microsoft baseline recommendations also use Medium as the default, to balance security and usability). This is still considered a strong security practice because any device under an active high threat will immediately be barred. It leverages real-time threat intelligence from Defender for Endpoint to enhance compliance beyond just configuration. That means even if a device meets all the config settings above, it could still be blocked if it’s actively compromised – which is exactly what we want. It’s an important part of a Zero Trust approach: continuously monitor device health and risk, not just initial compliance.
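
In the Graph schema, this integration appears as a pair of `windows10CompliancePolicy` properties: a switch that enables MDE risk evaluation and the maximum tolerated threat level (where the portal’s “Clear” corresponds to the schema value “secured”). A hedged sketch, assuming the policy exports in the standard Graph format:

```json
{
  "deviceThreatProtectionEnabled": true,
  "deviceThreatProtectionRequiredSecurityLevel": "medium"
}
```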

One could tighten this to Low for maximum security (meaning even medium threats cause noncompliance). If an organization has low tolerance for any malware, they might do that. However, Medium is often chosen to avoid too many disruptions. For our evaluation: The inclusion of this setting at all is a best practice (many might forget to use it). The threshold of Medium is acceptable for strong security, catching big problems while allowing IT some leeway to investigate mediums without immediate lockout. And importantly, if set to Medium, only devices with severe threats (like active malware not neutralized) will be cut off, which likely correlates with devices that indeed should be isolated until fixed.

To summarize, the Defender for Endpoint integration means this compliance policy isn’t just checking the device’s configuration, but also its security posture in real-time. This is a modern best practice: compliance isn’t static. The policy ensures that if a device is under attack or compromised (per MDE signals), it will lose its compliant status and thus can be auto-remediated or blocked from sensitive resources[1]. This greatly strengthens the security model. Medium risk tolerance is a balanced choice – it’s not the absolute strictest, but it is still a solid security stance and likely appropriate to avoid false positives blocking users unnecessarily.

(Note: Organizations must have Microsoft Defender for Endpoint properly set up and the devices onboarded for this to work. Given it’s in the policy, we assume that’s the case, which is itself a security best practice – having EDR (Endpoint Detection & Response) on all endpoints.)

Actions for Noncompliance and Additional Considerations

The JSON policy likely includes Actions for noncompliance (the blueprint shows an action “Mark device noncompliant (1)” meaning immediate)[2]. By default, Intune always marks a device as noncompliant if it fails a setting – which is what triggers Conditional Access or other responses. The policy can also be configured to send email notifications, or after X days perform device retire/wipe, etc. The snippet indicates the default action to mark noncompliant is at day 1 (immediately)[2]. This is standard and aligns with security best practice – you want noncompliant devices to be marked as such right away. Additional actions (like notifying user, or disabling the device) could be considered but are not listed.
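
In Graph-format exports, the “Mark device noncompliant” action is represented as a scheduled action rule, where `actionType` “block” with a zero grace period corresponds to marking the device noncompliant immediately. A hedged sketch of what the fragment might look like – the rule name and grace-period value here are illustrative and should be confirmed against the actual export (the blueprint’s “(1)” could also denote a one-day grace period):

```json
{
  "scheduledActionsForRule": [
    {
      "ruleName": "PasswordRequired",
      "scheduledActionConfigurations": [
        {
          "actionType": "block",
          "gracePeriodHours": 0,
          "notificationTemplateId": "",
          "notificationMessageCCList": []
        }
      ]
    }
  ]
}
```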

It’s worth noting a few maintenance and dependency points:

  • Updating the Policy: As new Windows versions release, the admin should review the Minimum OS version field and advance it when appropriate (for example, when Windows 10 21H1 becomes too old, they might raise the minimum to 21H2 or Windows 11). Similarly, the Defender minimum version can be updated over time. Best practice is to review compliance policies at least annually (or along with major new OS updates)[1] to keep them effective.
  • Device Support: Some settings have hardware prerequisites (TPM, Secure Boot, etc.). In a strong security posture, devices that don’t meet these (older hardware) should ideally be phased out. This policy enforces that by design. If an organization still has a few legacy devices without TPM, they might temporarily drop the TPM requirement or grant an exception group – but from a pure security standpoint, it’s better to upgrade those devices.
  • User Impact and Change Management: Enforcing these settings can pose adoption challenges. For example, requiring a 14-character complex password might generate more IT support queries or user friction initially. It is best practice to accompany such policy with user education and perhaps rollout in stages. The policy as given is quite strict, so ensuring leadership backing and possibly implementing self-service password reset (to handle expiry) would be wise. These aren’t policy settings per se, but operational best practices.
  • Complementary Policies: A compliance policy like this ensures baseline security configuration, but it doesn’t directly configure the settings on the device (except for password requirement which the user is prompted to set). It checks and reports compliance. To actually turn on things like BitLocker or firewall if they’re off, one uses Configuration Profiles or Endpoint Security policies in Intune. Best practice is to pair compliance policies with configuration profiles that enable the desired settings. For instance, enabling BitLocker via an Endpoint Security policy and then compliance verifies it’s on. The question focuses on compliance policy, so our scope is those checks, but it’s assumed the organization will also deploy policies to turn on BitLocker, firewall, Defender, etc., making it easy for devices to become compliant.
  • Protected Characteristics: Every setting here targets technical security and does not discriminate or involve user personal data, so no concerns there. From a privacy perspective, the compliance data is standard device security posture info.

Conclusion

Overall, each setting in this Windows compliance policy aligns with best practices for securing Windows 10/11 devices. The policy requires strong encryption, up-to-date and secure OS versions, robust password/PIN policies, an active firewall and anti-malware, and even ties into advanced threat detection (Defender for Endpoint)[2]. These controls collectively harden the devices against unauthorized access, data loss, malware infections, and unpatched vulnerabilities.

Almost all configurations are set to their most secure option (e.g., a feature is required rather than optional, or set to maximum strictness), as one would expect in a high-security baseline:

  • Data protection is ensured by BitLocker encryption on disk[1].
  • Boot integrity is assured via Secure Boot and Code Integrity[1].
  • Only modern, supported OS builds are allowed[1].
  • Users must adhere to a strict password policy (complex, long, regularly changed)[1].
  • Critical security features (firewall, AV, antispyware, TPM) must be in place[1].
  • Microsoft Defender is kept running with real-time protection and up-to-date definitions[1].
  • Devices under serious threat are quarantined via noncompliance[1].
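
The evaluation logic behind the checks above can be sketched in a few lines. This is a hypothetical illustration of how a compliance evaluator might work, not Intune's actual implementation: the property names, the policy dictionary, and the ordering of Defender risk levels ("clear" < "low" < "medium" < "high") are all assumptions made for the example.

```python
# Illustrative ordering of Defender for Endpoint machine risk levels
# (an assumption for this sketch, not an Intune-defined constant).
RISK_ORDER = {"clear": 0, "low": 1, "medium": 2, "high": 3}

# Hypothetical policy mirroring the checklist above.
POLICY = {
    "bitlocker_required": True,
    "secure_boot_required": True,
    "code_integrity_required": True,
    "firewall_required": True,
    "antivirus_required": True,
    "tpm_required": True,
    "os_minimum_version": (10, 0, 19045),
    "password_minimum_length": 14,
    "max_defender_risk": "medium",  # devices above this are noncompliant
}

def evaluate(device: dict, policy: dict = POLICY) -> list:
    """Return the list of failed checks; an empty list means compliant."""
    failures = []
    # Boolean "must be present/on" checks.
    for key in ("bitlocker", "secure_boot", "code_integrity",
                "firewall", "antivirus", "tpm"):
        if policy[f"{key}_required"] and not device.get(key):
            failures.append(key)
    # Minimum OS build, compared as a version tuple.
    if tuple(device.get("os_version", (0, 0, 0))) < policy["os_minimum_version"]:
        failures.append("os_version")
    # Password length floor.
    if device.get("password_length", 0) < policy["password_minimum_length"]:
        failures.append("password_length")
    # Defender risk must not exceed the allowed threshold.
    if RISK_ORDER[device.get("defender_risk", "high")] > \
            RISK_ORDER[policy["max_defender_risk"]]:
        failures.append("defender_risk")
    return failures
```

A device passing every check yields an empty failure list; a device with BitLocker off and a "high" Defender risk level would be flagged on both counts, matching the quarantine-via-noncompliance behavior described above.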

All of these are considered best practices by standards such as the CIS Benchmarks for Windows and by government cybersecurity guidelines. For example, the ASD Essential Eight in Australia, which this policy closely mirrors, calls for application control, patching, and restriction of administrative privileges; this policy supports many of those goals by ensuring fundamental security hygiene on devices.

Are there any settings that might not align with best practice? The only debatable one is the 365-day password expiration: modern NIST guidance (SP 800-63B) recommends against forcing password changes on a schedule unless there is evidence of compromise. However, many organizations still consider an annual password change reasonable in a defense-in-depth approach. It is a mild requirement rather than a draconian one, so it does not significantly detract from security; the periodic refresh can even be seen as a positive, provided users are educated to avoid predictable changes. We would therefore not call it a wrong practice: it remains accepted in many high-security environments, even if some experts would opt not to expire passwords at all. Everything else is straightforwardly in line with best practice or exceeds typical baseline requirements (e.g., a 14-character minimum is quite strong).

Improvements or additions: The policy as given is already thorough. An organization could tighten the Defender for Endpoint risk level to Low (so that only devices assessed at Low risk or below are compliant) for extra caution, though that could increase operational noise if minor issues trigger noncompliance too often[1]. It could also reduce the idle timeout to, say, 5 or 10 minutes for devices in very sensitive environments (15 minutes is standard, though stricter is always an option). Jailbreak detection is not a candidate here: it applies to mobile operating systems, and Windows has no equivalent beyond the integrity checks already covered (Device Health Attestation handles boot-time integrity). Everything major in Windows compliance is covered here.

One related setting outside this device policy is the tenant-wide option "Mark devices with no compliance policy as noncompliant", which we assume is enabled for strong security, so that any device that somehow does not receive this policy is still not trusted[3]. The question did not include it, but it is part of best practice: the organization would likely set it to Not compliant at the tenant level to prevent unmanaged devices from slipping through[3].

In conclusion, each listed setting is configured in line with strong security best practices for Windows devices. The policy reflects an aggressive security posture: it imposes strict requirements that greatly reduce the risk of compromise. Devices that meet all these conditions will be well hardened against common threats; conversely, any device failing these checks is rightly flagged for remediation, helping the IT team maintain a secure fleet. This compliance policy, especially when combined with Conditional Access (to block noncompliant devices from accessing corporate data) and proper configuration policies (to push these settings onto devices), effectively enforces security standards across the Windows estate[3]. It aligns with industry guidelines and should substantially mitigate risks such as data breaches, malware incidents, and unauthorized access. Each setting plays a role, from protecting data encryption and the boot process to enforcing user credentials and system health, together forming a comprehensive security baseline that is indeed consistent with best practices.


References

[1] Windows compliance settings in Microsoft Intune

[2] Windows 10/11 Compliance Policy | ASD’s Blueprint for Secure Cloud

[3] Device compliance policies in Microsoft Intune