BitLocker Silent Enablement via Intune: The Deployment Pattern That Actually Sticks


Every MSP ends up doing BitLocker. Very few do it silently on the first attempt. Between TPM quirks, third-party encryption leftovers, and the three different Intune policy surfaces that all claim to configure BitLocker, the difference between “encrypted fleet with recoverable keys” and “2am call from a locked-out CEO” is a handful of settings. Here’s the pattern to deploy in production under Microsoft 365 Business Premium.

Prerequisites people skip

Silent means standard user, no prompts, no elevation — which is a higher bar than “BitLocker on.”

  • TPM 2.0, ready state, UEFI, Secure Boot on. If the TPM reports “not ready” or is owned by a prior OEM image, silent encryption aborts. Check with Get-Tpm before blaming policy.

  • Entra joined or Hybrid joined. Workgroup and AD-only devices have no key escrow target and will not silently encrypt.

  • No third-party encryption agent. McAfee DE, Sophos SafeGuard, legacy Dell Data Protection — all block BitLocker takeover until fully decrypted and uninstalled.

  • No startup PIN or startup key requirement. Silent mode and pre-boot authentication are mutually exclusive. Pick one.

  • WinRE present and healthy. BitLocker uses it for recovery.

If any of those are wrong, don’t touch policy yet. Fix the device first.
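The prerequisite checks above can be done in one pass from an elevated PowerShell prompt. A minimal pre-flight sketch using built-in Windows cmdlets (the `reagentc` and `dsregcmd` lines just grep console output, which is a simplification):

```powershell
# BitLocker silent-enablement pre-flight: run elevated on the target device.

# TPM present and in the ready state
$tpm = Get-Tpm
"TPM present: $($tpm.TpmPresent)  Ready: $($tpm.TpmReady)"

# Firmware check: Confirm-SecureBootUEFI throws on legacy BIOS devices
try {
    "Secure Boot on: $(Confirm-SecureBootUEFI)"
} catch {
    "Legacy BIOS or Secure Boot unsupported - silent encryption will not run"
}

# WinRE must be enabled; BitLocker uses it for recovery
reagentc /info | Select-String "Windows RE status"

# Join state: look for AzureAdJoined / DomainJoined = YES
dsregcmd /status | Select-String "AzureAdJoined|DomainJoined"

# Current volume status - surfaces third-party or partial encryption
Get-BitLockerVolume -MountPoint C: |
    Select-Object VolumeStatus, EncryptionMethod, ProtectionStatus
```

Anything unexpected in that output is a device problem, not a policy problem.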

Where to configure it

Ignore the Settings Catalog for this one. Microsoft is explicit that the Settings Catalog does not expose the TPM startup authentication controls silent enablement depends on (docs).

Use the Endpoint security path:

Intune admin center → Endpoint security → Disk encryption → Create Policy → Platform: Windows → Profile: BitLocker.

The settings that actually matter for silent:

  • Enable full disk encryption for OS and fixed data drives: Yes
  • Hide prompt about third-party encryption: Yes
  • Allow standard users to enable encryption during Autopilot: Yes
  • Require device to back up recovery information to Azure AD: Yes (gates encryption on successful escrow — this is the setting that saves you)

  • Recovery password rotation: Enable on Entra and Hybrid-joined devices
  • OS and fixed drive: XTS-AES 256, TPM-only protector, no PIN

Full setting reference: Disk encryption profile settings.

The rollout pattern

Do not target All Devices. Ever.

  1. Pilot ring (5–10 devices): your own, one per client hardware family. Watch the Endpoint security → Disk encryption report for “Encrypted” status and a recovery key present in Entra. If the key isn’t there, the device isn’t safe, regardless of what Windows says.

  2. Early ring (10–20% by dynamic group): let this soak for a week. Check the encryption report daily. Anything stuck in “Waiting for response” usually means TPM readiness or a WinRE issue — remediate per device, don’t loosen policy.

  3. Broad rollout: remaining corporate devices via a dynamic group filtered on deviceOwnership eq "Company". Never include BYO.

  4. New builds: bake the policy into Autopilot so encryption completes during ESP. Standard-user silent enablement only works if “Allow standard users to enable encryption during Autopilot” is Yes — double-check before you cut a new image.

The help desk also needs the recovery-key retrieval flow documented before you go broad, not after the first lockout.

The pitfalls that bite

  • Silent and startup PIN are incompatible. If compliance demands pre-boot auth, you’re running standard BitLocker, not silent. Decide once.

  • Recovery key escrow failures hide in plain sight. Without “Require device to back up recovery information to Azure AD = Yes”, devices encrypt, keys never land in Entra, and you discover it the day a laptop gets stolen. Audit the encryption report monthly — the ground truth is “Recovery key: Yes/No” per device, not policy compliance.

  • Don’t mix endpoint protection + disk encryption profiles targeting the same devices. Conflicts resolve unpredictably. Pick the Disk encryption profile and delete the legacy Endpoint protection one (policy comparison).
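For a device that encrypted but never escrowed, you don't need to decrypt and start over: the existing recovery password can be re-backed to Entra on the spot. A per-device remediation sketch using the built-in BitLocker module (run elevated):

```powershell
# Re-escrow the C: recovery password to Entra ID (run elevated).
$vol = Get-BitLockerVolume -MountPoint C:

# Find the recovery password protector (take the first if several exist)
$rp = $vol.KeyProtector |
    Where-Object KeyProtectorType -eq 'RecoveryPassword' |
    Select-Object -First 1

if ($rp) {
    # Pushes the protector to Entra ID for the device's join tenant
    BackupToAAD-BitLockerKeyProtector -MountPoint C: -KeyProtectorId $rp.KeyProtectorId
} else {
    # No recovery password at all: add one, then re-run the escrow step
    Add-BitLockerKeyProtector -MountPoint C: -RecoveryPasswordProtector
}
```

Verify afterwards that the key shows under the device object in Entra; the local command succeeding is not proof of escrow on its own.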

Get those three right and BitLocker becomes a checkbox — exactly what your clients are paying for.

DSPM: The End of Guessing About Your Sensitive Data


Most Microsoft 365 tenants I walk into are flying blind on data.

The sensitivity labels exist. A couple of DLP policies exist. Someone once turned on Insider Risk Management because a consultant said so. And then nothing. Nobody knows what’s working, what’s exposed, or which sensitive files are sitting wide open in a SharePoint site shared with half the planet.

That’s not a security posture. That’s a guess.

The tool that finally ends the guessing is Microsoft Purview Data Security Posture Management. If you’ve got E5 or the Purview Suite and you’re not showing this to your clients, you’re leaving value on the table.

What is DSPM, really?

DSPM is the dashboard that tells you, in plain English, where your sensitive data is sitting unprotected and which users are handling it carelessly. It pulls signals from the tools you already pay for — DLP, Information Protection, Insider Risk Management, Adaptive Protection — and stitches them into one view.

The clever bit is the correlation. Before DSPM, you’d open five different blades, cross-reference three different reports, and still miss half of it. Now the findings and recommendations land on one page, with a one-click path to spin up the matching policy.

That’s not a report. That’s a to-do list with context.

Step-by-Step: turning DSPM on

Portal only. Stay in the GUI — easier for you, easier to hand off to the next admin.

Open the Purview portal

Sign in to the Microsoft Purview portal as a member of the Data Security Management role group, an Insider Risk Admin, or a Compliance Administrator. Global Admin works too, but please don’t use it if you can help it.

Open the DSPM solution

From the home page, go to Solutions → Data Security Posture Management → Overview.

Turn on analytics

On the Overview page, click Turn on analytics. That one switch also enables DLP analytics and Insider Risk analytics behind the scenes if they aren’t already on. One click, three switches. The full checklist is in the Get started with DSPM article.

Wait

Yes, really. The automated scan across your tenant can take up to three days on anything larger than a handful of users. Walk away. Brew a coffee. Come back on Thursday.

Review the recommendations

Back on the DSPM dashboard, open Recommendations. Each one tells you what was found, why it matters, and offers a one-click path to create the DLP or Insider Risk policy that fixes it. You don’t start from a blank policy screen anymore — you start from your tenant’s real gaps.

Track trends over time

Use the Analytics and Reports tabs in client reviews. A trend line of risky activity going down beats any invoice justification I’ve ever tried to write.

Why this actually changes behaviour

“Are we protected?”

That’s the question every SMB owner asks. Most of us have been answering with vibes. Good vibes, educated vibes, but vibes.

DSPM changes the answer. You can point at a number. You can point at a recommendation you actioned last month and the unprotected file count that dropped because of it. You can show, not tell.

For MSPs, that’s a QBR slide that sells itself. For internal IT, it’s the evidence you need when the CFO asks what the Microsoft Purview licence is actually doing for the business.

And if Copilot is already in the tenant — which, let’s be honest, it increasingly is — then DSPM for AI is your next stop. Same lens, pointed at what people are pasting into Copilot prompts and what’s flowing back out.

Copilot doesn’t slow down. Neither does your data sprawl. Use something that keeps up.

DSPM isn’t there to create more work. It’s there to stop the guessing.

Attack Surface Reduction Rules in Microsoft 365 Business Premium: A Practical Deep Dive


If you manage Business Premium tenants for SMB clients, Attack Surface Reduction (ASR) rules are arguably the highest-leverage defensive control you’re not fully exploiting. They ship with Defender for Business (included in Business Premium), they cost nothing extra to enable, and they block the precise behaviours that commodity ransomware and living-off-the-land attacks depend on — Office spawning child processes, Win32 API calls from macros, credential theft from LSASS, and PSExec-style lateral movement. This post assumes you already know what ASR is. The goal here is to help L2/L3 engineers deploy it without breaking line-of-business apps.

Prerequisites the documentation underplays

ASR has hard dependencies that silently downgrade rules to “not configured” if missed. Microsoft Defender Antivirus must be the active AV (not passive, not disabled by a third-party product), real-time protection must be on, and cloud-delivered protection must be on — several rules won’t evaluate without cloud lookups. Verify these under Microsoft Defender portal → Endpoints → Configuration management → Device configuration, and confirm the device shows as “Active” in the device inventory. Tamper Protection must also be enabled tenant-wide; without it, local admins or malware can disable ASR in seconds.
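Those dependencies can be spot-checked locally before you blame policy. A quick sketch using the built-in Defender cmdlets (property names as shipped in the Defender PowerShell module):

```powershell
# Verify Defender is the active AV and the ASR dependencies are met.
$status = Get-MpComputerStatus

[pscustomobject]@{
    # 'Normal' = active AV; 'Passive' or 'EDR Block' means a third-party AV owns the device
    RunningMode        = $status.AMRunningMode
    RealTimeProtection = $status.RealTimeProtectionEnabled
    TamperProtected    = $status.IsTamperProtected
}

# Cloud-delivered protection: 2 = Advanced, 1 = Basic, 0 = off
(Get-MpPreference).MAPSReporting
```

Anything other than active AV, real-time protection on, tamper protection on, and MAPS enabled means ASR rules will not fully evaluate on that device.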

Where to configure in Business Premium

Business Premium gives you two supported paths. The simplified experience lives at security.microsoft.com → Device configuration → Next-generation protection / Firewall, but ASR rules specifically are configured through Intune. In the Microsoft Intune admin center, go to Endpoint security → Attack surface reduction and create an “Attack surface reduction rules” profile targeting Windows 10/11. Avoid mixing this with legacy Endpoint Protection templates or Group Policy — conflicting sources result in the classic “last writer wins” behaviour, and you will spend hours chasing phantom rules. If a device is co-managed or has stale GPOs, check Event Viewer → Applications and Services Logs → Microsoft → Windows → Windows Defender → Operational (event IDs 1121, 1122, 5007) to confirm which source applied.

The rollout pattern that works

Do not flip rules straight to Block. The proven pattern is: enable every rule in Audit mode, leave it for two to four weeks, then review the ASR report at security.microsoft.com → Reports → Attack surface reduction rules. Filter by rule and by device, identify the noisy ones, and build per-rule exclusions before switching to Block. Where available, use Warn mode as a staged step — the user gets a toast and can unblock once, which is invaluable for finance teams running macro-heavy spreadsheets. Warn mode is Windows 10 1809+ only; older builds silently treat it as Block.

Start with the “standard protection” set Microsoft recommends: Block credential stealing from LSASS, Block abuse of exploited vulnerable signed drivers, and Block persistence through WMI event subscription. These have the lowest false-positive rate and the highest return. The two rules that most commonly break things are “Block Office applications from creating child processes” (breaks legitimate mail-merge and older add-ins) and “Block executable content from email client and webmail” (breaks zipped installers).
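For a lab device, the standard protection trio can be pushed to Audit locally before you build the Intune profile (in production, set the mode in the ASR profile, not per device). The GUIDs are Microsoft's published rule identifiers:

```powershell
# Put the three "standard protection" ASR rules into Audit mode locally.
$rules = @{
    'Block credential stealing from LSASS'          = '9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2'
    'Block abuse of exploited vulnerable drivers'   = '56a863a9-875e-4185-98a7-b882c64b5ce5'
    'Block persistence through WMI subscription'    = 'e6db77e5-3df2-4cf1-b95a-636979351e5b'
}

foreach ($id in $rules.Values) {
    Add-MpPreference -AttackSurfaceReductionRules_Ids $id `
                     -AttackSurfaceReductionRules_Actions AuditMode
}

# Confirm what applied (1 = Block, 2 = Audit, 6 = Warn)
Get-MpPreference |
    Select-Object AttackSurfaceReductionRules_Ids, AttackSurfaceReductionRules_Actions
```

Remember to remove local settings before the Intune profile lands, or you reintroduce the multi-source conflict described above.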

Pitfalls to pre-empt

Exclusions are path-based and case-sensitive on the file name portion — wildcards work on folders, not extensions. Per-rule exclusions (added in later releases) are preferred over global Defender exclusions; use them so you don’t inadvertently hole-punch the whole AV. If a rule appears to do nothing, check whether the device is in EDR block mode and whether Defender is the active AV — ASR is silently disabled under third-party AV. Finally, ASR events do not always surface as alerts; plumb them into your SIEM via the Defender API or Advanced Hunting (DeviceEvents | where ActionType startswith "Asr").


Building Conditional Access That Actually Works: Requiring Intune‑Compliant Devices


Requiring Intune‑compliant devices via Conditional Access is one of the most impactful controls available to Microsoft 365 Business Premium tenants. It’s also one of the easiest ways to accidentally break access if you don’t understand how the moving parts fit together.

This article walks through the end‑to‑end mechanics, not just the clicks, so L2/L3 engineers understand why policies behave the way they do.


Architecture: What Happens During Sign‑In

When a user signs into a cloud app (Exchange Online, SharePoint, Teams):

  1. Microsoft Entra ID evaluates applicable Conditional Access policies

  2. If a policy requires a compliant device, Entra queries Intune
  3. Intune returns a binary verdict: Compliant or Not compliant
  4. Access is granted or blocked accordingly

Microsoft explicitly states that Conditional Access relies on Intune compliance state and will not function as intended without an Intune compliance policy in place. [learn.microsoft.com]

This is not a soft dependency — no compliance policy means every device fails the check.


Step 1: Define “Compliant” in Intune (Before Touching Conditional Access)

In the Intune admin centre (intune.microsoft.com):

Devices → Compliance policies → Create

At minimum for Windows endpoints in Business Premium, Microsoft supports compliance checks such as:

  • BitLocker enabled

  • Secure Boot enabled

  • Minimum OS version

  • Antivirus and firewall present

These controls form the inputs to Conditional Access — they do nothing by themselves. [learn.microsoft.com]

Operational tip:
Assign compliance policies to user groups, not devices. Conditional Access evaluates user sign‑ins, not device objects.


Step 2: Create the Conditional Access Policy

In the Microsoft Entra admin centre (entra.microsoft.com):

Protection → Conditional Access → Policies → New policy

Core configuration (baseline):
  • Users
    • Include: Target user group (do not start with All users)

    • Exclude:

      • Emergency access (break‑glass) accounts

      • Service accounts (where applicable)

Microsoft explicitly recommends excluding emergency access accounts to avoid tenant lockout. [learn.microsoft.com]

  • Target resources

    • Start with: Exchange Online or Office 365

    • Expand later to “All cloud apps”
  • Grant

    • ✅ Require device to be marked as compliant
  • Enable

    • ✅ Report‑only (initially)

This exact flow is documented by Microsoft as the supported deployment approach. [learn.microsoft.com]
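If you template tenants, the same policy can be expressed through the Microsoft Graph PowerShell SDK. A sketch only: the group and break-glass object IDs below are placeholders, and the policy deliberately lands in report-only state:

```powershell
# Requires the Microsoft.Graph module and Policy.ReadWrite.ConditionalAccess consent.
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$body = @{
    displayName = 'Require compliant device - Office 365 (pilot)'
    # Report-only first; flip to 'enabled' after validating sign-in logs
    state       = 'enabledForReportingButNotEnforced'
    conditions  = @{
        users        = @{
            includeGroups = @('<pilot-group-object-id>')    # placeholder
            excludeUsers  = @('<break-glass-account-id>')   # placeholder
        }
        applications = @{ includeApplications = @('Office365') }
    }
    grantControls = @{
        operator        = 'OR'
        builtInControls = @('compliantDevice')
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $body
```

The structure mirrors what the portal builds; the point of scripting it is repeatability across tenants, not extra capability.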


Step 3: Report‑Only Mode Is Not Optional

Report‑only mode evaluates the policy without enforcing it. This allows you to:

  • Confirm which users would be blocked

  • Identify unmanaged or unenrolled devices

  • Detect users authenticating from unsupported platforms

Microsoft strongly recommends Report‑only mode before enforcement for Conditional Access policies of this type. [learn.microsoft.com]

Validation path:

  • Entra ID → Sign‑in logs

  • Filter: Conditional Access → Report‑only

  • Review failure reasons


Step 4: Common Failure Scenarios (Seen in the Wild)

“User prompted repeatedly for MFA then blocked”
  • Device is Entra ID joined but not enrolled in Intune
  • No compliance policy applies to the user
“macOS or BYOD phones blocked unexpectedly”
  • Platform not covered by a compliance policy

  • Device enrolment incomplete
“Policy works for admins but not staff”
  • Admins often use already‑managed devices

  • Staff devices lag behind in enrolment

Microsoft documents that Conditional Access can only evaluate compliance for MDM‑enrolled devices. [learn.microsoft.com]


Step 5: When Not to Use Compliant‑Only Access

Microsoft explicitly provides alternative templates where compliance is one of several controls, not the only one, such as:

  • Compliant device OR Microsoft Entra hybrid joined OR MFA

This is recommended where full device management is not realistic. [learn.microsoft.com]


Production Roll‑Out Recommendation

For Business Premium tenants, Microsoft’s own Zero Trust guidance aligns to:

  1. Require MFA first

  2. Enrol devices into Intune

  3. Require compliant devices for core workloads

  4. Expand to all cloud apps once stable [learn.microsoft.com]



What’s actually happening inside an Intune App Protection Policy


Most MSPs I work with have shipped App Protection Policies (APP) to a client by now. BYOD iPhone, Outlook Mobile, a PIN, a block on copy-paste to personal apps — done, move on. But when the client’s compliance consultant asks how the data is actually protected, or why a specific action blocks on an unenrolled Android, the hand-waving starts. Here’s what’s actually going on under the covers.

The Intune App SDK, not the OS, is doing the work

An App Protection Policy doesn’t manage the device. That’s the whole point of MAM-without-enrolment. Enforcement lives inside the app itself, courtesy of the Intune App SDK that Microsoft has compiled into Outlook, Teams, Word, Excel, OneDrive, and a growing list of line-of-business apps wrapped with the App Wrapping Tool. When the user signs in with their work identity, the SDK phones home to the Intune service, pulls down the policy that applies to that user plus that app, and starts enforcing it in-process.

That’s why policies are scoped to user and app, not device. The OS on an unenrolled phone has no idea this is happening. The app is quietly running a second policy engine beside the operating system — and a separate wipe scope, which is what lets you revoke org data from a personal device without nuking the phone.

The broker is the hinge

On iOS, Microsoft Authenticator acts as the broker. On Android, it’s Company Portal — even when no device enrolment exists. The broker holds the work identity token and passes it to every SDK-enabled app, which is why one PIN prompt in Outlook covers Teams and Word for the next session. If the broker isn’t installed, APP quietly degrades: the SDK still enforces policy on its own app, but cross-app single sign-on and some advanced integrity checks fall away. A detail worth surfacing in onboarding.

Conditional launch is a local state machine

The under-appreciated piece is Conditional Launch. Every time a managed app resumes, the SDK runs a checklist: is the OS version above the floor, is the device jailbroken or rooted, is the Play Integrity or App Attest result clean, is the user still in good standing with Entra ID, how long since the last check-in? If any requirement fails, the SDK can warn, block, or selectively wipe org data — on its own, from a cached policy, without Intune needing to reach the device at that moment.

The data boundary works the same way. “Save copies of org data” restricted to OneDrive for Business, “Open-in” restricted to other managed apps, encrypted cache at rest — these are decisions made inside the app process by asking the SDK whether the target app is on the allow list and signed in with the same work identity.

Where Conditional Access joins in

The final layer is the Entra ID grant control Require app protection policy. This is what makes APP enforceable rather than optional. The token issued to the app is stamped with a claim confirming the app is compliant with its APP at sign-in. Conditional Access can refuse the token entirely if the app isn’t protected, and with Continuous Access Evaluation the refusal can happen mid-session. That’s the seam between the identity plane and the data plane.

What I’d tell a junior engineer

APP is a policy engine bolted into each app, gated by the broker, validated by Conditional Access. Understand those three layers and the “why did this break” tickets get a lot shorter.


Windows LAPS via Intune: the quiet Business Premium win most MSPs still haven’t shipped


Walk into almost any SMB estate and you’ll find the same thing — a local administrator password that was set during imaging three years ago, shared across every device, and written down somewhere nobody wants to admit. It’s the sort of gap that never shows up on a client’s risk register until it does. What surprises me is how many MSPs haven’t yet rolled out Windows LAPS, even though it’s sitting right there inside Microsoft 365 Business Premium — Intune is part of the licence, the feature is part of Windows, and there’s no extra agent to push.

What it actually is

Windows LAPS is the successor to the old “Microsoft LAPS” tool — the one that needed an AD schema extension and a client-side extension pushed by GPO. The replacement is baked into Windows 10, Windows 11, and Server 2019+. Nothing to install. You just tell it what to do.

Each managed device backs its local admin password up to a directory — either on-prem AD or Entra ID. Pick one per device, and for cloud-joined SMB fleets it’s almost always Entra. The password is stored encrypted against an Entra key, and only users holding the right role can read it. The device itself never keeps the current password in the clear once it has rotated.

Deploying through Intune

In the Intune admin centre it lives under Endpoint security → Account protection → create a policy of type Local admin password solution (Windows LAPS). Under the hood you’re configuring the LAPS CSP, but the UI hides that detail.

The settings worth thinking through:

  • Backup directory — Azure AD or Active Directory. Disabled turns it off.

  • Administrator account name — leave blank and it manages the built-in Administrator, which is disabled by default on most modern builds. Name it explicitly if you want LAPS to manage a dedicated local admin.

  • Automatic account management — the newer setting that lets LAPS create and maintain the account for you, so you don’t need a separate provisioning step.

  • Password complexity, length, age — 14+ characters, full complexity, 30-day rotation is a sensible baseline.

Assign the policy to an Entra device group, not a user group — it’s a device-scoped CSP and won’t apply against user targeting.

Rotation, post-authentication reset, and retrieval

The bit that’s genuinely new — and the bit most pros miss — is the post-authentication reset. When the managed account signs in interactively, Windows starts a grace timer. When that elapses, it can rotate the password, sign the account off, or reboot the device. That behaviour alone shuts the door on a technician staying signed in for days with a password someone else might already have read. Scheduled rotation still runs on the age you set, and you can force an on-demand rotation from the device blade in Intune.

Retrieval is the other place it pays for itself. In Intune, open the device and click Local admin password. Or do it from Entra directly — Devices → the device → Local administrator password recovery. The role needed is Cloud Device Administrator, or an Intune role with the right retrieval action. Every read is audited, and by default reading the password triggers a fresh rotation, so a looked-up credential is a one-shot — useful if you’re handing it to a junior tech for a single call.
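Retrieval and rotation also work from PowerShell via the LAPS module that ships in current Windows builds, which is handy for scripted handovers. A sketch; the device name is a placeholder and the caller still needs a role permitted to read passwords:

```powershell
# Graph consent for reading device local credentials
Connect-MgGraph -Scopes 'DeviceLocalCredential.Read.All','Device.Read.All'

# Read the escrowed password from Entra ID ('CLIENT-LT-042' is a placeholder name)
Get-LapsAADPassword -DeviceIds 'CLIENT-LT-042' -IncludePasswords -AsPlainText

# On the device itself: force an immediate LAPS policy run / rotation check
Invoke-LapsPolicyProcessing
```

Every read is audited the same way as a portal lookup, so scripting the retrieval doesn't weaken the trail.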

Worth the morning it takes

Rolling this out across a client tenant is an afternoon’s work per fleet, and it closes a gap most MSPs have been quietly carrying for years. If you’re already selling Business Premium, you’re already paying for it. The only real question left is why it isn’t deployed yet.

Step-by-step: Find deleted file logs for a SharePoint site


Option 1: Use the Microsoft Purview audit portal

This is the easiest method for most admins.

  1. Sign in to the Microsoft Purview portal

  2. Open Audit

    • In the left menu, go to Solutions > Audit.

    • If prompted, enable auditing if it isn’t already on.
  3. Start a new search

    • Select New Search.
  4. Set the date range

    • Choose the period when you think the file was deleted.

    • Be aware that audit retention depends on licensing:

      • Many non-E5 tenants keep audit data for 180 days
      • E5 and some add-on licenses can retain some audit data for 1 year by default [learn.microsoft.com]
  5. Choose activities

    • In the activity filter, look for SharePoint file deletion-related actions such as:

      • Deleted file (FileDeleted)

      • Recycled a file (FileRecycled)

      • Deleted file from recycle bin (FileDeletedFirstStageRecycleBin)

      • Deleted file from second-stage recycle bin (FileDeletedSecondStageRecycleBin) [learn.microsoft.com]
  6. Filter by site, file, or user

    • Use available filters to narrow results:

      • Site URL
      • File name
      • User
    • If you know the person who deleted the file, filtering by user makes results much easier to review.
  7. Run the search

    • Click Search.
  8. Review the results

    • Open matching events to see details such as:

      • who performed the action

      • when it happened

      • the file involved

      • the site URL

      • the operation type
  9. Check the event sequence

    • A typical deletion trail may look like this:

      • FileRecycled = file moved to recycle bin

      • FileDeletedFirstStageRecycleBin = removed from first-stage recycle bin

      • FileDeletedSecondStageRecycleBin = permanently removed from second-stage recycle bin [learn.microsoft.com]


What the log entries mean

For SharePoint deleted files, these are the most useful audit events:

  • FileDeleted
    A user deleted a document from a site. [learn.microsoft.com]

  • FileRecycled
    A user moved a file into the SharePoint recycle bin. [learn.microsoft.com]

  • FileDeletedFirstStageRecycleBin
    A user deleted a file from the site’s recycle bin. [learn.microsoft.com]

  • FileDeletedSecondStageRecycleBin
    A user deleted a file from the second-stage recycle bin. [learn.microsoft.com]

That sequence helps you determine whether the file is still recoverable or has been permanently removed.


Practical tip for small businesses

If you are only trying to answer:

  • Who deleted the file?
  • When was it deleted?
  • Was it permanently deleted or just moved to the recycle bin?

Then the audit search with the filters:

  • date range

  • user

  • file name

  • SharePoint activities

is usually enough.

If you are trying to restore the file as well, you should also check:

  • the site recycle bin
  • the second-stage recycle bin

because the audit log tells you what happened, but recovery depends on whether the file is still retained in one of those recycle bins.


Option 2: Use PowerShell for more detailed searches

If you prefer scripting or want to export results, Microsoft also supports using the Search-UnifiedAuditLog cmdlet in Exchange Online PowerShell to search and export audit records. [learn.microsoft.com]

High-level process:

  1. Connect to Exchange Online PowerShell.

  2. Run Search-UnifiedAuditLog for the date range.

  3. Search SharePoint-related audit records.

  4. Export the results to CSV for filtering and reporting. [learn.microsoft.com]
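The four steps above look roughly like this in practice. A sketch; the file name filter is a placeholder for your own value:

```powershell
# Search the unified audit log for SharePoint deletion events and export to CSV.
Connect-ExchangeOnline

$results = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-30) -EndDate (Get-Date) `
    -RecordType SharePointFileOperation `
    -Operations FileDeleted,FileRecycled,FileDeletedFirstStageRecycleBin,FileDeletedSecondStageRecycleBin `
    -ResultSize 5000

# AuditData is a JSON blob; expand it so you can filter on site and file name
$results |
    ForEach-Object { $_.AuditData | ConvertFrom-Json } |
    Where-Object { $_.ObjectId -like '*Budget.xlsx' } |   # placeholder file name
    Select-Object CreationTime, UserId, Operation, SiteUrl, SourceFileName |
    Export-Csv DeletedFiles.csv -NoTypeInformation
```

For ranges larger than 5,000 records you will need to page the search; the portal export has the same practical limits.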

This is especially useful if:

  • you need a report,

  • you want to search a large range of data,

  • or you want to automate the process.


Things to check if you can’t find the log

If no results appear, check these common causes:

  1. Wrong date range

    • Expand the time window.
  2. Audit retention expired

    • Older events may no longer be available depending on license. [learn.microsoft.com]
  3. Wrong activity selected

    • Try both:

      • deleted

      • recycled

      • recycle bin deletion events
  4. Auditing not enabled

    • In most tenants this is on, but if it was disabled previously, older activity may not exist. Microsoft notes audit log ingestion can be turned on or off. [learn.microsoft.com]
  5. Looking in SharePoint site settings instead of Purview

    • File deletion history is generally tracked in the Microsoft 365 unified audit log, not as a simple “deletion report” inside the SharePoint site itself.


Simple example

If a user says, “The file Budget.xlsx disappeared from the Finance SharePoint site,” you would:

  1. Open Purview Audit
  2. Search the last 7–30 days

  3. Filter activities to:

    • FileDeleted

    • FileRecycled

    • FileDeletedFirstStageRecycleBin

    • FileDeletedSecondStageRecycleBin
  4. Filter by:

    • Site URL = Finance site

    • File name = Budget.xlsx
  5. Review who deleted it and whether it is still recoverable

Existing systems can now enable Windows Smart App Control (and you should)


What Windows Smart App Control actually is

Smart App Control (SAC) is a pre‑execution application control layer built into Windows 11 that blocks untrusted software before it runs. It lives in Windows Security → App & browser control, and operates independently from Microsoft Defender Antivirus and SmartScreen. [support.microsoft.com], [computerworld.com]

This is important:

Smart App Control is not antivirus.
It is policy‑enforced app allow/deny at launch time, based on trust and reputation.

Think of it as Microsoft sneaking a consumer‑friendly WDAC‑lite into Windows 11.


The security model: how SAC makes decisions

When any executable (EXE, DLL, MSI, script, etc.) attempts to run, Smart App Control applies a deterministic trust pipeline:

1. Cloud reputation check first

Windows queries Microsoft’s cloud‑based app intelligence service, which analyses signals from billions of executions worldwide. [support.microsoft.com], [computerworld.com]

If the app is:

  • Known good

  • Widely deployed

  • Previously classified as safe

It runs


2. Certificate trust validation

If cloud intelligence cannot confidently classify the app, SAC checks:

  • Is the file digitally signed?

  • Is the certificate trusted and valid?

  • Has the binary been tampered with?

Signed software from reputable vendors typically passes this stage. [support.microsoft.com], [howtogeek.com]

Valid signature = allowed


3. Everything else is blocked

If the app is:

  • Unsigned

  • Unknown

  • Newly compiled custom binaries

  • Internally built tooling

Smart App Control blocks execution

There is no “Run anyway”, no whitelist, and no user override in enforcement mode. That is entirely by design. [computerworld.com], [howtogeek.com]


The three Smart App Control states (this matters)

SAC operates in three mutually exclusive modes:

1. Evaluation mode
  • SAC runs silently

  • Nothing is blocked

  • Windows observes your real‑world app usage

  • SAC decides if your system is “compatible” with strict enforcement

This was originally only triggered on clean installs. [howtogeek.com]


2. Enforcement (On)
  • Unknown or untrusted apps are blocked at launch

  • No user bypass

  • No per‑app exceptions

  • Logs are written to Windows Security / Event Viewer

This is where SAC actually provides protection.


3. Off
  • No checks

  • No enforcement

  • Until recently, this was permanent without OS reinstall


Why Smart App Control was widely ignored (until now)

From a pure security model perspective, SAC was solid.
From a real‑world usability perspective, it was borderline hostile.

Until early 2026:

  • If you disabled SAC once, it could never be turned back on
  • Re‑enablement required a full Windows reinstall or reset
  • Upgraded systems were locked to Off
  • MSPs, developers, and power users effectively couldn’t touch it

Microsoft openly acknowledged this rigidity in its own documentation. [support.microsoft.com]

So the result?

Everyone who actually understands Windows workflows turned it off permanently.


What changed in 2026 (this is the big deal)

April 2026 Windows 11 security updates fundamentally changed SAC’s lifecycle

Microsoft removed the “one‑way switch” limitation.

As of the April 2026 Windows 11 updates (24H2 / 25H2):

  • Smart App Control can now be turned ON after install
  • Smart App Control can be re‑enabled after being turned off
  • No OS reinstall required
  • Managed via the Windows Security UI

This change is explicitly documented by Microsoft and multiple independent sources. [techrepublic.com], [pureinfotech.com], [windowsreport.com], [msn.com]


Where the toggle now lives
Windows Security
→ App & browser control
→ Smart App Control
→ Smart App Control settings

From there, you can:

  • Switch On
  • Switch Off
  • Let systems enter Evaluation again

[techrepublic.com], [pureinfotech.com]


What did not change (important limitations remain)

Microsoft did not soften SAC’s enforcement model:

  • ❌ Still no per‑app allow

  • ❌ Still blocks unsigned internal apps

  • ❌ Still unsuitable for dev workstations

  • ❌ Still excluded from enterprise‑managed devices

The decision engine is unchanged. Only the lifecycle control was fixed. [msn.com]


Who Smart App Control now makes sense for

✅ Excellent fit
  • SMB users
  • Standard staff PCs
  • BYOD devices
  • Non‑technical users
  • High‑risk email / web exposure roles

Especially when paired with:

  • Defender Antivirus

  • Attack Surface Reduction rules

  • Defender SmartScreen


❌ Poor fit
  • Developers

  • MSP admin machines

  • Script‑heavy workflows

  • Legacy Line‑of‑Business apps

  • Custom PowerShell tooling

For these, WDAC, AppLocker, or Intune‑managed policy is still the correct solution.


MSP‑level takeaway (opinionated, but grounded)

Smart App Control finally crossed the line from:

“Technically interesting but unusable”

to:

“Deployable baseline protection for unmanaged Windows 11 PCs”

It is not a replacement for:

  • Application control

  • Device management

  • Security policy

But it is now a credible default deny layer for Windows 11 endpoints that previously had none.