Tamper Protection and EDR Block Mode: An Opinionated Rollout for Business Premium


If you manage Microsoft 365 Business Premium tenants and you are still treating Tamper Protection as a one-click toggle and EDR in block mode as an afterthought, you are leaving real protection on the table. Both are included with Defender for Business — there is no licence excuse — but the production rollout is more nuanced than the docs suggest.

Here is the playbook I run on every BP tenant.

Prerequisites that bite people

Before either feature does anything useful, devices must be onboarded to Microsoft Defender for Endpoint (or Defender for Business) and reporting healthy in the Defender portal. Until onboarding completes, Tamper Protection literally shows as Not Applicable on the device, and the EDR sensor cannot enforce anything.

Three prerequisites that catch MSPs out:

  • Devices must be Intune-managed or co-managed with the Endpoint Protection workload pointed at Intune. Co-managed devices where the workload still points to ConfigMgr are not supported by the Intune Tamper Protection policy.
  • Defender antimalware platform must be at 4.18.1906.3 or later. Modern devices are; stale gold images and the odd Server 2016/2019 host often are not.
  • If you plan to manage AV exclusions through Intune, set DisableLocalAdminMerge = true in your AV policy. Without it, tamper-protected exclusions silently fail to apply.

Verify all of this on a pilot device with Get-MpComputerStatus, checking IsTamperProtected and AMRunningMode, before you flip anything.
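On a pilot device, that check might look like this (a minimal sketch; the property names come straight from Get-MpComputerStatus):

```powershell
# Snapshot the Defender state that matters before flipping anything tenant-wide
$status = Get-MpComputerStatus
[pscustomobject]@{
    TamperProtected = $status.IsTamperProtected  # True once the policy has landed
    RunningMode     = $status.AMRunningMode      # 'Normal', 'Passive Mode', or 'EDR Block Mode'
    PlatformVersion = $status.AMProductVersion   # needs to be 4.18.1906.3 or later
}
```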

Where to configure

For Tamper Protection, do it in Intune, not the Defender portal. Granular targeting beats a tenant-wide toggle every time.

Intune admin centre → Endpoint security → Antivirus → Create policy → Platform: Windows → Profile: Windows Security Experience → Tamper protection (device): On.

For EDR in block mode, the cleanest path is the Defender portal — it is tenant-wide and does not fight your AV policies:

Microsoft Defender portal → Settings → Endpoints → Advanced features → Enable EDR in block mode → Save.

You can also drive it through Intune via the Defender CSP when you need device-group scoping, but for most BP tenants tenant-wide is correct.


The rollout that survives contact with users

Three rings, always:

  1. Pilot (5–10 devices) — your own techs plus one cooperative power user. Apply the Tamper Protection policy first, leave it 48 hours, then enable EDR block mode tenant-wide. Watch the Action Center for unexpected Blocked/Prevented entries; confirm IsTamperProtected = True and AMRunningMode = Normal (or EDR Block Mode on third-party AV tenants).
  2. Broader pilot (~25%) — one quiet department, ideally not Finance during end-of-month. Run for a full working week.
  3. Full rollout — assign to your “All Workstations” dynamic group.

Sequencing matters. Enable Tamper Protection before EDR block mode: without it, a local admin troubleshooting a misbehaving LOB app can simply disable Defender, silencing the EDR sensor and defeating the point.

Top three pitfalls

  1. Passive-mode false sense of security. If a third-party AV is the primary product, Defender drops into passive mode. EDR block mode still fires, but real-time protection, network protection, ASR rules, and indicators are all inactive. Either document this in the customer-facing security baseline or migrate them off the third-party AV.
  2. Tamper Protection blocking your own policy changes. Once on, you cannot edit AV exclusions or disable real-time protection from the device — and sometimes not cleanly from Intune either. Use troubleshooting mode from the Defender portal for short-lived changes; never disable Tamper Protection at scale.
  3. Forgetting the servers. Windows Server 2012 R2/2016 do not auto-passivate when third-party AV is installed. Set the ForceDefenderPassiveMode registry value before onboarding, or you will have two AV products fighting at boot.
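For pitfall 3, the passive-mode value is a single registry entry set before onboarding. A sketch (run elevated; this is the standard Defender for Endpoint policy key):

```powershell
# Force Defender AV into passive mode on Server 2012 R2/2016 before MDE onboarding
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows Advanced Threat Protection'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'ForceDefenderPassiveMode' -Value 1 -Type DWord
```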

Get the prerequisites right, ring the rollout, and these two features quietly become the most boring — and most valuable — controls in your Business Premium stack.

Inspect What You Expect: Why MSPs Can’t “Set and Forget” Copilot (or Anything Else)


One pattern I see repeatedly in MSP businesses—especially as they start adopting Microsoft 365 Copilot—is the quiet belief that once something is delegated, the job is done.

Hand it to a technician.
Hand it to an admin.
Hand it to an AI tool.

Then move on.

That approach has always been risky. With Copilot in the mix, it becomes outright dangerous.

Not because Copilot is untrustworthy—but because systems don’t improve unless you observe them. And as an MSP, improving systems is literally your job.

Delegation Without Inspection Is How Problems Hide

Most MSPs already understand this with infrastructure. You don’t deploy backups and just hope they work. You test. You get alerts. You look for drift.

But when it comes to productivity work—emails, reporting, meetings, content creation—we suddenly relax.

I’m seeing MSPs roll out Copilot, show users a few prompts, and then disappear. No feedback loop. No measurement. No review of outputs or behaviours.

Weeks later, the questions start:

  • “Why are users still asking basic questions?”

  • “Why hasn’t productivity improved?”

  • “Why does Copilot feel underwhelming?”

The issue isn’t the tool. It’s the lack of inspection.

Copilot Changes the Work—So You Need New Sensors

Copilot doesn’t just speed things up. It changes how work happens.

People delegate thinking earlier.
Drafts appear faster.
Decisions are made with less friction—and sometimes less reflection.

That’s powerful, but only if you can see what’s going on.

This is where I introduce the idea of sensors.

Sensors are simple mechanisms that tell you when reality drifts from expectation. They’re not about distrust—they’re about visibility.

In Copilot terms, that might look like:

  • A short weekly check‑in where users paste an example output that helped (or failed).

  • A dashboard showing adoption signals across apps, not just license counts.

  • A Teams message when usage patterns drop after the initial rollout.

  • A review cadence where managers validate whether Copilot‑created artefacts are actually being used.

None of this is complex. Almost no one does it.

AI Amplifies Weak Processes First

Here’s the uncomfortable truth: Copilot makes good systems better and bad systems louder.

If documentation is outdated, Copilot spreads outdated thinking faster.
If decision rights are unclear, Copilot accelerates confusion.
If users don’t know what “good” looks like, Copilot produces more confident mediocrity.

Inspecting outcomes—not effort—is how you catch this early.

I’ve worked with MSPs who expected Copilot to “lift capability” across the board. What actually happened was more revealing: high performers got better, while poor habits became more visible.

That visibility is a gift—if you’re looking for it.

Growth Comes From Feedback Loops, Not Trust Falls

Whether you’re an MSP of five people or fifty, growth doesn’t come from hiring smarter people or deploying smarter tools. It comes from tightening feedback loops.

That’s why mature MSPs obsess over:

  • Red/green indicators

  • Exception reporting

  • Notifications when something deviates from normal

The same thinking now applies to knowledge work.

Copilot isn’t a project you “finish”. It’s a system you tune. And tuning only works when you inspect what you expect.

The Takeaway for MSP Leaders

If you’re advising clients—or running your own MSP—don’t treat Copilot like a magic upgrade.

Treat it like any other core system:

  • Define what “good” looks like

  • Build simple sensors

  • Review outputs, not intentions

  • Adjust the environment, not just the prompts

Trust is fine. Visibility is better.

If you’re not inspecting, you’re guessing. And guessing doesn’t scale—especially in an AI‑assisted world.

A Cleaner Way to Connect PowerShell to SharePoint Online


Connect-PnPOnline with a browser sign-in is fine when you’re sitting at the keyboard. It becomes a problem the moment you’re not. The script that worked beautifully on your laptop refuses to run unattended. The scheduled job that was meant to tidy up orphaned sites overnight quietly does nothing, because it’s still waiting for someone to type a password. And the moment conditional access tightens on the admin account you’ve been quietly using for automation, every script that touches SharePoint behaves like it’s been thrown out a window.

The fix has existed. The setup hasn’t.

Certificate-based app authentication for SharePoint Online has been supported by Microsoft for years. The mechanics are well documented. The trouble has always been the assembly — generate a cert, export the public key, register an app in Entra ID, paste the right GUIDs in the right boxes, find Sites.FullControl.All in the API permissions list, grant admin consent, copy the thumbprint somewhere you won’t lose it, and verify the tenant ID in three different places along the way. By the time you’ve finished, you’ve forgotten which client you were doing it for.

So I’ve written a script that does the whole sequence end to end:

  • Generates a self-signed RSA-2048 certificate in your local certificate store

  • Creates the Entra ID app registration

  • Uploads the certificate and grants Sites.FullControl.All with admin consent

  • Provisions the service principal and adds Application.Read.All on Graph so the app can read its own metadata back

  • Resolves your tenant’s SharePoint root URL automatically from the Graph verified-domains call

  • Saves tenant, app ID, site URL, and thumbprint into a JSON profile so future connections need almost no parameters

What’s normally half an hour of clicking between Entra, the SharePoint admin centre, and a Notepad full of half-remembered GUIDs runs in about ninety seconds.

The new script is available at https://github.com/directorcia/Office365/blob/master/o365-connect-pnp-cert.ps1, with full documentation at https://github.com/directorcia/Office365/wiki/Connect-to-SharePoint-Online-with-Certificates.

What the Script Actually Does

There are two modes, controlled by switches.

-GenerateLocalCertificate creates a self-signed RSA-2048 certificate in your current user’s certificate store, exports the public key as a .cer file, and optionally exports a password-protected .pfx. By default it’s valid for two years. That’s the local side of the handshake.
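If you want to see roughly what that step does under the hood, it is the standard self-signed certificate pattern (a sketch, not the script's exact parameters; the subject name and file name are illustrative):

```powershell
# Self-signed RSA-2048 certificate, two-year validity, in the current user's store
$cert = New-SelfSignedCertificate -Subject 'CN=PnP-Automation' `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -KeyAlgorithm RSA -KeyLength 2048 `
    -KeyExportPolicy Exportable `
    -NotAfter (Get-Date).AddYears(2)
Export-Certificate -Cert $cert -FilePath .\pnp-automation.cer  # public key for the app registration
$cert.Thumbprint                                               # goes into the JSON profile
```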

-UseCertificateAuth is the everyday mode. You tell it which tenant to connect to, or let it look up the details in a profile map file, and it signs in to SharePoint Online using that certificate. No password. No browser. No MFA dialog.

The clever bit is the third option: combining -GenerateLocalCertificate with -ProvisionEntraApp -Tenant 'contoso.onmicrosoft.com'. In a single run, the script will generate the local certificate, authenticate to Microsoft Graph via a device-code flow, create the Entra ID app registration if it doesn't exist, upload the certificate, grant Sites.FullControl.All and Application.Read.All with admin consent, create the matching service principal, and save the tenant, app ID, and certificate thumbprint to a JSON profile file so future connections don't need any of those parameters.

Getting Started

If you’re new to certificate auth, the first run is the one that matters. Drop the script onto an admin machine, open PowerShell, and run:

.\o365-connect-pnp-cert.ps1 -GenerateLocalCertificate -ProvisionEntraApp -Tenant 'yourtenant.onmicrosoft.com'

You’ll be prompted to sign in via device code for the Graph permissions (if you use the -copydevicecodetoclipboard option, the required device code is copied straight to the clipboard, ready to paste into the request). You need a Global Admin account.

Where this earns its keep across a client base

After that first run, connecting to a tenant looks like this:

.\o365-connect-pnp-cert.ps1 -UseCertificateAuth -Tenant 'contoso.onmicrosoft.com'

No password. No browser. No MFA prompt. The profile file is the bit that pays you back across an MSP book. One script lives in your tooling folder, each client has its own certificate and entry in the JSON map, and Task Scheduler can finally drive things like site collection audits, sharing reports, lifecycle cleanup on Teams-connected sites, and external-user reviews without anyone watching it run. Filter by tenant or site URL on the command line and the same script services twenty different customers without you ever editing it.
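Under the hood this is ordinary PnP certificate authentication; the equivalent direct call, with placeholder values you would pull from the JSON profile, looks like this:

```powershell
# Direct PnP connection once the app registration and certificate exist
Connect-PnPOnline -Url 'https://contoso-admin.sharepoint.com' `
    -ClientId '<app id from the profile>' `
    -Thumbprint '<certificate thumbprint>' `
    -Tenant 'contoso.onmicrosoft.com'

# Example unattended chore: a quick site collection audit
Get-PnPTenantSite | Select-Object Url, Template, LastContentModifiedDate
```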

One honest caveat

When you’ve just provisioned a brand-new app, give Entra fifteen to thirty minutes for the role grants to replicate before your first cert-based connect. It’s the single most common reason a fresh setup looks broken when it isn’t. The script flags this on the way out, but it’s worth saying twice.

Why Certificates Beat Passwords

The security argument is the easy one. A certificate’s private key never leaves the machine that generated it. Nothing crosses the wire that an attacker could intercept and replay. There’s no shared secret to rotate across a team, no admin password sitting in a vault that someone might extract, and no MFA bypass to engineer because the flow doesn’t involve a user account at all.

If the certificate is ever compromised, you remove the key credential from the app registration and the access is gone — no password reset required, no impact on any human admin account.

The script enforces TLS 1.2, refuses to assign RBAC if the PnP session has landed in the wrong tenant, warns when the certificate is within thirty days of expiry, and keeps the device-code value off the clipboard by default to avoid leaks on RDP or shared sessions.

The change is a quiet one. You stop thinking about who is signing in and start thinking about which certificate is presenting itself. Once your SharePoint automation is no longer at the mercy of someone else’s MFA settings or a password rotation policy, the kind of work you’re willing to schedule expands. That’s the real win — not the ninety seconds saved on setup, but the chores you finally get around to doing.

Making Attack Surface Reduction Rules Actually Work in Microsoft 365




Attack Surface Reduction (ASR) rules are one of the most powerful — and most misunderstood — security capabilities in Microsoft Defender for Endpoint.

On paper, they’re simple:

Reduce the ways attackers can abuse Windows.

In practice, many environments either:

  • Enable everything in block mode and break workflows, or

  • Leave ASR in audit forever because “it caused issues once”.

Just like Conditional Access, ASR only works properly when:

  • You understand what problem you’re solving
  • You deploy it gradually and intentionally
  • You accept that security without friction isn’t security

This post explains how to deploy ASR rules properly using Intune and Microsoft Defender — in a way that actually raises the security bar without torching productivity.


Why ASR Rules Matter (Still)

ASR rules are designed to block common attacker techniques, not malware files.

That distinction matters.

Most modern attacks don’t rely on:

  • Dropping obvious malware

  • Exploiting rare zero‑days

They rely on living‑off‑the‑land (LOLBins):

  • PowerShell

  • WMI

  • Office macros

  • Credential dumping from LSASS

  • Script abuse from user‑writable locations

ASR targets behaviour, not signatures — which is why Microsoft consistently recommends them as part of a Zero Trust baseline.

But behaviour-based controls must be deployed carefully.


The Core Problem with ASR Deployments

In the wild, I usually see one of three patterns:

1. “Turn Them All On”

Someone enables every ASR rule in block mode.

Result:

  • Line of business scripts fail

  • Custom automation breaks

  • IT disables ASR entirely

2. “Audit Forever”

Rules sit in audit mode indefinitely “until we review logs”.

Result:

  • Attack techniques pass straight through

  • Security teams get a false sense of protection

3. “We Enabled One or Two”

Only macro-related rules are enabled.

Result:

  • Partial coverage

  • Easily bypassed attack paths remain open

None of these deliver meaningful protection.


A Better Mental Model for ASR

Instead of thinking:

“Which rules should we turn on?”

Ask:

“Which attacker techniques do we want to make impractical?”

ASR works best when combined with:

  • Standard user devices
  • Intune-managed endpoints
  • Defender for Endpoint P1/P2
  • Strong Conditional Access

Sound familiar? Same foundations as compliant-device CA policies.


The ASR Rules That Actually Deliver Value

Here are the ASR rules that consistently provide the best risk reduction with manageable impact in real environments.

1. Block credential stealing from LSASS

Rule ID: 9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2

This blocks tools like Mimikatz-style credential dumping.

✅ High attacker impact
✅ Minimal end-user disruption
✅ Should be BLOCK in almost every environment


2. Block Office from creating child processes

Rule ID: d4f940ab-401b-4efc-aadc-ad5f3c50688a

This stops:

  • Macro → PowerShell

  • Document-based malware chains

Realistically:

  • Most organisations do not need Office spawning shells

Start in AUDIT, then move to BLOCK once exceptions are known.


3. Block executable content from email and webmail

Rule ID: be9ba2d9-53ea-4cdc-84e5-9b1eeee46550

This blocks users launching:

  • EXEs

  • Scripts

  • Payloads straight from email or web download locations

✅ Enforces basic hygiene
✅ Aligns well with user expectations
✅ Rarely breaks legitimate workflows


4. Use advanced protection against ransomware

Rules that help here include:

  • Blocking untrusted executable content

  • Blocking abuse of vulnerable signed drivers

These pair extremely well with:

  • Defender Tamper Protection

  • Controlled Folder Access (selectively)


How to Deploy ASR Rules Properly with Intune

Step 1: Create an ASR Policy in Audit Mode

In Intune:

  • Endpoint Security

  • Attack Surface Reduction

  • Create policy

  • Start with Audit

Audit mode tells you:

  • What would have been blocked

  • Which apps or scripts need exclusions

This is not optional.
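If you want to stage the same thing on a single test device outside Intune, the built-in Defender cmdlets take the rule GUIDs directly (shown here with the LSASS rule ID from earlier):

```powershell
# Put the LSASS credential-theft rule into audit on one test device
Add-MpPreference -AttackSurfaceReductionRules_Ids '9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2' `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Confirm which rules and actions are currently applied
Get-MpPreference | Select-Object AttackSurfaceReductionRules_Ids, AttackSurfaceReductionRules_Actions
```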


Step 2: Review Events in Advanced Hunting

Use this table in Defender:

DeviceEvents
| where ActionType startswith "Asr"

Focus on:

  • Repeat offenders

  • Automation tools

  • Known admin workflows

If something fires once, ignore it.
If it fires 300 times a day, investigate.
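To separate the one-offs from the repeat offenders, a slightly richer query (assuming the standard DeviceEvents schema in Advanced Hunting) might look like:

```kusto
// ASR events from the last 7 days, grouped to spot noisy rules and files
DeviceEvents
| where Timestamp > ago(7d)
| where ActionType startswith "Asr"
| summarize Hits = count() by ActionType, FileName, DeviceName
| order by Hits desc
```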


Step 3: Use Targeted Exclusions — Sparingly

ASR exclusions should be:

  • File path–specific

  • App–specific

  • As narrow as possible

Avoid:

  • Wildcards

  • Folder-wide exclusions unless absolutely required

Bad exclusions undo the entire point of ASR.
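For local testing, the same narrowness applies in cmdlet form; ASR has its own exclusion list, separate from AV exclusions (the path shown is illustrative):

```powershell
# ASR-only exclusion, scoped to one executable, never a whole folder
Add-MpPreference -AttackSurfaceReductionOnlyExclusions 'C:\Program Files\LOBApp\lobagent.exe'
```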


Step 4: Move High‑Confidence Rules to Block

Once audit noise stabilises:

  • Move specific rules, not the whole policy

  • Prioritise credential theft and Office abuse

Yes, this causes friction.
So does getting owned.
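On a test device, flipping a single rule to block while the rest stay in audit is one line (using the Office child-process rule ID from earlier; Add-MpPreference updates the action for a rule that is already listed):

```powershell
# Move one high-confidence rule to block, leaving other rules in audit
Add-MpPreference -AttackSurfaceReductionRules_Ids 'd4f940ab-401b-4efc-aadc-ad5f3c50688a' `
                 -AttackSurfaceReductionRules_Actions Enabled
```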


Common ASR Mistakes (That I Still See in 2026)

  • Treating ASR as “optional”

  • Letting developers demand blanket exclusions

  • Ignoring audit logs

  • Enabling rules without Defender onboarding complete

  • Forgetting ASR only protects Defender-managed devices

ASR is not a set-and-forget tool.
It’s an operational security control.


ASR + Conditional Access = Real Endpoint Trust

Here’s the key point many miss:

ASR strengthens the integrity of “Compliant Device” signals.

If a device:

  • Meets Intune compliance

  • Runs Defender

  • Enforces ASR rules

  • Has tamper protection enabled

You can trust Conditional Access decisions far more.

Compliance without hard endpoint controls is mostly paperwork.


Final Thought

Attack Surface Reduction is one of those Microsoft security features that:

  • Looks scary

  • Sounds complex

  • Delivers massive value when done properly

If your ASR rules are all disabled, set to audit forever, or barely touched — you’re leaving one of the best cost‑to‑benefit controls on the table.

Just like Conditional Access… ASR only works when you actually enforce it.


The Real Reason Copilot “Didn’t Work”? No One Defined What Success Looked Like


I keep hearing the same complaint from MSPs experimenting with Microsoft 365 Copilot.

“It didn’t really land.”
“The team didn’t get much value.”
“We turned it on, but outcomes were mixed.”

When I dig into those conversations, the issue is almost never licensing, configuration, or even training.

It’s much simpler—and more uncomfortable.

No one explained the criteria for success.

A Team Can’t Execute a Standard They’ve Never Seen

I’ve watched this play out inside MSPs and their clients more times than I can count.

Copilot gets enabled. People are encouraged to “use AI.” Expectations are implied, not stated. Then weeks later, leadership wonders why email quality is inconsistent, reports still take too long, or meetings haven’t magically improved.

Copilot didn’t fail. The organisation did.

Humans—and AI—perform best when “good” is clearly defined. If you don’t articulate what a successful outcome looks like, Copilot will happily produce something, but it won’t necessarily produce the right thing.

This is where Copilot quietly exposes a weakness many MSPs already have: undocumented standards.

Copilot Forces the “Definition of Done” Conversation

One of the most valuable things Copilot does isn’t writing content or summarising meetings. It forces people to think clearly before they ask.

When someone prompts Copilot effectively, they’re doing implicit work:

  • What is the purpose of this output?

  • Who is it for?

  • What would “finished and acceptable” actually look like?

Without that clarity, prompts drift, outputs vary, and frustration sets in.

I now encourage MSPs to write down three to five criteria that define “done” for common tasks before encouraging Copilot use.

Not documentation theatre. Just enough clarity to guide behaviour.

A Practical MSP Scenario You’ll Recognise

Take a simple task: internal client update emails.

Without a definition of done, Copilot outputs range from overly wordy to dangerously vague. The problem isn’t AI—it’s ambiguity.

Now imagine the standard is written down:

  • Clear summary of what was done (in plain language)

  • Any risks or follow‑ups explicitly called out

  • No technical jargon unless requested

  • Suitable to forward directly to a non‑technical client

  • Under 200 words

Suddenly, Copilot becomes consistent, fast, and useful. Junior staff improve overnight. Senior staff stop rewriting everything. The standard becomes repeatable.

Copilot didn’t create the quality. The criteria did.

Why This Matters More Than the Tech

MSPs love tools, but tools don’t fix thinking problems.

Copilot changes the way people work by making fuzzy expectations painfully visible. If staff don’t know what a “good” report, ticket update, or proposal looks like, Copilot will simply amplify that uncertainty at scale.

The MSPs seeing real productivity gains are doing something different. They’re treating Copilot as a thinking partner, not an output machine.

They define success first, then let Copilot help execute it faster and more consistently.

That shift—from “do the task” to “meet the standard”—is where the real business impact sits.

What I’m Advising MSP Leaders to Do Now

Before your next Copilot rollout, pause.

Pick three high‑value tasks your team does daily. For each one, write down three to five simple success criteria. That’s it.

Not policies. Not 12‑page SOPs. Just clarity.

Then show the team how Copilot supports that standard.

The Takeaway

If Copilot “isn’t delivering value,” don’t start by blaming the tool.

Ask a harder question instead:

Did we ever explain what success actually looks like?

Because a team can’t execute a standard they’ve never been shown—and Copilot will expose that gap faster than any consultant ever could.

If you get the definition of done right, Copilot becomes a force multiplier. If you don’t, it just makes the mess more obvious.

And honestly? That might be exactly the wake‑up call MSPs need.

Stress Test It: Why Copilot Exposes Weak MSP Processes Faster Than Any Audit Ever Could


One of the biggest mistakes I see MSPs making with Microsoft 365 Copilot isn’t technical.
It’s procedural.

They document a process, feel good about it, file it away, and move on. Then Copilot gets introduced and suddenly everything that “worked fine” starts breaking—confusion, rework, inconsistent outcomes, frustrated staff.

That’s not Copilot failing.
That’s Copilot revealing where the cracks already were.

AI has zero patience for fuzzy thinking, undocumented assumptions, or tribal knowledge. If your process relies on “just ask Steve” or “we usually do it this way”, Copilot will surface that gap almost immediately.

Which is why I keep coming back to one principle with MSPs:

Once it’s documented, stress test it. Properly.

Hand It Over and Watch Where It Breaks

The simplest (and most uncomfortable) test is this:

Document the process.
Then hand it to someone who didn’t write it.

Not your best tech. Not the person who lives in that system every day. Hand it to someone competent, but neutral.

Then watch where they hesitate.

The first place they pause.
The first clarifying question they ask.
The workaround they invent because the next step isn’t clear.

That confusion is your gap.

I see this constantly with Copilot rollouts. An MSP documents “how we enable Copilot for a client” or “how staff should use Copilot in Teams”. On paper, it looks solid. In practice?

  • No one is sure where approved prompts live

  • No one knows what’s off-limits data‑wise

  • Everyone assumes someone else has done the access check

  • Security reviews live in a different document entirely

Copilot just accelerates that confusion because people start using it everywhere, all at once.

Copilot Forces End‑to‑End Thinking (Whether You Like It or Not)

Here’s the uncomfortable truth:
Copilot doesn’t care about your internal silos.

If a process only works because steps 4 and 5 happen “eventually” or “when time allows”, Copilot will make that painfully obvious.

For MSPs, this usually shows up in:

  • User onboarding that assumes clean SharePoint permissions

  • Client documentation that exists but isn’t current

  • Security controls that are “mostly” standardised

  • SOPs that describe what happens but not who decides

When Copilot is introduced, the questions multiply: “Can I use this with client data?”
“Is this the approved template?”
“Why does Copilot see this file but not that one?”

If the process doesn’t flow cleanly from step one to done, your team will improvise. And improv is exactly what MSPs spend years trying to eliminate.

Fill the Gap. Then Do It Again.

The fix isn’t complicated, but it is repetitive.

When someone gets confused, don’t explain it verbally and move on.
Fix the document.

Close the loop.
Make the decision explicit.
Remove the assumption.

Then hand the process to the next person and run it again.

You’re not looking for perfection. You’re looking for the places where the system breaks under light pressure—before clients or Copilot apply real pressure.

This is where MSP maturity actually shows. Not in how clever the Copilot prompts are, but in how resilient the underlying process is when a human and an AI are both trying to follow it.

The Real Takeaway for MSPs

Copilot isn’t a tool you “set up and support”.
It’s a mirror.

It reflects the quality of your documentation, your standardisation, your decision‑making, and your discipline as a provider.

If you want Copilot to scale productivity instead of chaos, stop asking “what does Copilot do?” and start asking:

“Where would this process fail if no one could ask a question?”

Stress test it.
Fill the gaps.
Then do it again.

That’s how you make Copilot work for your MSP—not against it.

Claude Cowork vs Copilot Cowork: why the Microsoft answer wins for SMB


I’ve watched a lot of clients spend the last twelve months stitching together AI tools that don’t talk to each other. A Claude tab here. A ChatGPT tab there. A Copilot tab somewhere in the middle. Then a folder of CSVs they keep dragging in and out of each one.

That’s not a workflow. That’s a tax.

So when Anthropic shipped Claude Cowork and Microsoft shipped Copilot Cowork in roughly the same window, the question landed in my inbox: which one do we tell our clients to use?

I’ll save you the suspense. For an SMB already paying for Microsoft 365, it’s not close.

What is Cowork, really?

Cowork is the bit that does the work, not the bit that talks about it. You give it an outcome — “draft the quarterly update from these meeting notes and send it to the leadership team” — and it goes off and does the thing.

That’s the shared idea. Both products own it. The split is in where the work happens.

Claude Cowork lives on your desktop. You mount a folder, drop your files in, and Claude runs in a sandbox on your machine. It doesn’t see your inbox. It doesn’t see your calendar. It doesn’t see your Teams chats unless you’ve copy-pasted them in. You bring the data to the model.

Copilot Cowork is the inverse. It already lives inside Microsoft 365, grounded in your Outlook, Teams, SharePoint, OneDrive, and calendar through Work IQ. You don’t mount anything. The model is already where your data lives.

Notice what’s missing? The mounting step. The “let me copy this folder over” step. The “hang on, I need to paste in the email thread” step.

For SMBs, that’s the whole game.

Step-by-Step: getting Copilot Cowork going

If you’re licensed for Microsoft 365 Copilot and enrolled in the Frontier preview, the start is short.

Open Cowork in Microsoft 365

Browse to m365.cloud.microsoft, sign in, and pick Cowork from the agent list. If you don’t see it, check Frontier enrolment under Copilot settings.

Describe the outcome

Skip the prompt-engineering nonsense. Talk like a person.

Read my inbox from this week, find anything tagged from a client,
draft a Friday wrap-up email summarising open items, and post a
short version into the Operations channel in Teams.

Notice what’s missing? Any reference to a file path. Any “first export your inbox to CSV” step. Cowork already has the inbox, the calendar, the Teams channels, and the SharePoint files. It just needs the instruction.

Approve the action

Cowork shows you exactly what it’s about to send, post, or schedule before it does it. You hit Send, Post, or Cancel. The full flow is in the getting started doc if you want to walk a client through it.

Set the schedule

Want it to do this every Friday at 4pm? Schedule the prompt and walk away. Copilot doesn’t get tired. Use that.

Why this actually changes behaviour

Claude Cowork is a beautifully built tool. For a developer or a data analyst on a Mac with a folder full of CSVs, it sings. I’m not knocking it.

But that’s not the SMB picture. The SMB picture is a bookkeeper, a sales lead and a director who all live inside Outlook and Teams from 8am to 6pm. Their data isn’t in a folder on their desktop. It’s in their mailbox, their channel chats, their SharePoint sites and their meeting transcripts.

“But couldn’t we just pipe our M365 data into Claude?”

You could. You’d be paying twice — once for M365, once for Claude — and you’d be exporting business data into a different vendor’s environment to do work the platform you already own can do natively.

That’s not a productivity gain. That’s a procurement problem.

Here’s the real win. Copilot Cowork sits behind the same Entra identity, the same conditional access, the same Purview labels and the same retention policies your tenant already runs. The governance story is already built. There’s no second tool to license, secure, train, or audit.

My recommendation? If you’re an MSP and you’re not walking your SMB clients through Copilot Cowork, you’re leaving value on the table — theirs and yours.

Meet people where they already are.

Cowork isn’t a second AI app for your clients to learn. It’s the work, finally getting done in the place it was always supposed to happen.

Entra ID backup just turned up in your Business Premium tenant

image

A few weeks ago I logged into a Business Premium tenant to do something completely unrelated and noticed a new node in the Entra portal: Backup and Recovery. No upsell banner, no add-on prompt, no “contact your reseller”. Just there. Sitting under Identity governance like it had always been part of the furniture.

That’s the bit worth pausing on. Microsoft has quietly turned identity backup into table stakes for every BP tenant. Notice what’s missing? An invoice.

For years the conversation around protecting your directory has been someone else’s product pitch. Third-party backup vendors built entire businesses on the fact that Microsoft wouldn’t restore a Conditional Access policy you nuked at 4pm on a Friday. Now Microsoft is restoring it for you.

What is Entra Backup and Recovery, really?

It’s a daily snapshot of the configuration that runs your tenant’s identity. Users, groups, applications, service principals, Conditional Access policies, named locations, the authentication methods policy — the things that, when they go missing, take down sign-in for your whole client base.

Five days of retention. Tamper-resistant. No global admin can switch it off, no compromised account can wipe the safety net before the bad thing happens. That’s not a feature. That’s governance.

Important caveats so you don't sell something that isn't there:

- Hard-deleted objects are gone — the recycle bin still does its 30-day job for users and groups, but Backup is for configuration recovery, not undeleting things.
- Hybrid identity synced from on-premises AD has limitations.
- Workforce tenants only — not B2C or External ID.
- It's currently in Public Preview, so treat it like one. The official overview is worth a read before you stand in front of a client.

A daily snapshot you can’t disable is more honest than a backup product you forget to renew.

Step-by-Step: turning it on for a Business Premium tenant
1. Sign in to the Entra admin centre

Use a Global Administrator account. Navigate to Identity governance → Backup and Recovery. If the node isn't there yet, give the tenant a day — rollout is staged.

2. Enable the service

It’s a single switch. Once enabled, the first snapshot is captured within 24 hours. There’s nothing to license — Business Premium already includes Entra ID P1, which is the bar.

3. Assign the right roles

There are two purpose-built ones: Microsoft Entra Backup Reader and Microsoft Entra Backup Administrator. Don’t hand recovery rights to every Global Admin out of habit. Restoring a Conditional Access policy from a five-day-old snapshot is exactly the sort of move you want logged against a named, scoped role.
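If you prefer to script the assignment rather than click through the portal, the standard Microsoft Graph PowerShell pattern for directory role assignment works here too. A minimal sketch — the role display names are taken from the preview documentation and the UPN is a placeholder, so verify both against your tenant before running:

```powershell
# Assumes the Microsoft.Graph PowerShell module is installed.
Connect-MgGraph -Scopes "RoleManagement.ReadWrite.Directory"

# Placeholder account — substitute the named person who should hold recovery rights.
$admin = Get-MgUser -UserId "ops.lead@contoso.com"

# Look the role up by display name rather than hard-coding a template ID;
# 'Microsoft Entra Backup Administrator' is the name shown in the preview docs.
$role = Get-MgRoleManagementDirectoryRoleDefinition `
    -Filter "displayName eq 'Microsoft Entra Backup Administrator'"

# Tenant-wide scope ("/"), logged against the named principal.
New-MgRoleManagementDirectoryRoleAssignment `
    -PrincipalId $admin.Id `
    -RoleDefinitionId $role.Id `
    -DirectoryScopeId "/"
```

Swap the display name for Microsoft Entra Backup Reader when someone only needs to review snapshots and difference reports, not restore from them.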

4. Run a Difference Report before you restore anything

This is the part that earns its keep. Before recovering an object, the portal shows you what will change — what's in the snapshot, what's live, and where they disagree. You see the diff before you click. The supported objects and limitations page tells you exactly what's in scope.

Why this actually changes behaviour

Here’s the real win. The reason MSPs have been selling backup-for-Entra add-ons is fear — what if? That conversation gets harder when Microsoft has put a tamper-resistant safety net in the box.

My recommendation? Stop selling fear. Start showing governance. Walk your BP clients through their backup status, the role separation, and the recovery flow for applications and service principals. It takes ten minutes and it positions you as the person who knew this was already there, not the person trying to bolt something on top.

That’s not a product conversation. That’s an advisor conversation.

The relief, when you find it, isn’t the relief of buying a safety net. It’s the relief of finding one you didn’t have to install.