I keep hearing the same complaint from MSPs experimenting with Microsoft 365 Copilot.
“It didn’t really land.”
“The team didn’t get much value.”
“We turned it on, but outcomes were mixed.”
When I dig into those conversations, the issue is almost never licensing, configuration, or even training.
It’s much simpler—and more uncomfortable.
No one explained the criteria for success.
A Team Can’t Execute a Standard They’ve Never Seen
I’ve watched this play out inside MSPs and their clients more times than I can count.
Copilot gets enabled. People are encouraged to “use AI.” Expectations are implied, not stated. Then weeks later, leadership wonders why email quality is inconsistent, reports still take too long, or meetings haven’t magically improved.
Copilot didn’t fail. The organisation did.
Humans—and AI—perform best when “good” is clearly defined. If you don’t articulate what a successful outcome looks like, Copilot will happily produce something, but it won’t necessarily produce the right thing.
This is where Copilot quietly exposes a weakness many MSPs already have: undocumented standards.
Copilot Forces the “Definition of Done” Conversation
One of the most valuable things Copilot does isn’t writing content or summarising meetings. It forces people to think clearly before they ask.
When someone prompts Copilot effectively, they’re implicitly answering three questions:
- What is the purpose of this output?
- Who is it for?
- What would “finished and acceptable” actually look like?
Without that clarity, prompts drift, outputs vary, and frustration sets in.
I now encourage MSPs to write down three to five criteria that define “done” for common tasks before rolling Copilot out.
Not documentation theatre. Just enough clarity to guide behaviour.
A Practical MSP Scenario You’ll Recognise
Take a simple task: internal client update emails.
Without a definition of done, Copilot outputs range from overly wordy to dangerously vague. The problem isn’t AI—it’s ambiguity.
Now imagine the standard is written down:
- Clear summary of what was done (in plain language)
- Any risks or follow‑ups explicitly called out
- No technical jargon unless requested
- Suitable to forward directly to a non‑technical client
- Under 200 words
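To make this concrete, a prompt built on that standard might read something like: “Draft this week’s update email for the client. Summarise what we did in plain language, explicitly call out any risks or follow‑ups, skip the technical jargon, and keep it under 200 words so it can be forwarded straight to a non‑technical contact.” That’s illustrative wording, not a magic template; the point is that every criterion is baked into the ask.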
Suddenly, Copilot becomes consistent, fast, and useful. Junior staff improve overnight. Senior staff stop rewriting everything. The standard becomes repeatable.
Copilot didn’t create the quality. The criteria did.
Why This Matters More Than the Tech
MSPs love tools, but tools don’t fix thinking problems.
Copilot changes the way people work by making fuzzy expectations painfully visible. If staff don’t know what a “good” report, ticket update, or proposal looks like, Copilot will simply amplify that uncertainty at scale.
The MSPs seeing real productivity gains are doing something different. They’re treating Copilot as a thinking partner, not an output machine.
They define success first, then let Copilot help execute it faster and more consistently.
That shift—from “do the task” to “meet the standard”—is where the real business impact sits.
What I’m Advising MSP Leaders to Do Now
Before your next Copilot rollout, pause.
Pick three high‑value tasks your team does daily (ticket updates, client reports, and proposals are obvious candidates). For each one, write down three to five simple success criteria. That’s it.
Not policies. Not 12‑page SOPs. Just clarity.
Then show the team how Copilot supports that standard.
The Takeaway
If Copilot “isn’t delivering value,” don’t start by blaming the tool.
Ask a harder question instead:
Did we ever explain what success actually looks like?
Because a team can’t execute a standard they’ve never been shown—and Copilot will expose that gap faster than any consultant ever could.
If you get the definition of done right, Copilot becomes a force multiplier. If you don’t, it just makes the mess more obvious.
And honestly? That might be exactly the wake‑up call MSPs need.