Why AI Doesn’t Give the Same Answer Twice (And Why That’s Not a Bug)

One of the most common frustrations I hear from people using AI is this:

“I asked it the same question yesterday and got a different answer today.”

And usually that’s followed by:

“So… which one is right?”

This is where most people run head‑first into a concept they weren’t expecting: AI is probabilistic, not deterministic.

That sounds technical. It isn’t. But it does change how you should think about using AI.

Deterministic vs probabilistic (in plain English)

A deterministic system works like a calculator.

  • 2 + 2 = 4

  • Every time

  • Forever

Same input. Same output. No surprises.

Traditional software works this way. Code is written, rules are defined, and the system follows them exactly. That’s why accounting systems, payroll, and databases behave predictably. They have to.

AI doesn’t work like that.

AI is probabilistic. That means it doesn’t calculate “the answer”. It calculates the most likely next word, then the next, then the next — based on probabilities.

Think less calculator and more very well‑read human.

AI is making an educated guess (every single time)

When you type a prompt into an AI system, it isn’t “looking up” an answer. It’s generating a response based on:

  • Patterns it learned during training

  • The context of your prompt

  • The words it has already generated

  • Statistical likelihoods

Each word is chosen because it’s likely, not because it’s guaranteed.

That’s why:

  • You won’t always get the same response twice

  • Wording matters more than people expect

  • Small changes in prompts can produce big changes in results

This isn’t a flaw. It’s literally how the system works.
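If you want to see the idea in miniature, here's a toy sketch in Python. The probabilities are invented for illustration (a real model scores tens of thousands of candidate tokens, not five words), but the mechanism is the same: weight the candidates, then sample.

```python
import random

# Toy next-word distribution (invented numbers, not from a real model).
# Given a prompt, a language model assigns a probability to every
# candidate next word, then samples one of them.
next_word_probs = {
    "Paris": 0.90,
    "the": 0.04,
    "a": 0.03,
    "Lyon": 0.02,
    "somewhere": 0.01,
}

def pick_next_word(probs):
    """Sample one word according to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Run it a few times: usually "Paris", occasionally something else.
# Same input, different output, every single time you ask.
for _ in range(5):
    print(pick_next_word(next_word_probs))
```

Run that loop twice and you'll get two different sequences. That, in five lines, is why yesterday's answer and today's answer don't match.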

Why this confuses people

Most of us have spent our entire digital lives interacting with deterministic systems.

  • Search engines return ranked results

  • Forms either submit or error

  • Software either works or crashes

So when AI gives us a plausible but slightly different answer, our brain goes:

“Hang on… which one is correct?”

The answer is often: both could be reasonable.

AI isn’t trying to be a source of absolute truth. It’s trying to be a useful collaborator.

Prompts are instructions, not questions

This is the biggest mindset shift.

If you treat AI like Google and just “ask a question”, you’ll get inconsistent results and frustration.

If you treat AI like a new employee who wants to help but lacks context, things improve dramatically.

That employee:

  • Is smart

  • Has read a lot

  • Doesn’t know your business

  • Doesn’t know what “good” looks like to you

So the quality of the output depends heavily on the quality of your instructions.

Because the system is probabilistic, vague instructions lead to vague (or unpredictable) outcomes.

Why structure reduces randomness

Good prompting doesn’t remove probability — but it constrains it.

Clear prompts:

  • Reduce ambiguity

  • Narrow the range of possible responses

  • Increase consistency

For example:

  • “Summarise this” → wide range of outcomes

  • “Summarise this in 5 bullet points for a non‑technical audience, focusing on business impact” → much tighter results

You’re not forcing the AI to be deterministic. You’re guiding the probabilities in your favour.
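One way to picture "guiding the probabilities" (my framing, not a literal description of what a prompt does internally) is with entropy: a vague instruction leaves many response shapes about equally likely, while a specific one concentrates the probability mass, and you can measure that narrowing directly.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: higher means more uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "Summarise this": four response shapes, all equally plausible.
# (Illustrative numbers, not real model output probabilities.)
vague = [0.25, 0.25, 0.25, 0.25]

# "Summarise this in 5 bullet points for a non-technical audience,
# focusing on business impact": most of the mass lands on one shape.
specific = [0.85, 0.05, 0.05, 0.05]

print(entropy(vague))     # 2.0 bits: wide range of outcomes
print(entropy(specific))  # roughly 0.85 bits: much tighter results
```

Same sampling machinery in both cases. The specific prompt simply leaves it far fewer reasonable places to go.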

The real risk: false certainty

The most dangerous mistake isn’t that AI is probabilistic.

It’s that people forget it is.

AI responses often sound confident, polished, and authoritative — even when they’re wrong, incomplete, or missing context.

That’s why:

  • You should always review outputs

  • You shouldn’t blindly trust first drafts

  • Human judgement still matters

AI is brilliant at drafting, summarising, ideation, and acceleration.

It is not a replacement for thinking.

The takeaway

If you remember one thing, make it this:

AI doesn’t give you the answer.
It gives you a likely answer.

Your job isn’t to demand certainty from a probabilistic system.

Your job is to:

  • Give clearer instructions

  • Provide better context

  • Review and refine the output

When you do that, AI stops feeling unpredictable — and starts feeling powerful.

And once you understand that shift, everything about prompting suddenly makes a lot more sense.

If You Check Email More Often Than You Prompt AI, You’re Probably Falling Behind

Here’s a simple, uncomfortable question.

How many times today have you checked your email or scrolled social media…
versus how many times you’ve deliberately prompted AI?

If the answer is “a lot more email”, you’re probably not just distracted.
You’re likely falling behind.

Not because email is evil.
Not because LinkedIn is a waste of time.
But because the way work gets done has fundamentally shifted — and many people haven’t adjusted their habits yet.

Attention Is No Longer the Bottleneck

For years, productivity advice focused on managing attention:

  • Inbox zero

  • Notification control

  • Time blocking

  • Focus modes

All useful. All still relevant.

But AI changes the equation.

The real bottleneck now isn’t attention — it’s leverage.

AI tools like Microsoft 365 Copilot don’t just save time.
They compress thinking, drafting, analysing, summarising, and planning into minutes instead of hours.

Every time you don’t use them for a task they’re good at, you’re choosing a slower path by default.

And speed compounds.

Email Is Reactive. AI Is Generative.

Checking email is reactive work.

You’re responding to other people’s priorities, context, and framing. Even when it’s important, it’s rarely leverage-heavy.

Prompting AI is generative work.

You’re:

  • Creating first drafts instead of staring at blank pages

  • Summarising weeks of emails instead of rereading them

  • Turning messy thoughts into structured plans

  • Extracting actions instead of manually parsing information

One creates momentum.
The other mostly maintains motion.

If you’re opening Outlook out of habit but only opening Copilot when you “have time”, you’ve inverted the value equation.

The New Baseline Is “AI-First” Thinking

High performers aren’t using AI as a novelty anymore. They’re using it as a default interface to work.

Before they:

  • Write a document

  • Respond to a complex email

  • Prepare for a meeting

  • Analyse data

  • Draft a proposal

They ask AI first.

Not for the final answer — but for acceleration.

This isn’t about replacing thinking.
It’s about removing friction from thinking.

The same way calculators didn’t make accountants dumb, AI won’t make professionals lazy. But refusing to use it will make you slow.

MSPs: This Gap Is Already Showing

In the MSP world, this gap is becoming obvious.

Some teams are:

  • Using Copilot to generate SOPs

  • Summarising tickets and incidents automatically

  • Creating customer-ready reports in minutes

  • Turning compliance frameworks into action plans quickly

Others are still:

  • Manually writing everything

  • Copying and pasting between tools

  • “Getting to it later”

  • Complaining they’re too busy to learn AI

The irony?
The people “too busy” to prompt AI are usually the ones who need it the most.

Prompting Is a Skill — and It Needs Reps

Here’s the part many miss.

Prompting AI isn’t magic.
It’s a skill.

And like any skill, it improves with repetition.

If you only prompt AI once or twice a day, you’ll never build fluency.
If you prompt it dozens of times, it becomes second nature.

You stop thinking:

“Should I use AI for this?”

And start thinking:

“How should I ask AI to help with this?”

That mental shift is where the real productivity gains live.

A Simple Rule of Thumb

Try this for a week.

Every time you feel the urge to:

  • Check email

  • Refresh Teams

  • Scroll LinkedIn

Ask yourself one question first:

“Is there something I could prompt AI to move forward right now?”

Draft. Summarise. Plan. Refine. Analyse.

You don’t need perfect prompts.
You just need to start.

Because the real risk isn’t AI getting things wrong.

It’s you not using it at all while others quietly build an advantage.

Falling Behind Is Quiet — Until It Isn’t

Nobody sends an alert saying:

“You’re now less productive than your peers.”

It happens gradually.

Others deliver faster.
They think clearer.
They respond sharper.
They scale themselves.

And one day, it’s obvious.

So if you’re checking your inbox twenty times a day but only prompting AI once or twice…

That’s not a productivity strategy.

That’s a warning sign.

You Already Have Copilot. You’re Just Not Using It (Yet)

One of the biggest blockers I see with Copilot adoption isn’t cost.
It’s confusion.

Too many organisations think Copilot is something you buy, switch on, and watch productivity magically go up. Then they see the Microsoft 365 Copilot licence price and either panic… or over‑hype it internally and guarantee disappointment.

Here’s the part most people miss:

Copilot Chat is already included with Microsoft 365.
No extra licence. No commitment. No risk.

And it’s the best place to start evaluating Copilot—as long as you set the right expectations.


What Copilot Chat Actually Is

Copilot Chat is a secure, enterprise-grade AI chat experience that comes with eligible Microsoft 365 business plans. It’s available through the Copilot app, browser, and inside Microsoft 365 surfaces.

Think of it as:

  • A safe, work-friendly alternative to public AI tools

  • A place to learn how to prompt properly

  • A way to introduce AI thinking without touching business data

It’s excellent for:

  • Brainstorming

  • Drafting content

  • Summarising uploaded documents

  • Research and idea validation

  • Learning how AI responds to different prompts

What it doesn’t do is magically understand your tenant.

And that’s where expectations matter.


What Copilot Chat Does Not Do

Copilot Chat does not have access to your Microsoft 365 data by default.

That means:

  • It can’t see your emails

  • It can’t summarise your Teams meetings

  • It can’t analyse your SharePoint files

  • It can’t act inside Word, Excel, Outlook or Teams using live context

Those capabilities require a Microsoft 365 Copilot licence.

This is the mistake I see over and over again:

“We tried Copilot and it wasn’t very impressive.”

No—you tried Copilot Chat and expected Microsoft 365 Copilot.

They are related, but they are not the same thing.


Why Copilot Chat Is Still the Right Starting Point

Even with those limitations, Copilot Chat is a brilliant on‑ramp to AI adoption.

Why?

Because Copilot success has very little to do with licences—and everything to do with behaviour.

Copilot Chat lets organisations:

  • Learn how to ask better questions

  • Understand AI strengths and limitations

  • Build internal confidence with generative AI

  • Establish safe usage patterns and governance conversations

All before spending a dollar on add‑on licensing.

For MSPs, this is gold. You can:

  • Run Copilot Chat workshops

  • Teach prompt engineering fundamentals

  • Identify which roles would actually benefit from full Copilot

  • Reduce the risk of failed rollouts later


What Changes When You Buy Microsoft 365 Copilot

Microsoft 365 Copilot is where AI stops being a chat tool and becomes a workflow tool.

With the paid licence, Copilot:

  • Works directly inside Word, Excel, PowerPoint, Outlook and Teams

  • Understands emails, meetings, chats, files and calendars

  • Uses Microsoft Graph to reason across your tenant

  • Can summarise meetings, draft replies, analyse spreadsheets and build decks

In short:
Copilot Chat helps you think.
Microsoft 365 Copilot helps you do.

But that power only delivers value if users already know how to work with AI.


Set Expectations First. Licence Later.

The smartest Copilot projects I’ve seen all follow the same path:

  1. Start with Copilot Chat

  2. Train people how to prompt and think with AI

  3. Identify high‑value roles and use cases

  4. Then—and only then—license Microsoft 365 Copilot

Copilot Chat isn’t a “cut‑down demo”.
It’s a training ground.

Use it properly, and when you do buy licences, Copilot won’t feel expensive—it’ll feel obvious.

And that’s how Copilot adoption should work.

AI Guilt Is the Wrong Question (But the Right Wake‑Up Call)

I watched a video this week that stuck with me far longer than it probably should have. It wasn’t flashy. It wasn’t hyped. It wasn’t trying to sell me the “AI will save us all” story.

Instead, it focused on something far more uncomfortable: the guilt felt by people who build AI systems that lead to job losses.

And honestly? That discomfort is exactly what we should be leaning into right now.

The AI conversation is broken because it’s usually framed at the extremes. Either AI is an unstoppable monster coming for everyone’s job, or it’s a magical productivity fairy that somehow improves everything without consequence. Both positions are lazy. Both avoid responsibility.

The truth — as usual — is messier.

AI Doesn’t Lay People Off. People Do.

Let’s get one thing clear early: AI does not make decisions. Humans do.

AI doesn’t walk into a boardroom and announce redundancies. AI doesn’t restructure teams. AI doesn’t decide that headcount is the fastest way to protect margins.

Executives do that.

Business owners do that.

Leaders do that.

Blaming “the technology” is a convenient way to outsource accountability. It allows people to say, “We had no choice”, when what they really mean is, “We chose efficiency over people, and we don’t want to own that.”

The guilt described in this video isn’t actually about AI. It’s about power without ownership.

Productivity Has Always Displaced Work

This part isn’t new. Automation has been displacing tasks — and entire roles — for centuries. Spreadsheets replaced ledger clerks. Email replaced postal rooms. Cloud computing replaced on‑prem everything teams.

What is new is the speed and scope.

AI doesn’t just replace manual labour. It replaces cognitive effort. Drafting, analysing, summarising, responding, triaging — the very tasks many knowledge workers believed were “safe”.

That’s confronting. It should be.

But pretending we can stop it is fantasy. The real question is: what do we do with the leverage it gives us?

MSPs Are at the Front Line of This Shift

For MSPs, this conversation isn’t theoretical. You’re already living it.

Every Copilot deployment, every automation script, every agent you roll out reduces friction — and often reduces billable effort. That’s not a bug. That’s the future.

The mistake is thinking the win is “doing the same work with fewer people”.

The real win is doing better work with the same people.

More proactive security.
More strategic advice.
More business insight.
More human judgment where it actually matters.

If your only AI strategy is cost‑cutting, then yes — guilt is probably appropriate.

The Ethical Line Is Leadership, Not Technology

The developers in this video are asking themselves the wrong question: “Should we build this?”

The better question is: “How will this be used?”

AI is a multiplier. It amplifies intent. Good leaders will use it to elevate teams. Bad leaders will use it to extract value and discard people.

The technology doesn’t decide which path you’re on. You do.

And for MSPs advising clients? This is where your role becomes critical. You’re no longer just implementing tools — you’re shaping outcomes. You’re influencing how businesses adopt AI, what they automate, and what they preserve.

That’s not a technical responsibility. It’s a moral one.

Feeling Uncomfortable Is a Sign You’re Paying Attention

If AI makes you uneasy, good. That means you’re thinking beyond features and licences.

Progress without reflection is how we end up with systems that optimise everything except humanity.

AI isn’t the enemy. But unexamined efficiency absolutely is.

So instead of asking whether AI will replace jobs, maybe we should be asking something harder:

What kind of organisations are we choosing to build with it?

Because that answer won’t be written by algorithms.
It’ll be written by leaders.

And MSPs will be right there with them, whether they like it or not.

Why the Essential Eight Falls Short for Microsoft 365 Copilot

The Essential Eight has done a lot of good.

It’s helped lift the baseline security posture of thousands of Australian organisations. It’s given boards something concrete to point at. And it’s given MSPs a common language to talk about “doing security properly”.

But here’s the uncomfortable truth:

The Essential Eight is not a good security framework for working with Microsoft 365 Copilot.

That doesn’t mean it’s useless.
It means it was never designed for this problem.

And pretending otherwise is where things start to break.

The Essential Eight Was Built for a Different Era

At its core, the Essential Eight is a host‑centric, exploit‑reduction framework.

Patch your systems.
Lock down macros.
Control admin privileges.
Stop ransomware from ruining your week.

That mindset made perfect sense when the primary risks were:

  • Malware executing on endpoints

  • Credential theft via phishing

  • Lateral movement across on‑prem networks

Copilot changes the threat model completely.

Copilot doesn’t break in.
It doesn’t escalate privileges.
It doesn’t drop malware.

It uses the access you’ve already given people—and amplifies it.

That’s a fundamentally different class of risk.

Copilot Turns “Access” Into the Attack Surface

The Essential Eight assumes that if a user can access something, the risk has already been accepted.

Copilot doesn’t.

Copilot takes that access and:

  • Aggregates it

  • Summarises it

  • Correlates it

  • Surfaces it in seconds

A user who technically had access to 10,000 SharePoint files—but never opened them—now has an AI assistant that can reason over all of them at once.

Nothing in the Essential Eight meaningfully addresses:

  • Overshared SharePoint sites

  • Inherited permissions chaos

  • “Everyone except external users” links

  • Legacy Teams and Groups no one remembers creating

From an Essential Eight perspective, everything is fine.

From a Copilot perspective, the tenant is a loaded weapon.
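The 10,000-file example is worth doing the arithmetic on. A toy sketch (all numbers invented for illustration, but the ratio is the point):

```python
# Toy model of the "dormant access" problem: the gap between what
# permissions allow and what a human ever actually looked at.
files_user_can_read = 10_000   # what the permission model grants
files_user_ever_opened = 40    # what the user historically touched

# Pre-AI, effective exposure was bounded by human behaviour.
human_exposure = files_user_ever_opened

# An assistant that reasons over everything the identity can read
# pushes effective exposure to the full permission set, instantly.
ai_exposure = files_user_can_read

print(f"Exposure multiplier: {ai_exposure / human_exposure:.0f}x")  # 250x
```

No control failed. No box went unticked. The multiplier comes entirely from access that was always there but never exercised.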

“We’re Essential Eight Compliant” Is a False Sense of Safety

This is where I see organisations get caught out.

They’ve ticked the boxes:

✅ MFA enforced
✅ Devices compliant
✅ Admin roles restricted
✅ Patching up to date

Then they turn on Copilot and assume security is handled.

It isn’t.

Because Essential Eight compliance tells you almost nothing about:

  • Who can see sensitive data

  • Whether data is correctly classified

  • Whether information barriers exist

  • Whether users understand the impact of AI on data exposure

Copilot doesn’t care that your macros are locked down.

It cares about data sprawl.

The Essential Eight Doesn’t Model “Inference Risk”

This is the biggest gap.

Copilot introduces inference risk—the ability to derive sensitive insights from non-sensitive data.

Individually harmless documents can become highly sensitive when combined:

  • A pricing doc

  • A staff list

  • A project timeline

  • A financial forecast

Copilot can stitch those together in ways humans rarely do.

The Essential Eight has no control for:

  • Semantic aggregation

  • Contextual inference

  • AI‑assisted discovery

You can be perfectly compliant and still expose far more than you realise.

Copilot Needs a Data‑Centric Security Model

If you’re serious about Copilot, your security thinking has to shift.

From:

“Can this device run malicious code?”

To:

“Should this person ever see this information—at scale?”

That means frameworks and controls that focus on:

  • Information architecture

  • Permission hygiene

  • Data classification and sensitivity labels

  • SharePoint and Teams governance

  • Ongoing access reviews

  • User behaviour and intent

None of which are meaningfully addressed by the Essential Eight.

This Doesn’t Mean You Throw the Essential Eight Away

Let’s be clear.

The Essential Eight is still a solid baseline.

You absolutely should be doing it.

But treating it as sufficient for Copilot is a mistake.

It’s like saying:

“We’ve installed seatbelts, so autonomous driving is safe.”

Different problem. Different risk profile.

The Right Question to Ask

Instead of asking:

“Are we Essential Eight compliant?”

Copilot forces a better question:

“What could Copilot expose tomorrow that we’d be uncomfortable explaining to the board?”

If you can’t answer that confidently, the framework you’re using is the wrong one for the job.

Copilot doesn’t reward checkbox security.

It rewards intentional design, clean data, and disciplined governance.

And that’s a conversation the Essential Eight simply wasn’t built to have.

This Is the Reality Now

Most people are still stuck at Level 1.

They’re arguing about which AI tool is “best”.
ChatGPT vs Copilot. Claude vs Gemini. Model versions. Token limits. Benchmarks.

It’s all noise.

Because the real advantage was never the tool.

It’s how you delegate.

We’ve seen this movie before. When cloud arrived, people obsessed over which hypervisor was better instead of rethinking infrastructure. When SaaS took off, they argued about features instead of outcomes. AI is no different. The ones arguing about tools are missing the shift entirely.

Chat gives you answers.
Automation gives you leverage.
Agents give you time back.

And time is the only asset that actually matters.

Chat Is the Training Wheels

Chat-based AI is incredible. Don’t get me wrong. It’s useful, powerful, and accessible. It helps you think, draft, brainstorm, research, and unblock yourself.

But chat is still you doing the work.

You ask.
You refine.
You copy.
You paste.
You decide.

That’s not leverage. That’s assistance.

Chat is the equivalent of having a smart junior sitting next to you, waiting for instructions. Helpful? Absolutely. Transformational? Only if you stop there.

Most people do.

They feel productive because they’re faster — but they’re still the bottleneck.

Automation Is Where Leverage Starts

Automation changes the equation.

When you automate, work happens without you being present. Decisions are made based on rules. Actions trigger automatically. Systems talk to systems.

This is where output starts to scale without effort scaling with it.

But automation still has limits. It’s rigid. It does exactly what you tell it to do — no more, no less. It’s fantastic for repeatable, predictable processes, but it struggles when judgement is required.

Which brings us to the real shift.

Agents Are the Force Multiplier

Agents are where things get uncomfortable — because they replace you in the loop.

Agents don’t just answer questions.
They monitor.
They decide.
They act.
They escalate only when needed.

That’s delegation at a level most people aren’t ready for.

Instead of asking AI to help you do the work, you assign the work and walk away. You define outcomes, guardrails, and exceptions — and the agent handles the rest.

This is the difference between working with AI and working through AI.

One saves time.
The other gives it back.
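That monitor, decide, act, escalate loop can be sketched in a few lines of Python. Everything here is hypothetical scaffolding (the functions are stand-ins, not a real agent framework), but the shape is the whole idea: you define the guardrail, the agent handles the routine, and only the exceptions reach you.

```python
# Minimal sketch of the monitor -> decide -> act -> escalate loop.

def monitor():
    """Fetch new work items. Stand-in for a real queue, mailbox, or API."""
    return [{"id": 1, "type": "routine"}, {"id": 2, "type": "ambiguous"}]

def decide(item):
    """Guardrail: handle only what falls inside the defined outcomes."""
    return "handle" if item["type"] == "routine" else "escalate"

def act(item):
    return f"resolved {item['id']}"

def escalate(item):
    return f"escalated {item['id']} to a human"

# One pass of the loop: routine work gets done without you,
# and only the exception surfaces for human judgement.
results = [act(i) if decide(i) == "handle" else escalate(i)
           for i in monitor()]
print(results)  # ['resolved 1', 'escalated 2 to a human']
```

The code is trivial on purpose. The hard part was never the loop; it's deciding which outcomes, guardrails, and exceptions you're willing to define and then actually walking away.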

Time Is the Only Asset That Matters

Money can be earned again.
Tools can be replaced.
Skills can be relearned.

Time is gone forever.

And yet most business owners, MSPs, and professionals are using AI to shave minutes instead of reclaim hours. They’re optimising tasks instead of eliminating them. They’re still “busy”, just faster at being busy.

The winners in this next phase aren’t going to be the people who know the most prompts.

They’ll be the people who know how to delegate to systems.

Who design workflows where AI works while they sleep.
Who build agents that handle the boring, repetitive, low‑value decisions.
Who spend their time on strategy, relationships, and leverage — not execution.

This Is the World We’re In Now

This isn’t future talk. It’s not hype. It’s not “someday”.

This is now.

AI isn’t just a tool you use anymore. It’s labour you can assign. And the moment you understand that, the question changes.

It’s no longer:
“Which AI should I use?”

It’s:
“What work should I never do again?”

The only real question left is whether you’re going to lean into that reality — or keep asking AI for answers while time keeps slipping through your fingers.

Because AI won’t run out of capacity.

You will.

Why Microsoft Copilot Wins: Because Copy‑Paste Isn’t a Workflow

There’s a lot of noise right now about AI tools.

Everyone has one. Everyone claims theirs is “the best”. And on the surface, they all seem to do the same thing: you type a prompt, it spits out words, code, or ideas.

But after working with AI daily — and helping MSPs and businesses actually use it — I’ve come to a very clear conclusion:

Microsoft Copilot isn’t better because it’s smarter.
It’s better because it’s integrated.

And that changes everything.

The Copy‑Paste Tax No One Talks About

Most AI tools live in a browser tab.

You ask a question.
You get an answer.
Then you copy it.
Then you paste it somewhere else.

Word. Excel. Outlook. Teams. PowerPoint. CRM. Ticketing system.

That constant switching feels minor… until you add it up.

It’s mental context‑switching.
It’s broken flow.
It’s extra clicks.
It’s friction.

Over a day, a week, a month — it’s a tax on productivity that nobody puts in a pricing comparison.

AI that forces you to copy and paste is still making you do the hard work.

Copilot Lives Where the Work Happens

Copilot doesn’t sit off to the side like a clever intern waiting for instructions.

It’s embedded directly into the tools people already use:

  • Writing inside Word
  • Analysing data inside Excel
  • Responding inside Outlook
  • Summarising conversations inside Teams
  • Building decks inside PowerPoint

That matters more than most people realise.

Because the real value of AI isn’t generating content.
It’s reducing friction in the flow of work.

With Copilot, you’re not moving information between systems.
You’re working on the thing, while the AI works with you.

Context Is the Secret Sauce

Here’s the uncomfortable truth about most AI tools:

They only know what you tell them.

Every prompt starts from scratch unless you manually paste in context. Emails. Documents. Spreadsheets. Notes. Meeting transcripts.

That’s not intelligence. That’s busywork.

Copilot, on the other hand, is grounded in your Microsoft 365 data — respecting permissions, security, and compliance — and understands:

  • The document you’re editing

  • The email thread you’re replying to

  • The meeting you just came out of

  • The spreadsheet you’re staring at

  • The chat you missed yesterday

You don’t have to re‑explain your world every time.

That’s the difference between an AI toy and an AI assistant built for work.

Real Productivity Is Invisible

The biggest productivity gains don’t look impressive in a demo.

They look like:

  • Finishing an email in 30 seconds instead of 5 minutes

  • Turning meeting notes into actions without rewriting them

  • Asking “what changed?” instead of rereading 20 messages

  • Starting a document without staring at a blank page

Copilot excels here because it removes micro‑tasks you shouldn’t be doing in the first place.

You’re not “using AI”.
You’re just getting work done faster.

Security and Compliance Aren’t Optional

This is where a lot of organisations quietly get nervous.

Browser‑based AI tools are often disconnected from your identity, your data controls, and your compliance posture. People paste sensitive information in because they’re trying to be efficient — and suddenly governance is gone.

Copilot inherits your existing Microsoft 365 security model:

  • Identity

  • Permissions

  • Data boundaries

  • Compliance controls

It only shows users what they already have access to.

That’s not just a technical detail.
For MSPs and regulated businesses, it’s the difference between “we can use this” and “we can’t touch this”.

The Best AI Is the One People Actually Use

Here’s the final point — and it’s the one that matters most.

If AI requires:

  • Training people on a new interface

  • Convincing them to change tools

  • Forcing them to remember “where the AI lives”

…adoption will stall.

Copilot shows up inside the tools people already know.

No change management theatre.
No new browser tabs.
No “remember to use the AI”.

It’s just… there.

And that’s why it wins.

Not because it’s flashy.
Not because it’s louder.
But because it understands a simple truth:

AI only delivers value when it disappears into the workflow.

And right now, Copilot does that better than anything else on the market.