The Real Challenge with AI Isn’t Accuracy — It’s That It’s Probabilistic, Not Deterministic


One of the hardest mindset shifts in the age of AI isn’t learning how to use the tools.

It’s unlearning how we expect technology to behave.

For decades, IT has trained us to think in deterministic terms. Same input, same output. Every time. If it doesn’t work that way, it’s broken and we fix it.

AI doesn’t work like that. And pretending it does is where most of the frustration, fear, and failed deployments come from.

We Built Our Businesses on Determinism

Traditional IT systems are deterministic by design. Firewalls either block traffic or they don’t. Conditional Access policies either allow sign-in or they don’t. Accounting software produces the same report today as it did yesterday, assuming the data hasn’t changed.

That determinism is comforting. It’s auditable. It’s predictable. It’s what allows MSPs to scale, standardise, document, and support environments consistently.

AI blows a hole straight through that expectation.

Large language models don’t know things in the way traditional systems do. They predict. They generate the most statistically likely next word based on context, patterns, and probability. That means two identical prompts can produce slightly different outputs — both valid, both reasonable, neither “wrong”.

For IT people, that feels deeply uncomfortable.
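The prediction step above can be sketched as a toy sampler. This is an illustration, not a real model: the probability table is invented, and a real LLM computes such a distribution over its whole vocabulary at every step. The point is that sampling from a distribution legitimately produces different outputs for the same input.

```python
import random

# Toy next-token sampler: a stand-in for what an LLM does at each step.
# The probability table below is made up for illustration.
next_word_probs = {
    "reset": 0.45,
    "restart": 0.35,
    "reboot": 0.20,
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word; higher temperature flattens the distribution."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The same "prompt" can yield different words on different runs.
runs = {sample_next_word(next_word_probs) for _ in range(50)}
print(runs)  # more than one outcome is expected -- that's not a bug
```

Run it twice and you'll likely see the set of outcomes vary. None of the answers is "wrong"; each was simply a plausible draw.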

“Why Did It Give Me a Different Answer?”

This is the number one complaint I hear from business owners and technicians alike.

“I asked Copilot yesterday and it gave me a better answer.”

“It worked last time — why is this one different?”

“How can I trust something that changes its mind?”

Here’s the blunt truth: AI isn’t changing its mind. It never had one.

It’s doing exactly what it was designed to do — generate a probabilistic response, not execute a fixed rule.

If you approach AI expecting it to behave like a script, a policy, or a PowerShell command, you will be disappointed every single time.

Probabilistic Systems Are Not Broken — They’re Different

Probabilistic systems excel in areas deterministic systems are terrible at:

  • Interpreting vague human language

  • Summarising messy, unstructured data

  • Generating ideas, drafts, options, and variations

  • Adapting to context rather than rigid rules

But, on their own, they are fundamentally unsuitable for tasks that require absolute consistency, precision, or compliance.

This is where many AI projects go off the rails. Organisations try to replace deterministic processes with probabilistic tools instead of augmenting them.

AI shouldn’t decide whether a user gets admin rights. AI shouldn’t be the sole source of truth for compliance decisions. AI shouldn’t replace controls that require repeatability and audit trails.

That’s not a failure of AI — it’s a failure of design.

The MSP Problem: Clients Expect Certainty

As MSPs, we’re in a tough spot.

Our clients expect answers, not probabilities. They want confidence, not “it depends”. They want systems that behave the same way every day.

When we introduce AI into that environment without resetting expectations, we inherit the blame for its uncertainty.

This is why AI needs guardrails:

  • Defined use cases

  • Clear boundaries

  • Human-in-the-loop review

  • Deterministic systems underneath probabilistic ones

AI is brilliant at drafting the email. It’s terrible at deciding whether it should be sent.
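That division of labour can be sketched in a few lines. Everything here is hypothetical — `draft_reply` stands in for any AI call, and the policy terms and approved domains are invented examples — but the shape is the point: the probabilistic part drafts, and a deterministic rule set decides whether anything leaves the building.

```python
# Hypothetical guardrail sketch: a deterministic gate around a probabilistic draft.
BLOCKED_TERMS = {"password", "credit card"}        # invented example policy
APPROVED_RECIPIENT_DOMAINS = {"example.com"}       # invented example allow-list

def draft_reply(ticket_text: str) -> str:
    """Placeholder for a call to an LLM; returns a canned draft here."""
    return f"Thanks for your message. Regarding: {ticket_text[:40]}"

def may_send(draft: str, recipient: str) -> bool:
    """Deterministic rules decide whether the draft is allowed out."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain not in APPROVED_RECIPIENT_DOMAINS:
        return False
    return not any(term in draft.lower() for term in BLOCKED_TERMS)

draft = draft_reply("VPN keeps disconnecting")
print(may_send(draft, "user@example.com"))  # same inputs, same answer, every time
```

The gate is auditable and repeatable; the draft is neither, and doesn't need to be.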

Prompting Is an Attempt to Add Determinism

A lot of what we call “prompt engineering” is really just us trying to force probabilistic systems to behave more deterministically.

We add structure. We add constraints. We add role instructions. We add examples.

And it works — to a point.

But it never becomes fully deterministic, and that’s the trap. The moment you treat AI output as authoritative instead of assistive, you create risk.
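Much of that structure can live in plain code rather than in the model. A minimal sketch, assuming a hypothetical `build_prompt` helper (the names and wording are illustrative, not any vendor's API): the role, rules, and examples are assembled deterministically, and only the final string is handed to the probabilistic model.

```python
# Sketch of "prompt engineering as constraint": the template is deterministic,
# even though the model consuming it is not. All names here are illustrative.
def build_prompt(role, rules, examples, task):
    parts = [f"You are {role}."]
    parts += [f"Rule: {r}" for r in rules]
    for question, answer in examples:
        parts.append(f"Example input: {question}\nExample output: {answer}")
    parts.append(f"Task: {task}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a service-desk assistant",
    rules=["Answer in one sentence.", "Never include credentials."],
    examples=[("User cannot print", "Check the spooler service first.")],
    task="User cannot access shared drive",
)
print(prompt)
```

The template narrows the space of likely outputs; it does not collapse it to one.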

The Opportunity Is in Hybrid Thinking

The organisations that will win with AI aren’t the ones chasing perfect answers.

They’re the ones designing hybrid systems:

  • Deterministic workflows for control and compliance

  • Probabilistic AI for insight, acceleration, and creativity

AI doesn’t replace judgment — it amplifies it. It doesn’t remove responsibility — it redistributes it. And it absolutely doesn’t eliminate the need for human oversight.

The Mindset Shift That Matters

The real challenge with AI isn’t hallucinations. It isn’t accuracy. It isn’t even security.

It’s accepting that we’ve invited a non-deterministic system into a world built on certainty.

Once you stop trying to make AI behave like traditional software, and start designing around what it actually is, everything gets easier.

And far more powerful.
