One of the biggest misunderstandings I still see in the market is the idea that large language models are “just software”. That they’re something you build, configure, and control in the same way you do an application, a script, or even a PowerShell module.
They’re not.
LLMs are not coded in the traditional sense. They are grown.
And once you understand that distinction, a lot of confusion around AI, risk, accuracy, and expectations suddenly makes sense.
Code Is Deterministic. LLMs Are Probabilistic.
Traditional software works because we tell it exactly what to do.
If this happens, do that.
If the value equals X, return Y.
If the script runs twice with the same inputs, you expect the same outputs.
LLMs don’t work like that.
They are trained on vast amounts of data and learn patterns, relationships, and probabilities. When you prompt an LLM, it isn’t “executing logic”. It is calculating the most likely next token based on everything it has seen before.
That’s not coding.
That’s cultivation.
Think of an LLM less like a calculator and more like a very well‑read human who answers based on experience, context, and probability. Sometimes they’re brilliant. Sometimes they’re confidently wrong. And sometimes they surprise you with insights you didn’t expect.
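The contrast fits in a few lines of Python. This is a toy sketch, not how a real model works internally: the "vocabulary" and the probabilities are invented purely for illustration, but the shape of the behaviour – a fixed lookup versus a weighted draw – is the point.

```python
import random

# Deterministic code: same input, same output, every run.
def lookup(value):
    return {"X": "Y"}.get(value, "unknown")

# A toy next-token step: the model scores candidate tokens and samples
# one. Same prompt, possibly different output. The candidates and the
# weights here are invented for illustration only.
def next_token(rng):
    candidates = ["blue", "grey", "overcast"]
    weights = [0.6, 0.3, 0.1]  # "most likely", not "only possible"
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random()
print(lookup("X"))      # always "Y"
print(next_token(rng))  # usually "blue", but not guaranteed
```

Run the second function a hundred times and you get a distribution, not an answer. That is the gap between calling a function and prompting a model.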
You Don’t Compile an LLM – You Train It
When we write traditional software, we compile or run it, and when there’s a bug, we fix the offending line and run it again.
With LLMs, you don’t fix bugs in the same way.
You:
- Change the training data
- Adjust the fine‑tuning
- Improve the prompt context
- Add guardrails
- Supplement with retrieval (RAG)
- Wrap it in agents, workflows, and policy
That’s why LLMs improve over time in jumps, not increments. A new model release isn’t a Patch Tuesday update – it’s a new organism that has grown up on a bigger, cleaner, more structured diet.
This is also why the same prompt can give you slightly different answers on different days or across different models. You’re not calling a function. You’re having a conversation with a statistical engine.
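Several of those levers can be sketched in one minimal flow: retrieval (RAG) plus a guardrail baked into the prompt. `search_documents` and `call_model` below are hypothetical stand-ins for a search index and a model API, stubbed out so the sketch runs on its own.

```python
# A minimal retrieval-augmented (RAG) sketch. Both helpers are
# illustrative stubs, not real APIs.
def search_documents(question):
    # In practice: a vector or keyword search over your own content.
    return ["Policy: refunds are processed within 14 days."]

def call_model(prompt):
    # In practice: an API call to the LLM. Stubbed here.
    return f"(model answer grounded in: {prompt[:40]}...)"

def answer(question):
    context = "\n".join(search_documents(question))
    prompt = (
        "Answer using ONLY the context below. "
        "If the context doesn't cover it, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_model(prompt)

print(answer("How long do refunds take?"))
```

Notice that the model itself is untouched. What changed is what it was fed and what it was allowed to say – the diet, not the organism.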
Why This Matters for Business (and MSPs)
If you think LLMs are coded, you’ll expect certainty.
If you understand they’re grown, you’ll design for outcomes instead.
That means:
- You validate outputs instead of blindly trusting them
- You treat AI as an assistant, not an authority
- You design processes that assume probabilistic answers
- You put humans in the loop where it matters
- You focus on reducing risk, not eliminating it (because you can’t)
This is exactly why raw “public AI” is dangerous in business contexts, and why platforms like Microsoft 365 Copilot matter. Copilot doesn’t magically make the LLM smarter – it feeds it better data, constrains its environment, applies identity, compliance, and security controls, and grounds responses in your organisation’s reality.
The model hasn’t changed. The nutrition has.
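The "validate outputs, humans in the loop" pattern above can be sketched as a simple gate: every answer is checked before it is acted on, and anything that fails goes to a person. The specific checks and names are illustrative assumptions, not a prescribed design.

```python
# A sketch of designing for probabilistic answers: model output passes
# a validation gate before anyone acts on it. Checks are illustrative.
def validate(answer, allowed_sources):
    checks = {
        "cites_a_source": any(s in answer for s in allowed_sources),
        "not_empty": bool(answer.strip()),
    }
    return all(checks.values()), checks

def handle(answer, allowed_sources):
    ok, checks = validate(answer, allowed_sources)
    if ok:
        return "auto-approved"
    # Humans in the loop where it matters.
    return "escalate to a human"

print(handle("Per the HR handbook, leave is 25 days.", ["HR handbook"]))
print(handle("Probably about 30 days?", ["HR handbook"]))
```

The point isn’t the checks themselves – it’s that the process assumes the answer might be wrong, and routes accordingly.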
Prompts Are Fertiliser, Not Commands
Another symptom of the “coded mindset” is prompt obsession.
People ask for the perfect prompt as if it’s a magic incantation.
Prompts don’t control LLMs.
They nudge them.
A good prompt gives context, tone, constraints, and examples. A bad prompt starves the model and then complains about the output.
Again, this makes sense if you think in biological terms. You don’t shout instructions at a plant and expect it to grow differently overnight. You change the environment, the inputs, and the expectations.
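As a concrete sketch, here is the difference between a starved prompt and a fed one – context, tone, constraints, and an example, versus a bare instruction. All wording is invented for illustration.

```python
# A bare instruction gives the model nothing to grow from.
bad_prompt = "Write the email."

# Context, tone, constraints, and an example change the environment.
good_prompt = "\n".join([
    "Context: you are writing to a non-technical MSP customer.",
    "Tone: plain English, friendly, no jargon.",
    "Constraints: under 120 words, one clear call to action.",
    'Example of the style we want: "Hi Sam, quick update on your migration..."',
    "Task: draft an email explaining tomorrow's maintenance window.",
])

print(good_prompt)
```

Neither prompt "controls" the model. The second one just tends the soil better.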
Why AI Feels Uncomfortable to Traditional IT People
For those of us who grew up with servers, scripts, and systems that either worked or didn’t, LLMs are uncomfortable.
They live in the grey.
They’re not always right.
They’re not always wrong.
They’re useful far more often than they’re perfect.
And that’s the mental shift required.
The organisations that win with AI won’t be the ones who treat it like another application to deploy. They’ll be the ones who treat it like a junior staff member that:
- Needs good information
- Needs supervision
- Improves with feedback
- Gets more useful the more you work with it
The Bottom Line
LLMs aren’t coded.
They’re grown.
If you try to manage them like software, you’ll be frustrated. If you treat them like a system that learns, adapts, and responds to its environment, you’ll unlock real value.
This is why AI strategy isn’t about models. It’s about data, context, governance, and outcomes.
And it’s why the real competitive advantage won’t come from “which AI you use”, but from how well you grow it inside your business.
If you’re still treating AI like a tool, you’re already behind.
If you’re treating it like a capability, you’re finally asking the right questions.