I watched a video this week that stuck with me far longer than it probably should have. It wasn’t flashy. It wasn’t hyped. It wasn’t trying to sell me the “AI will save us all” story.
Instead, it focused on something far more uncomfortable: the guilt felt by people who build AI systems that lead to job losses.
And honestly? That discomfort is exactly what we should be leaning into right now.
The AI conversation is broken because it’s usually framed at the extremes. Either AI is an unstoppable monster coming for everyone’s job, or it’s a magical productivity fairy that somehow improves everything without consequence. Both positions are lazy. Both avoid responsibility.
The truth — as usual — is messier.
AI Doesn’t Lay People Off. People Do.
Let’s get one thing clear early: AI does not make decisions. Humans do.
AI doesn’t walk into a boardroom and announce redundancies. AI doesn’t restructure teams. AI doesn’t decide that headcount is the fastest way to protect margins.
Executives do that.
Business owners do that.
Leaders do that.
Blaming “the technology” is a convenient way to outsource accountability. It allows people to say, “We had no choice”, when what they really mean is, “We chose efficiency over people, and we don’t want to own that.”
The guilt described in this video isn’t actually about AI. It’s about power without ownership.
Productivity Has Always Displaced Work
This part isn’t new. Automation has been displacing tasks — and entire roles — for centuries. Spreadsheets replaced ledger clerks. Email replaced postal rooms. Cloud computing replaced entire on‑premises infrastructure teams.
What is new is the speed and scope.
AI doesn’t just replace manual labour. It replaces cognitive effort: drafting, analysing, summarising, responding, triaging — the very tasks many knowledge workers believed were “safe”.
That’s confronting. It should be.
But pretending we can stop it is fantasy. The real question is: what do we do with the leverage it gives us?
MSPs Are at the Front Line of This Shift
For MSPs, this conversation isn’t theoretical. You’re already living it.
Every Copilot deployment, every automation script, every agent you roll out reduces friction — and often reduces billable effort. That’s not a bug. That’s the future.
The mistake is thinking the win is “doing the same work with fewer people”.
The real win is doing better work with the same people.
More proactive security.
More strategic advice.
More business insight.
More human judgment where it actually matters.
If your only AI strategy is cost‑cutting, then yes — guilt is probably appropriate.
The Ethical Line Is Leadership, Not Technology
The developers in this video are asking themselves the wrong question: “Should we build this?”
The better question is: “How will this be used?”
AI is a multiplier. It amplifies intent. Good leaders will use it to elevate teams. Bad leaders will use it to extract value and discard people.
The technology doesn’t decide which path you’re on. You do.
And for MSPs advising clients? This is where your role becomes critical. You’re no longer just implementing tools — you’re shaping outcomes. You’re influencing how businesses adopt AI, what they automate, and what they preserve.
That’s not a technical responsibility. It’s a moral one.
Feeling Uncomfortable Is a Sign You’re Paying Attention
If AI makes you uneasy, good. That means you’re thinking beyond features and licences.
Progress without reflection is how we end up with systems that optimise everything except humanity.
AI isn’t the enemy. But unexamined efficiency absolutely is.
So instead of asking whether AI will replace jobs, maybe we should be asking something harder:
What kind of organisations are we choosing to build with it?
Because that answer won’t be written by algorithms.
It’ll be written by leaders.
And MSPs will be right there with them, whether they like it or not.