The Bigger They Are, the Harder You Fall
A Cautionary Tale from the Frontlines of MSP Failure
The last couple of weeks have been equal parts illuminating and infuriating. I’ve been working with a business that learned the hard way what happens when you trust a big-name Managed Service Provider (MSP) that talks a big game but crumbles under real pressure. And here’s the real kicker—this wasn’t a new relationship. This MSP had been in play for over five years. They’d designed the environment. They’d installed the infrastructure. They had full visibility of what was running and what was well past its sell-by date.
And despite all that insider knowledge, they still managed to fuck it up catastrophically.
Let’s start with the basics. The MSP replaced the client's firewalls about a month before the breach. Seemed routine—the hardware was approaching end of life in March 2025.
Sensible enough.
Except someone forgot to actually configure them properly.
The result? A wide-open door and an attacker who didn’t even need to try hard. What followed was a masterclass in how not to handle a cybersecurity incident.
The breach itself was bad. But what came after—that was the real horror show.
A Failure of Integrity
From the first moment, it was clear the MSP wasn’t being transparent. On Day 1, they were evasive. I was called in on Day 2 as the Incident Manager. I dropped everything and was onsite within 90 minutes. I was on the phone with the MSP within the hour, trying to get clear answers and coordinate a proper response.
Accountability is everything in this line of work—when an MSP makes a mistake, they need to own it from the very beginning.
One of the first things I asked? “Can I have ALL the logs, please?” And: “Can we sit down face-to-face with someone in charge tomorrow morning?”
Their response? “I’m not sure that’s possible.”
Not possible? You’re the primary IT provider, and there’s an active breach. That’s not just weak—it’s disgraceful.
Weeks later, they still haven’t looked me or the client in the eye. No onsite meeting. No accountability. And this, despite their head office being less than two hours away. Instead, we were stuck in an endless loop of Teams calls—just me, the client, and a rotating cast of their team members.
At one point, I counted nine of them. Nine people to say absolutely nothing of value. It was like watching a live-action remake of Whose Responsibility Is It Anyway?—except no one was funny, and everyone looked like they’d rather be anywhere else.
The Logs That Weren’t
Meanwhile, I kept pushing for the firewall logs.
Day after day—ignored, delayed, deflected.
When we finally got something? Oh, joy. What a treat. Completely fucking useless. The one time window that mattered—the period during which the breach occurred—was missing entirely. Vanished. Gone.
Honestly, if you're going to make us wait six days, at least pretend to try. What we got looked like someone printed the logs, spilled coffee on them, shredded them out of spite, then handed us the scraps with a straight face. The only thing missing was a Post-it note saying, “Oops.”
There was no explanation. No apology. Not even a half-arsed shrug. Just a big, gaping, data-shaped hole where actual evidence should have been.
And when we asked to deploy proper audit tools? Same story—delayed and resisted until Day 6. When those tools did run, they painted a picture of an environment teetering on the edge of collapse.
Rotten to the Core
We’re talking about:
400+ patchable vulnerabilities—many of them critical
Multiple servers running Windows Server 2016, which is barely clinging to life in extended support
Hypervisors still running VMware 6.7, which went end of life in 2023
When challenged about the outdated tech stack, the MSP started flailing. Evasive muttering. Vague promises. And the classic “we’ll need to quote for that” routine—as if fixing their own screw-up was a billable extra. Sorry lads, but patching a fire you started isn’t upsell territory. That’s just your job.
Then came Day 9.
We asked why one of the Server 2016 machines hadn’t been rebooted after an update.
Their excuse?
“It’s 2016—it takes ages to boot. Would’ve been outside the maintenance window.”
Right. Sure. Except… it had been up for 23 days. No reboot. No maintenance. Just more bullshit.
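For what it's worth, you don't need the MSP's cooperation to check a claim like that. Below is a rough sketch of the kind of uptime check anyone can run on the box itself, written in Python with the psutil library purely for illustration (my tooling choice, not theirs; a PowerShell one-liner or the uptime counter in Task Manager will tell you the same thing):

    # Rough sketch: how long has this server actually been up?
    # Assumes Python and the psutil package are available on the box
    # (pip install psutil), which may not be true in your environment.
    import datetime
    import psutil

    boot = datetime.datetime.fromtimestamp(psutil.boot_time())
    uptime = datetime.datetime.now() - boot

    print(f"Last boot: {boot:%Y-%m-%d %H:%M}")
    print(f"Uptime:    {uptime.days} days, {uptime.seconds // 3600} hours")

    # If this says 23 days and your provider swears the box was rebooted
    # inside last week's maintenance window, you have your answer.

That's the whole check. It takes longer to sit through the excuse than to disprove it.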
The Email That Said the Quiet Part Out Loud
And here’s the real smoking gun.
The client had an internal email from the MSP’s own ticket system on Day 2—before I was even engaged as Incident Manager.
While the MSP was sitting on Teams call after Teams call with us, dodging question after question—"Do you know what happened?"—they already had the answer. In writing.
The firewall misconfiguration had caused the breach, and they knew it.
Worse? The email explicitly said: “Do not tell the customer.”
Let that sink in.
They admitted fault internally. Then they deliberately tried to hide it.
On Day 7, the client read that email aloud in a meeting. The silence that followed said more than any apology ever could.
Picking Up the Pieces
Since then, we’ve been doing what the bigger MSP should have done from the start:
Building a real, working asset register
Enforcing proper patch management
Installing monitoring and alerting tools that actually alert
Giving the client full visibility and control over their environment
Most importantly—we’re making sure they own their own logs and documentation. No more gatekeeping. No more “we’ll get back to you.” No more black box nonsense.
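To make "a real, working asset register" concrete, here's a rough sketch of the bare minimum it needs to do: one record per asset, an end-of-support date, and something that complains loudly when that date is close or already gone. The Python below is illustrative rather than a product recommendation, the hostnames are invented, and you should check the support dates against the vendors' own lifecycle pages rather than taking my word for them.

    # Minimal asset register sketch: one record per asset, plus a check that
    # flags anything out of support or approaching it. Hostnames are made up;
    # verify end-of-support dates against the vendor lifecycle pages.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Asset:
        hostname: str
        role: str
        platform: str
        end_of_support: date

    REGISTER = [
        Asset("hv-01", "hypervisor", "VMware ESXi 6.7", date(2022, 10, 15)),
        Asset("app-01", "application server", "Windows Server 2016", date(2027, 1, 12)),
        Asset("fw-old-01", "firewall (since replaced)", "EOL firewall hardware", date(2025, 3, 31)),
    ]

    def review(register, horizon_days=180):
        today = date.today()
        for asset in register:
            remaining = (asset.end_of_support - today).days
            if remaining < 0:
                print(f"[OUT OF SUPPORT] {asset.hostname}: {asset.platform}")
            elif remaining < horizon_days:
                print(f"[EXPIRING SOON]  {asset.hostname}: {asset.platform} "
                      f"({remaining} days left)")

    review(REGISTER)

That's it. A spreadsheet does the same job. The point isn't the tooling; it's that someone, anyone, is actually looking.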
At time of writing, the recovery is ongoing. Because of the lack of logs—and the deliberate obstacles thrown in the way—we still can’t confirm the integrity of the environment.
Which means the only safe path forward is a six-figure ground-up rebuild.
That’s the price of trusting the wrong partner.
Why This Should Never Have Happened
Here’s the real tragedy: if the client had even attempted Cyber Essentials certification, most of this mess would’ve come to light long before the breach.
We’re not even talking Cyber Essentials Plus—just the standard baseline would’ve flagged:
Unsupported operating systems
Out-of-date firewall firmware
Absence of basic patch management
No monitoring
No logs
No asset register
Cyber Essentials isn’t a silver bullet, but it is a brutally effective spotlight. One that would've lit up the mess this MSP left festering under the surface.
Instead, the client got stitched up by a provider who treated transparency like an optional extra—and nearly lost everything because of it.
Wrapping Up
So, ask yourself:
Can you access your own firewall logs?
Do you know what’s been patched—and what hasn’t?
Do you have a written asset register?
Can someone show up today if everything breaks?
If your MSP made a mistake, would they own it—or bury it?
Because when it hit the fan for this client, their provider chose cowardice over integrity. And once that trust is gone, no SLA or support contract can bring it back.
Honesty. Ownership. Doing the fucking job.
If your MSP can’t manage that?
Show them the door.