Ransomware: A Bullet Whistled Past
... or how a simple Monday-morning routine caught a million-dollar ransomware attack
One of my clients had a serious close call. Their story is worth telling, as plenty of other IT leaders will probably recognise themselves in it.
Some context first. This is a story from a few years back, when I had just started working with the client, a medium-sized industrial company. My role was to coach the IT leadership team, co-build a concrete resilience plan, and help stand up an organisation capable of responding effectively to a cyber incident.
We were still in the discovery phase, and we'd just had a conversation about SOC onboarding and alert triage. The gist of it: while the SOC was still in its tuning phase, regularly skimming a handful of carefully chosen logs would give the security team a human baseline, a feel for what "normal" looked like. And, over time, it would help build the kind of muscle memory that notices when something smells off. They had just started doing this.
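To make that concrete: the skim itself stays human, but a small script can pre-chew the raw logs into something worth skimming. Here is a minimal sketch, assuming the firewalls ship key=value syslog lines (FortiOS-style) to a collector; the path and field names are illustrative, not the client's actual setup.

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical collector path: wherever your syslog server writes the
# firewall logs. Not the client's real setup.
LOG_DIR = Path("/var/log/firewalls")

# FortiOS-style logs are key=value pairs (values sometimes quoted).
KV = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_line(line: str) -> dict:
    return {k: v.strip('"') for k, v in KV.findall(line)}

def weekly_digest(path: Path) -> None:
    logins, vpn_sources = Counter(), Counter()
    for line in path.read_text(errors="replace").splitlines():
        fields = parse_line(line)
        # Field names are illustrative; adapt them to the real log schema.
        if "login" in fields.get("action", ""):
            logins[(fields.get("user"), fields.get("srcip"))] += 1
        if fields.get("subtype") == "vpn":
            vpn_sources[fields.get("srcip")] += 1
    print("Logins (user, source IP):")
    for key, n in logins.most_common():
        print(f"  {key}: {n}")
    print("Top VPN source IPs:")
    for ip, n in vpn_sources.most_common(20):
        print(f"  {ip}: {n}")

if __name__ == "__main__":
    for log_file in sorted(LOG_DIR.glob("*.log")):
        print(f"== {log_file.name} ==")
        weekly_digest(log_file)
```

The point is not automation; it's handing a human ten minutes of signal instead of an hour of noise, so the baseline-building actually happens.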
On Friday, October 14th, the internal threat watch we'd set up flagged a CERT-FR advisory about a vulnerability affecting Fortinet appliances (CERTFR-2022-ALE-011).
Since the client runs several publicly exposed Fortinet firewalls, the security team took the advisory seriously and escalated it to IT operations with a recommendation to patch quickly.
But it was Friday. And as the client himself put it: “By the time we’d worked out whether to patch over the weekend or not, it was already Monday!” There was clearly an organisational breakdown here. With hindsight, the obvious call was to patch over the weekend. And that was, in fact, something we'd already discussed together, walking through the data on the shortening timeframe between initial compromise and full exploitation (mercifully, this was in 2022, before AI-augmented attackers).
But habit and routine erode even the best processes and the best intentions. That particular Friday, whether down to inertia, light staffing, an overloaded backlog or just bad luck, the team decided there was little risk in waiting until Monday.
Fortunately, one other process did not get skipped: the weekly review of logs from the publicly exposed firewalls, which had become a Monday-morning fixture for the client. That's when the team spotted a number of connections that had occurred over the weekend, along with a series of low-priority alerts that, given where the SOC was in its tuning at the time, wouldn't have been picked up, let alone escalated, but were appearing in suspiciously high numbers. Highly unusual. A quick check with the rest of the IT team confirmed those connections were, indeed, illegitimate.
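As an aside, "suspiciously high numbers" is a cheap check to put in code: count this week's alerts per rule and compare against the previous weeks. A rough sketch, assuming a JSON-lines alert export with a `rule` field; the thresholds are illustrative, not tuned values.

```python
import json
import statistics
from collections import Counter

def load_counts(path: str) -> Counter:
    """Count alerts per rule from a JSON-lines export (format assumed)."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            counts[json.loads(line)["rule"]] += 1
    return counts

def flag_spikes(history: list, current: Counter, factor: float = 3.0) -> list:
    """Flag rules firing well above their historical weekly mean.

    `history` is one Counter per previous week. The factor and the
    minimum-count floor are illustrative thresholds, not tuned values.
    """
    flagged = []
    for rule, n in current.items():
        past = [week.get(rule, 0) for week in history]
        mean = statistics.mean(past) if past else 0.0
        if n > max(factor * mean, 10):
            flagged.append((rule, n, mean))
    return flagged

# Usage sketch (file names hypothetical):
# history = [load_counts(f"alerts-week{i}.jsonl") for i in range(1, 5)]
# current = load_counts("alerts-this-week.jsonl")
# for rule, n, mean in flag_spikes(history, current):
#     print(f"{rule}: {n} this week vs ~{mean:.0f}/week baseline")
```

On that Monday it was a human eyeball, not a script, that made the comparison; the sketch just shows how little machinery the check needs.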
From there, the organisation performed exactly as it should. The recently revised roles and responsibilities allowed the IT director, acting on the security team’s recommendation, to immediately cut all remote access (and loop in HR straight away, ready for the inevitable questions from employees whose remote access had just been pulled). A deep review of the Fortinet appliances and weekend connections could then begin.
The bottom line:
CVE-2022-40684 had indeed been exploited over the weekend. The attacker exported the firewall configurations and imported new ones, which let them create several user accounts on the VPN. (Exactly the kind of change a diff against a known-good config baseline catches; see the first sketch below.)
No further action was taken after creating the VPN accounts. This was almost certainly the work of an initial access broker, planning to resell the credentials to ransomware operators for later exploitation.
Although the CERT-FR advisory is dated October 14th, the Fortinet PSIRT alert (FG-IR-22-377) was published on October 10th. The team has now decided to monitor vendors’ PSIRT feeds directly for any publicly exposed equipment, in addition to the CERT-FR advisories. (A minimal feed-watcher along those lines is sketched below.)
And needless to say, the weight given to Friday-evening “urgent patch” recommendations got sharply revised upwards ;)
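On the config diff mentioned above: spotting injected accounts is mostly a matter of keeping a known-good copy of each appliance's configuration and diffing the running config against it. A minimal sketch, assuming a FortiOS-style text config where accounts appear as `edit "name"` lines; treat the pattern as illustrative rather than a parser.

```python
import difflib
import re
import sys

# In a FortiOS-style config, accounts appear as `edit "name"` lines inside
# `config user local` / `config system admin` blocks. A zero-context diff
# loses the block structure, so this sketch flags every added account-like
# line and leaves the judgement to a human.
EDIT_LINE = re.compile(r'^\s*edit "(?P<name>[^"]+)"')

def added_edit_lines(baseline_path: str, current_path: str) -> list:
    with open(baseline_path) as a, open(current_path) as b:
        diff = difflib.unified_diff(a.readlines(), b.readlines(), n=0)
    names = []
    for line in diff:
        if line.startswith("+") and not line.startswith("+++"):
            m = EDIT_LINE.match(line[1:])
            if m:
                names.append(m.group("name"))
    return names

if __name__ == "__main__":
    for name in added_edit_lines(sys.argv[1], sys.argv[2]):
        print(f"entry added since baseline: {name}")
```

Scoping the match to the user and admin sections would cut noise; the flat version above deliberately over-reports and lets a human decide.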
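The PSIRT watch itself doesn't need to be heavyweight either. A minimal sketch using the feedparser library; the feed URLs below are placeholders, so look up the real RSS/Atom addresses on each vendor's PSIRT page and on the CERT-FR site.

```python
import feedparser  # third-party: pip install feedparser

# Placeholder URLs: substitute the real RSS/Atom feed for each vendor PSIRT
# (Fortinet's included) and for CERT-FR, one feed per vendor whose
# equipment is exposed to the internet.
FEEDS = {
    "fortinet-psirt": "https://example.com/fortinet-psirt.xml",
    "cert-fr-alertes": "https://example.com/cert-fr-alertes.xml",
}

def new_entries(seen: set) -> list:
    """Return (source, link, title) for advisories not seen before."""
    found = []
    for source, url in FEEDS.items():
        for entry in feedparser.parse(url).entries:
            key = entry.get("id") or entry.get("link", "")
            if key and key not in seen:
                seen.add(key)
                found.append((source, entry.get("link", ""), entry.get("title", "")))
    return found

# Run this from cron every hour or so, persist `seen` between runs (a flat
# file is plenty), and route anything new to the security team's triage
# channel rather than to a mailbox nobody reads on Fridays.
```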
Now, I know perfectly well that some people out there (especially those with something to sell ;)) will rush to declare it unacceptable not to patch a critical exposed CVE, even on a Friday night. But that’s precisely the point of this post: there’s theory, there are good intentions, and then there’s practice. And it’s precisely because we know slip-ups like this will happen that we adopt defence in depth. We MUST design for human nature. Cyber resilience maturity starts with accepting that you're not going to rewire human nature on a Friday afternoon (or ever ;)), that you're not going to make it uniform across every team and every employee, and that pretending otherwise is how you end up with a "plan" that only works when nothing goes wrong.
In this specific case, there are three reasons to be glad:
An IT team that is trained and switched on, one that genuinely understands attacker TTPs and has taken the time to map them onto its own technical and architectural reality, will be able to react immediately when it spots signals that point unmistakably to an initial-access compromise. That's because the team knows exactly which stage of the operation it’s looking at, and what the attacker’s options are at that point (and honestly, getting started on this typically takes about four 2-hour workshops; it's not the wall it's often made out to be).
Even an organisation that doesn’t yet have a SOC (and there are more of these than people tend to assume) can organise itself internally to run threat monitoring (imperfectly, as in this case) and perform manual log review. And as we have seen, that can be effective.
The presence of a second authentication factor on most VPN user accounts gave the IT team some breathing room. They could properly assess the urgency on Monday. And, conveniently, they walked out of the incident with a textbook argument for forcing MFA onto the last few VIP accounts that had been trying to dodge it.
So the story ends well for this client, but it could easily have turned out far worse without the right response on Monday. The moral: better to prepare with the inevitable imperfection in mind.
That said, designing for human nature cuts both ways. Yes, it means hedging against Friday-afternoon slip-ups. But it also means capitalising on what humans do better than any machine: noticing that something is off, often without being able to say why. LLMs will outperform any analyst at sifting through data at scale. No LLM will replace the gut feeling that made someone look twice at those low-priority alerts on a Monday morning. Mature defence in depth designs around human failure modes and for human instincts.