That article is SO wrong. You don't run one instance of a tier1 application. And they are on separate DCs, on separate networks, and the firewall rules allow only for application traffic. Management (rdp/ssh) is from another network, through bastion servers. At the very least you have daily/monthly/yearly (yes, yearly) backups. And you take snapshots before patching/app upgrades. Or you even move to containers, with bare hypervisors deployed in minutes via netinstall, configured via ansible. You got infected? Too bad, reinstall and redeploy. There will be downtime but not horrible. The DBs/storage are another matter of course, but that's why you have synchronous and asynchronous replicas, read only replicas, offsites, etc. But for the love of what you have dear, don't run stuff on bare metal because "what if the hypervisor gets infected". Consider the attack vector and work around that.
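The "reinstall and redeploy" flow described above can be sketched as a minimal Ansible play. All hostnames, group names, and role names here are hypothetical; this is just the shape of the idea, not anyone's actual setup:

```yaml
# Hypothetical playbook: after netinstalling fresh hypervisors,
# bring them back to a known-good state and redeploy workloads.
- hosts: hypervisors          # assumed inventory group
  become: true
  roles:
    - base_hardening          # hypothetical role: firewall rules, SSH via bastion only
    - container_runtime       # hypothetical role: install and configure the runtime
    - deploy_workloads        # hypothetical role: pull images, start containers
```

The point is that when the whole host configuration lives in roles like these, "you got infected, reinstall and redeploy" is a realistic recovery plan rather than a heroic one.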
You can prevent downtime by mirroring your container repository and keeping a cold stack in a different cloud service. We wrote up an LOE (level of effort) and decided the extra maintenance wasn't worth it just to plan for provider failures. But then providers only sign contracts if you're in their cloud, so you end up doing it anyway.
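Mirroring a container repository like this is often just a loop over `skopeo copy`. A minimal sketch in Python, where the image list and both registry hostnames are made-up placeholders and `skopeo` is assumed to be installed:

```python
import subprocess

# Hypothetical image list and registries.
IMAGES = ["app/web:1.4.2", "app/worker:1.4.2"]
PRIMARY = "registry.example.com"   # assumed primary registry
MIRROR = "mirror.example.net"      # assumed cold-standby registry

def copy_command(image, src, dst):
    """Build the skopeo command that copies one image between registries."""
    return ["skopeo", "copy",
            f"docker://{src}/{image}",
            f"docker://{dst}/{image}"]

def mirror_images(images, src, dst, run=subprocess.run):
    """Copy every image from the primary registry to the mirror."""
    for image in images:
        run(copy_command(image, src, dst), check=True)
```

Run on a schedule, this keeps the mirror warm enough that the cold stack can pull images even when the primary provider is down.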
Unfortunately most victims aren't using best practices let alone industry standards. The author definitely learned the wrong lesson though.
Good comments.
Do you think there's still a lot of traditional or legacy thinking in IT departments?
Containers aren't new, neither is the idea of infrastructure as code, but the ability to redeploy a major application stack or even significant chunks of the enterprise with automation and the restoration of data is newer.
There is so much old and creaky stuff lying around, and people have no idea what it does. Beige boxes in a cabinet where, when we had to decommission them, the only way to find out what they did was the scream test: turn it off and see who screams!
Or even stuff that was deployed as IaC by an engineer who then left, after which it was managed via "ClickOps" and the documentation was never updated.
When people talk about the Tier1 systems they often forget the peripheral stuff required to make them work. Sure the super mega shiny ERP system is clustered, with FT and DR, backups off site etc. But it talks to the rest of the world through an internal smtp server running on a Linux box under the stairs connected to a single consumer grade switch (I've seen this. Dust bunnies were almost sentient lol).
Everyone wants the new shiny stuff but nobody wants to take care of the old stuff.
Or they say "oh we need a new VM quickly, we'll install the old way and then migrate to a container in the cloud". And guess what, it never happens.
If the hypervisor or any of its components are exposed to the Internet
Lemme stop you right there, wtf are you doing exposing that to the internet...
(This is directed at the article writer, not OP)
Lol, even in 2024 with free VPN/overlay solutions...they just won't stop public Internet exposure of control plane things...
True horrors
Like, that's what vpns and jump boxes are for at the very least.
Wanna bet they expose SSH on port 22 to the internet on their "critical" servers? 🤣
I've been tempted to set up a honeypot like this lol
You'll definitely get lots of login attempts. I used to have SSH exposed on port 22 and saw hundreds of attempts per day.
Would be interesting to see what post login behavior was.
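Measuring the volume of those attempts is straightforward. A minimal Python sketch, assuming standard OpenSSH `auth.log` lines (the sample lines below are fabricated, with documentation-reserved IPs):

```python
import re
from collections import Counter

# Matches the source IP in standard OpenSSH "Failed password" log lines.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def count_failed_logins(log_lines):
    """Count failed SSH login attempts per source IP."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# Fabricated example log lines:
sample = [
    "Jan 10 03:12:01 host sshd[812]: Failed password for root from 203.0.113.5 port 53410 ssh2",
    "Jan 10 03:12:07 host sshd[812]: Failed password for invalid user admin from 203.0.113.5 port 53412 ssh2",
    "Jan 10 03:12:10 host sshd[812]: Accepted publickey for deploy from 198.51.100.7 port 50122 ssh2",
]
print(count_failed_logins(sample))  # Counter({'203.0.113.5': 2})
```

On a box that's been exposed for a while, the top of that counter tends to be a who's-who of botnet ranges.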
Well. Misconfiguration happens, and sadly, quite often.
Sure, but the author makes it sound like that's their standard way of doing things, which is insane.
And if you do have a misconfiguration, the rational thing is to fix that, not dump the entire platform.
Most organizations will avoid patching due to the downtime alone, instead using other mitigations to avoid exploitation.
If you can't patch because of downtime, maybe you are cheaping out too much on redundancy?
Yeah, that's pretty risky for this point in time.
I guess the MBA people weigh the total cost of revenue/reputation loss from things like ransomware recovery and restoration of backups against the cost of making their IT systems resilient?
Personally, I don't think so (in many cases) or they'd spend more money on planning/resilience.
That immediately stuck out to me as well, what a lame excuse not to patch. I've been in IT for a while now, and I've never worked in any shop that would let that slide.
"Don't use virtualization", says exec whose product doesn't run on virtualization
I work for a newspaper. It was published without fail every single day since 1945 (when my country was still basically just rubble, deservedly).
So even when all our systems are encrypted by ransomware, the newspaper MUST BE ABLE TO BE PRINTED as a matter of principle.
We run all our systems virtualized, because everything else would be unmaintainable and it's a 24/7 operation.
But we also have a copy of the most essential systems running on bare metal, completely air-gapped from everything else, and the internet.
Even I as the admin can't access them remotely in any way. If I want to, I have to walk over to another building.
In case of a ransomware attack, the core team meets in a room with only internal wifi, and is given emergency laptops from storage with our software preinstalled. They produce the files for the paper, save them on a USB stick, and deliver that to the printing press.
Seems like your org has taken resilience and response planning seriously. I like it.
Another newspaper in our region was unprepared and got ransomwared. They're still not back to normal, over a year later.
After that, our IT basically got a blank check from executive to do whatever is necessary.
Blank check
Funny how that seems to often be the case. They need to see the consequences, not just be warned. An 'I told you so' moment...
I'm just glad they got to see the consequences in another company.
Their senior IT admin had a heart attack a month after the ransomware attack.
save them on a USB stick
...which is also kept with the air-gapped system and tossed once used, I assume...
There's several for redundancy, in their original packaging, locked in a safe, and replaced yearly.
How you keep the air gapped system in sync?
We don't. It's a separate, simplified system that only lets the core team members access the layout-, editing- and typesetting-software that is locally installed on the bare metal servers.
In emergency mode, they get written articles and images from the reporters via otherwise unused, remotely hosted email addresses, and as a second backup, Signal.
They build the pages from that, send them to the printers, and the paper is printed old-school using photographic plates.
That's a very high degree of BCDR planning, and quite costly I assume.
It's less than the cost of our cybersecurity insurance, which will probably drop us on a technicality when the day comes.
And it's not entirely an economic decision. The paper is family-owned in the 3rd generation, historically relevant as one of the oldest papers in the country, and absolutely no one wants to be the one in charge when it doesn't print for the first time ever.
Heh, whatever you do don't do what everybody in the world has been doing successfully for the past 20 years.
Most everything everywhere is virtual these days, even when the host hardware is single tenant. Companies running hosted applications on bare metal are rare. I run personal stuff that way because proxmox was too much hassle, but a more serious user would have just dealt with it.
If the virtual machine borks, spin it back up. That's a plus.
Some things should run at least one instance on bare metal, like domain controllers.
It's not a one-size-fits-all.
If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.

Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod assigned exclusively to a Node. If a threat actor has access to the Kubernetes management interface and can exploit a vulnerability in it, they can immediately compromise everything within that Kubernetes cluster.

We don't even need a container management platform. Imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor gains access to AAP and exploits it, they can compromise everything managed by that AAP instance.

The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are significant benefits to using higher-order centralized management solutions.
Agreed.
Don't we all use centralized management because there is cost and risk involved when we don't?
More management complexity, missed systems, etc.
So we're balancing risk vs operational costs.
For the sake of this discussion, it makes sense that you could swap virtualization out for container solutions or automation solutions and the same argument holds.
Would you care to expand on this? I understand many of the pieces mentioned but am not an expert on this and am trying to learn.
In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But, if the service itself is compromised rather than an individual's credentials, then the application protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But, even that is still not fool-proof.
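As a concrete example of the role-based mitigation mentioned above, Kubernetes RBAC can scope what a single compromised credential reaches. A sketch with hypothetical namespace and account names (note this limits a stolen credential, not a compromise of the control plane itself):

```yaml
# Hypothetical namespace-scoped Role: a token bound to it can manage
# Deployments in "team-a" only, so its compromise can't touch other tenants.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: deploy-manager
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: deploy-manager-binding
subjects:
  - kind: ServiceAccount
    name: ci-deployer          # hypothetical CI service account
    namespace: team-a
roleRef:
  kind: Role
  name: deploy-manager
  apiGroup: rbac.authorization.k8s.io
```

That's the application-level layer; the per-tenant deployments with network isolation mentioned above are the next layer out, for when the controlling service itself is the thing compromised.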
Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.
Makes perfect sense. I'm not as familiar with the admin side of things.
TY for taking the time to explain.