this post was submitted on 21 Jul 2024
191 points (76.6% liked)

Technology


This is an unpopular opinion, and I get why – people crave a scapegoat. CrowdStrike undeniably pushed a faulty update demanding a low-level fix (booting into recovery). However, this incident lays bare the fragility of corporate IT, particularly for companies entrusted with vast amounts of sensitive personal information.

Robust disaster recovery plans, including automated processes to remotely reboot and remediate thousands of machines, aren't revolutionary. They're basic hygiene, especially when considering the potential consequences of a breach. Yet, this incident highlights a systemic failure across many organizations. While CrowdStrike erred, the real culprit is a culture of shortcuts and misplaced priorities within corporate IT.

Too often, companies throw millions at vendor contracts, lured by flashy promises while neglecting the due diligence necessary to ensure those solutions truly fit their needs. This is exacerbated by a corporate culture in which CEOs, vice presidents, and managers are often more easily swayed by vendor kickbacks, gifts, and lavish trips than by investing in innovative ideas with measurable outcomes.

This misguided approach not only results in bloated IT budgets but also leaves companies vulnerable to precisely the kind of disruptions caused by the CrowdStrike incident. When decision-makers prioritize personal gain over the long-term health and security of their IT infrastructure, it's ultimately the customers and their data that suffer.

[–] breakingcups@lemmy.world 171 points 3 months ago (76 children)

Please, enlighten me how you'd remotely service a few thousand BitLocker-locked machines that won't boot far enough to get an internet connection, with non-tech-savvy users behind them. Pray tell what common "basic hygiene" practices would've helped, especially with CrowdStrike reportedly ignoring and bypassing the rollout policies set by their customers.

Not saying the rest of your post is wrong, but this stood out as easily glossed over.

[–] LrdThndr@lemmy.world 22 points 3 months ago* (last edited 3 months ago) (5 children)

A decade ago I worked for a regional chain of gyms with locations in 4 states.

I was in TN. When a system would go down in SC or NC, we originally had three options:

  1. (The most common) have them put it in a box and ship it to me.
  2. I go there and fix it (rare)
  3. I walk them through fixing it over the phone (fuck my life)

I got sick of this. So I researched options and found an open-source solution called FOG. I ran a server in our office and had little OptiPlex 160s running a software client that I shipped to each club. Then each machine at each club was configured to PXE boot from the FOG client.

The server contained images of every machine we commonly used. I could tell FOG which locations used which models, and it would keep the images cached on the client machines.

If everything was okay, it would chain the boot to the OS on the machine. But I could flag a machine for reimage, and at the next boot the machine would check in with the local FOG client via PXE and get a complete reimage from premade images on the FOG server.

The corporate office was physically connected to one of the clubs, so I trialed the software at our adjacent club, and when it worked great, I rolled it out company wide. It was a massive success.

So yes, I could completely reimage a computer from hundreds of miles away by clicking a few checkboxes on my computer. Since it ran in PXE, the condition of the OS didn't matter at all. It never loaded the OS when it was flagged for reimage. It would even join the computer to the domain and set up that location's printers and everything. All I had to tell the low-tech gymbro sales guy on the phone to do was reboot it.
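
To make the "flag it for reimage" step concrete, here's a rough sketch of what queuing that task against a FOG server might look like from a script. Back then I just clicked checkboxes in the web UI; the endpoint paths, token headers, and task type ID below are from memory and assumptions, so check them against your FOG version:

```python
# Minimal sketch: queue a reimage (deploy) task for one host on a FOG server.
# Assumptions: FOG's REST API is enabled, and the endpoint/header names here
# match your FOG version -- verify before relying on any of this.
import requests

FOG_SERVER = "http://fog.example.internal/fog"   # hypothetical server URL
HEADERS = {
    "fog-api-token": "<api token>",    # header names assumed from memory
    "fog-user-token": "<user token>",
}

def flag_for_reimage(hostname: str) -> None:
    # List registered hosts and find ours by name.
    resp = requests.get(f"{FOG_SERVER}/host", headers=HEADERS)
    resp.raise_for_status()
    host = next((h for h in resp.json().get("hosts", [])
                 if h.get("name") == hostname), None)
    if host is None:
        raise RuntimeError(f"no FOG host named {hostname}")

    # Queue a deploy task; taskTypeID 1 is "deploy", as far as I recall.
    resp = requests.post(f"{FOG_SERVER}/host/{host['id']}/task",
                         headers=HEADERS, json={"taskTypeID": 1})
    resp.raise_for_status()
    print(f"{hostname}: will be reimaged at its next PXE boot")

if __name__ == "__main__":
    flag_for_reimage("CLUB-FRONTDESK-01")   # hypothetical host name
```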

This was free software. It saved us thousands in shipping fees alone. And brought our time to fix down from days to minutes.

There ARE options out there.

[–] magikmw@lemm.ee 26 points 3 months ago* (last edited 3 months ago) (3 children)

This works great for stationary PCs and local servers, but does nothing for public-internet-connected laptops in the hands of users.

The only fix here is staggered and tested updates, and apparently this update bypassed even the deferred-update settings that CrowdStrike themselves put into their software.

The only winning move here was to not use CrowdStrike.
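
For what "staggered and tested" means in practice, here's a toy sketch of ring-based gating. Every name and threshold in it is made up; the point is just that the broad ring never receives a release until the earlier rings have soaked and stayed healthy, which is exactly the policy the bypassed deferral settings were supposed to enforce:

```python
# Toy sketch of staggered ("ring") rollout gating. Everything here is made up
# and illustrative; it's the policy the bypassed deferral settings should have
# enforced, not any vendor's actual mechanism.
import datetime
from dataclasses import dataclass

@dataclass
class Ring:
    name: str
    delay_hours: int     # minimum time after release before this ring updates
    hosts: list[str]     # host-name patterns (made up for the sketch)

RINGS = [
    Ring("canary", 0, ["it-test-01", "it-test-02"]),
    Ring("early", 24, ["branch-a-*"]),
    Ring("broad", 72, ["*"]),
]

def ring_is_healthy(ring: Ring) -> bool:
    # Placeholder: in practice, query monitoring for crash loops, agents that
    # stopped checking in, helpdesk ticket spikes, and so on.
    return True

def eligible_rings(released_at: datetime.datetime,
                   now: datetime.datetime) -> list[Ring]:
    """Rings allowed to take the update now: their delay has passed and every
    earlier ring is still healthy."""
    elapsed_hours = (now - released_at).total_seconds() / 3600
    return [ring for i, ring in enumerate(RINGS)
            if elapsed_hours >= ring.delay_hours
            and all(ring_is_healthy(r) for r in RINGS[:i])]

if __name__ == "__main__":
    release = datetime.datetime(2024, 7, 19)   # illustrative release date
    print([r.name for r in eligible_rings(release, datetime.datetime.now())])
```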

[–] wizardbeard@lemmy.dbzer0.com 7 points 3 months ago (1 children)

It also assumes that reimaging is always an option.

Yes, every company should have networked storage enforced specifically for issues like this, so no user data would be lost, but there's often a gap between "should" and "has been able to find the time and get the required business-side buy-in to make it happen".

Also, users constantly find new ways to do non-standard, non-supported things with business critical data.

[–] Bluetreefrog@lemmy.world 5 points 3 months ago

Isn't this just more of what caused the problem in the first place? Namely, centralisation. If you store data locally and you lose a machine, that's bad but not the end of the world. If you store it centrally and you lose the data, that's catastrophic. Nassim Taleb nailed this stuff: keep the downside limited and the upside unlimited, or as he says, "Don't pick up pennies in front of a steamroller."

[–] LrdThndr@lemmy.world 6 points 3 months ago (1 children)

Absolutely. 100%

But don’t let perfect be the enemy of good. A fix that gets you 40% of the way there is still 40% less work you have to do by hand. Not everything has to be a fix for all situations. There’s no such thing as a panacea.

[–] magikmw@lemm.ee 6 points 3 months ago (3 children)

Sure. At the same time one needs to manage resources.

I was all in on laptop deployment automation. It cut down on a lot of human-error issues and on inconsistent configurations popping up all the time.

But it needs constant supervision, even if not constant updates. More systems and solutions lead to neglect if they aren't properly supported. So some "would be good to have" systems just never make the cut, because as overachieving as I am, I also don't want to think everything is taken care of when it clearly isn't.

[–] catloaf@lemm.ee 2 points 3 months ago (1 children)

Yeah. I find a base image and post-install config with Group Policy or Ansible to be far more reliable.

[–] magikmw@lemm.ee 1 points 3 months ago

Yeah, we're doing something similar. We only update base images for bigger OS updates, or if something breaks or could break.

The general idea is to have config that works for both new PCs and the ones that are already in use. Saves on maintaining two configuration methods.
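
The property that makes one config work for both new and in-use PCs is idempotence: every step checks the current state and only changes what's wrong. A toy sketch of the idea (in real life this is a Group Policy object or an Ansible playbook; the package ID and path below are made up, and the winget exit-code handling is simplified):

```python
# Toy sketch of an idempotent post-install config step: safe to run on a
# freshly imaged PC and on one that's been in use for years.
import subprocess
from pathlib import Path

def ensure_package(package_id: str) -> None:
    # Check first, install only if missing. Real scripts should parse
    # winget's output instead of trusting the exit code alone.
    check = subprocess.run(["winget", "list", "--id", package_id],
                           capture_output=True)
    if check.returncode != 0:
        subprocess.run(["winget", "install", "--id", package_id,
                        "--silent", "--accept-package-agreements"],
                       check=True)

def ensure_config_file(path: Path, content: str) -> None:
    # Only write if the file is missing or differs, so reruns are no-ops.
    if not path.exists() or path.read_text() != content:
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)

if __name__ == "__main__":
    ensure_package("7zip.7zip")                                  # example ID
    ensure_config_file(Path(r"C:\ProgramData\Acme\agent.conf"),  # hypothetical
                       "telemetry=off\n")
```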

[–] timewarp@lemmy.world 1 points 3 months ago (1 children)

You were all in, but was the company all in? How many employees? It sounds like you innovated. Let's say that the company you worked for was spending millions on vendors that promised solutions but rarely delivered. If instead they gave you $400k a year, a $1 million/year budget, and 10 employees... I'm guessing you could have managed the laptop deployment automation, along with some other significant projects as well.

Instead though, people with good ideas, even loyal to the company, are competing against sales and marketing reps from billion dollar companies, and upper management are easily swooned.

[–] magikmw@lemm.ee 3 points 3 months ago

I'm the only one to swoon here, and I'm as sceptical as one can be.

I'm also a cost center, and my budget exists on paper only. Non-IT management is complicit in crappy IT.

[–] LrdThndr@lemmy.world 0 points 3 months ago

Completely fair, man.

[–] cyberpunk007@lemmy.ca 5 points 3 months ago (2 children)

This is a good solution for these types of scenarios. It doesn't fit all of them, though. Where I work, 85% of staff work from home. We largely use SaaS. I'm struggling to think of a good method here other than walking them through reinstalling Windows on all their machines.

[–] LrdThndr@lemmy.world 2 points 3 months ago (1 children)

That’s still 15% less work though. If I had to manually fix 1000 computers, clicking a few buttons to automatically fix 150 of them sounds like a sweet-ass deal to me even if it’s not universal.

You could also always commandeer a conference room or three and throw a switch on the table. “Bring in your laptop and go to conference room 3. Plug in using any available cable on the table and reboot your computer. Should be ready in an hour or so. There’s donuts and coffee in conference room 4.” Could knock out another few dozen.

Won’t help for people across the country, but if they’re nearish, it’s not too bad.

[–] cyberpunk007@lemmy.ca 2 points 3 months ago

Not a lot of nearish. It would be pretty bad if this happened here.

[–] timewarp@lemmy.world -3 points 3 months ago* (last edited 3 months ago) (2 children)
  1. Configure PXE to reboot into a recovery image, push out a command to remove the bad file (a rough sketch of that cleanup is below), reboot. Done. Workstation laptops usually have remote management already.

or

  2. Have a recovery image already installed. Have the user reboot and press a key to boot into recovery. Push out the fix. Done.
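
To be concrete about the "remove bad file" step, this is roughly the cleanup that would run once the recovery environment is up and the OS volume is unlocked (BitLocker recovery key applied). The C-00000291*.sys pattern is from CrowdStrike's published workaround; the drive letter is an assumption, since the broken install often isn't C: inside WinRE:

```python
# Minimal sketch: delete the faulty CrowdStrike channel files from a mounted
# (and already unlocked) Windows volume, per the published workaround.
import sys
from pathlib import Path

def remove_bad_channel_files(os_volume: str) -> int:
    cs_dir = (Path(os_volume) / "Windows" / "System32"
              / "drivers" / "CrowdStrike")
    removed = 0
    for f in cs_dir.glob("C-00000291*.sys"):
        f.unlink()          # the workaround is just deleting these files
        removed += 1
    return removed

if __name__ == "__main__":
    # Pass the volume the broken install lives on, e.g. "D:\\" inside WinRE.
    volume = sys.argv[1] if len(sys.argv) > 1 else "C:\\"
    count = remove_bad_channel_files(volume)
    print(f"removed {count} channel file(s); reboot and verify")
```
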
[–] cyberpunk007@lemmy.ca 3 points 3 months ago

I had no idea you could remotely configure PXE to reboot into a recovery image and run a script. How do you do this?

[–] LrdThndr@lemmy.world 0 points 3 months ago* (last edited 3 months ago)

Fuck yeah. Even better than reimage. That’s creative as fuck and I love it.

[–] Evotech@lemmy.world 4 points 3 months ago* (last edited 3 months ago)

Now your FOG servers are dead. What now?

[–] Brkdncr@lemmy.world 0 points 3 months ago (1 children)

How removed from IT are you that you think FOG would have helped here?

[–] LrdThndr@lemmy.world 5 points 3 months ago* (last edited 3 months ago) (1 children)

How would it not have? You got an office or field offices?

“Bring your computer by and plug it in over there.” And flag it for reimage. Yeah. It’s gonna be slow, since you have 200 of the damn things running at once, but you really want to go and manually touch every computer in your org?

The damn thing’s even boot looping, so you don’t even have to reboot it.

I’m sure the user saved all their data in OneDrive like they were supposed to, right?

I get it, it’s not a 100% fix rate. And it’s a bit of a callous answer to their data. And I don’t even know if the project is still being maintained.

But the post I replied to was lamenting the lack of an option to remotely fix unbootable machines. This was an option to remotely fix nonbootable machines. No need to be a jerk about it.

But to actually answer your question and be transparent, I’ve been doing Linux devops for 10 years now. I haven’t touched a windows server since the days of the gymbros. I DID say it’s been a decade.

[–] Brkdncr@lemmy.world 4 points 3 months ago (2 children)

Because your imaging environment would also be down. And you’re still touching each machine and bringing users into the office.

Or your imaging process over the WAN takes 3 hours since it’s dynamically installing apps and updates rather than a static “gold” image. Imaging is then even slower because your source disk is only an SSD, and imaging slows down once you get 10+ going at once.

I’m being rude because I see a lot of armchair sysadmins who don’t seem to understand the scale of the CrowdStrike outage, what CrowdStrike even is beyond antivirus, and the workflow needed to recover from it.

[–] LrdThndr@lemmy.world 6 points 3 months ago (1 children)

FOG ran on Linux. It wouldn’t have been down. But that’s beside the point.

I never said it was a good answer to CrowdStrike. It was just a story about how I did things 10 years ago, and an option for remotely fixing nonbooting machines. That’s it.

I get you’ve been overworked and stressed as fuck these last few days. I’ve been out of corporate IT for 10 years and I do not envy the shit you guys are going through right now. I wish I could buy you a cup of coffee or a beer or something.

[–] Brkdncr@lemmy.world 3 points 3 months ago

Last time I used FOG it was only doing static image deployment, which has been out of style for a while. I don’t know if there are any serious deployment products for Windows enterprise that don’t run on Windows.

I’m personally not dealing with this because I didn’t like how Crowdstrike had answered a number of questions in their sales call.

Avoiding telling me that their vuln scan doesn’t probe all hosts after claiming it could replace a real vuln scanner; claiming they’re somehow better than others at malware detection without bringing up third-party tests; claiming their product was novel when others have been doing the same for 7+ years.

My fave was them telling me how much easier it is to manage but no one on the call had ever worked as a sysadmin or even seen how their competition works.

Shitshow. I’m so glad this happened so I can block their sales team.

[–] timewarp@lemmy.world -2 points 3 months ago (1 children)

Imaging environment down? If a sysadmin can't figure out how to boot a machine into recovery to remove the bad update file, then they have bigger problems. The fix in this instance wasn't even re-imaging machines; it was merely removing a file. An ideal DR scenario would have a recovery image already on the system that can be booted into remotely, so there's minimal strain on the network. Furthermore, we don't live in the dial-up age anymore.

[–] Brkdncr@lemmy.world 2 points 3 months ago (1 children)

The imaging environment would be BitLocker'd, with its key stuck in AD, which is also BitLocker'd.

[–] catloaf@lemm.ee 1 points 3 months ago (1 children)

Only if you're not practicing 3-2-1 with your backups.

[–] Brkdncr@lemmy.world 1 points 3 months ago (1 children)

Backup environment is also bitlocker’d.

[–] catloaf@lemm.ee 1 points 3 months ago

Then you didn't do 3-2-1, because you should be able to restore from your alternate format, e.g. tape, without your existing infrastructure. Ideally your second and offsite copies are also offline, so even if you ignored the separate-media rule, it wouldn't have been affected by the CrowdStrike update.

Ultimately, nobody should have to tell you not to lock your keys in the car.
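
If you want the rule as a checklist: at least three copies, on at least two kinds of media, with at least one offsite (and ideally offline). A toy sketch of that check, with a made-up inventory format:

```python
# Toy 3-2-1 sanity check over a backup inventory (inventory format is made up).
from dataclasses import dataclass

@dataclass
class BackupCopy:
    medium: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool
    offline: bool    # unreachable by a bad agent, update, or ransomware

def satisfies_321(copies: list[BackupCopy]) -> bool:
    return (len(copies) >= 3                             # three copies
            and len({c.medium for c in copies}) >= 2     # two different media
            and any(c.offsite for c in copies))          # one offsite

inventory = [
    BackupCopy("disk", offsite=False, offline=False),            # primary
    BackupCopy("object-storage", offsite=True, offline=False),   # replica
    BackupCopy("tape", offsite=True, offline=True),              # vaulted
]
assert satisfies_321(inventory)
```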

[–] timewarp@lemmy.world -2 points 3 months ago

Thank you for sharing this. This is what I'm talking about. Larger companies not already utilizing something like this are dysfunctional. There's no excuse for it taking them days, weeks, or longer.
