this post was submitted on 20 Jul 2024
649 points (98.4% liked)

linuxmemes

21291 readers
1923 users here now

Hint: :q!


Community rules

1. Follow the site-wide rules

2. Be civil
  • Understand the difference between a joke and an insult.
  • Do not harass or attack members of the community for any reason.
  • Leave remarks of "peasantry" to the PCMR community. If you dislike an OS/service/application, attack the thing you dislike, not the individuals who use it. Some people may not have a choice.
  • Bigotry will not be tolerated.
  • These rules are somewhat loosened when the subject is a public figure. Still, do not attack their person or incite harassment.

3. Post Linux-related content
  • Including Unix and BSD.
  • Non-Linux content is acceptable as long as it makes a reference to Linux. For example, the poorly made mockery of sudo in Windows.
  • No porn. Even if you watch it on a Linux machine.

4. No recent reposts
  • Everybody uses Arch btw, can't quit Vim, and wants to interject for a moment. You can stop now.

    Please report posts and comments that break these rules!


    Important: never execute code or follow advice that you don't understand or can't verify, especially here. The word of the day is credibility. This is a meme community -- even the most helpful comments might just be shitposts that can damage your system. Be aware, be smart, don't fork-bomb your computer.

    founded 1 year ago
    top 28 comments
    [–] TropicalDingdong@lemmy.world 62 points 4 months ago (8 children)

    Does anyone here, working in IT, have a sense for how "on-going" this issue is expected to be? Is this something that is largely going to be resolved in a day or two, or is this going to take weeks/months?

    [–] MNByChoice@midwest.social 104 points 4 months ago (2 children)

    My guess as a Linux admin in IT.

    I understand the fix takes ~5 minutes per system, must be done in person, and cannot be farmed out to users.

    There are likely conversations about alternatives or mitigations to/for crowdstrike.

    Most things were likely fixed yesterday. (Depending on staffing levels.) Complications could go on for a week. Fallout of various sorts for a month.

    Lawsuits, disaster planning, and cyberattacks (targeting companies that run CrowdStrike and those that hastily stopped using it) will go on for months and years.

    The next crowdstrike mistake could happen at any time...

    [–] qjkxbmwvz@startrek.website 48 points 4 months ago (3 children)

    The next crowdstrike mistake could happen at any time...

    Sounds like the tagline to an action movie.

    [–] Agent641@lemmy.world 21 points 4 months ago

    When will crowd strike next?

    [–] MNByChoice@midwest.social 6 points 4 months ago (1 children)
    [–] dumbass@leminal.space 8 points 4 months ago

    You'll be living it soon.

    [–] MadMadBunny@lemmy.ca 2 points 3 months ago

    Coming in a computer near you: Crowdstrikenado!

    [–] Ok_imagination@lemmy.world 21 points 4 months ago (1 children)

    Fully agree, as a security engineer at a mostly Microsoft shop. We have some pending laptop fixes, but I think we've talked our CIO out of hastily pulling out of CrowdStrike. Really, it didn't hit us hard; we were maybe down for 2-3 hours around 4 am Friday morning. Microsoft gives us many more issues more frequently and we don't have constant talk of pulling it out...

    [–] boredsquirrel@slrpnk.net 18 points 4 months ago (1 children)

    Microsoft gives us many more issues more frequently and we don't have constant talk of pulling it out...

    Maybe you should ;)

    As a Linux user I deal with Windows issues way too often administering other laptops.

    [–] Ok_imagination@lemmy.world 5 points 3 months ago

    God, I wish!

    [–] db2@lemmy.world 27 points 4 months ago

    It's entirely dependent on the organization. The actual time it takes to deploy the fix is about what it takes to open four nested directories, delete one file, and reboot, but things like BitLocker and other annoying system policies can get in the way, dragging a 5-minute solution out into a multi-day debacle.
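    For reference, a minimal sketch of just that delete step, run from safe mode or the recovery environment. The path and filename pattern are assumptions taken from public reporting, not from this thread; verify against the vendor's own guidance before touching anything:

    ```python
    # Sketch only: remove the channel file publicly blamed for the boot loop.
    # The path and filename pattern below are assumptions based on public
    # reports; check vendor guidance first, and expect to need BitLocker
    # recovery keys before you can even reach the drive on encrypted machines.
    from pathlib import Path

    CROWDSTRIKE_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")
    BAD_FILE_PATTERN = "C-00000291*.sys"

    def delete_bad_channel_files() -> list[Path]:
        """Delete matching channel files and return what was removed."""
        removed = []
        for f in CROWDSTRIKE_DIR.glob(BAD_FILE_PATTERN):
            f.unlink()
            removed.append(f)
        return removed

    if __name__ == "__main__":
        for f in delete_bad_channel_files():
            print(f"deleted {f}")
        print("Reboot normally afterwards.")
    ```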

    [–] rockSlayer@lemmy.world 21 points 4 months ago

    The issue was a very simple programming mistake, which is why it was simple to get a patch out quickly. The reason it caused chaos is that the software operates at an extremely high level of privilege, enough that even something small can disrupt the entire operating system.

    [–] Entropywins@lemmy.world 11 points 4 months ago

    It will take however long it takes to implement the fix in person or implement a disaster recovery plan: a couple of hours, days, or maybe weeks depending on the size of the organization. Thankfully my work doesn't use CrowdStrike, but the main fix I've heard of requires booting every affected machine into safe mode in person, deleting a file, and rebooting; not difficult, just time consuming if you have thousands of endpoints that need to be fixed.

    [–] AlecSadler@sh.itjust.works 8 points 3 months ago (1 children)

    At my org the security is so heavy that it's a multi-step, multi-tier fix (meaning the one Helpdesk person has to escalate, the first tier that gets it has one password but not the other, so it has to go to the second tier, etc.).

    On Friday they announced weekend hours for the whole weekend, and given we're talking tens of thousands of potentially impacted systems, my guess is it absolutely won't be done by Monday. That doesn't necessarily mean business is dead in the water, but it's definitely more chaotic and slow moving.

    [–] MonkderDritte@feddit.de 5 points 3 months ago

    At my org the security is so heavy

    Yet you allow some rando software with elevated privileges to run its own updates?

    [–] Estebiu@lemmy.dbzer0.com 6 points 4 months ago (1 children)

    My guess as an on-field technician is that this is going to take at least a week to resolve. As you probably know, it's an easy fix; the difficult part is going to every single store to actually do the procedure. Today I worked on 30-35 PCs, and most of my time was spent going from location to location. The Tour de France is on, so it's very time consuming. Anyway, yeah, at least a week.

    [–] TexasDrunk@lemmy.world 2 points 4 months ago

    MSPs are about to get a shit load of work for the next week just to get more boots on the ground.

    [–] adhdplantdev@lemm.ee 4 points 4 months ago* (last edited 4 months ago) (1 children)

    It's going to be a grind. This is causing a blue screen of death on Windows machines, which can only be rectified if you have physical/console access.

    In the cloud space, if this is happening to you, I think you're screwed. I mean, theoretically there's a way to do it by spinning up another working Windows virtual machine, detaching the disk from the broken one, and mounting it there, but it's a freaking bear.

    Basically everyone's going to have to grind this whole thing out to fix this problem. There's not going to be an easy way to use automation unless they have a way to destroy and recreate all their computers.

    I live in linuxland and it's been really fun watching this from the side. I really feel for the admins having to deal with this right now, because it's going to just suck.

    [–] Morphit@feddit.uk 2 points 4 months ago (1 children)

    I'd have thought the cloud side would be pretty easy to script over. Presumably the images aren't encrypted from the host filesystem so just ensure each VM is off, mount its image, delete the offending files, unmount the image and start the VM back up. Check it works for a few test machines then let it rip on the whole fleet.
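    A rough sketch of that loop. The stop/mount/unmount/start helpers are hypothetical placeholders for whatever the hypervisor or cloud API actually provides, the file path is the publicly reported one, and it assumes the images really are readable from the host (i.e. not BitLocker-encrypted without keys):

    ```python
    # Sketch only of the loop described above: stop each VM, mount its disk on
    # the host, delete the offending channel file, unmount, and start it back up.
    # The four helpers below are hypothetical stand-ins for your hypervisor or
    # cloud API; BitLocker-encrypted images would also need recovery keys.
    from pathlib import Path

    BAD_FILE_GLOB = "Windows/System32/drivers/CrowdStrike/C-00000291*.sys"

    def stop_vm(vm_id: str) -> None:          # hypothetical: power the VM off
        raise NotImplementedError
    def start_vm(vm_id: str) -> None:         # hypothetical: boot it back up
        raise NotImplementedError
    def mount_vm_disk(vm_id: str) -> Path:    # hypothetical: attach the image on the host
        raise NotImplementedError
    def unmount_vm_disk(vm_id: str) -> None:  # hypothetical: detach the image
        raise NotImplementedError

    def fix_vm(vm_id: str) -> None:
        """Clean one VM's disk while it is powered off, then restart it."""
        stop_vm(vm_id)
        mount_point = mount_vm_disk(vm_id)
        try:
            for f in mount_point.glob(BAD_FILE_GLOB):
                f.unlink()
        finally:
            unmount_vm_disk(vm_id)
            start_vm(vm_id)

    def fix_fleet(vm_ids: list[str]) -> None:
        # Check it works on a few test machines, then let it rip on the whole fleet.
        for vm_id in vm_ids:
            fix_vm(vm_id)
    ```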

    [–] adhdplantdev@lemm.ee 8 points 3 months ago* (last edited 3 months ago)

    Oh my friend. You think these companies do things in a logical scalable way? I have some really bad news....

    Theoretically that could work, but sometimes security measures require computers to be BitLocker encrypted, and certain software can make this difficult to achieve, like fixing a domain controller.

    [–] taiyang@lemmy.world 4 points 4 months ago

    My dad was able to get his computers in city hall working by just deleting a file, but it is indeed a process: 6 steps, although the specifics elude me. You do have to do it in person though; it requires repair mode or whatever.

    Funny thing though, they just got a new tech lead that very same day, his first day was this fiasco. Imagine that luck!

    [–] thisbenzingring@lemmy.sdf.org 51 points 3 months ago (1 children)

    On a secure closed network, old code and DOS-based Win3.x is fine. Those apps are so nice to support. Training young people on those old technologies is fun.

    [–] masterofn001@lemmy.ca 29 points 3 months ago* (last edited 3 months ago)

    First job out of high school, I got promoted to a management position.

    Because I could use win3.1 and print the shipping labels.

    I miss the 90s.

    [–] SchmidtGenetics@lemmy.world 16 points 4 months ago (1 children)

    A little late, but new information came out: they broke other OSes months ago… and no one noticed…

    [–] boredsquirrel@slrpnk.net 4 points 4 months ago (1 children)
    [–] DmMacniel@feddit.org 11 points 4 months ago (1 children)

    My guess is redundancy, proper rollbacks, and/or testing.

    [–] SchmidtGenetics@lemmy.world 6 points 4 months ago* (last edited 4 months ago)

    It wasn't used on critical systems; it affected some computers, and a client even had to be the one to inform them.

    [–] Marduk73@sh.itjust.works 2 points 4 months ago