this post was submitted on 18 Feb 2024
135 points (95.9% liked)

Ask Lemmy

Fess up. You know it was you.

[–] tquid@sh.itjust.works 64 points 8 months ago* (last edited 8 months ago) (8 children)

One time I was deleting a user from our MySQL-backed RADIUS database.

DELETE FROM PASSWORDS;

And yeah, if you don’t have a WHERE clause? It just deletes everything. About 60,000 records for a decent-sized ISP.
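
For anyone wondering what the safer habit looks like, here's a minimal sketch (assuming an InnoDB table, since MyISAM has no transactions, and a made-up username column that isn't in the original story):

START TRANSACTION;
SELECT COUNT(*) FROM passwords WHERE username = 'alice';   -- sanity-check the filter first
DELETE FROM passwords WHERE username = 'alice';
-- the client reports rows affected; if it says 60000 instead of 1, ROLLBACK instead
COMMIT;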

That afternoon really, really sucked. We had only ad-hoc backups. It was not a well-run business.

Now when I interview sysadmins (or these days devops), I always ask about their worst cock-up. It tells you a lot about a candidate.

[–] RacerX@lemm.ee 26 points 8 months ago (3 children)

Always skeptical of people that don't own up to mistakes. Would much rather they own it and speak to what they learned.

[–] Flax_vert@feddit.uk 16 points 8 months ago

This is what I was told when I started work. If you make a mistake, just admit to it. They most likely won't punish you for it if it wasn't out of pure negligence

[–] chameleon@kbin.social 14 points 8 months ago (1 children)

It's difficult because you have a 50/50 of having a manager that doesn't respect mistakes and will immediately get you fired for it (to the best of their abilities), versus one that considers such a mistake to be very expensive training.

I simply can't blame people for self-defense. I interned at a 'non-profit' where there had apparently been a revolving door of employees being fired for making entirely reasonable mistakes. Looking back at it a dozen years later, it's no surprise that nobody was getting anything done in that environment.

[–] ilinamorato@lemmy.world 12 points 8 months ago

Incredibly short-sighted, especially for a nonprofit. You just spent some huge amount of time and money training a person to never make that mistake again, why would you throw that investment away?

[–] cobysev@lemmy.world 18 points 8 months ago

I was a sysadmin in the US Air Force for 20 years. One of my assignments was working at the headquarters for AFCENT (Air Forces Central Command), which oversees every deployed base in the Middle East. Specifically, I worked on a tier 3 help desk, solving problems that the help desks at deployed bases couldn't figure out.

Normally, we got our issues in tickets forwarded to us from the individual base's Communications Squadron (IT squadron at a base). But one day, we got a call from the commander of a base's Comm Sq. Apparently, every user account on the base had disappeared and he needed our help restoring them!

The first thing we did was dig through server logs to determine what caused it. No sense fixing it if an automated process was the cause and would just undo our work, right?

We found one Technical Sergeant logged in who had run a command to delete every single user account in the directory tree. We sought him out and he claimed he was trying to remove one individual, but accidentally selected the tree instead of the individual. It just so happened to be the base's tree, not an individual office or squadron.

As his rank implies, he's supposed to be the technical expert in his field. But this guy was an idiot who shouldn't have been touching user accounts in the first place. Managing user accounts is an Airman's job; a simple task given to our lowest-ranking members as they're learning how to be sysadmins. And he couldn't even do that.

It was a very large base. It took 3 days to recover all accounts from backup. The Technical Sergeant had his admin privileges revoked and spent the rest of his deployment sitting in a corner, doing administrative paperwork.

[–] Witchfire@lemmy.world 48 points 8 months ago* (last edited 8 months ago) (2 children)

Accidentally deleted an entire column in a police department's evidence database early in my career 😬

Thankfully, it only contained filepaths that could be reconstructed via a script. But I was sweating 12+1 bullets. Spent two days rebuilding that.

[–] aksdb@lemmy.world 28 points 8 months ago (2 children)

And if you couldn't reconstruct, you still had backups, right? ..... right?!

[–] FartsWithAnAccent@lemmy.world 29 points 8 months ago (1 children)

What the fuck is a "backups"?

[–] z00s@lemmy.world 11 points 8 months ago

He's the guy that sits next to fuckups

[–] Witchfire@lemmy.world 25 points 8 months ago

Oh sweet summer child

[–] superduperenigma@lemmy.world 17 points 8 months ago (1 children)

deleted an entire column in a police department's evidence database

Based and ACAB-pilled

[–] sexual_tomato@lemmy.dbzer0.com 41 points 8 months ago (1 children)

I didn't call out a specific dimension on a machined part; instead I left it to the machinist to figure out what needed to be done without making it explicit.

That part was a 2 ton forging with two layers of explosion-bonded cladding on one side. The machinist faced all the way through a cladding layer before realizing something was off.

The replacement had a 6 month lead time.

[–] Kata1yst@kbin.social 35 points 8 months ago* (last edited 8 months ago)

It was the bad old days of sysadmin, where literally every critical service ran on an iron box in the basement.

I was on my first oncall rotation. Got my first call from helpdesk: Exchange was down, it's 3AM, and the oncall backup and Exchange SMEs weren't responding to pages.

Now I knew Exchange well enough, but I was new to this role and this architecture. I knew the system was clustered, so I quickly pulled the documentation and logged into the cluster manager.

I reviewed the docs several times, we had Exchange server 1 named something thoughtful like exh-001 and server 2 named exh-002 or something.

Well, I'd reviewed the docs, and helpdesk and the stakeholders were desperate to move forward, so I initiated a failover from clustered mode with 001 as the primary to unclustered mode pointing directly at server 10.x.x.xx2

What's that you ask? Why did I suddenly switch to the IP address rather than the DNS name? Well that's how the servers were registered in the cluster manager. Nothing to worry about.

Well... Anyone want to guess which DNS name 10.x.x.xx2 was registered to?

Yeah. Not exh-002. For some crazy legacy reason the DNS names had been remapped in the distant past.

So anyway that's how I made a 15 minute outage into a 5 hour one.

On the plus side, I learned a lot and didn't get fired.

[–] Quazatron@lemmy.world 33 points 8 months ago (2 children)

Did you know that "Terminate" is not an appropriate way to stop an AWS EC2 instance? I sure as hell didn't.

[–] ilinamorato@lemmy.world 24 points 8 months ago

"Stop" is the AWS EC2 verb for shutting down a box, but leaving the configuration and storage alone. You do it for load balancing, or when you're done testing or developing something for the day but you'll need to go back to it tomorrow. To undo a Stop, you just do a Start, and it's just like power cycling a computer.

"Terminate" is the AWS EC2 verb for shutting down a box, deleting the configuration and (usually) deleting the storage as well. It's the "nuke it from orbit" option. You do it for temporary instances or instances with sensitive information that needs to go away. To undo a Terminate, you weep profusely and then manually rebuild everything; or, if you're very, very lucky, you restore from backups (or an AMI).

[–] Quazatron@lemmy.world 20 points 8 months ago

Noob was told to change some parameters on an AWS EC2 instance, requiring a stop/start. Selected terminate instead, killing the instance.

Crappy company, running production infrastructure in AWS without giving proper training or securing a suitable backup process.

[–] BestBouclettes@jlai.lu 8 points 8 months ago (2 children)

Apparently Terminate means stop and destroy. Definitely something to use with care.

[–] treechicken@lemmy.world 27 points 8 months ago* (last edited 8 months ago) (1 children)

I once "biased for action" and removed some "unused" NS records to "fix" a flakey DNS resolution issue without telling anyone on a Friday afternoon before going out to dinner with family.

Turns out my fix did not work and those DNS records were actually important. Checked on the website halfway into the meal and freaked the fuck out once I realized the site went from resolving 90% of the time to not resolving at all. The worst part was that when I finally got the guts to report on the group channel that I'd messed up, DNS was somehow still resolving both for our internal monitoring and for everyone else who tried manually. My issue got shoo-shoo'd away, and I was left there not even sure of what to do next.

I spent the rest of my time on my phone, refreshing the website and resolving the NS records in an online Dig tool over and over again, anxiety growing, knowing I couldn't do anything to fix my "fix" while I was outside.

Once I came home I ended up reversing everything I did which seemed to bring it back to the original flakey state.

Learned the value of SOPs and taking things slow after that (and also to not screw with DNS). If this story has a happy ending, it's that we did eventually fix the flakey DNS issue later, going through a more rigorous review this time. On the other hand, how and why I, a junior at the time, became the only de facto owner of an entire product's DNS infra remains a big mystery to me.

[–] Burninator05@lemmy.world 19 points 8 months ago (1 children)

Hopefully you learned a rule I try to live by despite not listing it: "no significant changes on Friday, no changes at all on Friday afternoon".

[–] Burninator05@lemmy.world 27 points 8 months ago (1 children)

I spent over 20 years in the military in IT. I took down the network at every base I was ever at, each time finding a new way to do it. Sometimes, but rarely, intentionally.

[–] rbos@lemmy.ca 24 points 8 months ago (4 children)

Plugged a serial cable into a UPS that was not expecting RS232. Took down the entire server room. Beyoop.

[–] lud@lemm.ee 15 points 8 months ago

That's a common one I have seen on r/sysadmin.

I think APC is the company with the stupid issue.

[–] spaghetti_carbanana@krabb.org 24 points 8 months ago* (last edited 8 months ago) (1 children)

Worked for an MSP, we had a large storage array which was our cloud backup repository for all of our clients. It locked up and was doing this semi-regularly, so we decided to run an "OS reinstall". Basically these things install the OS across all of the disks, on a separate partition to where the data lives. "OS Reinstall" clones the OS from the flash drive plugged into the mainboard back to all the disks and retains all configuration and data. "Factory default", however, does not.

This array was particularly... special... In that you booted it up, held a paperclip into the reset pin, and the LEDs would flash a pattern to let you know you're in the boot menu. You click the pin to move through the boot menu options, each time you click it the lights flash a different pattern to tell you which option is selected. First option was normal boot, second or third was OS reinstall, the very next option was factory default.

I headed into the data centre. I had the manual, I watched those lights like a hawk and verified the "OS reinstall" LED flash pattern matched up, then I held the pin in for a few seconds to select the option.

All the disks lit up, away we go. 10 minutes pass. Nothing. Not responding on its interface. 15 minutes. 20 minutes, I start sweating. I plug directly into the NIC and head to the default IP filled with dread. It loads. I enter the default password, it works.

There staring back at me: "0B of 45TB used".

Fuck.

This was in the days where 50M fibre was rare and most clients had 1-20M ADSL. Yes, asymmetric. We had to send guys out as far as 3 hour trips with portable hard disks to re-seed the backups over a painful 30ish days of re-ingesting them into the NAS.

The worst part? Years later I discovered that, completely undocumented, you can plug a VGA cable in and you get a text menu on the screen that shows you which option you have selected.

I (somehow) did not get fired.

[–] necrobius@lemm.ee 22 points 8 months ago
  1. Create a database,
  2. Have the organisation manually populate it with lots of records using a web app,
  3. Accidentally delete the database.

All in between backup runs.

[–] Churbleyimyam@lemm.ee 21 points 8 months ago (2 children)

It wasn't me personally but I was working as a temp at one of the world's biggest shoe distribution centers when a guy accidentally made all of the size 10 shoes start coming out onto the conveyor belts. Apparently it wasn't a simple thing to stop it and for three days we basically just stood around while engineers were flown in from China and the Netherlands to try and sort it out. The guy who made the fuckup happen looked totally destroyed. On the last day I remember a group of guys in suits coming down and walking over to him in the warehouse and then he didn't work there any more. It must have cost them an absolute fortune.

[–] WagnasT@iusearchlinux.fyi 21 points 8 months ago

"acknowledge all" used to behave a bit different in Cisco UCS manager. Well at least the notifications of pending actions all went away... because they were no longer pending.

[–] Albbi@lemmy.ca 21 points 8 months ago

Broke teller machines at a bank by accidentally renaming the server all the machines were pointed to. Took an hour to bring back up.

[–] hperrin@lemmy.world 21 points 8 months ago

I fixed a bug and gave everyone administrator access once. I didn’t know that bug was… in use (is that the right way to put it?) by the authentication library. So every successful login request, instead of returning the user who had just logged in, returned the first user in the DB, “admin”.

Had to take down prod for that one. In my four years there, that was the only time we ever took down prod without an announcement.

[–] FaceDeer@kbin.social 20 points 8 months ago (2 children)

It wasn't "worst" in terms of how much time it wasted, but the worst in terms of how tricky it was to figure out. I submitted a change list that worked on my machine as well as 90% of the build farm and most other dev and QA machines, but threw a baffling linker error on the remaining 10%. It turned out that the change worked fine on any machine that used to have a particular old version of Visual Studio installed on it, even though we no longer used that version and had phased it out for a newer one. The code I had written depended on a library that was no longer in current VS installs but got left behind when uninstalling the old one. So only very new computers were hitting that, mostly belonging to newer hires who were least equipped to figure out what was going on.

[–] doc@kbin.social 20 points 8 months ago (2 children)

UPDATE without a WHERE.

Yes in prod.

Yes it can still happen today (not my monkey).

Yes I wrap everything in a rollback now.
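
That "wrap it in a rollback" habit, as a sketch (assuming InnoDB; the table and column names here are invented for illustration):

START TRANSACTION;
UPDATE users SET plan = 'free' WHERE trial_ends_at < NOW();
-- the client prints how many rows matched; if it's the whole table because
-- the WHERE was missing or wrong, ROLLBACK instead of COMMIT
COMMIT;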

[–] slazer2au@lemmy.world 20 points 8 months ago (1 children)

I took down an ISP for a couple of hours because I forgot the 'add' keyword at the end of a Cisco configuration line.

[–] Volkditty@lemmy.world 19 points 8 months ago (3 children)

Light switch is right next to the main power breaker.

[–] kindernacht@lemmy.world 18 points 8 months ago

My first time shutting down a factory at the end of second shift for the weekend. I shut down the compressors first, and that hard stopped a bunch of other equipment that relied on the air pressure. Lessons learned. I spent another hour restarting then properly shutting down everything. Never did that again.

[–] pastermil@sh.itjust.works 18 points 8 months ago (1 children)

I accidentally destroyed the production system completely through an improper partition resize. We had a database snapshot, but it was on that server as well. After scrambling around for half a day, I managed to recover some of the older data dumps.

So I spun up the new server from scratch, restored the database with some slightly outdated dump, installed the code (which was thankfully managed thru git), and configured everything to run all in an hour or two.

The best part: everybody else knows this as some trivial misconfiguration. This happened in 2021.

[–] shyguyblue@lemmy.world 18 points 8 months ago

Updated WordPress...

Previous Web Dev had a whole mess of code inside the theme that was deprecated between WP versions.

Fuck WordPress for static sites...

[–] Clent@lemmy.world 17 points 8 months ago

UPDATE articles SET status = 0 WHERE body LIKE '%...%';

On the master production server, running MyISAM, against a text column, millions of rows.

This caused queries to stack up, because MyISAM uses table-level locks.

Rather than waiting for the query to finish, a slave was promoted to master.

Lesson: don't trust mysqladmin to not do something bad.
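
For what it's worth, a common way to avoid that kind of pile-up (just a sketch; the integer id column and batch bounds are assumptions, not from the post) is to run the update in small primary-key batches so each statement holds the table lock only briefly:

UPDATE articles SET status = 0 WHERE id BETWEEN 1 AND 10000 AND body LIKE '%...%';
-- then repeat with the next id range (10001–20000, and so on) until the table is covered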

[–] Nomecks@lemmy.ca 17 points 8 months ago* (last edited 8 months ago)

There was a nasty bug with some storage system software that I had the bad fortune to find, which resulted in me deleting 6.4TB of live VMs. All just gone in a flash. It took months to restore everything.

[–] theluddite@lemmy.ml 15 points 8 months ago

This is nowhere near the worst on a technical level, but it was my first big fuck up. Some 12+ years ago, I was pretty junior at a very big company that you've all heard of. We had a feature coming out that I had entirely developed almost by myself, from conception to prototype to production, and it was getting coverage in some relatively well-known trade magazine or blog or something (I don't remember) that was coming out the next Monday. But that week, I introduced a bug in the data pipeline code such that, while I don't remember the details, instead of adding the day's data, it removed some small amount of data. No one noticed that the feature was losing all its data all week because it still worked (mostly) fine, but by Monday, when the article came out, it looked like it would work, but when you pressed the thing, nothing happened. It was thankfully pretty easy to fix but I went from being congratulated to yelled at so fast.

[–] zubumafu_420@infosec.pub 15 points 8 months ago (2 children)

Early in my career as a cloud sysadmin, I accidentally shut down the production database server of a public website for a couple of minutes. Not that bad and most users probably just got a little annoyed, but it didn't go unnoticed by management 😬 had to come up with a BS excuse that it was a false alarm.

Because of the legacy OS image of the server, simply changing the disk size in the cloud management portal wasn't enough and it was necessary to make changes to the partition table via command line. I did my research, planned the procedure and fallback process, then spun up a new VM to test it out before trying it on prod. Everything went smoothly, except that at the moment I had to shut down and delete the newly created VM, I instead shut down the original prod VM because they had similar names.

Put everything back in place, and eventually resized the original prod VM, but not without almost suffering a heart attack. At least I didn't go as far as deleting the actual database server :D

[–] surewhynotlem@lemmy.world 15 points 8 months ago

I removed the proxy settings from every user in the company. Over 80k people without Internet for the day.

[–] EmasXP@lemmy.world 14 points 8 months ago (1 children)

Two things pop up

  • I once left an alert() asking "what the fuck?". That was mostly laughed upon, so no worry.
  • I accidentally dropped the production database and replaced it with the staging one. That was not laughed upon.
[–] TwanHE@lemmy.world 14 points 8 months ago

Crashed an important server because it didn't have room for the update I was trying to install. Love old Windows servers.

[–] finkrat@lemmy.world 13 points 8 months ago* (last edited 8 months ago)

Extracted a sizeable archive to a pretty small root/OS volume

[–] futs@lemmy.world 12 points 8 months ago

Advertised an OS deployment to the 'All Workstations' collection by mistake. I only realized after 30 minutes when people's workstations started rebooting. Worked right through the night recovering and restoring about 200 machines.

[–] BestBouclettes@jlai.lu 12 points 8 months ago

I was still a wee IT technician and I was supposed to remove some cables from a patch panel. I pulled at least two cables that were used for iSCSI from the hypervisors to the storage bays. During production hours. Not my proudest memory.
