this post was submitted on 01 Nov 2023
23 points (100.0% liked)

A bit more context here, since you might wonder why customers can cause Sev1s.

Well, I work for a database technology company and we provide a managed service offering. This offering has SLAs that essentially enforce a 5-minute response time for any "urgent" issue.

A common urgent issue is that a customer suddenly loads in a bunch of new data without informing us, which causes the cluster to stop accepting writes.

It's to the point where most, if not all, urgent pages result in some form of cluster scaling.

Since this is customer-driven behavior, there is no real way to plan for it, and since these particular customers have special requirements (and thus less ability to automate scaling operations), I'm unsure whether there is any recourse here.

It's gotten to the point that it doesn't even feel like an SRE team anymore; we should just be called "on-demand scaling agents," since we're constantly trying to scale ahead of our customers.

All in all, I'm starting to feel like this is a management/sales-level issue that I cannot possibly address. If we're selling this managed service as essentially "magic" that can be scaled whenever customers need it, then it seems like we're being set up for failure at the organizational level. Not to mention that we're not being smart about the costs of scaling and factoring them into these contracts.

So, fellow SREs, have you had to have this conversation with a larger org? What works for something like this? What doesn't? Should I just seek greener pastures at this point?

P.S. - Posted in c/Programming due to the lack of a c/SRE.

[–] deegeese@sopuli.xyz 5 points 1 year ago (1 children)

Queues must stop accepting more work before they bring down the application.

If the customer wants to write too much data, start rejecting jobs.
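
A minimal sketch of that "reject jobs instead of falling over" idea, using a bounded in-memory queue; the queue size, job shape, and error type are invented for illustration, not anything from the product being discussed:

```python
import queue

# Bounded backlog: once it is full, new write jobs are rejected instead of
# piling up until the service falls over. Size is an arbitrary example.
MAX_PENDING_WRITES = 10_000
pending_writes = queue.Queue(maxsize=MAX_PENDING_WRITES)

class WriteRejectedError(Exception):
    """Raised when the system is at capacity and sheds load."""

def submit_write(job) -> None:
    try:
        pending_writes.put_nowait(job)  # accept only if there is room
    except queue.Full:
        # Fail fast with an error the client can retry with backoff,
        # rather than accepting work the cluster cannot absorb.
        raise WriteRejectedError("write queue full, retry later") from None
```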

[–] th3raid0r@tucson.social 2 points 1 year ago (1 children)

Our database is actually pretty graceful. It just goes into a stop-writes status. You can still read any data, and resolving the situation is as easy as scaling the cluster or removing old records. By no means is the database down or inoperable.

Essentially, our database is working as designed. If we rate-limited it further, we'd have less of a product to sell. The main features we sell are its IOPS and resiliency.
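
For anyone picturing that behavior, here is a toy sketch of a store that refuses writes past a byte budget but keeps serving reads and deletes; this is an illustration only, not the actual product's implementation, and all names and limits are made up:

```python
class StopWritesError(Exception):
    """Storage budget exceeded: writes refused, reads still served."""

class ToyStore:
    def __init__(self, capacity_bytes: int):
        self.capacity_bytes = capacity_bytes
        self.data: dict[str, bytes] = {}

    def _used(self) -> int:
        return sum(len(v) for v in self.data.values())

    def write(self, key: str, value: bytes) -> None:
        if self._used() + len(value) > self.capacity_bytes:
            # Graceful degradation: refuse the write, stay up for reads.
            raise StopWritesError("stop-writes: scale up or purge old records")
        self.data[key] = value

    def read(self, key: str) -> bytes:
        return self.data[key]        # reads are unaffected by stop-writes

    def delete(self, key: str) -> None:
        self.data.pop(key, None)     # freeing space re-enables writes
```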

Further, this is just for a specific customer; it has no impact on any other customers or any sort of central orchestration. Generally speaking, the stop-writes status only ever impacts a single customer and their associated applications.

Also, customers can be very stingy with the clusters they are willing to buy. We're actually on poor terms with a couple of our customers who just refuse to scale and expect us to magic their cluster into accepting more data than it's sized for.

[–] deegeese@sopuli.xyz 2 points 1 year ago (1 children)

There is a fundamental rate limit based on cluster performance.

Your application is not aware of this limit, so it pretends to the client that there is no limit, then falls over.

Since you can’t make that number be infinity for your stingy customers, you need to send a rate limit exceeded error, even if you won’t admit to having an actual IOPS limit.

Surely there are cluster sizing guidelines you can point to once someone fills the queue?
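
On the sizing-guideline point, a back-of-the-envelope estimate like the one below is the kind of thing one could point customers at; the replication factor and headroom values here are assumptions, not any vendor's published guidance:

```python
def required_disk_gb(write_mb_per_s: float,
                     retention_days: float,
                     replication_factor: int = 3,
                     headroom: float = 1.3) -> float:
    """Rough raw disk needed to sustain a write rate over a retention window."""
    ingested_gb = write_mb_per_s * 86_400 * retention_days / 1024
    return ingested_gb * replication_factor * headroom

# Example: 2 MB/s sustained for 30 days with 3x replication and 30% headroom
print(f"{required_disk_gb(2.0, 30):,.0f} GB")  # -> 19,744 GB
```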

[–] th3raid0r@tucson.social 1 points 1 year ago (1 children)

"Your application" - the customers you mean. Our DB definitely does it's own rate limiting and it emits rate limit warnings and errors as well. I didn't say we advertised infinite IOPs that would be silly. We are totally aware of the scaling factors there and to date IOPs based scaling is rarely a Sev1 because of it. (Oh no p99 breached 8ms. Time to talk to Mr customer about scaling up soon)

The problem is that the resulting cluster is so performant that you could load in 100x the amount of data and not notice until the disk fills up. And since these are NVMe drives on cloud infrastructure, they are $$$.

So what usually happens is that the customer fills up the disk arrays so fast that we can't scale the volumes/cluster quickly enough to avoid stop-writes, let alone get feedback from the customer in time. That's now the primary reason to get paged these days.

We generally catch gradual disk-space increases from normal customer app usage; those give us hours to respond, and our alerts are well tuned. It's the "Mr. Customer launched a new app and didn't tell us, and now they've filled the disks in one hour flat" cases that I'm complaining about.
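
A rough sketch of the kind of check that catches a sudden fill: extrapolate recent disk-usage samples linearly and page when the projected time-to-full is shorter than the time it takes to scale the volumes. The two-hour lead time and the sampling source are assumptions for illustration:

```python
from typing import Optional, Sequence, Tuple

def hours_until_full(samples: Sequence[Tuple[float, float]],
                     capacity_bytes: float) -> Optional[float]:
    """samples: (unix_seconds, used_bytes) pairs, oldest first."""
    (t0, used0), (t1, used1) = samples[0], samples[-1]
    growth_per_s = (used1 - used0) / (t1 - t0)
    if growth_per_s <= 0:
        return None                      # flat or shrinking: no fill risk
    return (capacity_bytes - used1) / growth_per_s / 3600

def should_page(samples: Sequence[Tuple[float, float]],
                capacity_bytes: float,
                scale_lead_time_h: float = 2.0) -> bool:
    # Page only when the projected fill would beat how fast we can scale.
    eta = hours_until_full(samples, capacity_bytes)
    return eta is not None and eta < scale_lead_time_h
```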

[–] joemo@lemmy.sdf.org 1 points 1 year ago

So it sounds like we have the root cause of the issue: the customer connects a new app to the DB without telling you.

This seems like the problem you need to solve.

There may be some technical solution there, but you could also suggest a change to the contracts (which would be very difficult to push through): "We provide support for connections to the DB from XYZ app. If you make changes to what's writing to the DB, a consultation with our team is required to ensure sizing is correct."

The issue here is that this would make life harder for C-levels and sales, because it doesn't allow clients to walk all over support.