This post was submitted on 17 Apr 2024

Web Development


Fediverse traffic is pretty bursty, and sometimes there will be a large backlog of Activities to deliver to your server, each of which involves a POST. This can hammer your instance and overwhelm the backend's ability to keep up. Nginx provides a rate-limiting module that can accept those POSTs at full speed and proxy them through to your backend at whatever rate you specify.
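For reference, here is a minimal sketch of what such a configuration might look like, using nginx's limit_req module. The zone name "inbox", the /inbox location, the backend address 127.0.0.1:8536 and the rate/burst numbers are illustrative assumptions, not values taken from the post; adjust them to your own instance.

http {
    # One bucket per sending IP; allow an average of 5 requests per second,
    # tracked in a 10 MB shared-memory zone named "inbox".
    limit_req_zone $binary_remote_addr zone=inbox:10m rate=5r/s;

    server {
        listen 443 ssl;
        server_name your-instance.example;
        # ssl_certificate / ssl_certificate_key omitted for brevity

        # ActivityPub deliveries arrive as POSTs to the inbox endpoint.
        location /inbox {
            # Without "nodelay", up to 300 excess requests are queued inside
            # nginx and drained to the backend at the configured 5r/s.
            limit_req zone=inbox burst=300;
            proxy_pass http://127.0.0.1:8536;
        }
    }
}

Requests beyond the burst size are rejected with a 503 by default, so the sending instance's retry logic still acts as a backstop.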

top 3 comments
[–] chiisana@lemmy.chiisana.net 2 points 2 months ago (1 children)

Do individual users send activities directly? I thought only users of your instance and remote federated instances send traffic to your instance, so this change would only affect data coming through from the larger instances?

Also, what happens to the original requests while they sit in the queue? Say, for example, large-instance.com sends 11 updates to your instance; while your backend is processing the first 10, what happens to the 11th? Does it get put on hold while your backend churns, or does it get a 200 OK response even though the request may still fail later? Neither seems ideal: if the request is held in a queue waiting for a response while you churn, or worse yet, if your backend fails and your buffer waits X seconds to time out each request, you're going to hurt global federation by holding up a spot in their outgoing federation queue; if it's answered with a 200 OK and your backend eventually fails, you'd lose data because the other instance wouldn't know it needs to retry.

[–] rimu@piefed.social 2 points 2 months ago* (last edited 2 months ago) (1 children)

An HTTP 200 response is sent immediately. There is a small risk of data loss, but only while rate limiting is actually happening. Most instances behave well and only send activities at a sensible rate, but when things go wrong we need to avoid the bad effects caused by instances that are misconfigured or under attack.

It's not a perfect solution. But the alternative - failing to accept some activities, causing them to be retried later and us to fall further and further behind - is worse.
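In nginx terms, that alternative corresponds to rejecting excess deliveries instead of delaying them, e.g. a location block like the following (again a sketch, with the zone name, burst size and backend address assumed rather than taken from the thread):

location /inbox {
    # "nodelay" passes requests within the burst through immediately and
    # rejects anything beyond it with limit_req_status (503 here), leaving
    # the sending instance to queue the activity and retry it later.
    limit_req zone=inbox burst=20 nodelay;
    limit_req_status 503;
    proxy_pass http://127.0.0.1:8536;
}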

[–] chiisana@lemmy.chiisana.net 1 points 2 months ago

Thanks for the clarification! I'm inclined to agree here: a small risk of data loss is preferable to failing to receive updates altogether during a peak burst, which would probably be much worse.