Yoel Roth, a former Twitter employee, writes: "For anyone keeping track, this isn't even the first time they've completely broken the site by bumbling around in the rate limiter. There's a reason the limiter was one of the most locked down internal tools. Futzing around with rate limits is probably the easiest way to break Twitter."

Anyone want to explain to me why there's so much fragility around the Twitter rate limiter? I get that rate limiters can be dangerous, and I understand why a rate limiter without exponential backoff is likely to be bad news, but I don't get why this one is apparently such a haunted graveyard. What's the non-obvious complexity or fragility that I'm missing?
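Since exponential backoff keeps coming up: for anyone unfamiliar, here's a minimal sketch of client-side exponential backoff with "full jitter" (the function name and parameters are mine, not anything from Twitter's actual limiter). The idea is that each retry waits a random amount up to an exponentially growing cap, so a crowd of failing clients doesn't retry in lockstep.

```python
import random

def backoff_delays(base=0.5, factor=2.0, max_delay=60.0, attempts=6,
                   jitter=random.random):
    """Exponential backoff with full jitter: the wait before retry n is a
    random fraction of min(max_delay, base * factor**n)."""
    delays = []
    for attempt in range(attempts):
        cap = min(max_delay, base * factor ** attempt)
        delays.append(jitter() * cap)
    return delays
```

Without the jitter term, every client that failed at the same moment retries at the same moment, which is exactly the synchronized hammering a rate limiter is supposed to prevent.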

#twitter #SoftwareEngineering

@austern I assume it's possibly because of the heavy influx of tweets. After all, it's still really easy to make 1,000 tweets per second. Messing with rates while those 1,000 tweets come in isn't a good idea. It's like messing with the intake on your pipes to make the flow slower.
@austern I’m going to hazard a guess that it’s a mixture of “eventual consistency” combined with a #SuperGenius pulling round numbers out of his arsehole.
@futuresprog Yeah, "eventual consistency" has the right kind of shape for an answer. I could believe in an overly complicated system that behaves disastrously if different parts of it have different ideas about the rate limit, and I could believe in a system like that being a haunted graveyard that nobody dares to fix because a fix would itself have to involve a progressive rollout and would risk putting things into an inconsistent state.
@austern I think it's because the normal reaction of people, apps, and websites to “I’m not getting any result” is “try again, again, again”.
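That "try again, again, again" dynamic is easy to see in a toy model (all numbers here are made up for illustration, nothing to do with Twitter's real capacity): if a server can handle a fixed number of requests per tick and every rejected client blindly retries next tick, offered load grows without bound, whereas clients that give up (or back off) keep load flat.

```python
def offered_load(new_per_tick, capacity, ticks, retry):
    """Toy retry-storm model: each tick, `new_per_tick` fresh requests
    arrive, the server handles up to `capacity`, and rejected requests
    either retry next tick (retry=True) or give up (retry=False)."""
    pending = 0
    loads = []
    for _ in range(ticks):
        pending += new_per_tick
        loads.append(pending)          # total offered load this tick
        served = min(pending, capacity)
        pending = pending - served if retry else 0
    return loads
```

With arrivals slightly above capacity, blind retries make the backlog compound every tick; the same arrival rate with no retries just produces a constant trickle of failures.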
@austern Seemingly obvious guess: lack of any backpressure mechanisms?
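For concreteness, one simple backpressure-ish mechanism is a token bucket that sheds load immediately instead of queueing it (this is a generic sketch of the standard technique, not a claim about how Twitter's limiter works; the class name and fake-clock parameter are mine):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: refill at `rate` tokens/second up to
    `capacity`, and refuse requests outright when the bucket is empty,
    so overload is shed rather than buffered."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.now = now              # injectable clock, for testing
        self.last = now()

    def allow(self):
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

The crucial part is what a rejection tells the caller: pairing the refusal with a retry-after hint gives clients a backpressure signal, while a bare failure invites the immediate-retry loop described above.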