[beta] degoog - search engine aggregator

https://lemmy.world/post/44038358

My biggest issue with SearXNG is that the Google engine gets rejected all the time. But that’s because the instance gets rate limited, no? Do you deal with that more effectively somehow? Is it even possible?

Hey! Well, aside from rotating between multiple user agents, doing a cheeky automatic retry, and having a “retry” button next to each engine, I also heavily cache results, so for 12 hours the same query will give you the same results from cache. You can always invalidate the cache from the settings.
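The 12-hour result cache described above could look something like this minimal sketch (a plain in-memory TTL cache; the class and method names are illustrative, not degoog’s actual implementation):

```python
import time


class TTLCache:
    """Tiny in-memory cache: identical queries return the cached
    results until the entry expires (12 hours by default, as in
    the behavior described above)."""

    def __init__(self, ttl_seconds=12 * 3600):
        self.ttl = ttl_seconds
        self._store = {}  # query -> (stored_at_timestamp, results)

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        stored_at, results = entry
        if time.time() - stored_at > self.ttl:
            del self._store[query]  # expired, drop it
            return None
        return results

    def put(self, query, results):
        self._store[query] = (time.time(), results)

    def invalidate(self):
        """Rough equivalent of the 'invalidate cache' settings button."""
        self._store.clear()
```

The upshot: repeated queries never hit Google (or any engine) again within the TTL window, which directly cuts the request rate the upstream engines see.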

Another thing you can do is add multiple proxies that rotate on each request: traffic goes through the proxy URLs and you don’t get hit with rate limiting (again, proxy settings are in the settings tab; you can find free ones around).
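The rotation itself can be as simple as cycling through the configured proxy list round-robin, so each outgoing request leaves from a different IP. A sketch (the proxy URLs and function name are made up for illustration):

```python
import itertools

# Hypothetical proxy list; in degoog these come from the settings tab.
PROXIES = [
    "http://proxy-a.example:8080",
    "http://proxy-b.example:8080",
    "http://proxy-c.example:8080",
]

# itertools.cycle loops over the list forever.
_proxy_cycle = itertools.cycle(PROXIES)


def proxy_for_next_request():
    """Return the next proxy round-robin, so consecutive search
    requests go out through different proxies."""
    return next(_proxy_cycle)
```

Whatever HTTP client the app uses would then route each request through `proxy_for_next_request()`, so no single IP accumulates enough requests to trip the engine’s rate limiter.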

So far I haven’t been rate limited by Google yet. I’m not gonna claim it won’t happen, as it probably will (it’s the nature of the tool), but fingers crossed, so far so good.

I host mine at home and it’s only available on my tailnet. Haven’t had any rate limiting since.

Yes, I think that’s also key: the more people using the same instance, the more likely it is to get banned. But again, rotating proxies are key to this, I suppose.

Echoing what @[email protected] said, I self-host it and I only have one user: me.

Turn on the limiter and link token, and that’ll stop it in SearXNG. This is why most instances don’t work: they don’t turn on the existing anti-bot measures.
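For reference, enabling the limiter is roughly this in SearXNG’s settings.yml (key names and the Redis URL are illustrative and can vary by version, so check the linked docs for yours; newer releases use Valkey instead of Redis):

```yaml
# settings.yml (sketch): the limiter needs a Redis/Valkey backend
server:
  limiter: true

redis:
  url: "redis://localhost:6379/0"
```

The link token part is toggled separately in limiter.toml, by setting `link_token = true` under the `[botdetection.ip_limit]` section.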

https://docs.searxng.org/src/searx.botdetection.html

Bot Detection — SearXNG Documentation (2026.3.9+d4954a064)

Why do these docs look like code? So odd… For my dumb brain trying to absorb this from a phone, is there a doc for what to enable in the Docker container that I’m not seeing?
There are some guides online on how to do it; you’ll need to edit some configs and also have a Redis container up.
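For the Docker setup, a minimal compose sketch might look like this (service names, volume paths, and image tags are illustrative; adapt to your own deployment):

```yaml
services:
  searxng:
    image: searxng/searxng
    ports:
      - "8080:8080"
    volumes:
      - ./searxng:/etc/searxng   # settings.yml and limiter.toml live here
    depends_on:
      - redis

  redis:
    image: redis:alpine
    restart: unless-stopped
```

With the Redis service up, point SearXNG at it (e.g. `redis://redis:6379/0` in settings.yml, since compose service names resolve as hostnames on the shared network) and the limiter has the backend it needs.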
Ahh, definitely not my problem after figuring this out (couldn’t find a guide, but looked at the source). My instance is private, so this isn’t impacting my non-Google results.