What’s a basic way to ID these things in the wild?

Most of them comment in “perfect” grammar, and their comments run two to three times longer than a normal comment, every single time.
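Purely as an illustration (the threshold and the example comments are made up, not from any real detector), that length tell could be sketched as a crude heuristic:

```python
from statistics import median

def flag_suspect_comments(comments, ratio=2.0):
    """Flag comments that run well past the median length.

    `comments` is a list of comment strings; `ratio` is the hypothetical
    multiplier (the "two to three times longer" tell described above).
    """
    typical = median(len(c) for c in comments)
    return [c for c in comments if len(c) > ratio * typical]

# Example: one comment is far longer than the rest and gets flagged.
sample = ["nice post", "agreed", "lol", "x" * 100]
print(flag_suspect_comments(sample))
```

Length alone would obviously catch plenty of wordy humans too; it only works as one signal among several.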

There are also specific grammatical tells that show up often.

So if I understand correctly, these are accounts that have all (or most) of their content generated by LLMs? If so, I wonder what the ‘bridge’ is. Is it someone manually copying content over, or is it automated somehow?

In order for client applications (Voyager, Interstellar, etc.) to be able to interact with the server, the latter needs to have an API. Those who write the bots just attach their bots to the other end of that API, instead of a client.
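As a sketch of what “attaching to the other end” means (the instance URL, endpoint path, and payload fields here are assumptions modelled on Lemmy-style APIs, not verified against any real instance), a bot simply builds the same HTTP request a client app would:

```python
import json
import urllib.request

# Hypothetical instance and endpoint; a real API's paths and auth will differ.
API_URL = "https://example-instance.tld/api/v3/comment"

def build_bot_request(content, post_id, token):
    """Build (but don't send) the same POST a legitimate client would make."""
    body = json.dumps({"content": content, "post_id": post_id}).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            # Token auth, presented exactly as a real client would present it.
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

req = build_bot_request("Totally organic opinion!", 42, "fake-token")
print(req.method, req.full_url)
```

From the server's perspective this request is indistinguishable from one made by Voyager or Interstellar, which is the whole problem.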

Client applications, among other things. Getting rid of the API would be a messy answer, and then the bad actors could just automate interacting with the web UI instead, the same way the libraries used for testing web pages do.

The web UI may not have a standard API, but it is just HTTP calls like everything else. You can make it harder for bots, but if a browser can do it, a bot can do it without a browser.
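To make that concrete (the header values below are illustrative, not copied from any particular browser), nothing on the wire structurally distinguishes a browser's request from a scripted one, because a bot can set every header a browser would:

```python
import urllib.request

# Headers a real browser might send (values here are illustrative).
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.5",
}

def browser_like_request(url):
    """A scripted GET wearing a browser's headers; on the wire it is the
    same HTTP request the browser itself would produce."""
    return urllib.request.Request(url, headers=BROWSER_HEADERS)

req = browser_like_request("https://example-instance.tld/post/123")
# Nothing in the request itself betrays that no browser was involved.
print(sorted(req.header_items()))
```

Defences like rate limits and behavioural fingerprinting raise the cost, but they can't close the gap entirely.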

And even if all the big instances got really good at stopping bot accounts, federation means the spammers can just spool up spam servers of their own too.

It’s a scaling problem that I fear will end with the fediverse being as spammy as email (one of the original federated communication platforms).