@nixCraft we don't have to be concerned about this on #Mastodon. Although someone could develop the code, it would never be accepted at the repository level, let alone at the network level.
This is one of the reasons why Mastodon and #TheFediverse are so important.
@nicholasr @nixCraft I don't think that's true. The code is already there, since Mastodon has an API that people can (and do!) use for bots. Using that for an AI persona, instead of, say, Picard insights, is a small step.
There's also the aspect that believing you're not vulnerable to something makes you inherently vulnerable to that very thing, because you let your guard down.
And third, just because something is open source does not mean that the code is rigorously reviewed.
I agree the capability for the exploit is present in the #API. Someone could develop client software that acts as an AI bot. It might work initially, but as soon as the AI is detected, the account would be blocked. And if many AI agents accumulate on a #Mastodon instance, that instance itself will eventually be blocked by others. This is part of what I mean by the network blocking it.
@nicholasr @nixCraft There's no exploit or vulnerability necessary for implementing an AI bot for Mastodon. There's literally an officially documented interface available for writing bots, and writing one that uses an LLM to pretend it's not a bot should be pretty trivial.
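To make that concrete, here's a rough sketch of how little glue code it takes, assuming the Mastodon.py library; generate_reply() is a hypothetical placeholder for whatever LLM API you'd actually call:

```python
# Minimal sketch of an LLM-backed reply bot using Mastodon.py.
# Nothing here touches an exploit: it's all documented API surface.
from mastodon import Mastodon

def generate_reply(prompt: str) -> str:
    # Hypothetical placeholder: call an LLM of your choice here.
    return "..."

mastodon = Mastodon(
    access_token="YOUR_APP_TOKEN",          # an ordinary app token
    api_base_url="https://example.social",  # the instance that issued it
)

# Reply to every mention in the bot's notifications.
for notification in mastodon.notifications():
    if notification["type"] != "mention":
        continue
    status = notification["status"]
    reply = generate_reply(status["content"])
    mastodon.status_post(reply, in_reply_to_id=status)
```

That's essentially the whole loop; the only moving part is the text generator.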
Relying on moderation to get rid of bots is probably wishful thinking. Users can block the bot individually, but I'm not aware of any federated blocklist that multiple instances share.