So these AI chats are pretty cool.

One thing that's going to cause problems, though, is that they sound really convincing while sometimes being extremely wrong

A zillion ways this revolution is going to be great.

One way it's going to suck, though, is in places where human interaction is a *useful* friction / proof-of-work. Once that friction is gone, those systems will get really overloaded

A few examples:

1. Online troll bots & fake personalities
It's going to get a *lot* harder to distinguish bots from people, and much easier to create entirely fictitious, credible online personalities to troll, harass, or commit crimes

2. Persuasive letters from e.g. constituents to regulators
The volume of (~sensible, unique) letters was a valid indicator of sentiment. Soon it won't be.

3. Ransomware victim communications & negotiations
Used to be one of the few parts of the operation that was costly to scale. Not for long.

@Pwnallthethings

So you're saying we could automate discussions with reinstated haters on the other place

even more diabolical than shadowbans