Over at Crossplay, I profiled a company using artificial intelligence to monitor what kids are saying—and what's being said to them—while playing online games. Would you feel comfortable using something like this? https://patrickklepek.substack.com/p/this-company-is-betting-ai-can-help
This Company Is Betting AI Can Help Protect Your Kid While They Play Games Online

ProtectMe silently monitors what your child says—and what people say to them—and reports back. But at what point does software like this become an invasion of privacy?

Crossplay

@patrickklepek
"If an event rises to a certain threat level, a parent receives a text message. The incidents are reviewed by Kidas, to prevent the system from spamming false positives."

AI flagging with human verification sounds like a good pattern for applying machine learning, but the reports don't include any of the offending content. Ideally, the kid would have an account that shows the reports alongside the flagged content, so they can give context in conversations with their parents.
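For what it's worth, the pattern being praised here (a model flags, a human verifies before anyone gets texted) is simple to state in code. This is a minimal sketch of that routing only; the threshold, the `Incident` fields, and the outcome names are all assumptions, since Kidas hasn't published how its pipeline actually works.

```python
from dataclasses import dataclass

# Minimal sketch of "AI flags, human verifies" routing. Every name and
# threshold here is hypothetical; Kidas has not published its design.
FLAG_THRESHOLD = 0.8  # assumed: model score above which an incident matters

@dataclass
class Incident:
    player: str
    threat_score: float        # e.g. a classifier's estimate of real harm
    human_confirmed: bool = False

def route(incident: Incident) -> str:
    """Decide what happens to a model-flagged incident."""
    if incident.threat_score < FLAG_THRESHOLD:
        return "discard"            # model noise never reaches a person
    if not incident.human_confirmed:
        return "queue_for_review"   # the human check that keeps the system
                                    # from spamming parents with false positives
    return "text_parent"            # confirmed incident: send the alert

print(route(Incident("xXxGameRxXx", 0.95)))        # queue_for_review
print(route(Incident("xXxGameRxXx", 0.95, True)))  # text_parent
```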

@patrickklepek
I wonder how effective this tool would be if it sent the alerts to the player instead. It seems helpful for children to get a "was that hate speech, report?" or "xXxGameRxXx has been an asshole tonight" popup from their Xbox: it would teach them to separate acceptable speech from toxic behaviour. You don't want kids to internalize insults slung at them.

But maybe it's too hard to avoid it seeming like nanny Clippy.
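Purely to illustrate that suggestion (the product as described only texts parents), a player-facing prompt could be as small as this sketch; every name and category here is invented:

```python
# Hypothetical player-facing prompt, per the suggestion above. Nothing like
# this exists in the product as described; all names are made up.

def player_prompt(sender: str, category: str) -> str:
    """Turn a flagged message into a question the player answers themselves."""
    if category == "hate_speech":
        return f"That message from {sender} looked like hate speech. Report it?"
    # Gentler framing for garden-variety toxicity, so the insult is labeled
    # as the sender's behaviour rather than something true about the player.
    return f"{sender} has been toxic tonight. Mute or report?"

print(player_prompt("xXxGameRxXx", "hate_speech"))
print(player_prompt("xXxGameRxXx", "toxicity"))
```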