Reddit has struck a $60m deal with Google that lets the search giant train AI models on its posts
I don’t think it’s terrible; the opposite, really. It’s likely incredibly useful for creating LLMs with specific knowledge or behavior. The categorization into subreddits alone opens up so many possible applications. Imagine, for example, training a conversational AI on data from specific subreddits like science, askscience, biology, physics, astronomy, … or on posts by users who frequent such subreddits, in order to create a sort of academic AI.
You could do the same for all sorts of topics: want a sports commentator AI, use sports-related subreddits; an AI that supports you in writing a novel, use creative writing subreddits, etc. Don’t want your AI to spew political opinions, exclude political subreddits from your data; don’t want it to use offensive language, only use well-moderated subreddits, etc.
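The curation idea above is essentially an include/exclude filter over subreddits. A minimal sketch, assuming a hypothetical record format where each post is a dict with "subreddit" and "text" fields (the actual format of the licensed data is not public):

```python
# Hypothetical subreddit-based dataset curation: keep posts from an
# allow-list of subreddits and drop posts from a block-list.

ACADEMIC = {"science", "askscience", "biology", "physics", "astronomy"}
POLITICAL = {"politics", "worldpolitics"}  # example exclusion list

def curate(posts, include=None, exclude=frozenset()):
    """Yield the text of posts whose subreddit passes both filters.

    include -- if given, only subreddits in this set are kept
    exclude -- subreddits in this set are always dropped
    """
    for post in posts:
        sub = post["subreddit"].lower()
        if sub in exclude:
            continue
        if include is not None and sub not in include:
            continue
        yield post["text"]

posts = [
    {"subreddit": "askscience", "text": "Why is the sky blue?"},
    {"subreddit": "politics", "text": "Hot take of the day"},
    {"subreddit": "physics", "text": "Rayleigh scattering, explained"},
]
print(list(curate(posts, include=ACADEMIC, exclude=POLITICAL)))
# → ['Why is the sky blue?', 'Rayleigh scattering, explained']
```

The same filter with a different allow-list gives the sports-commentator or creative-writing variants; swapping `include` for a moderation-quality score would cover the “only well-moderated subreddits” case.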
This presumes that Reddit is populated by so-called experts answering questions and posting in those subs.
But the overwhelming truth is that most people pretending to be experts are just regurgitating answers they heard in another reddit post, and so on, and so on.
You might as well just train your AI on the “confidently incorrect” sub and call it a day.
AI:
😭 I’m trying
I wonder if Google’s unlimited legal budget plays a role. Not a lawyer, so probably way off here…
But, for example, reddit’s success in part depends on Google ingesting their data — reddit shows up in Google searches all the time, which can only happen if Google uses reddit’s content. So reddit telling Google “you can’t use our content” doesn’t work, and they need to say something like, “you can use our content for search results but you can’t consume it as training data.”
This is a pretty straightforward statement/request/demand, but one could imagine Google lawyers maliciously complying and throwing their hands up dramatically, claiming “well we use some amount of AI in our search results, so if we can’t use your content for AI training then we can’t risk using it for search results.” Which would, I imagine, really, really hurt reddit (no Google results would be catastrophic I suspect).
So, perhaps the “low” 60M figure is just Google using their leverage.
Or not. As a random person on the Internet, I can say I’m probably not contributing anything meaningful here…
I’m personally curious whether Reddit actually has any ability to protect that database. I don’t remember Reddit TOS, but usually those things give them license to use and copy the data, maybe even to sell it, but not actually the copyright on it. So if someone made a Reddit scraper and copied the comments, wouldn’t only the actual commenter be able to sue?
$60M may be reflecting that, in that it’s more a convenience fee to shield Google against individual Redditors going after them than something that Reddit itself could actually sue over.
Considering it’s all full of Nazis and bots, and that once you filter them all out you’re left with reposts and low-quality memes, followed by comments that represent the hostile side of each of us… I’d say anything over $5 is a good deal for spez.
Now, I hope Google uses this data exclusively for detecting inappropriate answers. Can you imagine it giving answers based on the endless threads of “I’m not your mate, bro; I’m not your bro, dude…”?