Hard to imagine a more intense signal that a website is a rugpull than banning users for trying to delete their own posts

https://www.tomshardware.com/tech-industry/artificial-intelligence/stack-overflow-bans-users-en-masse-for-rebelling-against-openai-partnership-users-banned-for-deleting-answers-to-prevent-them-being-used-to-train-chatgpt

Like just incredible "burning the future to power the present" energy here

@mcc So developers will stop sharing information on #StackOverflow and future #Copilot and friends will be forever stuck in the past, answering questions about historically relevant frameworks and languages.
#LLM #StuckOverflow
@chris Yeah. But for this to be true, we need a Stack Overflow replacement. And when Reddit went evil, the move to Lemmy didn't seem to succeed as well as the move from Twitter to Mastodon.
@mcc IIRC Mastodon is older than Lemmy and the current move to Mastodon/Fedi happened in multiple waves, so it may be too early for higher expectations.
For Stack Overflow I expect some degradation of quality since they accept "AI"-generated content. This may additionally frustrate high-quality authors and motivate them to leave. We'll see.
What would a federated stack overflow look like if we were to invent it?

@chris I don't know. It's an interesting question because Stack Overflow is inherently more search-focused than Lemmy or Mastodon.

A good model for a distributed/ownerless SO might wind up looking more like Bluesky than Mastodon.

@chris And, of course, there's the weird element that the SO license *already* does not permit AI on a facial reading, and a distributed SO would probably be *easier* to scrape than the centralized one. So you're not actually preventing AI exploitation, you're only punishing one corporation (SO) for the AI bait-and-switch.
@mcc I personally see less of a problem with scraping a federated pool of knowledge, but I absolutely hate that Stack Overflow now owns this knowledge and can keep people from using it while selling "AI" as a service to them.

@chris I suppose one thing to consider is that if a federated pool of knowledge is CC-BY-SA, then we only need a court ruling that OpenAI violates CC-BY-SA and the federated pool becomes AI-safe. Whereas SO can change (or already has changed) the TOS so they own rights to relicense all content.

…but of course, CC-BY-SA is also incredibly inconvenient for a SO clone because everyone will generally want to copypaste sample code!
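To make the inconvenience concrete: CC-BY-SA's attribution requirement means that even a tiny pasted snippet nominally needs a provenance notice carried alongside it. A minimal sketch of what that looks like in practice (the author, answer URL, and the snippet itself are all hypothetical, invented here for illustration):

```python
# Adapted from a hypothetical Stack Overflow answer.
# Author: user "example" (hypothetical)
# Source: (hypothetical answer URL would go here)
# License: CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
# Changes: renamed variables for clarity.
def chunked(seq, n):
    """Yield successive n-sized chunks from seq."""
    for i in range(0, len(seq), n):
        yield seq[i:i + n]
```

Carrying (and keeping accurate) that kind of header for every few-line snippet, plus the share-alike question of what license the surrounding file must then have, is the ambiguity being described above; CC0 sidesteps it entirely.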

@mcc @chris Practically speaking, duplicating a single CC-BY-SA code snippet is never going to be actionable, because the damages payable would be minuscule. There's also a strong argument to be made that a whole software package is not a derivative work of a small snippet, although I wouldn't want to be the one paying for that judgement.
@womble @chris As a person putting up sample code, I want that sample code to be useful to other people. I think the license should be picked to maximize that utility. The way I see it, one of the ways to maximize the utility is to make the license *unambiguous*. If the recipient has to *wonder* whether they can use the code, I am causing them unnecessary problems even if they eventually do use the code.
@mcc there is that. Finding a licence wording that explicitly allows the "good" uses, without allowing the "bad" uses, that doesn't have a billion unintended consequences, is probably something beyond human capacity. Quick, get an AI to write it!
@womble I actually do generally use CC0 these days if it's meant to be sample code rather than "open source".