Setting aside copyright/commercial/other aspects, a thought about writing blog posts that LLMs train on. When people write posts, they (a) feel good about helping others, and (b) hope to get some credit and visibility for doing so.

When mediated through LLMs, there's no longer the satisfaction of knowing your consumer is a human (who might comment, thank, share, etc.), nor the cred that comes from people remembering the author, posting it on HN, etc.

That is, it totally destroys the incentive structure.

@shriramk That sounds like the outcomes of other kinds of plagiarism to me (at least the credit part).
@shriramk And the same applies to answering questions on StackOverflow.
[Linked paper: "Fair Social Contracts and the Foundations of Large-Scale Collaboration" (INET Oxford)]
@avandeursen @shriramk Interesting paper. Thanks for sharing.
@shriramk it’s petty but one thing I hate about this AI training future is that I no longer have any visibility into how my posts are doing. I used to enjoy checking my traffic logs from time to time and getting the satisfaction that, oh right, there are people who are finding what I’ve made and maybe even sharing it with other people. Now there’s no point; my blog gets so little human traffic that it’s indecipherable noise compared to what I know are LLM scrapers. It doesn’t dissuade me from blogging exactly but it does take away a part of it that used to feel rewarding.

@shriramk I'm always finding myself misunderstood.

It will be interesting to see whether machines are better at connecting my verbiage and weak coupling.

In general, I suspect people will have to differentiate themselves more, which includes flaws distinct to the human experience.