I’m not concerned about AI outcompeting competent writers or impacting my career directly. I am deeply concerned about AI swamping submission systems and destroying the ability of editors and readers to find the next generation of writers.

AI is very much a danger to the long-term health of the field, not because it competes with quality, readable fiction, but because it can create dreck in previously unimaginable quantities and drown submission systems and indie publishing in shit.

@KellyMcC Sorry, no. Perhaps at some point in the future, but if by AI you mean the current LLMs, sorry, no.

LLMs like GPT-3 (the model behind ChatGPT) have a context window of around 2K-4K tokens, which works out to somewhat fewer words.

So yes, LLMs are made to generate human-style text, and they are good at that. But they can only take so much context into account when predicting the next token.

So anything longer than a short story or a few paragraphs will have dramatic continuity issues.
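The limitation above can be sketched in a few lines. This is not a real LLM, just a toy illustration of a fixed context window: when the text grows past the window size, everything earlier simply falls out of view, so the model cannot stay consistent with it. The window size and the word-for-token simplification are assumptions for illustration only.

```python
# Toy sketch of a fixed context window (NOT an actual LLM).
# We pretend one word == one token for simplicity.
CONTEXT_WINDOW = 2048  # roughly GPT-3's window, per the post above

def visible_context(story_words, window=CONTEXT_WINDOW):
    """Return the only slice of the story the model can condition on
    when predicting the next word: the last `window` tokens."""
    return story_words[-window:]

# A ~10,000-word draft: a novelette, well past short-story length.
story = [f"word{i}" for i in range(10_000)]

ctx = visible_context(story)
# The model "sees" only the final 2048 words; the opening scenes,
# character names, and plot setup from earlier are simply gone.
print(len(ctx))          # 2048
print(ctx[0])            # word7952 -- nothing before this is visible
```

Anything established before the window's edge (a character introduced in chapter one, say) is invisible when the model generates the ending, which is exactly where the continuity problems come from.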

@yacc143 @KellyMcC Well, @clarkesworld is already getting an avalanche of LLM submissions, and it’s causing them a ton of extra work.

@skry @KellyMcC @clarkesworld Purely out of curiosity, how do they know which submissions are LLM based?

Sounds like these guys have solved a problem that not even OpenAI can solve for ChatGPT.

Would be great if they could share their genius method for detecting LLM output, one that doesn't have the serious double-digit false-positive and false-negative rates of existing detectors.

@yacc143 @skry @clarkesworld My understanding is that it's pretty obvious to a human reader*, but checking that way takes an enormous amount of time.

*I believe the reason is that it's good at creating a sentence that logically follows the last, and all right at doing that paragraph to paragraph, but absolute shit at creating whole stories that make sense as stories.