I’m not concerned about AI outcompeting competent writers or impacting my career directly. I am deeply concerned about AI swamping submission systems and destroying the ability of editors and readers to find the next generation of writers.

AI is very much a danger to the long term health of the field not because of competition for quality readable fiction but because of its ability to create dreck in previously unimaginable quantities and drown submission systems and indie publishing in shit.

@KellyMcC interesting. Knew that AI would raise the floor but not the ceiling. Did not consider the flood of dreck.
@KellyMcC a submission system that doesn't distinguish between humans and non-humans isn't a submission system. It's a commercial enterprise engine. Run by entities. Not people. I'll invite you to reconsider the use of the term AI by considering that you are both sapient and sentient. No machine in my experience, after over 30 years working with evolving neural networks, is actually intelligent.
@thecharmingcompany @KellyMcC Sure, any halfway competent editor will be able to distinguish the good stuff from AI-generated dreck after they've read it. The danger is that the quantity of submissions rises to unmanageable levels due to AI dangling the false promise of being able to submit stories without the effort of actually writing them.
@nxylas @thecharmingcompany @KellyMcC this is the situation I’m in at my day job. Yes, we absolutely can spot the dreck, but there is SO MUCH OF IT! It’s causing huge delays getting back to those human writers who actually put in the effort, and by then there’s a real risk they’ve moved on. I hate the thought that someone out there is giving up on writing just because too many others are trying shortcuts.
@alliepotts @nxylas @KellyMcC it happened in comics in the 90s. The best work never saw the editor's desk... because of shortcuts.
@alliepotts @nxylas @thecharmingcompany @KellyMcC I am more and more convinced that I've made the right choice sticking with self-pub. I have practically zero reach, but at least I know my work is out there and not moldering beneath a pile of AI dreck.
@nxylas @KellyMcC the pressure, ethically, should be on the submitter, no? To flag the source?
@thecharmingcompany @nxylas Sure, but I don't think the people submitting AI written works are going to be big on the whole ethical submission thing.

@nxylas @thecharmingcompany @KellyMcC editors you say...

...a dying job description, replaced by software.

@Mr @thecharmingcompany @KellyMcC I think AI will have to improve significantly before that happens. At the moment, it can make a decent fist of copy editing. But it's one thing to be able to spot spelling and grammatical errors, quite another to be able to tell the difference between a good story and a bad one, or offer suggestions for improving a story that's not quite there yet.
@nxylas @Mr @KellyMcC so perhaps it's far from intelligent.
@nxylas @Mr @thecharmingcompany @KellyMcC
Speaking as a career copy editor, I can tell you that spotting spelling and grammatical errors is the least of what a copy editor does.

@nxylas @KellyMcC Does this mean there's an opportunity to jump ahead of AI submission queues by creating a compelling, personalized & inventive submission package?

Derek Sivers wrote about the Captain T conspiracy theory submission pack:

https://sive.rs/capt

(And Nick, since I think I recognize your name - I'm imagining a demo CD that comes with a tiny glitter sparkle Porcupine toy, a little cardboard robot, and a postcard story about a Robot that won't Obey and wants to be a Porcupine.)


@thecharmingcompany @KellyMcC a machine doesn't need to be intelligent, just able to emulate intelligence.

Which is precisely the point we have reached.

As technology advances, emulation will far outstrip actual intelligence, something humanity (on average) seems to have a hard enough time demonstrating as it is.

It was only a few years ago that people were saying AI wouldn't be creative, and that of all the jobs threatened, artists and writers would be unaffected.

That didn't last long.

Imagine where things will be in another few years.

@thecharmingcompany LLMs obviously aren't intelligent in any meaningful way, but that's the term in common use, and I don't see a lot of point in trying to cram all of the reasons it's not a great term into a post discussing the impact of these systems on the artistic submission process. I could certainly write a thousand-word essay on why AI isn't, but it's at best peripheral to my point.

@KellyMcC
This is my concern about prose/poetry and art, exactly.
A publisher knows good, viable writing. They can tell when generic swill is being thrown at them. Same with art directors.
But the submission system is already swamped with legitimate writers and artists. The rate at which A.I. prompters can churn out generic, mediocre writing and art and submit it is leagues beyond what any human creator can match.

The system will have to change, and it will likely be worse for us.

@KellyMcC
It's just spam these A.I. prompters are delivering. And you know what happens in computing when spam is detected in a system? The source is blocked.

So this could mean that legitimate talent, both writers and artists, get NO chance at all, and we're back to the system of "you gotta know someone" to even get your work seen.

Which is much, much worse.

@KellyMcC Sorry. At some time in the future, perhaps. If by AI you mean LLMs, sorry, no.

LLMs like GPT-3 (ChatGPT) have a context window of around 2K-4K tokens, which works out to somewhat fewer words.

So yes, LLMs are made to generate human-style text, and they are good at that. But they can only take so much context into account when predicting the next word.

So anything beyond short stories or paragraphs will have dramatic continuity issues.
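The window limit described above can be sketched in a few lines. This is a toy illustration, not a real LLM: the "tokens" are plain list items and the 4,096-token window is just an example size, but it shows why a detail planted at the start of a long draft can fall entirely outside what a fixed-window model conditions on.

```python
# Toy illustration: a fixed-context model only "sees" the most
# recent N tokens when predicting the next one, so details from
# the opening of a long story drop out of scope entirely.

def visible_context(story_tokens, window=4096):
    """Return the slice of tokens a fixed-window model can condition on."""
    return story_tokens[-window:]

# A long draft: one distinctive detail up front, then ~50k tokens of story.
tokens = ["Heroine", "Zara", "fears", "water"] + ["filler"] * 50_000

ctx = visible_context(tokens, window=4096)
print("Zara" in ctx)   # the opening detail is no longer visible
print(len(ctx))        # only the last 4096 tokens remain
```

Under these (illustrative) numbers, the model literally cannot "remember" that the heroine fears water by the time it reaches the ending, which is the continuity failure described above.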

@yacc143 @KellyMcC Well, @clarkesworld is already getting an avalanche of LLM submissions, and it’s causing them a ton of extra work.

@skry @KellyMcC @clarkesworld Purely out of curiosity, how do they know which submissions are LLM based?

Sounds like these guys have solved a problem that not even OpenAI can solve for ChatGPT.

Would be great if they could share their genius method for detecting LLM output that doesn't have double-digit false-positive and false-negative rates.

@yacc143 @skry @clarkesworld My understanding is that it's pretty obvious to a human reader*, but that takes enormous amounts of time to implement.

*I believe the reason for that is that it's good at creating a sentence that logically follows the last, and all right at doing that paragraph to paragraph, but absolute shit at creating whole stories that make sense as stories.

@KellyMcC

"I am deeply concerned about AI swamping submission systems and destroying the ability of editors and readers to find the next generation of writers."

As someone with severe ADHD, online application processes often take too much executive function to jump through all the stupid little hoops. As systems like these get more and more hoops that try to sort out the humans from the robots, people like me will be silently excluded without anyone noticing because we will never even apply.

@KellyMcC said the cobbler about making shoes at the beginning of the industrial revolution. Yet here we are, most of us wearing factory made shoes rather than better custom made shoes because of the economics of the situation.

The quality could suffer, but the cost and availability will compensate. You have two choices: be one of the very few people who continue to hand-make custom shoes for the rich, or learn to use AI to create stories in a more productive fashion.

@KellyMcC Don't worry. Soon editors will have an AI to scan their inbox.

Damn, I don't even know if I'm joking or not.

@MarkHB @KellyMcC You're not. AIs to detect AI writing are already real.
@KellyMcC the "AI promise" to editors and readers is that they can now share in the delightful experience that holidaymakers have when trying to find a useful hotel review online
@KellyMcC and tools like zerogpt will help. But writing submissions in every field are definitely facing the need for a much stricter vetting process.

@KellyMcC
"Swamping submission systems" hits the nail dead-on. The threat of #AI / #ChatGPT systems to the writing /creative pipeline/lifecycle isn't (so much) that it "kills the crops" as that, unchecked, it has potential to "kill the soil", the cultivation of new writers/creatives.

If social media taught us anything, it's that we need to up our systems-thinking game, AND FAST, because we so rarely think about secondary effects, much less stave them off.

@PixelJones @KellyMcC I may be late to this, but isn't this one of those things where we are (or perhaps should) all be going, "always, always, check primary sources, whether it's Wiki, or Chat-whatevs, or anything"?

"What are your sources for that?" should almost always be question #1.

@bytebro @PixelJones Certainly in social media news consumption, that's a huge step in the right direction for the end user. There are two problems with doing that at the gatekeeper level. One, the sheer volume of the flow means that no human can keep up with it, which means shutting down flows rather than checking them. And in pure creative endeavors (where I work) the submission is the primary source, so there's no place to backtrace it to.

@bytebro @KellyMcC Right. "Check your sources" is always good advice but there's a huge chasm of effort between reading/scanning for information and formally _vetting_ it.

Before #ChatGPT (broadly speaking) you could look at a coherent, literate-sounding article & assume no human would go to the effort to produce coherent rubbish. You'd tacitly trust it & only vet it further if needed.

Now, vetting _all_ content _first_ is an added cognitive burden.

@KellyMcC

Add to it the fact that editors and scouts are already underpaid and overworked, and the introduction of AI manuscripts (weird word considering there’s no "manual" behind it but we get it) becomes a burden multiplier

@KellyMcC isn't Amazon's self publishing platform already overwhelmed? Not that Amazon of the "keep calm and rape on" school of content moderation will care.
@craignicol Pretty much, but it's going to get much worse before it gets better, if it ever does.
@KellyMcC looking at the teams that were laid off at Amazon, I don't think quality control is an area of focus. They'll just let the biased reviews decide.

But yeah, any human editor having to sift through that is not going to have a good time. It's going to make it a lot tougher to get good-quality work out to the public.

@KellyMcC

Yeah, this is really terrible.

One very present danger from "AI" — well, LLMs — is spambots, not the Terminator.

For editors — really anyone who consciously cares about words & meaningful communication — LLMs make the challenges they were already facing worse.

We've already seen the dangers of misinformation through social media since the pandemic started.

We will find it increasingly challenging to trust *anything* that we read *anywhere*.

@KellyMcC Candidly, some of the fiction being produced by humans is formulaic, AI-level dreck. Those writers should worry because they *are* replaceable. The worry isn’t that AI will create great art; it’s that AI will create art that comes across enough like something elevated that it can fool most of the people most of the time. It’s not good, but what if nobody notices?
@wampusmm repeating myself from elsewhere: Sturgeon's Law postulates that 90% of everything is crap. I don't agree, but in terms of scale let's call that number sound. What AI potentially does is create so much new crap, without new gems, that we hit 99.999999% of everything being crap. That is a very different and entirely new problem.

I could see how that might create a world where we can't differentiate, but I think that's less likely than simply being unable to find the good stuff.
@KellyMcC I think issues like this have been seen (although to a lesser degree) in open source. I recently read 'Working in Public' and it talks a lot about how the real commodity is developer *attention*
@kaio that makes sense, and it’s definitely a core problem in our info overload world.
@KellyMcC Using AI as a “tool” is also detrimental, because it contributes to the blandification of writing by steering people to everything that’s been done over and over and over again. It’s not designed for original thought.

@KellyMcC Well said.

"AI is very much a danger to the long term health of the field not because of competition for quality readable fiction but because of its ability to create dreck in previously unimaginable quantities and drown submission systems and indie publishing in shit."

@KellyMcC I would agree that this could be an upcoming issue, but I really think when the dust settles it will be little more than a common tool for searching, building instructional info, blah blah blah. Right now it is just a language-learning system. This isn't really "AI" at all.
@KellyMcC
One AI can mimic those 1 million monkeys!

@KellyMcC

Iirc Clarkesworld and Kaleidoscope have both had issues with this already.

@KellyMcC

Boomy generated 14 million tracks and dumped them on Spotify. Let that sink in.

That's 14% of ALL recorded music. In history. Let that sink in deeper.

...

It's like the brooms in Fantasia, isn't it?

https://www.musicbusinessworldwide.com/ai-music-app-boomy-spotify-stream-manipulation/

@KellyMcC Technology will easily be able to filter out pure AI crap. In the end, AI isn't going to cost anyone jobs. People using AI are going to cost people not using AI jobs. It's just a tool you can use to move things along. Good writers will continue to outshine bad ones, AI or not.
@thegamersdome that's not what I'm hearing from the people actually working in editorial who are drowning in AI-generated content. They can filter it out by hand, but the AI filters are both expensive and pretty shit at identifying AI-generated content.