Here's a question about y'all's opinions on gen "AI"...
If I wrote a story, it's my own original idea, my own work of fiction, I'm dictating the events of the fictional story, and I used an "AI" to help turn my flat, often repetitive writing (I tend to say "so-n-so said" ad nauseam) into something more readable, and then proofread it or had another human proofread it, would you consider this acceptable?
Additional clarification - *I* would be creating the events of the story, the "AI" would only be helping with how to word the description of those fictional events.

@hellomiakoda before the LLM "revolution", there was some promising "AI" tech focused on improving writing. As far as I know, it's gone, destroyed by LLMs.

I would not use an LLM for this task.

@hellomiakoda
I intensely dislike the usual LLM slop, but that's definitely not what you're proposing. Using the LLM just to help refine your prose sounds reasonable to me. I'll be curious to hear how that works out for you.
@brouhaha I don't actually plan on running what I write through it. I was merely curious. If I ever do actually write the story that's been sitting in my head for many years, I'd put it out there as is, shitty writing and all. Not sure it'll ever happen though. Executive dysfunction is a bitch!
@hellomiakoda I tried to do just that in the GPT-2 and GPT-3 era, before ChatGPT exploded. It really ruins the writing. Even with state-of-the-art models, things are pretty bad: you feed it a masterpiece, you get generic, flavourless slop out of it. If you prompt it harder, you get sparkling slop. There was a great article about an experiment where an English/Literature college teacher let students experiment with LLMs, and how generic and flavourless the students' writing had become. Let me see if I can find it.
@hellomiakoda I think it would be fine. I'm not a fan of AI writing, but if you just step through the output to see some suggestions of alternative phrasing, then pick and choose what you like, I don't see the problem.