I'd love to hear more opinions on generative AI from people who aren't confident writers

I feel like most of the commentary I see is from people who write with confidence - almost by definition, since writing confidently is an important prerequisite for widely broadcasting your opinions on things

I have a hunch that there are lots of people out there for whom the ability to have a computer help them write is a massively valuable thing, but their stories have so far not attracted much attention in the wider AI discourse

I'd love to upgrade that hunch with actual information

@simon This feels like one of those things that is aesthetically presented as liberatory but rapidly becomes a stigmatizing class marker, sold only to people who can't tell the difference. Like an MLM. "Finally, the benefit of (owning your own business|confident writing) is available to the masses! It's easy to get started, just pay (your upline|ChatGPT's escalating costs)." But no actual investor is fooled by MLM garbage products, and nowhere that requires quality writing will be fooled by LLMs.
@glyph @simon Re: "ChatGPT's escalating costs", I wonder how much more expensive ChatGPT Plus will get.
@glyph sure, but there are SO many situations that don't require quality writing: they need clear, boring, uncreative writing that clearly communicates some information
@simon @glyph Yep, and in most situations those scenarios should not require enough filler/bridging material to be any longer than the prompt you provide to ChatGPT. If they do, we should fix that.
@digifox my hunch is that teaching less confident writers how to use ChatGPT is a lot more realistic than convincing society as a whole to stop requiring formal letters!
@simon I guess I share your curiosity here, because I am curious if LLMs *can* provide this, and what experiences this type of user has. I feel like I should withhold details to protect the accused here, but I've had a few interactions where someone has attempted to work around their lack of English skills by pasting LLM output at me, and I immediately clocked it because it was full of meandering fluff and the substance itself was confusingly ambiguous, as well as just wrong in places.

@simon @glyph Well, the person needing to do that communication is already writing natural language in the form of a prompt, so for the kind of writing you describe, they could just as well write the information itself with no fluff. In an ideal world, anyway.

Yes, I'm playing with ChatGPT for creative writing, and might even publish the (heavily edited and in some places rewritten) result. But some of my original skepticism about the utility of LLMs for non-fiction writing still remains.

@matt @simon I have also experimented with it for creative writing prompts and it can be hilarious and inspiring. But this parallels the story I have found about expertise level with programming as well. Experts can gesture vaguely at it and have it spit out wrong and misleading garbage, but the few real clues present in the sludge are enough to make it valuable. Novices give it a prompt and think it has solved their problem, which later embarrasses them horribly.
@matt @simon I just used it yesterday to help with a relatively simple X.509 certificate parsing puzzle, and it gave me a hilariously wrong example full of terrible security bugs and outright crashes, *but* it included all the terms I was blanking on and allowed me to quickly correct it into something close to what I wanted. I cannot imagine how a new CS grad would have interacted with this output though.
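@glyph @matt For readers wondering what that kind of puzzle looks like: X.509 certificates are ASN.1 DER, which at the byte level is just nested tag-length-value records. A minimal sketch of a TLV reader (my own illustration, not glyph's actual code or the LLM's output) shows the sort of structure an LLM draft would need to get right:

```python
# Minimal ASN.1 DER tag-length-value (TLV) reader -- a hedged sketch only.
# This is NOT a full X.509 parser; it just walks one TLV record, which is
# the basic building block of every DER-encoded certificate.

def read_tlv(data: bytes, offset: int = 0):
    """Return (tag, value_bytes, next_offset) for the DER record at offset."""
    tag = data[offset]
    length_byte = data[offset + 1]
    if length_byte < 0x80:
        # Short form: the length fits in a single byte.
        length = length_byte
        header = 2
    else:
        # Long form: low 7 bits give how many length bytes follow.
        num_len_bytes = length_byte & 0x7F
        length = int.from_bytes(
            data[offset + 2 : offset + 2 + num_len_bytes], "big"
        )
        header = 2 + num_len_bytes
    start = offset + header
    return tag, data[start : start + length], start + length

# A DER-encoded SEQUENCE containing one INTEGER (value 5):
#   0x30 (SEQUENCE), length 3, then 0x02 (INTEGER), length 1, value 0x05
blob = bytes([0x30, 0x03, 0x02, 0x01, 0x05])
tag, body, _ = read_tlv(blob)
assert tag == 0x30 and body == bytes([0x02, 0x01, 0x05])
inner_tag, inner_val, _ = read_tlv(body)
assert inner_tag == 0x02 and int.from_bytes(inner_val, "big") == 5
```

Even this toy version has the kind of detail (short-form vs. long-form lengths, offset bookkeeping) that an LLM can plausibly botch while still surfacing the right terminology.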
@glyph @matt Maybe that's a self-healing flow then: how does the way a novice uses LLMs change after they've been horribly embarrassed?
@simon @glyph I guess that depends on whether we're talking about mere embarrassment or something more high-stakes.
@matt @simon my concern is that it’s more like the downward spiral of MLMs or PUAs: “I just don’t have the right hustle / the right lines / the right prompts” where novices lean harder on it because they think its failures are highlighting their own deficiencies (and thus their own need for the tool) rather than identifying it as a problem *with* the tool.