@doug We talk about that a fair bit in the podcast episode

I think the key to this is that it's actually quite difficult to use this stuff effectively - because it lays SO MANY traps for you, there are so many examples of things that it will get blatantly wrong, often in very convincing ways

Once you learn how to side-step those traps it becomes amazingly productive - but that takes quite a lot of effort

@simon @doug Having to learn how to sidestep problems doesn’t sound like much of an endorsement. Indeed, it sounds very much like it adds new problems. It seems remarkably unhelpful to add problems that require deep expertise to sidestep.

@slott56 @doug Right: if someone tells you "LLMs are easy! They'll give you a huge productivity boost right out of the gate" then that person is misleading you

My message is "LLMs are surprisingly difficult to use. I have managed to get enormous productivity boosts after investing a lot of effort in learning how to use them effectively."

@slott56 @doug LLMs are a chainsaw disguised as a pair of scissors
@simon @slott56 @doug This is an argument I’ve been trying to make for using them as a productivity tool for technical writers, but instead people want chatbots that surface the synthetic text directly to end users without a human in the loop. Please, let’s not hand chainsaws to readers so they can make custom furniture. Let’s use chainsaws to make furniture faster so we can offer more kinds. (This analogy has become strained. As a specific example, by “custom furniture” I mean code samples with annotations; by “more kinds” I mean code samples in more languages.)
@simon @slott56 @doug LLMs are a 3cm star key with four universal joints and a finicky electric drill attached. They can be used for very specific tasks with a lot of work and careful monitoring, but they have no place on the public internet or for generating content. If you have an analysis use case where one can generate domain-specific insights from data, have at it, but you’d best double-check the results.
@simon @slott56 @doug Can’t think of a single situation where I’d choose scissors over a chainsaw

@simon @doug The large investment isn’t as off-putting as having to sidestep problems. I want to focus on the fact that it introduces problems that require deep expertise to sidestep. That’s daunting.

Folks get incredible productivity gains from the pomodoro method and don’t have to sidestep subtle, difficult-to-even-identify problems.

@slott56 @doug I'm increasingly seeing evidence that convinces me that it helps, rather than hurts, new programmers - some notes on that here: https://simonwillison.net/2023/Sep/29/llms-podcast/#does-it-help-or-hurt-new-programmers

@simon @doug The need for ethical rules is troubling, also.

There are now two layers of ethical considerations: (1) should this even be done with computers? And now, this new, murky realm of (2) was the code or documentation produced by an ethical and trustworthy tool?

It’s introduced yet another problem.