Bad news, "AI bubble doomers": I've found LLMs to be incredibly useful. They reduce the workload (and/or make people much, MUCH more effective at their jobs with the "centaur" model).

Is it overhyped? FUCK yes. Salespeople Gotta Always Be Closing. But this is NOTHING like the moronic Segway (I am still bitter about that crap); cryptocurrency, which is all grifters and gamblers and criminals end-to-end; or the first dot-com bubble, when not NEARLY enough people had broadband or even internet access, and the logistics systems to support shipping products were nowhere REMOTELY near where they are today.

If you are expecting this "AI bubble" to pop anytime soon, uh.. you might be waiting a bit longer than you think? Overhyped, yes; overbuilding, sure; but not remotely a true bubble in any of the same senses as the three examples I listed above 👆. There's something very real, very practical, very useful here, and it is getting better every day.

If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

@codinghorror “I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.”

Please do, if you can. Because most times I’ve tried to use LLMs for work, the error rate ends up costing me MORE time than I would have spent without them, and most AI boosters are short on specifics. We just had a presentation at my job on how we all need to be using AI, with no case studies of how it’s actually been useful so far.

@sethrichards here's one: a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY ID without a valid address? A classic chicken-and-egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

Here's two: GiveDirectly did two GMI studies, one in Chicago and one in Cook County, and we were very unclear on what the relationship between them was, or why they did it that way. ChatGPT also knocked this out of the park and saved Tia a lot of time finding that information, so she was freed up to focus on other work.

I could go on and on and on. Email me if you want ~12 more specific examples. With citations.

But also realize this: I am elite at asking very good, well-specified, very clear, well-researched questions, because we built Stack Overflow.

You want to get good at LLMs? Learn how to ask better questions of evil genies. I was raised on that. 🧞

@codinghorror @sethrichards

Evil genies with a severe form of ADD of some sort.

You hit it on the head - the prompt is the key.

With an experienced human - vagueness is often acceptable, and they will usually ask for clarification. The AI doesn't ask - it guesses, often incorrectly. So you need to over-specify in the prompt, including things it might be insulting to mention when talking to an experienced human. Then iterate, and aggressively steer that conversation.

This is why I don't see the AI as replacing a human except for trivial situations. It's a force multiplier, but not a replacement, and the skills necessary to use them effectively are non-obvious.

@tbortels @codinghorror @sethrichards they're also incredibly susceptible to being misled by the prompt itself...

something like "tell me how to use X to accomplish Y" when you _think_ X is relevant is much more likely to lead to something made up than "tell me how to accomplish Y"

@vt52 @codinghorror @sethrichards

To be 100% fair - yeah. GIGO, garbage in garbage out.

But - that's not a problem exclusive to AIs, or even computers. That's the XY problem, and it's a human thing.

https://en.wikipedia.org/wiki/XY_problem


@tbortels @codinghorror @sethrichards oh for sure, but it's the kind of situation where a human person is likely to recognize the underlying fault and correct, where LLMs will try to yes-and with a plausible-sounding but completely fabricated answer

to your point, there's got to be a knowledgeable (or at least vaguely savvy) human involved

@vt52 @sethrichards @codinghorror

You must be hanging out with some high quality humans. I can't tell you how many times I've had this conversation, in professional settings:

Me (noticing an issue): hey, how's it going?
Them: not so well - this isn't working.
Me: I'm not surprised. It doesn't work that way. How long has this been broken?
Them: two weeks.
Me: wow. Why didn't you ask for help?
Them: I did, but they told me this was the procedure.
Me: ಠ_ಠ

"Recognize a problem and change your course" is a learned skill, even for humans.

To be fair - my job is usually "figure out what is broken and fix it" by all sorts of names.

@tbortels @vt52 @sethrichards yes, agree... also I have NEVER advocated for anything other than the centaur model: a human with experience in that domain, working with LLMs as a research assistant. https://blog.codinghorror.com/changing-your-organization-for-peons/