This essay from @jenniferplusplus is very good, and very important.

It’s good enough and important enough that I’m just going to QFT the heck out of it here on Mastodon until I annoy you into reading the whole thing.

https://jenniferplusplus.com/losing-the-imitation-game/

This essay isn’t the last word on AI in software — but what it says is the ground level for having any sort of coherent discussion about the topic that isn’t all hype and panic.

1/

Losing the imitation game

AI cannot develop software for you, but that's not going to stop people from trying to make it happen anyway. And that is going to turn all of the easy software development problems into hard problems.

Jennifer++

“Artificial Intelligence is an unhelpful term. It serves as a vehicle for people's invalid assumptions. It hand-waves an enormous amount of complexity regarding what ‘intelligence’ even is or means.”

“Our understanding of intelligence is a moving target. We only have one meaningful fixed point to work from. We assert that humans are intelligent. Whether anything else is, is not certain. What intelligence itself is, is not certain.”

2/

“While the capabilities are fantasy, the dangers are real. These tools have denied people jobs, housing, and welfare. All erroneously. They have denied people bail and parole, in such a racist way it would be comical if it wasn't real.

👇👇👇
“And the actual function of AI in all of these situations is to obscure liability for the harm these decisions cause.”

3/

“What [LLM] parameters don't represent is anything like knowledge or understanding. That's just not what LLMs do. The model doesn't know what those tokens mean. I want to say it only knows how they're used, but even that is over stating the case, because it doesn't •know• things. It •models• how those tokens are used.

“…The model doesn't know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.”
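A toy sketch (mine, not from the essay) of what “models how those tokens are used” can mean at the smallest possible scale: a bigram model that predicts the next token purely from co-occurrence counts. It is not an LLM, just an illustration that a statistical token model contains frequencies, not facts.

```python
# Toy illustration (NOT an LLM): a bigram model that "models how tokens
# are used" as nothing more than co-occurrence counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Count which token follows which -- this is the entire "model".
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    """Return the most frequent successor of `token` in the corpus."""
    return follows[token].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" -- a frequency, not a fact
```

The model will happily tell you what usually follows “the”, but it has no representation of cats, mats, or sitting; scaling the counts up into billions of learned parameters changes the fidelity of the statistics, not their nature.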

4/

“The hard part of programming is building and maintaining a useful mental model of a complex system. The easy part is writing code.”

Do you see where this is going? Have I convinced you to read the whole thing yet?

https://jenniferplusplus.com/losing-the-imitation-game/

5/


Here it is: the One Weird Thing that people who aren’t programmers (or are bad programmers) just don’t understand about writing software. This is it. If you miss this, you’ll miss what LLMs can and can’t do for software development. You’ll be prey to the hype, a mark for the con.

6/

“They're positioning this tool as a universal solution, but it's only capable of doing the easy part. And even then, it's not able to do that part reliably. Human engineers will still have to evaluate and review the code that an AI writes. But they'll now have to do it without the benefit of having anyone who understands it.”

7/

“No one can explain it. No one can explain what they were thinking when they wrote it. No one can explain what they expect it to do.

“Every choice made in writing software is a choice not to do things in a different way. And there will be no one who can explain why they made this choice, and not those others. In part because it wasn't even a decision that was made. It was a probability that was realized.”

8/

You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

Alas, that does not remotely resemble how people are pitching this technology.

9/

I love, for example, this student’s reaction to having ChatGPT try to write part of her paper:
https://hachyderm.io/@inthehands/109491316523726437

Indignant outrage is a powerful thought-sharpening tool!

Alas, AI vendors are not pitching LLMs as indignant outrage generators.

10/

Paul Cantrell (@inthehands@hachyderm.io)

My favorite outcome so far: a student remarked (paraphrasing here) that she didn’t realize how much she had to say in her paper until she saw how wrong the AI was, how much it missed the point. Observing her own reaction to BS about her topic made her realize she’d underestimated the extent of her own newly-forming knowledge. That…that is the sort of outcome an educator dreams of. #ai #chatgpt #education #writing #highered


I’ve heard from several students that LLMs have been really useful to them in that “where the !^%8 do I even start?!” phase of learning a new language, framework, or tool. Documentation frequently fails to share common idioms; discovering the right idiom in the current context is often difficult. And “What’s a pattern that fits here, never mind the correctness of the details?” is a great question for an LLM.

Alas, the AI hype is around LLMs •replacing• thought, not •prompting• it.

11/

The hard part of programming is •thinking about what you’re doing•, because the computer that runs your code isn’t going to do that.

And as Jennifer points out in the essay, we do that by thinking about code. Not just about our abstract mental models, not just about our natural language descriptions of the code, but about the code itself. Where human understanding meets machine interpretation, •that’s• where the real work is, •that’s• what makes software hard:

12/

Code is cost. It costs merely by •existing• in any context where it might run. Code is a burden we bear because (we hope) the cost is worth it.

What happens if we write code with a tool that (1) decreases the cost per line of •generating• code while (2) vastly increasing the cost per line of •maintaining• that code? How do we use such a tool wisely? Can we?

Useful conversation about that starts on this ground floor:

https://jenniferplusplus.com/losing-the-imitation-game/

/end


@inthehands i found the blog post a mixed bag. The specific points about software development are very good.

But the general critique of LLMs is lacking. Two examples:

1. She claims AI/LLMs are not intelligent, but does not specify what that means.
Maybe AI then is intelligent by *some* measure?
2. She claims the model parameters don't represent knowledge or understanding. Same problem here: what *is* knowledge?

She falls into the same fallacy as the AI hype train: vague terms.

@inthehands the big value i see in this blog post is the concreteness of the risk descriptions and the factual references to actual AI project failures.

@elhult @inthehands

If these are all the objections you can raise, consider the blog post a success – and be mindful to not be part of the problem.