I think a lot of people that don't really understand what they're doing obsess over LLMs for the same reason that they might have obsessed over visual programming or Plain-English-programming a generation ago.

In their mind, programming works like this:

1) A clever person designs the system/app/website/game in their mind.

2) The person uses whatever tools available to wrangle the computer into reproducing that vision.

(1/...)

In this model, the bottleneck is (2) and anything that isn't the native tongue of the designer is actively getting in the way of the manifestation of that vision.

THIS IS NOT HOW PROGRAMMING WORKS.

In reality, programming is an eclectic back-and-forth conversation between developer and machine, where the former explores the possibility space and the machine pushes back by unveiling its constraints.

No 'vision' survives this process unscathed, and this is a good thing.

Those that obsess over LLMs like to believe that Plain English sits at the top of the abstraction pile, that it is the thing that a programming environment should seek to model. From this point of view, an LLM seems perfect: type in words, program comes out.

But Plain English is not the top of the pile, not even close. It's an imprecise and clumsy lingo. The process of development is about throwing away that imprecision and engaging with the reality of the possibility space.

It can be hard for those that don't do a lot of programming to understand, but programmers do not think in Plain English (or whatever their native tongue is). They do not, for the most part, spend their time wrangling & getting frustrated at their tools.

Instead, programmers think in abstractions that sit beyond the realm of natural language, and those abstractions are carved through dialectic with the machine. The machine pushes back, the chisel strikes the marble, and the abstraction evolves.

LLMs promise something enticing, but ultimately hollow: the ability to skip the dialectic and impose one's will on the machine by force. They are appealing because they leave no space for their user to be wrong, no moment where they are forced to confront the consequences of their unrefined ideas.

This is why code written by LLMs is often buggy, insecure, and aimless: it is written to appease a master that does not understand the conflict between their ideas, nor the compromises necessary to resolve them.

If you're an LLM fan, it might initially appear confusing that a programmer might choose a statically typed language, and more confusing still that those with experience *yearn* for static typing. Why limit yourself?

But the reality is that the development of good software requires the dialectic between developer and machine to take place, and type systems accelerate this process by allowing a skilled programmer to refine their mental model much earlier in the development process.
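That acceleration is easy to make concrete. A minimal Rust sketch (the `Connection` state machine and all its names are invented for illustration, not taken from the thread): an exhaustive `match` means that adding a new state turns a silent design gap into a compile error, so the dialectic happens before the program ever runs.

```rust
// Hypothetical state machine; the `Connection` type and its variants are
// invented for illustration.
enum Connection {
    Idle,
    Connecting { attempts: u32 },
    Established,
}

// The compiler demands that every state is handled. Add a new variant
// (say, `Closing`) and this `match` stops compiling: the design gap is
// surfaced immediately, not discovered in production.
fn describe(conn: &Connection) -> &'static str {
    match conn {
        Connection::Idle => "idle",
        Connection::Connecting { attempts } if *attempts > 3 => "struggling",
        Connection::Connecting { .. } => "connecting",
        Connection::Established => "established",
    }
}

fn main() {
    assert_eq!(describe(&Connection::Idle), "idle");
    assert_eq!(describe(&Connection::Connecting { attempts: 5 }), "struggling");
    println!("ok");
}
```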

I think this is all I have to say on this topic.
People seemed to like this so I turned it into a blog post: https://www.jsbarretto.com/blog/on-llms/

@jsbarretto this is great, but I have one quibble: you are speaking here as if coding is special, that by virtue of being a mechanical output a computer program demands this sort of dialectical interaction with reality whereas the fuzzy english description sits fully formed in the mind of the author. But any *writer* will recognize that this is how the process of writing any long-form work of natural language works, too. A mind simply cannot contain a large idea all at once.
@glyph Not at all! I think this applies to lots of forms of intellectual labour. I'm just talking about programming because that's my field and the thing I feel the most comfortable talking about :)
@jsbarretto yeah I should have phrased this more directly with a “yes, and” framing, I didn’t feel like you’d object, there was just an implication. What you’re saying about the specifics of coding *is* absolutely true, and it looks a little different for long-form writing, but these are both instances of the general pattern of communication being a part of thinking

@jsbarretto I feel like the real problem AI solves is speeding up the literal writing of code, and management doesn't understand that it's the easiest part of the process.

Good design, expert supervision, and iterative, programmatic, critical thinking are all still very much needed.

CoPilot isn't a substitute for a programmer, it's just super-intellisense.

@jsbarretto you forgot the part where they're trained on average code, with the average code being shit and buggy
@dysfun Well, the less said about that the better. But in all honesty, I think that's becoming less of a problem, and a weaker argument, with each passing day.

@jsbarretto wonderfully said!

To add to that, "the machine" does not only reveal its own constraints in that process, but also the inherent flaws in the design you came up with. (Unless of course we view "logic" as an inconvenience imposed on us by machines)

@guenther I do! Logic is not the native tongue of most humans. Humans hold mutually contradictory ideas about the world all the time, and these contradictions frequently appear right down at the level of fundamental logic, not just within a particular domain. Expressing an idea in code forces those contradictions to the surface and should result in a redesign.

@jsbarretto Nice! You've managed to capture into words the ideas about design that have been bouncing around my head for some time now.

Namely: I can't design a system just in my head. I have to design it with the help of a compiler. I need some way to discover the constraints of my design space *before I can design the system*, and the fastest and most reliable way I've found to do that so far is to throw things at a compiler or into a prototype and see what walls I bump into.

@jsbarretto I think you just managed to put into words the reasons behind my strong but vague discomfort with the idea to use LLMs for coding. Thanks!
@jsbarretto this is an excellent thread, thank you. You've distilled some thoughts I was having, and I think now I realize why I get so much pushback when I bring linguistics into the conversation.
@jsbarretto it's well said. thank you. we agree strongly.
@jsbarretto the really frustrating part to us is that, as you note, this is not a new insight. this is a thing that's been obvious to programmers for over thirty years.
@jsbarretto it's really quite something to watch capital convincing everyone to pin all their hopes on things that we already know don't work.
@ireneista @jsbarretto I won’t even customise my IDE or change my VIM settings - anything that makes me write code faster is objectively dangerous 🤣

@gadgetoid @jsbarretto we bounced off of normal autocomplete really hard back in the 90s when it was new (at least, new to us)

it slows us down, it always has, it breaks our brain's pipelining. besides, it's trying to get us to not know our codebase as well; why would we want that?

with the benefit of our labor vocabulary that we have today, it was always at least partially motivated by deskilling, though we know that some people like it and are glad they have the choice

@gadgetoid @jsbarretto anyway, we don't even like plain autocomplete, so we're definitely not here for the spicy version
@ireneista @jsbarretto Look at Python, where they have been adding typing to the language (since they didn't think it was useful when they started).
@jsbarretto as a not-programmer, allow me to say, brilliant analysis. Applies to art and writing as well; there is a necessary grappling to create anything worthwhile.
@aethernaut Absolutely! It's a depth the tech bros will never understand.

@jsbarretto This coincides with my personal evolution as a developer. Around the 7 year mark of being a Python dev I started leaning more and more towards static typed langs.

at this point if I have to use python it's gonna be with pydantic.

@grendel84 I am happy to see Python moving in this direction.

@jsbarretto The Pydantic library is better than nothing but I don't think it means the lang as a whole is moving in that direction.

I know this is probably cliché, but Rust's borrow checker is to memory management what static typing is to type management. Sure, it may *feel* restrictive, but there's an odd sort of freedom in those constraints.

It's almost as though what you lose in syntactic freedom you gain in semantic reliability.
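That trade shows up in even tiny programs. A hedged sketch (the names are invented): the borrow checker rejects holding a reference into a collection while growing it, and the "restriction" is exactly what makes the restructured version reliably correct.

```rust
// A sketch of the borrow checker's "restrictive freedom"; names invented.
//
// The commented-out version would NOT compile: it holds a shared borrow
// into the vector while also mutating it, the classic iterator-invalidation
// bug that C++ happily permits:
//
//     let first = &scores[0];
//     scores.push(40);       // error[E0502]: cannot borrow `scores` as
//     println!("{first}");   // mutable while it is also borrowed
//
// The checker pushes back, and the code is restructured so no borrow is
// alive when the mutation happens:
fn read_first_then_push(scores: &mut Vec<i32>) -> i32 {
    let first = scores[0]; // copy the value out instead of borrowing it
    scores.push(40);
    first
}

fn main() {
    let mut scores = vec![10, 20, 30];
    assert_eq!(read_first_then_push(&mut scores), 10);
    assert_eq!(scores, [10, 20, 30, 40]);
    println!("ok");
}
```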

@jsbarretto
I think we are still missing something important: The ability to constrain our data types further.

A lot of algorithms that work fine for e.g. integers up to 1000, or maybe 10000, completely fall apart at "insert large number here".

Because we fallible human programmers assume that an algorithm that works fine for those low numbers will work equally well for large numbers. Which it often doesn't.

@wakame Dependent types and pattern types! We've had them since Ada, and they're good :)
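For readers who haven't met Ada-style range constraints: in languages without them, the usual approximation is a newtype whose constructor checks the bound. A minimal sketch, with invented names (`Permille` for a value in 0..=1000):

```rust
// Emulating an Ada-style range constraint with a checked newtype.
// `Permille` is an invented name for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Permille(u16);

impl Permille {
    /// Accept only values in 0..=1000. Out-of-range values are rejected
    /// at the boundary, so the rest of the program never sees one.
    fn new(value: u16) -> Option<Permille> {
        if value <= 1000 {
            Some(Permille(value))
        } else {
            None
        }
    }

    fn get(self) -> u16 {
        self.0
    }
}

fn main() {
    assert_eq!(Permille::new(500).map(Permille::get), Some(500));
    assert_eq!(Permille::new(5000), None);
    println!("ok");
}
```

Unlike a true Ada subtype or a dependent type, the check here happens at run time rather than in the type system, but it still concentrates the assumption in one place.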
@jsbarretto Reminds me of a bug that stumped me for hours until I realized Javascript was silently converting one of my numbers into a string
@PavelASamsonov @jsbarretto ahh yet another poor soul tortured by JS "types".
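For contrast with the silent coercion described above: in a statically typed language like Rust, the equivalent expression simply doesn't compile; every conversion between numbers and strings is explicit and fallible. A small sketch (function name invented):

```rust
// Where JavaScript silently coerces `1 + "2"` into the string "12",
// Rust refuses to mix the types at all. `n + s` is a compile error;
// the conversion must be spelled out, and failure is a value to handle.
fn add_parsed(n: i64, s: &str) -> i64 {
    n + s.parse::<i64>().expect("not a number")
}

fn main() {
    assert_eq!(add_parsed(1, "2"), 3);

    // Building the string "12" is also explicit:
    let (n, s) = (1, "2");
    assert_eq!(format!("{n}{s}"), "12");
    println!("ok");
}
```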
@jsbarretto imagine compromise being *necessary*. that's terrifying to a micromanager.

@jsbarretto

Speaking as a practicing engineer, i've had many managers/customers that do not understand the laws of physics... 🤣

Edit: cRsp tYpong... 🤦‍♂️ 🤣

@jsbarretto hollow is a great way to put it.

Honestly I love tapping AI on the shoulder for quick questions or debugging an error message. But I've not really enjoyed the code output from any LLM honestly.

@codemonkeymike Agreed! I've occasionally found it useful to bounce ideas off or to learn about new domains that I know very little about.
@jsbarretto this warning goes hard as hell. merlin said this to a vain, talented youth
@jsbarretto I'd suggest this disinclination toward the dialectical sits at the root of many of our problems today. E.g., the way we relate to nature.
@jsbarretto In slightly different words, programming languages solve two problems:

1. Providing a format that a computer is capable of executing
2. Removing as much ambiguity as possible

LLMs do kinda solve the first step. I would argue that they don't solve it very well, but it's something.

The real problem with "plain language" is the ambiguity, and that's generally the hard part of programming. Instead of forcing clarification after clarification after clarification, LLMs just guess at what the user meant (not really how they work, but close enough). This is disastrous when you're trying to program anything of reasonable size, an activity that involves building up step after step.
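The ambiguity point can be made concrete: even a request as simple as "sort these names" hides a decision about case sensitivity that code cannot leave unmade. A minimal Rust sketch, with invented helper names:

```rust
// The plain-English request "sort these names" leaves case handling
// undecided; each function below is a different, explicit answer.
fn sort_bytewise(mut names: Vec<&str>) -> Vec<&str> {
    names.sort(); // uppercase letters order before lowercase ones
    names
}

fn sort_caseless(mut names: Vec<&str>) -> Vec<&str> {
    names.sort_by_key(|s| s.to_lowercase());
    names
}

fn main() {
    let names = vec!["apple", "Banana", "cherry"];
    assert_eq!(sort_bytewise(names.clone()), ["Banana", "apple", "cherry"]);
    assert_eq!(sort_caseless(names), ["apple", "Banana", "cherry"]);
    println!("ok");
}
```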
@jsbarretto Indeed, pleasing the user is a key aspect of their success.
@jsbarretto
> LLMs promise something enticing, but ultimately hollow: the ability to skip the dialectic and impose one’s will on the machine by force. They are appealing because they do not allow space for their user to be wrong, or to be forced to encounter the consequences of their unrefined ideas.

this caused several disparate things i've been thinking about lately to converge together like pieces of a puzzle
@jsbarretto Aulë versus Morgoth...

@jsbarretto
> written to appease a master that does not understand the conflict between their ideas, nor the compromises necessary to resolve them.

This sounds like a lot of code-for-hire gigs

@jsbarretto A programming language and computer hardware dictate software architecture, much like building materials and the laws of physics dictate regular architecture. One is not generally allowed to build things for other people without demonstrating an understanding of how these constraints work.

LLMs understand nothing; they aren't *capable* of understanding. They replicate the results of comprehension, but can never develop their own.

@jsbarretto Trying to train an LLM into self-awareness is sillier than trying to teach English to a parrot. Both can repeat phrases they're given, and the LLM can do it very convincingly, but the parrot is actually sentient (and it was before it ever heard a word of English).
@jsbarretto That's a very good point I hadn't thought of before. Almost like LLMs are an extremely dynamically typed language that captures even fewer assumptions.
@jsbarretto Am I not experienced enough yet or are you not being entirely serious with that post?
>They do not, for the most part, spend their time wrangling & getting frustrated at their tools.
@light In that section: not entirely serious. I too spend hours each day bashing my head against the keyboard trying to make things work. But that's a tooling problem, and it feels like one that we're very slowly fixing. The bar for new tools is much higher than it used to be.
@jsbarretto Programmers get less frustration from their tools and more from the Plain English.
@jsbarretto this is fascinating; where does it leave pseudocode as a dev practice?
@sakhavi Was pseudocode ever really a thing? It seems to be a shifting target that climbs up and down the abstraction levels as appropriate for the domain. It's a form of communication, but it's rarely sufficient in its own right to depict anything other than a very narrow window into a particular idea - and simply reading it verbatim still requires one to incrementally build up those more abstract ideas in one's mind. The pseudocode is not, in itself, the idea.
@sakhavi @jsbarretto I think the existence of pseudocode is a good illustration of how natural language was never adequate in the first place. When I reach for pseudocode, it's because I'm trying to sketch an algorithm to a colleague or in a blog post, and I don't want to bother fleshing out the exact syntactic details; but natural language is either far too imprecise to describe the abstraction in my head, or else far too verbose.

@jsbarretto I agree with your wider point, but I wouldn't go so far as "they do not, for the most part, spend their time wrangling & getting frustrated at their tools"! 😅

Not while you're deep in the flow, sure. But I can't be alone in spending a good fraction of the time wrangling tools and configuration. Which makes me wonder if I should be leaning on LLMs much more for that side of things, hmm.

@jsbarretto Good points, but not entirely true: I am wrangling and getting frustrated with my tools right now. Your vision of programming as deep abstract thought followed by a few masterful keystrokes is appealing, but it's an unachievable ideal to have it like that all the time.

Does it mean the tools are bad, that I should git gud, or that I shouldn't be doing this at all and instead contact somebody experienced with the tools?

Or use LLM to actually reduce the wrangling and frustration?

@janbogar You're not wrangling! Wrangling is when you fight tools while learning nothing of their workings. You might spend a lot of your time fighting bad tools, bugs, obscure APIs, strange build systems, etc. (I know I do) but it's a process of discovery that ups the chance of success next time you attempt something similar. That's fundamentally different to asking an LLM to spit out an answer.

Using an LLM to more easily access information? That's fine, I'm not arguing against that.

@jsbarretto "please solve the halting problem"