Thought experiment: imagine the successor of LLMs were deterministic, and able to output machine code that was always correct to the degree the specification was well written, so effectively a magic black box for #programming.

Ignoring anything beyond the practice of programming for fun or profit, i.e. without questioning any societal implication, would we lose something?

Please feel free to comment after voting!

Yes
77.4%
No
22.6%
Poll ended.
@RosaCtrl that's literally just another programming language. And a poorly specified one at that.
@nyancient oh no no, I’m explicitly saying a programming language wouldn’t be necessary: the machine would be super good at guessing what should be done, or at least you’d always be able to direct it successfully. So you could even read the poll as “do programming languages matter?”

@RosaCtrl a program which deterministically turns a specification into machine code is literally a compiler, even if the specification is written in a natural language. Writing such specifications is programming, ergo the language used to write such specifications would be a programming language.

It's the same old "if we just make programming look like English/flowcharts/UML diagrams, we won't need programmers anymore" idea that has been around since basically forever. It never works because it's based on the fundamental misunderstanding that syntax is the hard part of programming, when it's actually creating a specification that's detailed and unambiguous enough to get you the results you want.

For the same reason, I expect such a deterministic LLM-based language to be pretty much useless. Being good at guessing what the spec intends is great for quick and dirty hacks or "write me a game like pacman but different"-type demos intended to wow people into forking over money for your tool, where the point is that you get something but exactly what you get doesn't really matter. But it's absolutely awful for building real systems where details matter. Having to write those specs in an informal language is just going to, counter-intuitively, make syntax way, way more of a problem.

@nyancient OK, so you do agree something would be lost, as programming purely in a natural language wouldn’t be as precise
@RosaCtrl if that were the only language left then yes, but there are so many terrible languages already that I don't think another one would make any real difference, even if it had the "AI" marketing hype going for it.
@nyancient that’s a good point. I think a lot of the enthusiasm about LLMs for programming comes from the “I don’t have to touch this crap directly anymore” sentiment. And it frustrates me, because the goal should be to improve whatever the crap is here. Not embrace it

@RosaCtrl does it though? In my (obviously very anecdotal) experience, LLM enthusiasm generally comes from:

  • "yay, I can run a software business without having to pay anyone to know anything about software" ("entrepreneurs", "founders", investors and other deranged people)
  • "yay, I can skip most of this boring task and focus on one I care more about" (developers)
  • "yay, we can hack a shitty solution for this pain point we wouldn't otherwise have been allowed to spend time on at all" (developers)
  • "yay, I feel so much more productive" (developers with a poor understanding of the difference between perceived and actual productivity)

Except for the first category, which generally doesn't care about (or know anything about) tech at all, I've rarely heard anyone see LLMs as a way to make up for bad languages. If anything, most people I've talked to who are not outright hostile to the very concept of LLMs put more emphasis on the importance of good language tooling to be able to effectively use LLMs.

    @nyancient I think the last three cases may have bad languages as a source. But when I said “crap” above I should have clarified that it may be a language, a library, a method, whatever we find crappy.

    For example, someone asked me if I wanted to keep spending hours updating dependencies in big projects, or if I preferred to let an LLM do it. My answer was that I use very few dependencies, and I fight at work to get the time to update them often.

    Now, regarding PLs in particular, I know a former GHC contributor who told me, to my face, that he doesn’t care about programming languages that much anymore. He still has to deal with them, but there’s hope that that won’t be the case in the near future. That kinda broke my heart, as I do believe we would lose a lot if we achieved purely natural-language programming

    @RosaCtrl ouch. Are you sure they don't have a stake in some LLM business or research grant? To be fair though, I've been pretty disillusioned with the whole SML branch of the FP community after almost everyone there I knew and respected went all in on crypto, and even before then I don't really think that crowd is representative of anything in tech.

    Even so, I don't think it's unreasonable to work on the problem from both ends. Better synthesis and analysis tools are a valid band-aid for the billions of lines of code that are already written in bad languages. Of course, whether LLMs will end up meaningfully contributing to that is very much an open question, and anyone considering putting all their eggs in the LLM basket is very poorly informed at best.

    Considering that, according to JP Morgan, just with current investments the "AI" industry would require revenue of 70 EUR per human on the planet in perpetuity to deliver just a 10% ROI, and that LLMs are rapidly ruining the training material their own existence depends on, I'm far from convinced that they will be.
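    For scale, the arithmetic behind that figure can be sketched back-of-the-envelope. Only the 70 EUR and 10% numbers come from the claim above; the world-population estimate and the simple "revenue = 10% of capital, forever" perpetuity model are my own assumptions, not JP Morgan's actual methodology:

```python
# Back-of-the-envelope check of the "70 EUR per human" claim.
# Assumptions (mine): ~8.1 billion people, and a naive perpetuity model
# where a 10% annual return requires yearly revenue = 0.10 * investment.
people = 8.1e9
revenue_per_person = 70                      # EUR per year, from the claim
annual_revenue = people * revenue_per_person
implied_investment = annual_revenue / 0.10   # capital that 10%/yr would service

print(f"{annual_revenue / 1e9:.0f} B EUR/yr")    # 567 B EUR/yr
print(f"{implied_investment / 1e12:.2f} T EUR")  # 5.67 T EUR
```

    Under those assumptions, the industry would need roughly half a trillion EUR in yearly revenue, every year, just to justify the capital already deployed.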

    @RosaCtrl also, there definitely are tasks that are just inherently boring, no matter how good languages become. I can't imagine that fine tuning SELinux or AppArmor rules for a complex, distributed system can ever be anything but frustrating, for example.

    Unfortunately, that particular example is also completely unsolvable with LLMs. Which I had to explain to our CTO at great length a few weeks back. ಠ_ಠ

    @nyancient no, this is a young colleague, and he’s not the only one. My main reason for melting down over LLMs is people I respect making claims I consider unreasonable. I don’t want an LLM to embrace the current state of affairs, I want to improve it!

    SELinux is a good example. I haven’t worked with it, but I’ve tweaked firewalls and similar stuff, and I find it fascinating! Yes, things could be better, and the UX is often very bad, but this is actually the kind of thing I believe could be fixed with a Turing-incomplete language. And now that “PLs don’t matter”, we will never get there

    @RosaCtrl solution: stop respecting their opinion on technical matters. It's jarring AF at first (I almost had a stroke when my PhD advisor started working on a fucking metaverse product), but the catharsis is so worth it.

    Just like crypto didn't make actual money obsolete, LLMs won't make actual software development obsolete.

    @nyancient oh, yeah! It’s very, very jarring. Especially when you think you are the dumbest one! 😅

    I keep melting down because I feel like I’m missing something. Hence all my questions and polls.

    Edit: if LLMs don’t make programming obsolete, then they surely leave us a pile of crap to work on!

    @RosaCtrl I feel you! When "everyone" is gushing over how LLMs are going to obsolete everything from accountants to software to courts, it can be pretty hard to not get swept up in it. Especially since so much of the marketing is disguised as worry about "the consequences of the next industrial revolution" (remember that open letter from "AI" companies about "AI" being "too dangerous", asking for a six month moratorium on new models? Excellent marketing right there).

    I think the best way to immunize oneself is to just get a good understanding of how LLMs work and why they're not what people claim. I've spent countless hours using state-of-the-art tools to troubleshoot problems that were too hard for me to figure out in a few minutes, and the main takeaway from those sessions is the sheer insidiousness of how working with an LLM feels like making progress while you're really just being led around in circles. Not once has any of these sessions contributed to me solving the problem at hand. Instead, I've wasted my time being tricked into believing that the solution is just around the next prompt.

    Same with code generation. LLMs are great for templating trivial things ("give me a set of python dataclasses matching this openapi spec/example request"), but completely fall down when it comes to anything more complex than that which also needs to be maintained. Seeing the terrible PRs submitted by colleagues who don't even stop to think "is this code even necessary?" before generating 300 lines of technical debt really reinforces that it's not you who missed something - it's them.
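    To make the contrast concrete, this is the kind of mechanical shape-matching that templating handles well; the fields here are invented for illustration, not taken from any real spec:

```python
# The sort of boilerplate an LLM templates reliably: dataclasses that
# mirror the shape of an example JSON request. Field names are made up.
from dataclasses import dataclass

@dataclass
class Address:
    street: str
    city: str

@dataclass
class User:
    id: int
    name: str
    address: Address

# Mirrors e.g. {"id": 1, "name": "Ada", "address": {"street": "...", "city": "..."}}
u = User(id=1, name="Ada", address=Address(street="Main St", city="Oslo"))
print(u.name)  # Ada
```

    Turning an example payload into type declarations is pure transcription; deciding whether those types belong in the system at all is the part that still needs a human.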

    I can really recommend the book "Build a Large Language Model (From Scratch)", which walks you through building GPT-2. Aside from the fact that making a pet bullshit generator is a fun exercise, it really pulls back the curtain on the whole "LLMs are just one step from being a self-aware super intelligence" spiel. The thing that separates OpenAI and friends from anyone with a bit of Python experience and a basic understanding of matrix multiplication isn't some mythical AI secret sauce; it's just having access to more bandwidth and GPU compute.
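    To give a flavour of what "matrix multiplications" means here, this is a minimal sketch of single-head causal self-attention, the core operation of a GPT-style transformer, in plain NumPy. The dimensions and random weights are illustrative, not taken from the book:

```python
# Minimal single-head causal self-attention. A GPT-style model is
# essentially stacks of this plus small feed-forward layers.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv             # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # pairwise token similarities
    mask = np.triu(np.ones_like(scores), k=1)    # causal mask: no peeking at later tokens
    scores = np.where(mask == 1, -np.inf, scores)
    return softmax(scores) @ v                   # each output is a weighted mix of values

rng = np.random.default_rng(0)
d = 8                                            # toy embedding size
x = rng.normal(size=(5, d))                      # 5 "token" embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

    Note the first token can only attend to itself, so its output is exactly its own value vector; everything else is just projections, a softmax, and more matrix products.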

    That said, I still worry sometimes about the effect the AI hype will have on the world, but the realization that the danger is just the same old large scale irrational hype capitalism that's already fucking us, and not some new scary alien technology with the power to reshape reality, makes it infinitely less anxiety-inducing.

    EDIT: and before some smartass comes along to crow about how more recent LLMs are nothing like GPT2: they're just minor iterations on the same concept with a few improvements that are so obvious they could have been a BSc thesis, if BSc theses had access to infinite GPUs.

    @nyancient thanks for sharing! My experience using LLMs for debugging and generating code is pretty much what you describe. Very disappointing. And yet a bunch of people keep reporting amazing results I don’t see, which instead of making me lose respect, made me feel dumber and dumber.

    Is this the book? My understanding of LLMs is just vibes to be honest. I was looking for something like it some months ago because I recognise the only way to tackle my anxiety here is by building more understanding, and I appreciate a direct recommendation https://www.manning.com/books/build-a-large-language-model-from-scratch

    Build a Large Language Model (From Scratch) - Sebastian Raschka

    How to implement LLM attention mechanisms and GPT-style transformers.

    Manning Publications

    @RosaCtrl yeah, that's the book! The physical copy comes with a free ebook copy so that's nice, though Manning will send you ungodly amounts of spam after you register for it.

    Do you personally know anyone reporting these amazing results? I only really know one person who claims any results even close to amazing, and they're kind of well known for taking ill-advised shortcuts and being terrible at estimating stuff. And even they concede that LLMs are only really good for particular types of tasks.

    Having more or less grown up on 4chan, I just assume that anything posted on the internet is a complete fabrication until proven otherwise, so I don't really put much stock in self-reported internet success stories. 😅

    @nyancient yeah, I know more than one. As I said, one is a colleague; two others are folk I started to engage with on social media, and now know personally. That’s why what they say matters to me, I’ve learned to ignore opinions from randos 😅

    And that’s why I’m now in this “let’s assume I’m the stupid one here” phase. So let’s assume they are right, let’s assume you can get software out of a magic box. The thing is, the way I look at software, there’s no perfect method. I don’t think we should strive to get bug-free React apps everywhere. I believe a buggy app written in a novel way is valuable.

    I think what I’m asking here is somewhat similar to: is basic science important even if it never produces anything useful?