I don’t really like supportive tooling for writing code. I work in vim and prefer bazel (with occasional support from a language server client for things like large scale renames).

I’m a reasonable person, I swear to god. I know that’s just a preference; my coworkers who prefer IDEs or whatever are equally valid, and I try to help them where I can. If they say they can’t use generated files because they don’t autocomplete in their IDE, or if they want 4 repos instead of 1 because their tooling can’t handle multiple languages at once, I’m cool with that, really. I try to be accommodating.

That said, my coworker who writes his code with ChatGPT is just not working out for me. For one thing, the code is just a mess: meaningless parameters passed everywhere, error checks that can never trigger because the implementations of the interfaces never throw the errors being checked.
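
Something like this (a hypothetical Go sketch with invented names, not real code from our codebase): the caller dutifully checks for a sentinel error that no implementation of the interface ever returns, so the branch is dead code that still looks diligent in review.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotFound is checked below but never produced by any implementation.
var ErrNotFound = errors.New("not found")

type Store interface {
	Get(key string) (string, error)
}

type memStore map[string]string

// The only implementation returns its own ad-hoc error, not ErrNotFound.
func (m memStore) Get(key string) (string, error) {
	v, ok := m[key]
	if !ok {
		return "", fmt.Errorf("no value for %q", key)
	}
	return v, nil
}

func lookup(s Store, key string) {
	_, err := s.Get(key)
	if errors.Is(err, ErrNotFound) { // can never trigger with memStore
		fmt.Println("key missing")
		return
	}
	if err != nil {
		fmt.Println("lookup failed:", err)
	}
}

func main() {
	lookup(memStore{}, "answer")
}
```

Reviewed one hunk at a time, `lookup` looks careful; you need the whole file in your head to see the check can’t fire.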

But the main problem is that it’s not *designed* at all. Every single one of these things is a unitasker, in exactly the sense Alton Brown used to rant about: this code passes the (generated) unit tests and that’s it. Trying to use it for anything it wasn’t explicitly and specifically designed and tested for is like trying to use a pork puller bear claw when what you want is a fork.

It’s not extensible and it’s not maintainable; if you want to add a parameter or tweak an implementation detail, the only reasonable call is to rewrite it from scratch.

And what’s so frustrating about that is that it LOOKS fine at a glance. It even looks fine in code review, one chunk at a time!

@gnat I think about this post from @ceejbot all the time when thinking about coding with AI: https://blog.ceejbot.com/posts/programming-as-theory-building/

AI cannot have a theory of the system or a point of view on the problem space, so it's actively detrimental to introduce into a codebase imho. It hinders the team's development of a theory of the system, which is the only long-term artifact that holds value.


@gnat I'm not dogmatic about it. Some people use syntax-based autocomplete, and AI coding is basically that if you squint really hard. But God, it's frustrating in a lot of cases.

@paddy @gnat What helps my productivity in an IDE is stuff like navigation help (e.g. finding all calls to a function or where a function is defined), function suggestions, autogeneration of boilerplate. Also, refactoring options like pulling out common code that I have selected into a separate function, because I like to clean things up.

All that was available and working before we had LLMs.

@paddy oh man, yes! This post feels so good to read, like finding a word that’s been on the tip of your tongue for months. You have succinctly said what I’ve been trying to express for ages.
@gnat I read it and was like "oh shit that's the thing I can't put my finger on that I'm trying to communicate here"

@paddy @gnat @ceejbot my view is that you don't just need a theory of a system, but intuition: a gut feeling for which direction to start digging, so that you produce better-than-random attempts, which you can then validate against your theory of the system

as an example, in the Linux kernel I judge locking designs by how they feel. It works really well, with the caveat that intuitive insights have no timeline for when they'll arrive, and sometimes that waiting is nerve-wracking

@sima @paddy @gnat @ceejbot

> my view is that you don't just need a theory of a system, but intuition. a gut feeling for which direction to start digging,

This is how I wrote programs in my teens. But that doesn't get you very far.

@TomSwirly @sima well apparently it gets you to some significant success writing locking designs in or around the Linux kernel, ha. not all that many higher echelons to reach!

@gnat yeah, I've done enough locking design in the kernel that I've given a talk about locking design patterns: http://blog.ffwll.ch/2023/07/eoss-prague-locking-engineering.html

links to my blog posts in there, too

or put differently, I barely chuckled seeing that toot you're replying to fly by 😏


@gnat @sima Having been involved in the review of a "lock-free queue" in C++, I'm extremely skeptical that intuition alone, without proving correctness, is going to get good results.

I do rely on intuition to at least write concurrent code, but that intuition is backed up by a pretty solid mental model of how the specific concurrency I'm using works.

I'll bet if I objected to some aspect of your locking, you'd have a very solid reason for me, not just intuition!
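
For a toy version of what I mean (a hypothetical Go sketch, nothing to do with the actual queue under review): two counters that look interchangeable at a glance, where only a model of the memory semantics tells you one of them loses updates.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var racy int64        // incremented without synchronization: data race
	var safe atomic.Int64 // incremented with an atomic read-modify-write

	var wg sync.WaitGroup
	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			racy++      // load, add, store: interleavings lose updates
			safe.Add(1) // one atomic operation: no lost updates
		}()
	}
	wg.Wait()
	// racy frequently prints less than 1000; safe is always exactly 1000.
	fmt.Println(racy, safe.Load())
}
```

Both increments "look fine"; knowing that `racy++` is a non-atomic load/add/store is the mental model doing the work, and being able to prove it is what review is for.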

@sima @paddy @gnat The intuition is an effect of having the theory-- a thing that our human brains do for us in the background. So yes! Agreed!

@paddy @gnat @ceejbot

I do agree with you, but I have to quibble: we don't know for sure that some AI in the future can't have, effectively, a theory of the system.

I mean, we are essentially meat computers, so we know it's possible.

However, I am quite certain that today's LLMs have no such theory.

@TomSwirly @paddy I actually have an MS in this exact topic; my thesis was on decision-theoretic natural language generation, and I even got a derivative paper published at AAAI. On that basis I can tell you that, in my educated opinion, we are not much more likely to have this than any other Iain Banks technology.

Anything can happen, but I advise everyone to be wary of thoughts like “maybe someday” when it comes to AI - the field is littered with these predictions and they come to pass so rarely!

@gnat @TomSwirly I'm very open to changing this view if the material conditions change, but I also see no harm in basing my views on the material conditions that exist today instead of trying to guess which ones might exist in the future.

@gnat @TomSwirly @paddy

People have been describing computers as "electronic brains" and saying they would replace us since before I was born. And I'm at an age where many sensible people would retire.

The future is unpredictable.

But LLMs are *NOT* likely to get us there.

@gnat @paddy

I do actually make my living in this field...

My claim is that we know that thinking machines are **possible**, because we are thinking machines.

I'm almost completely certain we won't build *artificial* thinking machines, because we're speed-running the death of technological civilizations, but thinking machines exist and one is typing this response.

So what are you objecting to, exactly? Is it that you're 100% certain that it's not happening, but I'm only almost 100%?

@TomSwirly @paddy which field is that? If you’re talking philosophy, cognitive science, or even robotics or something like that, that’s pretty interesting; if all you mean is that you’re a software engineer like the rest of us it’s not much of a credential in this context, sorry.

Anyway, my objection is that it was silly for you to interject “sure, you’re correct today, and in every reasonably foreseeable future, and I’m almost completely certain you’re right, but —”.

Another objection is that any definition you’re using for “thinking machine” that is flexible enough to include both you and a hypothetical descendant of ChatGPT is uselessly broad.

I probably have more but I’m going to work now; have a good morning.

@gnat @TomSwirly I mean, I think my whole objection here is that I'm saying "machines can't" (present tense) and you're jumping in to say "machines could" (future tense). Sure, machines could, and if they do I'll update my stance, but nobody is talking about what machines could do in the future; we're talking about what they can do in the present. So why bring it up?
@gnat @TomSwirly When you combine that tangent with the fact that the LLM industry loves to use claims about what is theoretically possible to justify things that don't make sense given its demonstrated abilities in the present, the whole thing comes across like you're, at best, trying to pull us into a different conversation than the one we're having, or at worst, trying to pull the same sleight of hand the LLM industry is so fond of.

@paddy @gnat

I didn't pull you into anything. You wrote: "AI cannot have a theory of the system or a point of view on the problem space."

This isn't a statement about today, it's an absolute statement about what AIs could conceivably do, ever.

Now, my actual beliefs are these:

* LLMs are a massive heist for the benefit of a tiny number of rich people.

* It's hard to believe that these utterance averagers will ever become even competent.

1/

@paddy @gnat

* Intelligent machines are clearly *possible*, because humans are such machines.

* But I see no path from LLMs to actually intelligent machines. I think today's AI is a boomscam.

* And it makes no difference. What's actually going to happen is that we will devastate our climate and ecosystem, our technological civilization will collapse, and computers and integrated circuits will be history.

2/

@paddy @gnat I do have this issue as a mathematician with a science background writing on the Internet.

People seem to interpret "possible" as "having a significant chance of happening" - say, p > 0.1.

But when I use the word "possible" I use it to mean exactly that: p > 0.

So I can believe that today's AI bubble is a scam, a bubble, a heist, and almost certainly will go nowhere, while still believing that AGI is *possible*.

/thread

@gnat @paddy

I'm a trained mathematician, now working on the PyTorch project, so somewhere between those two.

You are attacking something I did not say. The annoyance directed at me in this thread for saying things that are undeniably true isn't very nice.

PP made the absolute statement "AI cannot have a theory of the system or a point of view on the problem space". The word "today" does not appear anywhere in there.

The person who wrote this *cannot know this for sure*.

1/

@gnat @paddy

Actually, I wrote a longer thread: here it is.

https://toot.community/@TomSwirly/114448829400151307

I think that all this AI stuff is almost certainly bunk, and it's instead a heist, a crime, and a fake. I strongly do not believe LLMs have any theory of mind or of systems.

But as a mathematician, I also strongly object to people being certain about future events, particularly when we have irrefutable evidence that intelligent machines, such as ourselves, exist.

Are we clear now?

@TomSwirly your point is that intelligence can exist. This is trivial, obvious, and widely understood by absolutely everyone. I think it is a silly thing to say. I have lost patience with this extremely verbose nitpick.

@gnat

> your point is that intelligence can exist.

No, it isn't. It baffles me that you can read what I wrote and say that. It reads like mockery. Indeed, none of your responses are polite.

I agree; it's not worth wasting my time when there are so many kind and respectful people on Mastodon.

@TomSwirly @paddy @gnat No feedback loop: no actual intelligence.
@paddy @gnat @ceejbot I don't know if I'm tired or what, but that essay was hard to read
@paddy @vaurora @gnat @ceejbot this article is soooo good. I added "legacy code archaeology" to the pile-of-skills section of my resume many years ago after realizing how often I had to reconstruct past authors' thinking. I routinely remind folks on teams that they are writing comments for future people who will not have the context they do, and that incomplete or out-of-date docs are better than none because they share that thinking.

@paddy @gnat @ceejbot I had posted these quotes last year (without any commentary): https://discuss.systems/@burakemir/112621313656511067

I think it stands to reason that what is called "design" should be considered part of the theory.

But there is a challenge: the various artifacts we can come up with are never enough to capture the theory.

burakemir (@burakemir@discuss.systems): Papers to be read, again and again: Peter Naur, "Programming as Theory Building" (1985), https://pages.cs.wisc.edu/~remzi/Naur.pdf — "The present discussion is a contribution to the understanding of what programming is. It suggests that programming properly should be regarded as an activity by which the programmers form or achieve a certain kind of insight, a theory, of the matters at hand. This suggestion is in contrast to what appears to be a more common notion, that programming should be regarded as a production of a program or certain other texts."
@burakemir @paddy @gnat The map is not the territory. The map helps, though.