@krismicinski @shriramk @jfdm @csgordon @lindsey @jeremysiek we have been claiming for decades that we are not just educating coding monkeys, so it shouldn't really matter that LLMs can now do all the coding. As far as I see it, it's still necessary to identify and clearly formulate verifiable requirements and specifications, come up with a modular design, and verify the whole thing, because I still believe the ultimate responsibility lies with the developer. So students still need to understand the fundamentals. But yes, it has become much harder to check *at scale* whether they actually grasped them.

@GeorgWeissenbacher @[email protected] @jfdm @csgordon @lindsey @jeremysiek
Yes to most of that. I think it's not that hard to assess, if that is what people were always assessing.

I actually disagree w/ your opening comment. Most intro CS educators will say (and have said), "I don't teach programming, I teach *problem solving*" (whatever the fuck that is). My response is, "great, this should be your liberation! Programming got easy, what are your «problem solving» ideas?"

@shriramk @GeorgWeissenbacher @krismicinski @jfdm @csgordon @lindsey @jeremysiek ... did programming get easy? Can one be said to be programming if one asks someone else (or an LLM) to write a program for you? Or is some other kind of (not- or not-quite-programming) interaction going on?
@tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @lindsey @jeremysiek
I very much think of what I'm doing with Claude Code as a kind of programming — indeed, the kind of programming I always wished I could do! But if it makes you happier to use a different term for it (not "vibecoding", that has too many specific connotations and is definitely not how *I'm* doing things), and it's *useful* to have that other term…that's fine by me. I guess my slogan is: "Philosophy…but not too much".
@shriramk @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek This comment made me realize something about myself: this is *not* a kind of programming I always wished I could do. I really only like programming because I like manipulating formal systems. That might explain a lot about why this kind of programming doesn't appeal to me, aside from all the bad externalities.
@lindsey @shriramk @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek another issue (at least for me) is that it's far easier to write code than it is to read somebody else's code, and I think ppl using LLMs to write code are optimizing for the wrong kind of thing (also, negative externalities aside, carbon tax wen)

@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Yes, I'm sure people like me know so little about software that we're all "optimizing for wrong kind of thing". 🙄

You do know how absurd these kinds of remarks look, right?

@shriramk @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek oh, apologies, I didn't mean to imply that you were optimizing for the wrong thing; I guess I was referring to the vibecoding community

I'm also not saying that what you're doing is vibecoding either; my remark was more for the ppl using LLMs to vibecode

I should have been more specific 🫠

@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Implicit apology accepted, but I still don't agree with you. The vibecoders aren't out to read any code at all *by definition*. So who cares whether the code is readable?

There are lots of tasks that computing can help with. I keep describing programming as a "superpower": there are things you wouldn't even attempt if you didn't know programming. As an example: ↵

@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
In my courses, I make up all kinds of wacky grading schemes for assignments, because I know full well that come semester's end, I can figure out how to turn these into grades — using code. Sometimes it takes me 2-3 hours to wrangle, but I *know* I can do it. Most profs don't have that power, so they just use whatever crappy grading scale Canvas or Gradescope or whatever gives them. ↵
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Likewise, I teach my students SQL and Unix commands because they are another kind of superpower. Even if you know how to code, writing out what a five-command Unix pipeline does could take half a day in Java, so you would never get started; whereas you can bang out the pipeline in the shell and get on to the next task that depends on its output. ↵
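A concrete (made-up) instance of the kind of pipeline I mean — the file name and contents here are invented for illustration, but each stage is a standard Unix tool, and reimplementing the whole thing in Java would mean writing tokenizers, hash maps, and comparator boilerplate first:

```shell
# Made-up data file, just for the example.
printf 'the cat sat on the mat near the dog\n' > words.txt

# Top five most frequent words:
# tokenize -> lowercase -> sort -> count duplicates -> rank -> take top 5
tr -cs 'A-Za-z' '\n' < words.txt | tr 'A-Z' 'a-z' \
  | sort | uniq -c | sort -rn | head -5
```

You'd never budget half a day of Java for a one-off question like this; in the shell, it's a minute of typing and you move on.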

@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
If, now, *everyone* can have that superpower within reach — why are we not all *CELEBRATING*?!?

We should be using our knowledge and strength to figure out how to enable people to do this safely and responsibly, not acting as high-priest gatekeepers like the men standing around IBM mainframes locked away in "computer rooms", dressed in such a way you can't tell if the photo is b&w or color.

@shriramk To me, the answer of "why [am I] not celebrating" is that it feels like LLMs are an abjuration of the idea that we should care about what we're working on; that the question of "so why or how does it work" can and should be safely delegated to the machine and we go on our merry way, filling an ever-diminishing gap.

In my personal experience it's very easy to beget software with an LLM that I don't understand. I find it really shocking to take a project I originally made with an LLM and then start coding in it by hand: it's like I'm using a library whose documentation I've read but whose code I've never worked with, which is, imo, quite accurate. There are alternative workflows for using LLMs, but those also give up a lot of the productivity improvements, so it's hard for me to see the point.

A colleague of mine, who works in physical modeling, said this:

> Trying to help out in such a situation [where there's an issue in an agent-built model] can be a uniquely frustrating experience since you're not actually talking with the brain that built the model, there is now a clueless human in between. "I don't know, the agent did it" as an answer to each and every question is a clear indication that the human modeler has outsourced too much of the understanding to the agent.

I see a lot of LLM systems where the author doesn't in any meaningful way understand how their system works, and I feel like ultimately this is a step backwards in how well people understand the software and overarching systems they're building. The answer the community seems to be moving towards is that this doesn't matter: you don't need to understand how any of the systems work because the agent will deal with that for you. I personally find this an extremely unappealing and objectionable outlook.

@ckfinite Yes to all that, but I also don't understand how a lot of the code I work with works, and I struggle to understand my *own* code from, say, six months ago.

There are certainly programs I've written that were at the very, very edge of my mental powers when I wrote them, that I now just take on faith, and *others* also take on faith. ↵

@ckfinite Most code isn't that important or difficult or interesting. A lot of code is just reproducing things in the "thick of the distribution". I think it's great if that can be made quick and cheap and easy to produce.

As someone who's spent a whole lifetime caring about meaning and correctness, I can separate that from stuff that Really Matters. (Of course, *many* things are "mission-critical"; it depends on one's mission!) ↵

@ckfinite We've never before had the opportunity to ask "If code were cheap to come by, what are the interesting problems?" That was a sci-fi question until now. Now that it's some approximation to reality, I want to think about how we can move up the reasoning chain. The need for meaning and correctness doesn't go away, but maybe it can be tackled in different, new ways.
@shriramk
> As someone who's spent a whole lifetime caring about meaning and correctness, I can separate that from stuff that Really Matters.
What gives me pause about this is that frequently the stuff that I didn't think really mattered - the unknown unknowns, if you will - ended up really surprising me. It's the loss of the bits of friction I didn't expect that makes me worried about the ability to maintain context long term while using LLMs.

@ckfinite Both points are valid.

But I feel a bit like how I feel about being contrarian (while not being truly contrarian) re. self-driving cars: *have you seen the alternative*? I think we often construct mythical baselines, when all the time I've spent studying humans and understanding the literature on humans leaves me way, way less optimistic and more despondent about that baseline.