@GeorgWeissenbacher @[email protected] @jfdm @csgordon @lindsey @jeremysiek
Yes to most of that. I think it's not that hard to assess, if that is indeed what people were always assessing.
I actually disagree w/ your opening comment. Most intro CS educators will say (and have said), "I don't teach programming, I teach *problem solving*" (whatever the fuck that is). My response is, "great, this should be your liberation! Programming got easy, what are your «problem solving» ideas?"
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Yes, I'm sure people like me know so little about software that we're all "optimizing for wrong kind of thing". 🙄
You do know how absurd these kinds of remarks look, right?
@shriramk @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek oh apologies, I didn't mean to imply that you were optimizing the wrong thing, I guess I was referring to the vibecoding community
im also not saying that what you're doing is vibecoding either, my remark was more for the ppl using llms to vibecode
i should have been more specific 🫠
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Implicit apology accepted, but I still don't agree with you. The vibecoders aren't out to read any code at all *by definition*. So who cares whether the code is readable?
There are lots of tasks that computing can help with. I keep describing programming as a "superpower": there are things you wouldn't even do if you didn't know about programming. As an example: ↵
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
If, now, *everyone* can have that superpower within reach — why are we not all *CELEBRATING*?!?
We should be using our knowledge and strength to figure out how to enable people to do this safely and responsibly, not acting as high-priest gatekeepers like the men standing around IBM mainframes locked away in "computer rooms", dressed in such a way you can't tell if the photo is b&w or color.
@shriramk @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek >why are we not all *CELEBRATING*?!?
ik this is a rhetorical question, but id have to do some introspection to answer this, apologies :\
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Please do, I welcome it. Hearing thoughtful other views helps me sharpen my thinking.
To be clear, I see problems too. But at least *some* of those problems have been around from the dawn of time: arguably the whole field of software engineering (beyond PL) exists because "humans always fuck up, how can we help them".
Of course, speed, scale, etc. are all different, and there are subtleties like copyright. ↵
@shriramk To me, the answer of "why [am I] not celebrating" is that it feels like LLMs are an abjuration of the idea that we should care about what we're working on; that the question of "so why or how does it work" can and should be safely delegated to the machine and we go on our merry way, filling an ever-diminishing gap.
In my personal experience it's very easy to beget software with an LLM that I don't understand. I find it really shocking to take a project I made with an LLM originally and then start coding in it by hand - it's like I'm using a library whose documentation I read, but whose code I've never worked with, which is... imo quite accurate. There are alternative workflows for using LLMs, but those also get rid of a lot of the productivity improvements at the same time so it's hard for me to see the point.
A colleague of mine, who works in physical modeling, said this:
> Trying to help out in such a situation [where there's an issue in an agent-built model] can be a uniquely frustrating experience since you're not actually talking with the brain that built the model, there is now a clueless human in between. "I don't know, the agent did it" as an answer to each and every question is a clear indication that the human modeler has outsourced too much of the understanding to the agent.
I see a lot of LLM systems where the author doesn't in any meaningful way understand how their system works and feel like ultimately this is a step backwards in how well people understand the software and overarching systems that they're building. The answer the community seems to be moving towards is ultimately that this doesn't matter, that you don't need to understand how any of the systems work because the agent will deal with that for you, but I personally find this an extremely unappealing and objectionable outlook.
@ckfinite Yes to all that, but I also don't understand how a lot of the code I work with works, and I struggle to understand my *own* code from, say, six months ago.
There are certainly programs I've written that were at the very, very edge of my mental powers when I wrote them, that I now just take on faith, and *others* also take on faith. ↵
@ckfinite Most code isn't that important or difficult or interesting. A lot of code is just reproducing things in the "thick of the distribution". I think it's great if that can be made quick and cheap and easy to produce.
As someone who's spent a whole lifetime caring about meaning and correctness, I can separate that from stuff that Really Matters. (Of course, *many* things are "mission-critical", it depends on one's mission!) ↵
@ckfinite Both points are valid.
But I feel a bit like how I feel about being contrarian (while not being truly contrarian) re. self-driving cars: *have you seen the alternative*? I think we often construct mythical baselines, when all the time I've spent studying humans and understanding the literature on humans leaves me way, way less optimistic and more despondent about that baseline.