@GeorgWeissenbacher @[email protected] @jfdm @csgordon @lindsey @jeremysiek
Yes to most of that. I think it's not that hard to assess, if that is what people were always assessing.
I actually disagree w/ your opening comment. Most intro CS educators will say (and have said), "I don't teach programming, I teach *problem solving*" (whatever the fuck that is). My response is, "great, this should be your liberation! Programming got easy, what are your «problem solving» ideas?"
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Yes, I'm sure people like me know so little about software that we're all "optimizing for wrong kind of thing". 🙄
You do know how absurd these kinds of remarks look, right?
@shriramk @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek oh apologies, I didn't mean to imply that you were optimizing the wrong thing, I guess I was referring to the vibecoding community
im also not saying that what you're doing is vibecoding either, my remark was more for the ppl using llms to vibecode
i should have been more specific 🫠
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Implicit apology accepted, but I still don't agree with you. The vibecoders aren't out to read any code at all *by definition*. So who cares whether the code is readable?
There are lots of tasks that computing can help with. I keep describing programming as a "superpower": there are things you wouldn't even do if you didn't know about programming. As an example: ↵
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
If, now, *everyone* can have that superpower within reach — why are we not all *CELEBRATING*?!?
We should be using our knowledge and strength to figure out how to enable people to do this safely and responsibly, not acting as high priest gatekeepers like the men standing around IBM mainframes locked away in "computer rooms", dressed in such a way you can't tell if the photo is b&w or color.
@shriramk @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek >why are we not all *CELEBRATING*?!?
ik this is a rhetorical question, but id have to do some introspection to answer this, apologies :\
@nemo @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Please do, I welcome it. Hearing thoughtful other views helps me sharpen my thinking.
To be clear, I see problems too. But at least *some* of those problems have been around from the dawn of time: arguably the whole field of software engineering (beyond PL) exists because "humans always fuck up, how can we help them".
Of course, speed, scale, etc. are all different, and there are subtleties like copyright. ↵
@shriramk To me, the answer of "why [am I] not celebrating" is that it feels like LLMs are an abjuration of the idea that we should care about what we're working on; that the question of "so why or how does it work" can and should be safely delegated to the machine and we go on our merry way, filling an ever-diminishing gap.
In my personal experience it's very easy to beget software with an LLM that I don't understand. I find it really shocking to take a project I made with an LLM originally and then start coding in it by hand - it's like I'm using a library whose documentation I read, but whose code I've never worked with, which is... imo quite accurate. There are alternative workflows for using LLMs, but those also get rid of a lot of the productivity improvements at the same time so it's hard for me to see the point.
A colleague of mine, who works in physical modeling, said this:
> Trying to help out in such a situation [where there's an issue in an agent-built model] can be a uniquely frustrating experience since you're not actually talking with the brain that built the model, there is now a clueless human in between. "I don't know, the agent did it" as an answer to each and every question is a clear indication that the human modeler has outsourced too much of the understanding to the agent.
I see a lot of LLM systems where the author doesn't in any meaningful way understand how their system works and feel like ultimately this is a step backwards in how well people understand the software and overarching systems that they're building. The answer the community seems to be moving towards is ultimately that this doesn't matter, that you don't need to understand how any of the systems work because the agent will deal with that for you, but I personally find this an extremely unappealing and objectionable outlook.
@ckfinite Yes to all that, but I also don't understand how a lot of the code I work with works, and I struggle to understand my *own* code from, say, six months ago.
There are certainly programs I've written that were at the very, very edge of my mental powers when I wrote them, that I now just take on faith, and *others* also take on faith. ↵
@ckfinite Most code isn't that important or difficult or interesting. A lot of code is just reproducing things in the "thick of the distribution". I think it's great if that can be made quick and cheap and easy to produce.
As someone who's spent a whole lifetime caring about meaning and correctness, I can separate that from stuff that Really Matters. (Of course, *many* things are "mission-critical", it depends on one's mission!) ↵
@ckfinite Both points are valid.
But I feel a bit like how I feel about being contrarian (while not being truly contrarian) re. self-driving cars: *have you seen the alternative*? I think we often construct mythical baselines, when all the time I've spent studying humans and understanding the literature on humans leaves me way, way less optimistic and more despondent about that baseline.
@shriramk To me this is an odd (and, frankly, extremely personal-seeming) attack. It is possible to have a formal system that one enjoys manipulating (a subjective assessment) that is limited in scope or does not capture other aspects of the system. I, fundamentally, do not think that you must formalize your entire world in order to pick parts of it off into interesting and fun abstractions that can be manipulated in and of themselves.
I read this as arguing that Lindsey's personal, subjective, enjoyment was invalid due to not having a complete formal world model, and I ultimately think that this isn't a valid position. You might not have the same opinion, but I find it bizarre to argue that the very enjoyment is wrong.
@shriramk To me, I read quite a lot of the terminology you're using as being very specifically judgemental, and when combined with the second person framing it becomes very personal.
When contextualized with a statement about a subjective position, about a personal feeling, it comes off as either:
* A value judgement of the feeling that she's expressing in and of itself; that it is wrong to feel enjoyment from a partial formalism and that your viewpoint is categorically better, or
* A category error where you're making a point about the community come off as a judgement about a person.
I agree with your point about the community, but think that the way this is framed comes off very strongly as the former, whereas your response to me was that you were intending the latter.
> as in, we think we're super-formal when we're actually only 1-2 degrees more formal than the entirely-informal
I would contrast this answer with what you said earlier
> To be fair, I too suffered from this flaw until Gregor Kiczales kindly beat it out of me with a few pointed remarks.
to imply that no, you are different, you are entirely formal because you are disclaiming yourself of the flaw.
I'm not different; I'm very informal. These days I work mostly in physical simulation and embedded control systems, and there's tons of informality in both settings. What I think those domains illustrate, though, is that even in a "fully formal" software system, where the architecture, the temporal properties, and the concurrency requirements are all fully specified, large parts of the system aren't, and they go quietly unnoticed.
A classic example is Therac-25, where the race condition was between the speed the user typed at, the UI state, and the speed the machine's turntable rotated at. A more modern example would be Rowhammer. Even in systems where everything's formally verified and proven down to RTL, you still have informality in the analog domain.
@ckfinite Thanks for this critique. I definitely did not mean to make this *personally* about @lindsey . Since I have given that impression — sorry, Lindsey. I always think of Lindsey as one of the "good eggs", so I *especially* don't mean to call her out personally.
I do mean to call out the PL community broadly, though. And I say this as someone acknowledging my own flaws in this regard. I'm not sure how you read my Gregor comment as me saying I am "entirely formal", but probably no matter.
there are maybe a lot of different dimensions here i think. personally, one of the main reasons i do a lot of programming is dislike for how programming feels to me now versus how i feel it could feel, and my desire to make systems which feel different. my main personal interest is in ways of interacting with computing that are more spatial and tangible, which led to me initially disliking chat-based approaches, which seemed to be taking me farther away from the goal state.
i've basically since done a 180 on this, as the combination of increased model capabilities and refining my processes has led to a point where, despite not being what i want at an immediate level, the 'new programming' feels like a happier intermediate to me than the old way of doing things. it seems to let me interact with code a bit closer to the conceptual level i wanted, even if the cost is a layer of intermediation that feels very different than i imagined. it still feels very much like a k steps forward, m steps back situation, where k and m are rapidly in flux, even as the ratio has been trending mostly positive.
@shriramk @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek right, I hadn't realized how bottlenecked I was by lack of student + my own time. like I have one million ideas and now I can pursue 0.0007% of them instead of 0.0003%. or whatever.
LLM coding is perfect for profs-- we're time-limited experts and we're mostly not expected to produce really awesome code anyhow.
@regehr @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek
Papert, Kay, etc. used the phrase "tools for thought". (@tonofcrates is teaching a course by that name!)
This feels like a new tool-for-thought. Including exposing both the weakness and incompleteness of my thought, which is what a good tool ought to do.
As a PL person, I'm excited to be able to rapidly prototype a PL and actually *use* it, not just reason through calculi (a different tool for thought).
@jbigham @regehr @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek @tonofcrates
I really like Jakob Nielsen's summary of 30 years of UX research on 3 levels of interactivity:
0.1 second = instantaneous
1.0 second = uninterrupted
10 seconds = keeps attentionª
ª still true?
Anything beyond that, and your brain wanders. These things have the *worst* response time: they return in minutes rather than seconds or a day, too slow to hold your attention, too fast to go do something else.
That's why we have *three* social media.
https://www.nngroup.com/articles/response-times-3-important-limits/
@shriramk @regehr @lindsey @tonyg @GeorgWeissenbacher @krismicinski @jfdm @csgordon @jeremysiek @tonofcrates
i think the big opportunity, which is hard, is how to usefully keep people attending with these things. i'm starting to see work where humans are still in the loop. but, that requires a lot more focus on updating the human's mental model (need way more than just following along the chat), and then how would you usefully intervene?
@jbigham @regehr @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek @tonofcrates
To me a really exciting open research question is: how can we keep humans in the loop in such a way that their work is
- minimal/moderate
- meaningful
Either one is easy. Security alerts were minimal, but useless because not meaningful. "Review all the generated code" is (maybe) meaningful, but not moderate.
In specialized sub-domains we have some answers. Don't yet know how to generalize.
@tonyg @shriramk @regehr @lindsey @GeorgWeissenbacher @jfdm @csgordon @jeremysiek
A simple, albeit terribly incomplete, analogy is that an LLM is a fancy search engine. Like any search engine, it can hinder thinking (e.g. searching for solutions to a homework problem, plagiarizing sources for an essay) or support thinking (e.g. facilitate debugging, surface unexpected sources).
@tonofcrates @tonyg @regehr @lindsey @GeorgWeissenbacher @jfdm @csgordon @jeremysiek
As someone who thinks about metaphorical and analogical thinking, I'd say this is a terrible analogy in general and a fantastic analogy in this specific case and for this audience. (-:
That is, it fails very badly as an analogy for understanding mechanism, but it works very well as an analogy for understanding use/effect. I hadn't really thought about that distinction before: analogies for purposes.
@shriramk @regehr @lindsey @tonyg @GeorgWeissenbacher @jfdm @csgordon @jeremysiek this!
I’ve been exploring language design (a language I’ve been brewing for 30ish years)
I’ve been exploring a new operating system (first bring-up on hardware yesterday)
The flexibility of “yeah, this is a dead end, try that” is liberating
@shriramk @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek it is what it is, and I suspect we'll all feel differently (not sure how though) once the technology stabilizes and the novelty wears off.
but I am sure happy at the prospect that I might never have to fight with tikz, cmake, or some fiddly LLVM API the hard way again
@regehr @shriramk @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek but all of the above-mentioned problems:
- APIs too fiddly (bc you don't have any say in which development efforts are funded)
- student-to-instructor ratio too high (bc you don't have the power to set limits & force hiring of instructors to match enrollment)
- not enough time for interesting research/experiments
...could be better solved with a union & no LLMs than with no union & unregulated LLMs.
@regehr @shriramk @lindsey @tonyg @GeorgWeissenbacher @[email protected] @jfdm @csgordon @jeremysiek Regarding the two camps: I believe there also is a temporal aspect. Say, 110% in the code camp as a grad student and assistant professor, and maybe one grows a little bit out of it later on.
LLM-based coding allows me to do much more prototyping and playing around with new ideas that I wrote down in my notebooks over ten years ago. The alternative would just be to have nothing instead...