@rupert @EmilyEnough

As a system architect, this is also what I do. The thing is, I absolutely depend on the people who do the implementation having good judgement. They need to fill in the gaps (if there were no gaps, I would have an implementation already) but also tell me if there are real problems with some of the ideas. This is why the first thing I do with a design is have it reviewed by people who will implement it. If they tell me ‘actually, this thing you forgot to consider is where our critical path is’ then that often leads to a complete redesign, or at least to significant change. The LLM will just produce something. With an ‘agentic’ loop and some automated testing, it will produce something that passes my tests. But it won’t tell me I’m solving the wrong problem.

I don’t have a problem with constrained nondeterminism in general. There are loads of places where this is fine. The place I used machine learning in my PhD was in prefetching. Get it right and everything is faster. Get it wrong and you haven’t lost much. This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one. The other place it works well is if you have a way of immediately validating the output. I supervised a student using some machine-learning techniques to find better orderings of passes for LLVM. They were tuning for code size (in a student project, this was easier than performance, which requires more testing). You run the old and new versions; one is smaller. That gives you an immediate signal, so using non-deterministic state-space exploration is great. You (probably) won’t get the optimal solution, but you will get a good one, for far less effort than trying to reason about the behaviour of the interactions between dozens of transforms.
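The shape of that search can be sketched in a few lines: shuffle pass orderings at random, keep whichever gives the smallest output. Everything below is illustrative, not the student's actual setup — the pass names are just labels, and `code_size` is a toy stand-in for "compile with this pass order and measure the binary", which in practice would invoke `opt`/`llc` and stat the object file.

```python
import random

# Illustrative pass names; a real search would pass an ordering of these to `opt`.
PASSES = ["inline", "gvn", "sroa", "licm", "dce", "instcombine"]

def code_size(ordering):
    """Toy stand-in for the real oracle 'compile with this order, measure size'.
    Deterministic so the sketch is runnable; the real thing runs the compiler."""
    size = 1000
    for i, p in enumerate(ordering):
        # Fake interaction between passes: effect depends on position in the order.
        size -= (len(p) * (len(ordering) - i)) % 37
    return size

def random_search(trials=200, seed=0):
    rng = random.Random(seed)
    # Start from the default order so we can never do worse than the baseline.
    best_order, best_size = PASSES[:], code_size(PASSES)
    for _ in range(trials):
        candidate = PASSES[:]
        rng.shuffle(candidate)
        s = code_size(candidate)      # immediate, cheap validation signal
        if s < best_size:             # smaller binary wins, keep it
            best_order, best_size = candidate, s
    return best_order, best_size

order, size = random_search()
```

The point of the sketch is the asymmetry in the text: each trial is cheap to validate (compile, compare sizes), a bad sample costs almost nothing, and you never have to reason about why one ordering beats another.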

It’s not clear to me that LLMs for programming have either of these properties.

@david_chisnall @rupert @EmilyEnough

"This kind of asymmetry is great for ML-based probabilistic approaches: the benefit of a correct answer massively outweighs the cost of an incorrect one."
@david_chisnall

Good god. Not if the incorrect answer leads to the mass death of the innocent. Which it almost always does.
ST

"Evil knows no ideology or boundary, only an eloquent stance behind them."
SearingTruth

@SearingTruth @david_chisnall @EmilyEnough
I don't think anyone's claiming that there's any benefit of a correct answer that "massively outweighs the cost" of mass death.

@SearingTruth @david_chisnall @EmilyEnough Right, and if that asymmetry doesn't apply, as in your example, then it's not a good candidate for ML.

@rupert @david_chisnall @EmilyEnough

It's a perfect example.

As machine learning comprehends nothing.
ST

@SearingTruth @david_chisnall @EmilyEnough Which is why the decision to apply it is made by people. And people can decide how to weigh the mass death of innocents, and we should not allow those decisions to be made by people who will get it wrong.