@matt Apologies for the wall of text, but it's a nuanced topic that I can't do justice to succinctly.
This is hard to give a precise answer to because “work” is not a discrete binary category here. (I suspect this might also be driving some of the disagreements.) Personally I am specifically worried about code that compiles and seems to do the right thing… yet has security, reliability, or other quality problems that go uncaught. Sure, human programmers also make that kind of mistake.
I suppose the short honest answer is that I don't think it works well enough to my satisfaction. Specifically, to me, one of the great things about software is that it's deterministic (well, mostly). Considering that randomness is core to what makes LLMs work well (to the degree they do), I have a hard time accepting that as “working”, I suppose.
I think this is interesting, because I suspect a lot of us are looking at the same data but interpreting it differently (think of two researchers categorizing the same statistics differently… they reach different viewpoints despite looking at the exact same raw data).
I *do* know of devs who are very clear that even if it worked as claimed, it would not change their opinion one bit. I would like to believe I am one of those, but I have never been in a situation where I was genuinely convinced it works well. A lot of aspects of tech are already highly immoral, yet I still participate in at least some of that tech (but I also reject some of it, and have made parts of my tech life significantly less convenient because of that).
Another reason I don't use LLMs is about neither ethics nor how well they work, but the fact that I'm apprehensive about the cognitive effects they will have on *me*. Whether it's speech or writing, human language is a useful way to *think through problems* and carefully consider ideas and how they connect. I am not confident that my abilities for abstract reasoning will not be negatively affected by LLM usage, so that is also a reason I refrain.