this is what actual nuance about AI looks like - Dorian Taylor @doriantaylor on how LLM coding is vaguely possible, but of limited usefulness for real work

https://buttondown.com/dorian/archive/slop-machine-future/

@davidgerard @doriantaylor even these nuanced takes get a lot wrong and assign too much usefulness where none is deserved
@cap_ybarra @davidgerard cool, what do they get wrong

@doriantaylor @davidgerard

"it's good for writing unit tests"

is it really though? how are you sure the tests fail in the circumstances you need them to? are all unit tests equally valuable?

"i didn't have to read the documentation for x"

so you understand all the ways it can fail, and the circumstances under which you expect it to succeed? you're sure it didn't miss some critical detail?

the answer to all of these questions, btw, comes from grokking the code, which means you need to read the code and toy with it enough to sufficiently inhabit it, in which case how much time did you save, really?

@cap_ybarra @davidgerard

1) at no point did i say it was *good* for writing unit tests, i just said it was *possible* to generate them; whether they're any good is a separate consideration

like these things are known to make bad tests and even alter tests to pass; my point was you can't get away with skipping test coverage whether you write the tests by hand or it generates them, because of the way it works ("works")

but as we both pointed out, no guarantee generating tests will save any time

@doriantaylor @davidgerard you're right, my read was insufficiently close. i'm a jerk