What makes LLMs work isn't deep neural networks or attention mechanisms or vector databases or anything like that.

What makes LLMs work is our tendency to see faces on toast.

@jasongorman No, seriously - that's exactly what it is & it's a mechanism exploited by charlatans since always: https://softwarecrisis.dev/letters/llmentalist/
@jwcph @jasongorman I've never had a piece of toast with a face on it write hundreds of lines of code for me that actually works because I talked at it tho
@feld @jwcph Going by the hard evidence, nobody's had an LLM do that, either. Not without them fixing it.
@jasongorman @jwcph what do you mean? I tell it that it must pass the test case, the formatter, the linter, and static analysis... and it emits code, executes what I asked, and produces a working result that passes the test case

it automatically reads the results of the formatter, linter, compiler, static analysis, and test cases, and corrects any errors it encounters

if you don't give it thorough instructions it will produce garbage. so just give it the same guidance you'd give a jr dev, along with examples of best practices
@feld @jasongorman Nobody believes you, dude. Let it go.
@jwcph @jasongorman why would I lie? I have literally nothing to gain by convincing someone that you *can* coerce these tools into doing something useful
@feld @jasongorman - and yet you're still trying...
@jwcph @jasongorman do you think spreading these lies will really make these tools disappear or something? what a strange obsession

they'll still be around 5 years from now. they're not going anywhere.