RE: https://mastodon.social/@ekis/116298222342530412

been thinking about the key factors that lead people to think LLMs, as they exist now, are a replacement for programmers

1) a misconception about how far a working demo is from production-ready software: people without experience think that once you have a working demo you are like 90% of the way done, when in reality you are more like 10%

2) it's like gambling: sometimes, especially early, it's possible to get good results, and so one ignores the diminishing returns, or all the times it didn't work

obvio, there are ways to use them

I still believe a local LLM for FIM autocomplete is a nifty tool especially for writing scripts
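For context, FIM (fill-in-the-middle) autocomplete gives the model the code on both sides of the cursor and asks it to generate what goes between. A minimal sketch of how such a prompt is assembled, assuming StarCoder-style sentinel tokens (token names vary by model, so treat them as an assumption, not a universal format):

```python
# Sketch of a FIM (fill-in-the-middle) prompt builder.
# Sentinel token names follow the StarCoder convention; other
# models use different ones.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a FIM prompt: the model is expected to generate
    the code that belongs between `prefix` and `suffix`."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Example: the cursor sits between a function signature and its return.
before_cursor = "def add(a, b):\n    "
after_cursor = "\n    return result\n"
prompt = build_fim_prompt(before_cursor, after_cursor)
```

A local model served behind any completion endpoint can consume a prompt like this; the editor then splices the generated middle back in at the cursor.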

I think they can be helpful with debugging by processing large amounts of logs

But the trend isn't towards that; it's towards hyperscalers and agents, which are incredibly slow, with the resulting quality, or the amount of iteration required, incredibly unimpressive

That's before even addressing the fact that the math doesn't add up on the concept of agents, or the waste

@ekis Case in point, I'm working on a personal software project right now, and I spent the last month carefully building a foundation that I can expand with additional features as I go. I had an architecture in mind, I knew where I wanted to go, and I built this with the larger context in mind. That's paying off now, and adding features is easy.

AI literally cannot do this. The people bragging about their AI velocity are shipping hideous spaghetti code. It'd be funny if it weren't so dangerous.

@ekis since starting to use LLMs (Claude in particular) more, I'm shocked at how much they can do with the right checks in place.

For example, if it's not writing tests, or is writing the wrong tests and you don't notice, then you should change your approach (or install an off-the-shelf skill suite)

I'm beginning to understand why people like Steve Yegge, and others, are convinced these things can replace us or change our roles to "Agent Managers" who no longer write code. It's still garbage in, garbage out, though, so thinking of it as an amplifier of your skills/knowledge/wisdom makes more sense than anthropomorphising it.

I don't want to stop coding by hand (because I enjoy it, like the feeling of ownership, and don't want to see my skills atrophy), but as a small company we also cannot ignore this tool's usefulness. We're facing competitors far bigger and better funded than us, and anything that gives an edge is important.

Ultimately I want local LLMs, but they're just not there yet (in my limited experience). One use I've been thinking about is small context windows for highly specific tasks run locally (like characterization testing), but I have yet to implement it and play around.
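Characterization testing itself is easy to sketch without any LLM in the loop: record the current behavior of legacy code as a golden snapshot, then assert future runs still match. A minimal hand-written version (the function names here are made up for illustration; an LLM's role would be proposing the inputs to pin down):

```python
import json
from pathlib import Path

def legacy_price(quantity: int) -> float:
    """Stand-in for untested legacy code whose current behavior we
    want to pin down, not necessarily endorse."""
    return round(quantity * 9.99 * (0.9 if quantity >= 10 else 1.0), 2)

def characterize(func, inputs, snapshot_path: Path) -> bool:
    """Run `func` over `inputs`. On the first run, record the outputs
    as a golden snapshot; on later runs, check outputs still match."""
    observed = {str(i): func(i) for i in inputs}
    if not snapshot_path.exists():
        snapshot_path.write_text(json.dumps(observed))
        return True  # first run: snapshot created
    golden = json.loads(snapshot_path.read_text())
    return observed == golden
```

The first call writes the snapshot; subsequent calls detect any behavioral drift, which is exactly the safety net you want before letting anything (human or model) refactor the legacy code.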

Last year I felt that LLMs were still a no-go for me when it came to production code generation on brownfield systems, but now even some of our smartest engineers are using them and are just baffled at what the current state-of-the-art ones can do compared to before.

We're cautiously optimistic and experimenting, looking into better sandboxing, but keeping a human in the loop. You still need to think about your design, but now you can afford to generate several implementations and choose the best outcome, flipping the current economics of coding on its head.

It may in the end be easier to adopt different architectures to make more effective use of these things (at least in the web app / SaaS space). Ideas like append-only code, layered context, and event modeling come to mind.

@ekis the first AI ever produced was "the dog". It's artificial (it does not exist in nature) and it's intelligent.

I see no meaning in asking "will hunting dogs replace hunters?"

I see it more as K9 units (humans + dogs) being better at hunting.