Coding with LLMs in the summer of 2025 (an update) - <antirez>

https://antirez.com/news/154

My question on all of the "can't work with big codebases" takes is: what would a codebase designed for an LLM look like? Composed of many small functions that can be composed together?
I believe it’s the same as for humans: different files implementing different parts of the system with good interfaces and sensible boundaries.
This matches my take, but I'm curious if OP has used Claude code.
Yep, when I use agents I go for Claude Code. For example, lately I've been buying more Commodore 64s than is appropriate, and I let it code a Telegram bot advising me when popular sources would have interesting listings. It worked (after a few iterations); then I looked at the code base and wanted to puke, but who cares in this case? It worked, it was much faster, and I had nothing to learn by doing it myself: I published a Telegram library for C in the past and know how it works, how to do scraping, and so forth.

> ## Provide large context

I thought large contexts are not necessarily better and sometimes have the opposite effect?

LLM performance suffers from both insufficient context and context flooding. Balancing the two is an art.
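That balancing act can be made concrete with a small sketch (entirely illustrative: the greedy packing strategy, the relevance scores, and the 4-characters-per-token estimate are my assumptions, not anything from the post). Instead of dumping the whole repository into the prompt, rank files by relevance and pack only what fits a fixed token budget:

```python
# Hypothetical context packer: select the most relevant files that fit
# within a token budget, rather than flooding the context window.

def estimate_tokens(text: str) -> int:
    # Crude heuristic (~4 chars per token); a real tokenizer would be exact.
    return len(text) // 4

def pack_context(files: dict[str, str],
                 relevance: dict[str, float],
                 budget_tokens: int = 32_000) -> list[str]:
    """Greedily pick file names by descending relevance score,
    skipping any file whose estimated cost would exceed the budget."""
    chosen: list[str] = []
    used = 0
    for name in sorted(files, key=lambda n: relevance.get(n, 0.0),
                       reverse=True):
        cost = estimate_tokens(files[name])
        if used + cost > budget_tokens:
            continue  # too big for the remaining budget; try smaller files
        chosen.append(name)
        used += cost
    return chosen
```

A real agent would score relevance from the task description (embeddings, grep hits, import graphs); the point is only that both extremes hurt, so you select, not dump.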

Translation: His company will launch "AI" products in order to get funding or better compete with Valkey.

I find it very sad that people who have been really productive without "AI" now go out of their way to find small anecdotal evidence for "AI".

Did you read my post? I hope you didn’t.

This post has nothing to do with Redis and is even a follow up to a post I wrote before rejoining the company.

> Gemini 2.5 PRO | Claude Opus 4

Whether it's vibe coding, agentic coding, or copy-pasting from the web interface into your editor, it's still sad to see the normalization of private (i.e., paid) LLM models. I like the progress that LLMs bring and I see them as a powerful tool, but I cannot understand how programmers (whether complete nobodies or popular figures) don't mind adding a strong dependency on a third party in order to keep programming. Programming used to be (and still is, to a large extent) an activity that can be done with open and free tools. I am afraid that in a few years that will no longer be possible (as in: most programmers will be so tied to a paid LLM that not using one would be like not using an IDE or vim nowadays), since everyone is using private LLMs. The excuse "but you earn six figures, what's $200/month to you?" doesn't really capture the issue here.

It's not that bad: K2 and DeepSeek R1 are at the level of frontier models of one year ago (K2 may be even better; I have enough experience only with V3/R1). We will see more coming: LLMs are incredibly costly to train but very simple in their essence (it's as if their fundamental mechanics were built into the physical nature of computation itself), so the barrier to entry is large but not insurmountable.