Using a free software stack, you can be an effective developer on a relatively low budget: a cheap or used laptop and an internet subscription.

LLM coding is changing that too. You either need a very powerful and expensive machine to run a local model, or (currently more likely) an LLM subscription. We are led to believe you have to pay a monthly fee to be an effective developer.

The prospect of your output as a developer being tied to a proprietary service seems risky at best.

@jani how many hardware refresh cycles before you can run a small but capable coding model—say, qwen3 coder next—locally on modest hardware? We’re in the same position today that the Xerox Alto was in with GUIs circa 1978, but with broader and more diverse access to the future than 50 employees in one office.

@leeg @jani None. It's impractically slow for actual work use, but I run qwen3-coder-next on a little ~€750 AMD machine with regular 64GiB DDR5 DIMMs. Granted, I got that RAM 2.5 years ago before the craze. But with Strix Halo hardware I'd say it's feasible.

Like you said, how accessible was hardware, let alone software, in the early days of (personal) computing? I paid ~€130 for a STUDENT license of Visual C++ in ~1998!

The original premise feels almost entitled.

@RandySimons @leeg

The point is not just about the money.

When you bought that Visual C++ license, you knew you could use it as long as you wished, regardless of whether the vendor went belly up or discontinued the product or decided they didn't want you or anyone in your country as a customer.

If you want to avoid that lock-in, the hardware cost is significant, and recurring.

@jani @leeg But that qwen3-coder-next model is more free (as in libre) than that VC++ license ever was. Runs locally on my machine. Never requires new hardware.
So present, affordable hardware (minus DRAM...) can already run free, capable AI. It will get better still. And newer (free) models might require beefier hardware, but how is that different from free software?

@RandySimons @leeg I think we have a difference of opinion on how useful the local models that run on inexpensive hardware are, and on how the evolution of language models requires beefier hardware faster than any other field in software development.

Maybe that will change in the future, and hardware evolution will catch up again.

Other than that, I completely agree that using a local model avoids the lock-in and rent of LLM subscriptions. That is the model I would personally prefer as well.

@jani @leeg I don't know if we differ on that. I use Claude/GPT/Gemini models for work. They *are* way better than qwen3-coder-next, which I've used on my recent hobby project. But it's (already) useful.

My bigger concern is that these free models are still too expensive for free/libre orgs to train. We just got some freebies/appetizers from big commercial orgs. If (when?) those dry up, we indeed get to a point where only subscriptions to proprietary services exist as an option.

@RandySimons @jani there’s Apertus, unless we think the Swiss government might dry up?

@leeg @jani Sure, and there are others, like Olmo, but those models are currently not really feasible for software development.

Of course, I do hope those orgs will keep up, and eventually release models better suited for development as well.