Brewlog: Coffee & Agents

My latest #blog outlines how I built a super-niche application for tracking my specialty #coffee brewing and consumption, but also marks some of the most complex work I've done with an agent to date.

Built in #Rust with Axum, backed by #SQLite, and with a #TailwindCSS-based UI.

Take a read for some takeaways on the effective use of tools like Claude Code and GitHub Copilot, or if you fancy hosting your own super-nerdy coffee tracking app!

https://jnsgr.uk/2026/03/brewlog-coffee-and-agents/

Brewlog: Coffee & Agents

I built a self-hosted coffee logging platform with Rust, Axum, SQLite and Datastar. The project was my most complex agentic coding effort to date, with Claude Code acting as my long-lived pair-programmer for almost all of the implementation. This post covers the motivation, design decisions, and what I learned about building non-trivial software with AI assistance, as well as some patterns I've adopted for agentic coding.

Jon Seager
@jnsgruk Could you share how much it cost you to build this app you described so nicely?
@michalfita Difficult to say exactly. I currently have a Claude Code Max plan, and I wasn't tracking the token usage particularly carefully. There's no doubt it would have been quite costly if I were on a more pay-as-you-go arrangement with something like OpenRouter.
@jnsgruk OK, thanks. I'm interested because not everyone can afford to pay #Anthropic or #OpenAI, which creates a new aspect of social division.
@michalfita yes, that's probably true. In reality it would still be possible to build this with lower-tier or even free means (self-hosted models), but it would definitely take longer.

@jnsgruk Self-hosted doesn't seem to be well explored. The testimonials I've seen were from people running models on hardware costing $3k or more.

Personally, I tried the free tiers; they get me nowhere with anything beyond a simple console application. And the available tokens are burned within three prompts.

@michalfita we’re doing some interesting work at Canonical to try and make access to silicon-optimised models a bit simpler.

In reality this isn't perfect yet either, but as this trend develops, more and more silicon will become capable of running inference efficiently - we're really at the start, imo.

https://canonical.com/blog/canonical-releases-inference-snaps

Introducing silicon-optimized inference snaps | Canonical

Canonical today announced optimized inference snaps, a new way to deploy AI models on Ubuntu devices. Install a well-known model like DeepSeek R1 or Qwen 2.5 VL with a single command, and get the silicon-optimized AI engine automatically. […]

Canonical

@jnsgruk That's noble.

Would it work with an AMD GPU? It's not listed in the docs. Can a VL model somehow "see" the website it would be working on?

@michalfita I don't think we have an optimised model for AMD *yet*, but that's absolutely the goal.

We're trying to use our partner relationships to get optimised models for Nvidia, Intel, Ampere, Qualcomm, AMD, etc...

I think we have already got models out for Nvidia, Ampere and Intel.

Different models will naturally have different capabilities, which further depend on the quantisation the hardware can support - but in theory the answer is yes, if you feed a capable model a screenshot of the work, for example.

@jnsgruk The best Nvidia card I have is a Quadro M2200 in an old ThinkPad P51. I don't play games, so I never invested in GPU-heavy machines.

But I have to give it a spin on what I have.

@michalfita this is probably the best one to use with an nvidia card now: https://snapcraft.io/nemotron-3-nano
Install nemotron-3-nano on Linux | Snap Store

Get the latest version of nemotron-3-nano for Linux - Nemotron 3 Nano inference snap

Snapcraft
@jnsgruk I installed it on the hardware mentioned and it's very slow. Zed's assistant times out, and I can't find a setting to increase the timeout 🤦‍♂️