Benjamin Schmid

@bentolor
84 Followers
77 Following
730 Posts

Being a discreet human, not a jerk.

searchable musings on #Java, #Kotlin, Web, #Container & #Cloud, Code Quality, #InnerSource, #Security, #asciidoc #asciidoctor

Identity Proof: https://bentolor.de/
GitHub: https://github.com/bentolor/
The Register picks apart a few fallacies in the ongoing #ai business dynamic, or rather storm.
https://www.theregister.com/2026/03/17/ai_businesses_faking_it_reckoning_coming_codestrap/
AI still doesn't work very well, businesses are faking it, and a reckoning is coming

interview: Codestrap founders say we need to dial down the hype and sort through the mess

The Register
I miss the days when NFTs were the stupidest thing I'd ever heard of.

"Across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%"

I've suspected this all along. Folks spending mucho-plenty time curating project-level .md files have been deluding themselves that it helps.

https://arxiv.org/abs/2602.11988

Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?

A widespread practice in software development is to tailor coding agents to repositories using context files, such as AGENTS.md, by either manually or automatically generating them. Although this practice is strongly encouraged by agent developers, there is currently no rigorous investigation into whether such context files are actually effective for real-world tasks. In this work, we study this question and evaluate coding agents' task completion performance in two complementary settings: established SWE-bench tasks from popular repositories, with LLM-generated context files following agent-developer recommendations, and a novel collection of issues from repositories containing developer-committed context files. Across multiple coding agents and LLMs, we find that context files tend to reduce task success rates compared to providing no repository context, while also increasing inference cost by over 20%. Behaviorally, both LLM-generated and developer-provided context files encourage broader exploration (e.g., more thorough testing and file traversal), and coding agents tend to respect their instructions. Ultimately, we conclude that unnecessary requirements from context files make tasks harder, and human-written context files should describe only minimal requirements.
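The paper's closing advice is that human-written context files should describe only minimal requirements. As a purely hypothetical illustration (not taken from the paper), a deliberately minimal AGENTS.md might look like this — project name, build command, and constraints are invented examples:

```markdown
# AGENTS.md

## Build & test
- Run `./gradlew test` before committing.

## Hard requirements only
- Target Java 21.
- Do not add new dependencies without asking.
```

Anything beyond non-negotiable requirements is, per the paper's findings, more likely to hurt task success than help it.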

arXiv.org

Show HN: I built a sub-500ms latency voice agent from scratch (www.ntik.me)

Link: https://www.ntik.me/posts/voice-agent
Comments: https://news.ycombinator.com/item?id=47224295

How I built a sub-500ms latency voice agent from scratch | Nick Tikhonov

Nick Tikhonov's blog

This is really a "WTF how could they ever think this is a good idea?" kind of vulnerability. Usually the kind of stuff you get from shady, incompetent startups, but this is Google...
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
Google API Keys Weren't Secrets. But then Gemini Changed the Rules. ◆ Truffle Security Co.

Google spent over a decade telling developers that Google API keys (like those used in Maps, Firebase, etc.) are not secrets. But that's no longer true.

By the way, if you go to https://github.com/claude and "block this user", every GitHub repo you visit containing code credited to Claude will actually show a warning sigil


If you use AI-generated code, you currently cannot claim copyright on it in the US. If you fail to disclose/disclaim exactly which parts were not written by a human, you forfeit your copyright claim on *the entire codebase*.

This means copyright notices and even licenses folks are putting on their vibe-coded GitHub repos are unenforceable. The AI-generated code, and possibly the whole project, becomes public domain.

Source: https://www.congress.gov/crs_external_products/LSB/PDF/LSB10922/LSB10922.8.pdf

"As fun as it is to rib on Mozilla, the web needs Firefox. I feel for the Firefox developers who actually care. State of Mozilla will inspire no one. The sloppy prose is borderline unreadable. The presentation is designed to stop you reading."

https://dbushell.com/2026/01/28/mozilla-slopaganda/

Mozilla Slopaganda

The one where I question reality

dbushell.com

It’s finally happening. The Ouroboros is complete.

#ai #slop #bubble

This chart shows the total number of Stack Overflow questions asked each month. As you can see, AI summaries in Google and AI coding tools have nearly killed the site; it is only a matter of time before it shuts down completely.

The golden age of independent news, blogs, forums, and specialized sites like Stack Overflow is over. Whether that is good or bad, only time will tell. Personally, I think we are funneling all internet traffic into just a handful of Gen-AI apps.

https://data.stackexchange.com/stackoverflow/query/1926661#graph