Piet Eckhart

15 Followers
107 Following
42 Posts

We have "growth hackers" but no "stability hackers." "Disruptors" but no "preservers." Our entire vocabulary is oriented toward the new. We have no language for the equally difficult work of keeping existing things from falling apart.

https://www.joanwestenberg.com/the-rime-of-the-ancient-maintainer/

When developers produce code faster than they can understand it - by whatever means - it creates something I call "comprehension debt".

At some point, the code will break, and the means by which it was created will fail to fix it. And at that point *someone* will need to understand it.

For example, the head of engineering at Meta ran a little internal experiment measuring time taken to fix bugs against the percentage of code that was "A.I." generated. Guess what he found?

"I speculated that transformer performance would converge on not-quite-good-enough. Needs more work. See me after. Not so much 'super-intelligence' as 'super-mediocrity'."

#gpt5

https://codemanship.wordpress.com/2025/01/11/the-llm-in-the-room/

The time is upon us, folks. If anyone doubted that LLMs have hit a performance wall, it's undeniable today. This is as good as they're gonna get, and it ain't good enough.


My new hobby is watching product managers on LinkedIn pretend they're building complex software applications using "coding agents" and trying to out-hype each other in the comments.

They don't need us anymore? Cool. We'll reserve our very special "not needed anymore" rate to come fix your shit.

Let me share with you what 30+ years writing software have taught me about estimates: they're bad and wrong and you should never do them.

A thread!

1/

I am now being required by my day job to use an AI assistant to write code. I have also been informed that my usage of AI assistants will be monitored and decisions about my career will be based on those metrics.

I gave it an honest shot today, using it as responsibly as I know how: only using it for stuff I already know how to do, so that I could easily verify its output. That part went ok, though I found it much harder to context switch between thinking about code structure and trying to herd a bullshit generator into writing correct code.

One thing I didn't expect, though, is how fucking disruptive its suggestion feature would be. It's like trying to compose a symphony while someone is relentlessly playing a kazoo in your ear. It flustered me so quickly that I wasn't able to figure out how to turn that "feature" off. I'm noticing physical symptoms of an anxiety attack as a result.

I stopped work early when I noticed I was completely spent. I don't know if I wrote more code today than I would have normally. I don't think I wrote better code, as the vigilance required is extremely hard for my particular brand of neurospicy to maintain.

As far as the "write this function for me" aspect, I've noticed that I tend to use the mental downtime of typing out a function I've designed to let my brain percolate on the solution and internalize it so I have it in my working memory. This doesn't happen when I'm simply reviewing code written by something else. Reviewing code and writing it are completely separate activities for me. But there's nothing to keep my fingers and thoughts busy while I'm coming up with what to write next.

I didn't think we were meant to live like this.

"Ah, but Jason, LLMs raise the level of abstraction so systems can be programmed in plain English."

Two words, buddy: language entropy.

And even if you could get past that (which you can't, because if you could, then you'd be a programmer), even if the meaning of your prompts was completely deterministic, the responses won't be.
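A quick sketch of that last point, using a hypothetical toy "model" (the vocabulary and weights are made up, standing in for sampling from an LLM's output distribution): when responses are drawn from a probability distribution rather than computed deterministically, the identical prompt can produce different output on every run.

```python
import random

# Hypothetical toy model: a fixed next-"word" distribution for any prompt.
# Real LLMs sample similarly (softmax over logits, temperature > 0),
# which is why identical prompts need not yield identical responses.
VOCAB = ["sorts", "filters", "reverses", "copies"]
WEIGHTS = [0.4, 0.3, 0.2, 0.1]

def respond(prompt: str, rng: random.Random, length: int = 5) -> list[str]:
    """Sample a 'response' token by token; the prompt is ignored in this toy."""
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(length)]

# Same prompt, two sampling runs: the outputs diverge.
a = respond("what does this function do?", random.Random(1))
b = respond("what does this function do?", random.Random(2))
print(a == b)  # False for these two seeds: identical input, different output
```

The seeds here only make the divergence reproducible for the demo; in a deployed model you don't control the sampling state at all, so even a perfectly unambiguous prompt gives you a distribution over answers, not an answer.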

I created an interesting new (free, web-based) puzzle app for mobile devices (works best on iPads, less-well-to-not-at-all on other devices). Want to try some challenging puzzles that blend spatial reasoning and manual dexterity? Just Slide to Unlock! https://slide.isohedral.ca/
@mike_bowler On that note, this is one of my favourite examples; a screenshot I kept to share with students.
POKI+ | Ernst-Jan Pfauth | Substack

Alexander Klöpping and Wietse Hage dive deeper into the world of AI together, discussing the newest developments in artificial intelligence each week. They philosophize and try to look into the future, but also keep a sharp eye on the present.