I had two thoughts this morning about "AI" and I kinda hate that I can't stop thinking and talking about this stuff, but here we are.

1. I think it would be easier for me to take the tech more seriously if the financial side looked less like circular investment scams and ever more unrealistic bets on a future that seems ever more unlikely (see NVIDIA, OpenAI), and if the hype men sounded a lot less like participants in a mass psychosis (see Gastown, moltbook, openclawd)

2. I think the handful of people I know who are "AI" enthusiasts and get good results with coding agents miss that they have something very similar to survivorship bias or "works-for-me" syndrome, which leads them to dismiss or ignore all of the negatives that would occur in wider contexts (for example, in a larger organisation).

(And this is before thinking about the ethical, environmental, and social externalities, which I personally cannot ignore but many people obviously can)

@halfbyte It's been particularly interesting for me for this exact reason. I'm getting incredible results and being very productive nights & weekends with open source projects. And at the same time, my job is in a context where the experience (and productivity) is severely different.

@kerrick I think there is a particular sweet spot for agentic coding right now (I'm an observer, not a user, so grain of salt etc. etc.): 1. experienced dev, 2. working solo, 3. on projects with limited scope.

In almost every other context (again, my somewhat informed interpretation) the individual productivity gains are more or less meaningless, because coding productivity is not the deciding bottleneck.

(1/2)

@kerrick The downsides are that agentic coding more or less stops knowledge propagation in organisations (for example: no pair programming). Less experienced devs don't even get the productivity boost and probably produce worse code, so codebases deteriorate. This in turn destroys any meaningful productivity gains the experienced devs might have, because they need to mop up the slop.

My post was partially triggered by this thread, which has some good thoughts:

https://mastodon.social/@nobsagile/115995138042782971

(2/2)

@halfbyte I cannot get folks to even read The Phoenix Project, let alone The Goal or The Principles of Product Development Flow.

@halfbyte

1. Agreed. I think many people agree.

2. Agreed. I think this is under-discussed! Massive codebase churn in a team project ranges from social faux pas to completely prohibited.

3. TBD. I'm testing the limits right now, working in the open (saving chat logs & artifacts to the repo): https://git.sr.ht/~kerrick/tokra/tree/trunk/item/doc/contributors

@halfbyte I'm getting extremely tired of trying to have open discussions about pros, cons, and the realistic applicability of anecdotal successes while always having to set aside the ethical/environmental/social part of the argument. Bringing up that part kills the discussion, but it doesn't go away by being ignored.

@tja same. But, let's not forget that we are all ethically flexible up to a point and are very good at ignoring all kinds of externalities for all kinds of actions in all kinds of contexts, so I am somewhat flexible here if needed for the sake of the discussion.

I'm drawing a hard line for myself, but I am understanding towards people not willing or not able to do that for themselves.

@halfbyte You are right. Like you, I usually can't resist at least adding a side note of "leaving everything environment and copyright and ... aside" at some point, though. And then hoping it doesn't come across as too high-horsey.

@halfbyte Right, AI brings serious problems we mustn't ignore. But in the end it's a tool: its impact depends on how it's used. It's not about one-line “do it for me” prompts.

With coding agents, bad teams get worse; good teams, though, gain e.g. better analysis and planning before implementation, improved test coverage, and even reduced comprehension debt.

IMHO AI can help to become a better dev.

@denny I mean, I obviously disagree. That being said, it's been a while since I spent time in larger teams. Maybe it's my lack of imagination that keeps me from understanding how coding agents are supposed to help in the ways you describe.

@halfbyte You could be right: teams working on one large, complex project for years are different when it comes to AI coding. At the end of the day, it is another teammate. A bit weird, but mighty.

Out of the tons of use cases I can share: I'm already happy about devs asking the agent the "stupid", intimidating questions about implementation details, domain knowledge, and alternatives that they might otherwise never ask. It's empowering.

@halfbyte (And I absolutely agree with your general perspective on the damage of AI. It's scary. Now, educate me about my options, please.)