Wrote about my personal journey from AI skeptic to someone who finds a lot of value in it daily. My goal is to share a more measured approach to finding value in AI, rather than the typical overly dramatic hype-bait out there. https://mitchellh.com/writing/my-ai-adoption-journey
My AI Adoption Journey

Mitchell Hashimoto
@donaldball It's still a death-dealing fascism machine. I'm not going to set the world on fire just so I can code faster, and I look askance at the men who think this is okay.

@mitchellh this is pretty spot on with my journey. Phase 1 & 2 are always awkward. Flashbacks to learning vim motions and pair programming while "almost" being proficient 😅

Shocker - agentic coding also requires learning it.

@kejne @mitchellh I'm curious, do you use pair programming routinely and efficiently?

I've done it occasionally and it's helpful. I'd put it in the same category as sparring, i.e. rubberducking with a colleague: it helps when you're stuck, have depleted the obvious first options, and/or need a change.

Usually it's some specific item or ticket where there's an opportunity to use two heads: one at the computer and one as the co-driver. After a swift half hour it usually dissolves into "I'll take it from here, thanks". At work there's always a bit of pressure not to "waste" other people's time, and pairing up for extended periods starts to feel like it's nearing diminishing returns.

I've also tried mob programming in a student project, which was interesting. However, I haven't had the people or the energy to drive further experiments and dig deeper into these. I wonder if there are still some stones left unturned for me.

@rojun I've been using mob programming effectively in several teams. It has been extremely helpful for building team culture, more than for direct output I would say: a new team, with players of mixed experience, taking turns to work on the code. (I've got a blog post about it on my homepage, btw.)

Key points: rotate often, git handover, create an environment of trust.

Same with pair programming really. I always leaned towards several smaller mobs or pairs that swarm rather than one big one.

With AI, things have changed a bit, I think, but the essence remains the same. Now you might pair up to learn how to use agents together, for one. (Also a blog post.)

@kejne I'll read those. Interesting.

Had you titled the section "Blog" instead of "Homepage", I would have noticed it earlier when looking at your profile.

@rojun thanks for pointing that out. That makes sense, so making the change 😊
@mitchellh I'm currently coding up the concept of some kind of Consul for agent tool/peer discovery ;) and it's goddamn effective and beautiful to see in action. But as I'm not such a good engineer, I'm wondering how you would have designed something like this ;)
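The "Consul for agents" idea above, reduced to its smallest shape, is a registry where agents announce capabilities and peers look them up. This is a purely illustrative sketch (class and method names are made up); real Consul adds health checks, a gossip protocol, and a distributed store.

```python
# Minimal service-discovery sketch: agents register the tools
# they expose, and peers discover endpoints by capability.
class ToolRegistry:
    def __init__(self):
        self._services = {}  # capability -> list of endpoints

    def register(self, capability: str, endpoint: str) -> None:
        self._services.setdefault(capability, []).append(endpoint)

    def discover(self, capability: str) -> list[str]:
        return self._services.get(capability, [])

registry = ToolRegistry()
registry.register("code-search", "agent-a:9000")
registry.register("code-search", "agent-b:9000")
```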

@mitchellh I thought letting agents roam wild would work eventually and I could shepherd them as I would a junior sending me PRs for review. This, unfortunately, didn't work for me.

I ended up using smaller, "dumber" models that I could theoretically self-host (GLM-4.7, Qwen-3-Coder-Next) and working in a tighter loop with them (I design the interface, they fill it in).
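The "I design the interface, they fill it in" loop above can be sketched as follows: the human writes the typed stub and docstring, and the model only supplies the body. The names here are illustrative, not from the post.

```python
# Human-authored contract: types, names, and docstring are fixed.
from dataclasses import dataclass

@dataclass
class Invoice:
    subtotal_cents: int
    tax_rate: float  # e.g. 0.21 for 21%

def total_cents(invoice: Invoice) -> int:
    """Return subtotal plus tax, rounded to the nearest cent."""
    # -- everything below this line is what the model fills in --
    return round(invoice.subtotal_cents * (1 + invoice.tax_rate))
```

Keeping the interface human-owned means a smaller model only has to satisfy a narrow, checkable contract.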

Primeagen's 99 tool was a big big saviour here

@budududuroiu @mitchellh I am on a similar journey. Now I’m focussing on how to define a good agent. What’s working is creating simple, focussed agents that can each do one task well. It’s a bit counterintuitive, but it comes down to: 1. choosing the right model and temperature, 2. limiting context with a system prompt focussed on the one task, and 3. limiting the tools it can use and the information it can access. Then keep tweaking. Often I do that by letting the agent modify its own system prompt.
@budududuroiu @mitchellh So instead of feeding the agent as much information, instruction, and tooling as possible, I’m now creating as few distractions as I can so the agent can do its one job well. I keep in mind that LLMs do not learn, have a hard time focussing, and cannot judge their own accuracy. We have to help them.
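The three points above (model and temperature, one-task system prompt, restricted tools) could be sketched as a small spec. This is a hedged illustration, not any particular framework; the dataclass and field names are assumptions.

```python
# A "small, focused agent" as a declarative spec: one task,
# one tight system prompt, a short tool whitelist.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    model: str
    temperature: float
    system_prompt: str
    allowed_tools: list[str] = field(default_factory=list)

changelog_agent = AgentSpec(
    name="changelog-writer",
    model="some-small-model",    # assumption: chosen per task
    temperature=0.2,             # low, for a deterministic chore
    system_prompt=(
        "You write one CHANGELOG entry from a git diff. "
        "Do nothing else. Output a single markdown bullet."
    ),
    allowed_tools=["read_file"], # deliberately minimal
)
```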
@yth @mitchellh interesting on the modifying system prompt route. How are the results of your experiments with that?
@[email protected] @mitchellh Basically I tell the agent what I want from it and put that into markdown in “its own words”. I tell it to shape the prompt in such a way that it will help it help me, and to keep it compact. So it’s not doing it autonomously; I just use the agent to develop its own system prompt.
@budududuroiu @mitchellh I did experiment with a system prompt (in a markdown file) that ended with something along the lines of “if you learn anything that will help you in doing your job, you are allowed to edit this file”. And it did.

@mitchellh my journey is similar, and the agents have opened up a world where I build things that make me more efficient: CLIs, linters, and tool plugins (Obsidian), but also fun things (media players).

I’ve started adopting tight engineering practices such as TDD, BDD, consistent architecture (hexagonal), and heavy linting (including custom linters). The idea is that if these practices are consistent, I can out-tool potential slop. BDD is tedious but might pay off the most, as it forces you to document edge cases and hopefully catches context that humans take for granted.
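As a sketch of the BDD point above: spelling an edge case out as a Given/When/Then test makes the assumed context explicit, so an agent can't gloss over it. Plain pytest-style functions; the example function and its rules are hypothetical.

```python
# An edge case that humans take for granted, written down so
# an agent (or a new teammate) cannot miss it.
def parse_port(value: str) -> int:
    """Parse a TCP port, rejecting anything out of range."""
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

def test_rejects_port_zero():
    # Given a config value of "0"
    # When it is parsed
    # Then parsing fails: port 0 is not a usable listen address
    try:
        parse_port("0")
    except ValueError:
        return
    raise AssertionError("port 0 should be rejected")
```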

Claude’s new tasks and teams are going to make things better/faster/stronger.

@mitchellh if you don’t mind me asking: does this workflow require a high budget in terms of token spend? I’ve seen Claude get stuck in a loop a few times, and the idea of that happening while I’m not looking is scary.

@mitchellh I think "harness engineering" sounds more like work on the agent CLI "runtime" itself. I called it "feedback loop engineering", since the main problem to solve is getting high-signal, fact-based feedback back into the context quickly so errors don't compound:

https://www.danieldemmel.me/blog/feedback-loop-engineering

Feedback loop engineering

Why the most important skill with AI coding agents isn't prompting or curating context – it's designing how they verify their own work

Great write-up. I also see a lot of my own journey mirrored in yours.
I'm usually the guy who tries things out and proposes stuff at the company where I work, and I'm settling on the idea that trust is the key word.
People who often delegate work feel great working with LLMs, but for people who don't, it's harder. Creating guidelines as a team is also a high-churn activity, and I've found I get the best results when I dedicate time to finding those guidelines.