it's truly amazing what LLMs can achieve. we now know it's possible to produce an html5 parsing library with nothing but the full source code of an existing html5 parsing library, all the source code of all other open source libraries ever, a meticulously maintained and extremely comprehensive test suite written by somebody else, 5 different models, a megawatt-hour of energy, a swimming pool full of water, and a month of spare time of an extremely senior engineer

@tuban_muzuru

1. Or maybe the senior engineer would have simply written all the code and it wouldn't have taken that long. Nobody measures this. We don't even know yet if it's a "force multiplier" or a distraction. I've written at length about this phenomenon here: https://blog.glyph.im/2025/08/futzing-fraction.html

2. Or maybe they would have solved the actual social problem instead, i.e. that the original library is insufficiently maintained, rather than rewriting to move the locus of control closer to themselves.


@glyph @tuban_muzuru

1. We have (for a given value of "we", and of "have"). It is. With dragons and footguns.

2. Great point. One of my favourite techniques for working with LLMs is telling it not to write a thing 😂.

I did enjoy your opening post though.

@ashguy @tuban_muzuru

1. [citation needed]. and not, like, sarcastically. I have heard this claim from enthusiasts over and over, but it's always in private internal discussions where there's no methodology (or even results) to evaluate.

meanwhile, in the public scientific record, the measurements we do have point the other way:

https://hackaday.com/2025/07/11/measuring-the-impact-of-llms-on-experienced-developer-productivity/