A developer on Blind talks about how their coworkers now ship features in an hour that used to take days (3 or 5 story points) thanks to Claude Code.

This has created competitive pressure to ship features so quickly that developers don’t even take time to understand the code, and they feel they are now the bottleneck standing in the way of the AI.

There is going to be interesting fallout across the industry as companies adjust to the reality that execution is essentially “free” (if you can afford the tokens).

@carnage4life It is going to be "fun" to see what happens when (inevitably) the big LLM companies decide they have captured enough of the market and can start charging enough for tokens to recover their (enormous) losses.
@j_bertolotti I expect it will end up looking like automation in physical industries, where some jobs, like assembling a car, are worth automating, while it’s cheaper to have a human flip burgers at McDonald’s than a robot.

@carnage4life The jobs worth automating are the ones that are easy to specify ("put a screw here, and turn until the torque exceeds this value") and whose quality control is also easy to automate.
McDonald's would happily automate flipping burgers if making a machine that can cook anything reliably weren't so damn hard!

A lot of use cases for LLMs will stop being viable if they cost 10x what they cost today, and even more will have to reckon with how difficult quality control is for them.
A few use cases will survive and thrive (if I could predict which ones I would be starting my company now and become a billionaire in 10 years), but most won't.

@carnage4life @j_bertolotti I think it's going to be another booster for inequality. Jobs that are too complex or risky to automate will be left untouched, as will jobs with wages already so low that automation offers hardly any ROI.
The jobs 'in the middle', well-paid enough that automation has a ROI and simple enough to automate, are at risk. I honestly don't know how many jobs fall in that range.
@carnage4life Not sure what Blind is, but any product that just explodes feature-wise tends to get bloated. I know you know that. The need to slow down and get features right does not change because of AI. Also, what happens to those PRs? Does Claude also verify them and merge to production with little oversight? That's going to be a super fun codebase to navigate once Claude (or any AI) has been building new features with so little oversight. Sounds like a terrible place to work.
@carnage4life They are falling prey to the temptation of eating the surplus. If you are shipping 20 times faster, you can, and indeed must, invest some of that time back into quality, refactoring, improved test harnesses, etc.

@carnage4life The problem we’re finding is that, as a small team, human PR review is the bottleneck to maintaining quality in the codebase itself.

After experimenting with automated reviews *only*, we found that our product quality dipped enough that customers were complaining. We also found each team member was less able to articulate what everyone else was doing.

Definitely a difficult problem to navigate: we produce a LOT more work, but still require a human in the loop before shipping.

@carnage4life Deploying PRs on the strength of manual testing alone sounds like death by a thousand cuts. If you have an infinitely productive but unreliable code generator that produces more than humans can review, that doesn't mean humans are the bottleneck. This is like a factory that can produce planes really fast, except half of them are broken, and calling QA the bottleneck. It also sounds like making your developers do more work for the same salary.
@carnage4life this is going to lead to massive quality issues. Will we see QA become more important, or will companies and/or users just put up with minimum standards?
@carnage4life If you don't understand the code AND you don't understand the unit tests, because you pushed whatever Claude spat out, you don't know you produced correct code at all. You're purely trusting the machine not to produce fragile sludge.