I've tried Claude Code twice in the past month and cannot reconcile the insane results people are reporting with its actual performance.

It's superhuman at specific things (my ability to join two CSVs is capped by my typing speed), but anything that's much more complex than that starts to get into weird territory where I'm typing so much English that it might have been less tiring to simply write the code.
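(For context, the CSV join meant here is roughly the following — a minimal sketch using pandas, with file contents and column names invented for illustration:)

```python
import io
import pandas as pd

# Two hypothetical CSVs; in practice these would be files on disk.
users_csv = io.StringIO("user_id,name\n1,Ada\n2,Linus\n")
orders_csv = io.StringIO("order_id,user_id,total\n10,1,9.99\n11,2,4.50\n")

users = pd.read_csv(users_csv)
orders = pd.read_csv(orders_csv)

# Inner join on the shared key -- the kind of one-liner the tool nails instantly.
joined = orders.merge(users, on="user_id", how="inner")
print(joined)
```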

@ludicity

> I'm typing so much English that it might have been less tiring to simply write the code.

Oh this is so true! I've realized that my prompts have grown over time to the point where I type out entire novels. That keeps the hallucinations down, but at that point I could almost have written the code myself.

@woosh @ludicity I would love some examples of this, as that really shouldn't happen if you use it correctly. Can you provide some?
@drtau @ludicity Well, I'm not really running A/B tests; it's just something I've observed over time. I assume there's a positive correlation between the amount of useful context in the prompt and the quality of the output as perceived by the user.