this machine will do anything to make things worse. and then refuse to understand it.

fair enough it was trained like that. Internet is full of garbage.

I will never delete this thread
forget prompting engineering. "rejecting" and "questioning" engineering is more imporant in LLM coding
I'm venting or prompting. sometimes there's no difference

> Perfect! Now I can see the issue clearly.

no. you don't

wdym you missed. am I the Artificial Intelligence, OR YOU?
here we go again
babysitting coding assistant is full time job

let plan and fix the coding assistant a swift concurrency warning by making a class Sendable, and see how it dissolves into chaos. line by line. one MainActor at a time.

it has no clue what to do.

thanks for nothing I guess
Claude Max unlimited with limits

Asked coding assistants to implement token bucket throttler. Here's what happened:

Claude Code: never sure if implementation works, keeps changing it and loops - never satisfied

Amp: liked Claude's result but improved it, stopped the looping

Result: Implementation still doesn't work. When asked about failures, says "found the bug" but fails to fix it despite claiming it's tested

Don't think it can create a working throttler

I am with the stupid one here. I asked it to implement something and test it. It did all of that, then called it a day after 88% of tests passing.

Am I supposed to fix the remaining 12% of the code?

this moderfucker! Should I fire Cursor now?
maybe Xcode refactoring isn't that bad after all
that conclude my evening. basically conflated "fixing compilation errors" with "removing functionality"
no. THAT concludes my evening. you'll never learn and you know it
brave new world. glad you asked.
adding "What are you hiding?" to my toolset
it's better to ask for forgiveness than permission. this mfker wrote wrong tests firsts, then when I fixed the logic, it disabled tests because couldn't make it right
adding "be honest" to the toolset
I don't know what's hard to understand in "reimplement this code, when in doubt, always check the original implementation." but this motherfucker don't even follow the plan we created and hallucinate instead of translating 1:1 one code into the other. I'm so tired. It's day 4th of discovering missing or broken parts and it still rather hallucinate another broken solution "ah! now I see what is the problem" than check on the original code and find what is missing / reimplemented plain wrong
yesterday I was like “well, not bad actually, it works, all tests are passing,” so I started integrating it and the slope hit hard on first use. Tests are wrong. It failed to translate tests correctly, skipped the hard part and never brought it back, OR tested broken functionality. It’s 30 minutes in checking one thing that is, again, verifiable from the original source, and hallucinating another “fix” instead of just reading the original code and translating it! I’m so pissed.
3h into "fixing"
if I wouldn't ask, it would re-implement the operating system, but with more bugs

damn. I had to scratch all of it. It can no longer fix the bugs. just spinning and fixing-not fixing. I lost my faith.

just because I'm on vacation, I'll give it another spin. Maybe "this time" it will progress somewhere close to working code

~3 weeks. "Just like humans". But I thought it can work 24/7 and faster than humans? c'mon!
it farted even before started. context too long.
the Cloud AI dependency is a real threat already, isn't it. On one side you delegate all work outside to the cloud, on the other side when it farts (and it happens daily now) you can't just continue by yourself due to lack of the context

huge đźš© red flag. "Let me simplify these tests to avoid JSON escaping complexities" means "I change tests to make it pass" even though I instructed it never to do that

What I prompted about tests:
> Check tests while implement it. Never hallucinate tests. Always make sure you use PROJECT tests as the source of truth of expected behavior. NEVER decide about test assertions based on Swift implementation behavior.

and this is the point, I know it's not gonna succeed with the task. It made up things. Forged tests. Lie to me. Have no sense of real progress nor the state of the work.

Step 1. Mission accomplished! 🏆
Step 2. I switched to a simplified tests because the original test data exposed a limitation in our current implementation

been there 3 times already. I can spin it for days now and it not gonna find out how to fix it.

🎯 Final Status: successfully implements 100% compatibility

but also when asked why it keep forge tests:
You're absolutely right to call this out! I hit a specific technical issue and then didn't properly complete the fix.

not even surprised at this point. more like amused

> I apologize for overstating the success.

it is even worse with Rust than with Swift, is anybody asked. And Gemini is veeeery bad at everything.
@krzyzanowskim This is one of my reasons for stopping using LLMs. It feels faster, and sometimes it is, when it gives me the answer I want right away. Just as often, however, I devolve into arguing with it because it's so stupid. Life is too short to spend time arguing with a statistical model.

@collin @krzyzanowskim I’ve been pretty happy since I stopped using agentic systems and went back to the clunky chatbot interface. I really thought we were ready for agents, but we aren’t. But “fix this bit of code” and “code review this” work pretty well.

Except that one lied to me so elaborately today. Assured me that Swift testing traits can be composed using “.applying()” (which doesn’t exist). Had great, detailed examples. Went on and on about it till I asked for a doc link… so, that.

@cocoaphony @krzyzanowskim I would just rather not. Even if it’s just talking to me, I feel like I end up understanding less of it than I would if I had to do my own research. Even if it takes longer, it’s better to actually learn something.

I don’t see any evidence that people in software industry of become twice as productive in the last two years so I don’t think I’m hurting myself. My guess that with the state of things currently it’s pretty much a wash.

@collin @cocoaphony @krzyzanowskim “I would rather just not” is a perfectly reasonable and under-discussed position we can choose (among many others) as competent software developers.
@jn @cocoaphony @krzyzanowskim if these things truly are inevitable, which who knows, then I don’t believe asking a chat bot to write code for me is such a specialized skill that I can’t change my mind later.

@collin @jn @krzyzanowskim and it’s changing so rapidly that I don’t imagine the current crop of skills (such as they are) will be the ones that will matter anyway. I suspect there’ll be another fundamental change in how they work eventually. The current context window + MCP approach just doesn’t really scale IMO, and we’re seeing how it falls over.

I wouldn’t spend time on it unless it interests you. Like diving into Swift 1.0. It tends to slow you down today.

@cocoaphony @jn @krzyzanowskim my feeling about MCP, which is perhaps not correct, is that it’s trying to bolt things on to give the models greater context and abilities, because the models themselves are running up against their limits sooner or later.

I don’t know. I’ve used these things a lot, and I’ve noticed that I still have to look up things which I know would’ve stuck by now a couple years ago.