I see my co-worker using Cursor a lot, mostly to validate that the code he's about to run does what he wants it to do. Occasionally, he asks it what to do next if he's stuck on why the damn terraform isn't planning.
Mostly it's right about what the TF will do. But for the "help me solve this" tasks especially, it's wrong. Just flat wrong. If it were human, sociopathic levels of wrong.
But it's not a human, it's an inference engine. It's not capable of being "right" or "wrong". Those attributes are meaningless when applied to AI.
Even though I'm new to the job, I've already had to fix two major AI fuck-ups.
Despite this, my co-workers continue to use Cursor. Meanwhile, I'm trying to retain my humanity by not using it, knowing that this is ultimately futile: not because the machines will take over, but because humans will cede their authority to inference engines and plagiarism machines.