My take on LLMs is that they have their uses, but they make way too many errors to be trusted without verification. An example: the idea that you can replace customer service with LLMs is hilarious, and it also shows contempt for the customer.
As for concerns of replacing engineers: I got bad news, or is it good news idk. Vibe coding doesn’t work for anything bigger than a toy or demo. It ends up being a mess that usually needs to be rewritten anyway.
There are harmless uses, of course. Sure, someone might ask Claude for a sorting algorithm. Big fucking deal, as long as a human has checked it. Sorting is a solved problem, with solutions aggregated across tons of code and papers scattered all over.
“But what if the AI makes buggy code???” Ohhh, you really didn’t use Linux like 10 years ago, did you? Humans make shitty code constantly. It gets fed into LLMs. Why do you think they produce so much slop? They didn’t invent slop, they just automated its production. lol
@Elizafox I worked for an AI startup, and I kept telling my boss that LLMs don’t really help me with code, and can’t, and won’t. He’s a professional UI designer but not an engineer, so I kept turning him down whenever he had ideas like “but it can do boilerplate, no?” or “but it can give you some ideas on how to approach the problem, no?”
I always just said: but that’s my entire profession. That’s what they want my expertise for: solving problems with reasonable solutions.
it’s like if you, as a UI designer, were asked to use a tool that spits out templates and fills them in with info
“okay but where do I do the job?”