CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant

https://lemmy.dbzer0.com/post/63081488

These morons really think AI is going to let them replace the technical folks. The same technical folks they severely loathe, because they're the ones with the skills to build the bullshit they dream up, and as such demand a higher salary. They're so fucking greedy that they are just DYING to cut these people out to make more profits. They have such inflated egos and so little understanding of the actual technology that they really think they're just going to be able to use AI to replace technical minds going forward. We're on the precipice of a very funny "find out" moment for some of these morons.

The scary part is that it already somewhat is.

My friend is currently job hunting (or at least considering it) because their company added AI to their workflow, and it now does everything past the initial issue report.

The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn't work -> AI lints the final product once it's working -> AI submits the patch as a pull request.
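The flow described above amounts to an automated triage-patch-test-retry loop. A minimal sketch of that loop, with every function name invented for illustration (the lint and PR-submission steps are collapsed into the final return), might look like:

```python
# Hypothetical sketch of the described pipeline: everything past the
# initial issue report is automated, with a test-and-retry loop.
# generate_patch and run_tests stand in for whatever AI calls the
# real system makes; their names and signatures are assumptions.

def run_pipeline(issue_text, generate_patch, run_tests, max_attempts=3):
    """Drive an issue from report to pull request, retrying failed patches."""
    # Step 1: "AI formats and tags the issue"
    issue = {"body": issue_text.strip(), "tags": ["auto-triaged"]}

    feedback = None
    for attempt in range(1, max_attempts + 1):
        # Step 2: "AI makes the patch" (feedback from a failed run guides the retry)
        patch = generate_patch(issue, feedback)
        # Step 3: "AI tests the patch and throws it back if it doesn't work"
        ok, feedback = run_tests(patch)
        if ok:
            # Steps 4-5: lint + open the pull request (collapsed here)
            return {"status": "pr-opened", "attempts": attempt, "patch": patch}

    # The human auditor only sees work when the loop gives up (or reviews the PR)
    return {"status": "escalated-to-human", "attempts": max_attempts}
```

Under this sketch, the human's role really is reduced to judging the final PR, which matches the "over-glorified code auditor" description below.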

Their job has been downscaled from organizing, assigning, and working on code to being an over-glorified code auditor who looks at pull requests and says "yes, this is good" or "no, send this back."

There's absolutely no way this can be effective for anything other than simple changes in each PR.

I should ask them at some point how it's going now that it's been deployed for a while. I wouldn't expect so either, based on how I've seen open-source projects use stuff like that, but they also haven't been complaining about it screwing up at all.

I found out that some teams at my company are doing the same thing. They're using it to fix simple issues, like exceptions and security problems that don't need many code changes. I'd be shocked if it were any different at your friend's company. It's just surprising to me that that's all they were doing.

LLMs can be very effective, but if I'm writing complex code with them, they always require multiple rounds of iteration. They just can't retain enough context, or maintain it accurately, without making mistakes.

I think some clever context engineering can help with that, but at the end of the day it’s a known limitation of LLMs. They’re really good at doing text-based things faster than we can, but the human brain just has an absolutely enormous capacity for storing information.