CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant

https://lemmy.dbzer0.com/post/63081488

These morons really think AI is going to allow them to replace the technical folks. The same technical folks they loathe, because they’re the ones with the skills to build the bullshit they dream up, and as such demand a higher salary. They’re so fucking greedy that they are just DYING to cut these people out in order to make more profits. They have such inflated egos and so little understanding of the actual technology that they really think they’re just going to be able to use AI to replace technical minds going forward. We’re on the precipice of a very funny “find out” moment for some of these morons.

The scary part is that it already somewhat is.

My friend is currently job hunting (or at least considering it) because their company added AI to their flow and it does everything past the initial issue report.

The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn’t work -> AI lints the final product once it works -> AI submits the patch as a pull request.

Their job has been downscaled from organizing, assigning, and working on code to being an over-glorified code auditor who looks at pull requests and says “yes, this is good” or “no, send this back.”
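The flow described above is basically an orchestration loop around an LLM. A minimal sketch of what that kind of pipeline might look like, assuming everything here (the `llm`, `run_tests`, `lint`, and `open_pull_request` hooks, the retry budget) is hypothetical rather than the actual system:

```python
# Hypothetical sketch of the described issue -> PR pipeline.
# All hooks (llm, run_tests, lint, open_pull_request) are
# illustrative assumptions, not the real tooling.

def run_pipeline(issue_text, llm, run_tests, lint, open_pull_request,
                 max_attempts=3):
    """Issue report in, pull request out; human review still required."""
    # 1. AI formats and tags the raw issue report
    issue = llm(f"Format and tag this issue report:\n{issue_text}")

    # 2-3. AI drafts a patch, tests it, and throws it back on failure
    for _attempt in range(max_attempts):
        patch = llm(f"Write a patch for this issue:\n{issue}")
        ok, report = run_tests(patch)
        if ok:
            break
        # Feed the failure back so the next attempt can react to it
        issue += f"\n\nPrevious attempt failed tests:\n{report}"
    else:
        return None  # give up and escalate to a human instead

    # 4. AI lints the working patch
    patch = lint(patch)

    # 5. AI submits the patch as a pull request for human sign-off
    return open_pull_request(patch)
```

The human "code auditor" role then lives entirely at the end, reviewing whatever `open_pull_request` produces.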

I feel like so much LLM-generated code is bound to deteriorate code quality and blow up the context size to such an extent that the LLM is eventually gonna become paralyzed

I do agree that LLM-generated code is often inaccurate, which is why they need the “throw it back” stage and a human eye looking at it.

They told me their main concern is that they aren’t sure they will understand the code the AI is spitting out well enough to audit it properly (which is fair). And of course any issue with the code will fall on them, since it’s their job to give the final “yes, this is good.”

At that point they’re just the responsibility circuit breaker, put there to take the blame if things go wrong.

Welcome to QA!

It would be interesting to know where your friend works and what kind of application this is, because your comment is the first time I’ve ever heard of this level of automation. Not saying it can’t be done, just skeptical of how well it would work in practice.

That was my general thought process too, before they told me how the system worked. I had seen Claude workflows that do something similar, but I hadn’t seen it taken to that level before. It was an eye-opener.

There’s absolutely no way this can be effective for anything other than simple changes in each PR.

I should ask them at some point how it’s going now that it’s been deployed for a bit. I wouldn’t expect so either, based on how I’ve seen open-source projects use stuff like that, but they also haven’t been complaining about it screwing up at all.

I found out that some teams at my company are doing the same thing. They’re using it to fix simple issues, like exceptions and security problems that don’t need many code changes. I’d be shocked if it were any different at your friend’s company. It’s just surprising to me that that’s all they were doing.

LLMs can be very effective but if I’m writing complex code with them, they always require multiple rounds of iteration. They just can’t retain enough context or maintain it accurately without making mistakes.

I think some clever context engineering can help with that, but at the end of the day it’s a known limitation of LLMs. They’re really good at doing text-based things faster than we can, but the human brain just has an absolutely enormous capacity for storing information.
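One common context-engineering tactic is to keep a running summary of older conversation turns and only hand the model the most recent messages that fit in a size budget. A minimal sketch, assuming a hypothetical `summarize` hook, a rough 4-characters-per-token heuristic, and an arbitrary budget:

```python
# Illustrative sketch of a simple context-trimming strategy:
# compress older messages into a summary, keep recent ones verbatim.
# summarize(), the budget, and the token heuristic are all assumptions.

def trim_context(messages, summarize, budget_tokens=2000):
    """Return (summary_of_older, recent_messages) fitting the budget."""
    def approx_tokens(text):
        return len(text) // 4  # rough heuristic: ~4 characters per token

    kept, used = [], 0
    # Walk backwards so the newest messages survive
    for msg in reversed(messages):
        cost = approx_tokens(msg)
        if used + cost > budget_tokens:
            break
        kept.append(msg)
        used += cost

    # Everything older gets compressed into a summary instead
    older = messages[:len(messages) - len(kept)]
    summary = summarize(older) if older else ""
    return summary, list(reversed(kept))
```

This doesn’t remove the underlying limitation, it just spends the fixed context window more deliberately, which is roughly what the parent comment means by clever context engineering.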