CEO of Palantir Says AI Means You’ll Have to Work With Your Hands Like a Peasant

https://lemmy.dbzer0.com/post/63081488

These morons really think AI is going to allow them to replace the technical folks. The same technical folks they severely loathe because they’re the ones with the skills to build the bullshit they dream up, and as such demand a higher salary. They’re so fucking greedy that they are just DYING to cut these people out in order to make more profits. They have such inflated egos and so little understanding of the actual technology that they really think they’re just going to be able to use AI to replace technical minds going forward. We’re on the precipice of a very funny “find out” moment for some of these morons.

These morons really think AI is going to allow them to replace the technical folks.

This specific moron was talking about people with a humanities degree.

A healthy chunk of CEOs have a humanities degree. It’s a common undergrad before moving to B and J-School.

B and J-School.

Blow and Jerk?

Even less plausible. There was a paper published recently arguing that, by design, LLMs are quite literally incapable of creativity. These predictive statistical models represent averages. They always and only generate the most banal outputs. That’s what makes them useful.
Well, every academic field needs creativity. But it’s nothing new that people from economic or tech bubbles have a disdain for humanities.

The degree of randomness in generative models is not necessarily fixed; it can at least potentially be tunable. I’ve built special-purpose generative models that work that way (not LLMs, another application). More entropy in the model can increase the likelihood of excursions from the mean and surprising outcomes, though at greater risk of overall error.

There’s a broader debate to be had about how much that has to do with creativity, but if you think divergence from the mean is part of it, that’s within LLM capabilities.
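For what it’s worth, the standard knob for this in LLM decoding is temperature scaling of the output distribution. Here’s a minimal sketch (function name and toy logits are mine, just for illustration): low temperature concentrates probability on the most likely token, high temperature spreads it out and makes those “excursions from the mean” more likely.

```python
import numpy as np

def temperature_probs(logits, temperature):
    """Turn raw logits into a sampling distribution at a given temperature.

    temperature < 1 sharpens the distribution toward the most likely
    (most 'average') token; temperature > 1 flattens it, raising the
    odds of low-probability, surprising picks -- at greater risk of error.
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = [4.0, 2.0, 0.5]                        # one token clearly dominates

cold = temperature_probs(logits, 0.2)           # near-greedy: picks the average
hot = temperature_probs(logits, 5.0)            # near-uniform: more surprises

print(cold.round(3))
print(hot.round(3))
```

Sampling from `hot` instead of `cold` is exactly the trade the comment above describes: more divergence from the mean, more mistakes.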

That’s a good point. The problem is that LLMs are calibrated for prediction. Their randomness is tuned for efficacy. Forcing them to be more chaotic just makes them much less effective. This inherent tension is why they’re mathematically incapable of any sort of consistent creativity.