@jerry it's because the people who made the AI that got traction were VC bros who followed the minimum-viable-product school of thought, and were happy creating a system that tells people what it thinks they want to hear, as opposed to the truth and objective reality
the Enterprise's ship computer is happy to call someone an idiot if they're an idiot
frontier LLMs aren't
@chillicampari @jerry it is, in theory, possible to make the AI from Star Trek. But it would be an entirely different animal; it would have to be trained from the ground up with great care, and very likely with a different technology.
Technically speaking, it could be done JUST with Bayesian filters alone, plus RAG components to fetch data from the usual places. And it wouldn't require boiling the oceans or trillions of dollars of GPUs.
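To make the "Bayesian filters" idea concrete, here is a minimal sketch of the classic technique: a naive Bayes text classifier built from nothing but the standard library. The corpus, class labels, and example sentences are made up for illustration; a real system would train on far more data and bolt a retrieval (RAG) step on top.

```python
# Minimal naive Bayes text classifier (the "Bayesian filter" idea),
# pure stdlib, trained on a tiny hypothetical corpus.
import math
from collections import Counter, defaultdict

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter()            # per-class document counts
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.class_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, float("-inf")
        for label, count in self.class_counts.items():
            lp = math.log(count / total)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # Laplace smoothing so unseen words don't zero out the score
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

nb = NaiveBayes()
nb.train("the warp core is offline", "engineering")
nb.train("set course for the nearest starbase", "navigation")
print(nb.classify("warp core breach imminent"))  # -> engineering
```

This is the same machinery that powered Bayesian spam filters: cheap to train, cheap to run, and honest about being a word-statistics model rather than something that "understands."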
@rustbuckett @jerry @chillicampari @Viss
I use this custom instruction with chatgpt:
“Do not offer ego-stroking language. Challenge my assumptions directly and point out contradictions, blind spots, and weaknesses. Prioritize uncomfortable truths and rigorous critique over validation or engagement tactics.”
It becomes a bit of a bitch, but it also exposes logical flaws in my thinking without much sugar-coating
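For anyone who wants the same behavior outside the ChatGPT settings page: in chat-completions-style APIs you would typically pin an instruction like this as the first, "system" message of every conversation. A minimal sketch (no network call is made; the helper function name is mine, and the instruction text is quoted from the post above):

```python
# Build a chat-completions-style messages payload with the critique
# instruction pinned as the system message.
CUSTOM_INSTRUCTION = (
    "Do not offer ego-stroking language. Challenge my assumptions directly "
    "and point out contradictions, blind spots, and weaknesses. Prioritize "
    "uncomfortable truths and rigorous critique over validation or "
    "engagement tactics."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a messages list with the critique instruction first."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("Review my plan to rewrite the backend in a weekend.")
print(msgs[0]["role"])  # -> system
```

The resulting list is what you would pass as the `messages` argument to a chat-completions endpoint; the system message steers tone, but, as the reply below notes, it cannot change what the model fundamentally is.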
@_XCM @rustbuckett @jerry @chillicampari
at the end of the day, the model's base instruction is to please you, even if it means lying to you. Of course, if you ask it to use different language, it will use different language, but that will not change the fundamental truth of what's happening.
It will still hallucinate, it will still dish out bogus info, and it will still go crazy on you.
@chillicampari
I think we got Idiocracy seasoned with Religulous
The thing that pisses me off (you're right, btw, and it'd be awesome if you weren't) is that this suggests people are not interested in using LLMs to increase their own knowledge/skill/whatever more efficiently.
OpenAI has a business to run, at the end of the day. If enough people, often enough, were demanding that ChatGPT actually teach them something instead of simply doing it for them, they would tune ChatGPT accordingly, or at least try to, whether or not they succeeded.
I'm not saying that would actually work out all that well; LLMs are not actually intelligent and they don't actually understand things. I realize that. But they could have made some attempt at tuning the thing to that effect, and they would have attempted it if that had been what their customers wanted.
People just don't want to learn anything.
Simple as that.
@gunchleoc @the @Viss @jerry All of the marketing language around generative models centres on "productivity".
Make more!
Spit things out faster!
No need to read!
Don't bother writing!
This is a reflection of the indoctrination of Neoliberal Capitalism. It's a continuation of
Hustle Culture!
Sigma Grindset!
Monetise your Hobbies!
It's absolutely infuriating, but people don't arrive at these willingly servile positions by accident; they're constantly pushed towards them:
"Will YOU be left behind?"
@syrupsplashin @Viss @jerry That's what the bros want you to believe - that "AGI" will "emerge" as more and more billions are burnt.
Spoiler: it won't. We will never build sentient machines; there are no sentient aliens; and we will never emigrate off-world.
@drahardja @jerry that was planned long ago with social media. The centre of people's habits has become scrolling and swiping through posts, and the time spent on a single post averages maybe three seconds. Sociologists call that the attention economy. It produces a society that is easy to mislead and manipulate.
Media and sociological education is the answer, but it will take time.
So they have not designed a new abstraction, the way assembly was to machine code and high-level programming languages were to assembly. That would be a level up.
They have just reduced the friction of aggregating knowledge, but w/ a significantly high error rate.
They have hooked into the worst human cognitive flaws but are not providing any leveling up.
Even if they could reduce the error rate to zero w/o friction, we just end up w/ the "Whispering Earring"
@jerry
I know.
I was expecting Star Trek or Star Wars type of robots and AI, but also ray guns, antigrav sleds, and flying robots.
What we got was whiny, feeble drones, robots that can't even get you a soda, and an autocorrect Clippy pretending to be intelligent.
My favorite example of an unwanted new feature... We do that in software and information systems all the time.
Yep. A software solution that 1) no one was looking for, and 2) is only partly effective; it only does part of the job.
@jerry I've seen experienced developers blindly trusting AI instead of thinking for themselves, and getting stuck in their work.
The AI isn't perfect, it guesses and predicts but it doesn't understand jack shit.
I'm afraid you're not missing anything.