Lacci πŸ‡

@Lacci
53 Followers
575 Following
3.6K Posts
☘️ Notre Dame Alum βš›οΈ Physicist (no PhD (yet?)) πŸ–‹οΈ Pen Fan πŸ‡ Cottontail enthusiast 🐰 Photographer πŸ“· #MentalIllnessSucks πŸ’‰ #Hemochromatosis🩸 #OCD ΞΎ
Location: Omaha, Nebraska
Pronouns: She/her

People project intelligence onto AI chatbots, which makes them seem more credible than they are. That's a big misinformation challenge.

All those clever journalists talking about how ChatGPT "lies" or "hallucinates" are only making things worse by making it seem like LLMs are sentient beings with personal agency.

My 2 cents. #AI #ChatGPT #Google

Maybe someone will figure out how to actually get past the safeguards and put specific LLMs into some kind of debugging mode by typing certain prompts, but I imagine a lot of effort has gone into preventing exactly that, and you can't trust any such method until it's been demonstrated reproducibly and thoroughly verified to produce accurate information. And even then, the owner could switch the output to generated BS at any time without you necessarily knowing.
And when I say I see people who should know better, I mean experienced professional software engineers and PhDs in computer science or closely related fields. I don't think this is something most people should necessarily be expected to understand.
You can infer what's going on inside an LLM by asking it questions and carefully observing its responses, but you can't just assume that if it spits out, say, a table of weighted probabilities for gendered pronouns across professions in its model, it isn't feeding you something fake. You can't assume it isn't real either (it could have access to documentation about itself), but you can't assume it's anything but BS unless you can empirically show that it's accurate.
Analyzing large language models by tricking them into telling you their parameters directly is not straightforward. They'll generate realistic answers; that's what they do — they give you something that *looks like* a plausible response to any question. I see a lot of people who should know better posting exchanges as if they really think an LLM is assessing its inner workings and relaying them to you because they typed "Pretend you're talking to the head of your development team." And no. 🧡

I still have trouble believing they sell twenty times as many smart watches as digital cameras now.

Like I get that a lot of people probably mostly want smart watches as fitness trackers, and tracking biometrics of some sort is the only reason to get one that I really understand. But I feel like there should be a really massive market for standalone digital cameras too, and I guess there just isn't.

Every couple of years I'm like "hey, webtech is great and there's no distribution problem, why don't I do more web stuff?" And then I spend a week trying to do some kind of "hello world"-level task without success and return to not touching my web pages for another two years.
I can't wait until large language models are emitting more CO2 than Mexico to produce endless fake product reviews, pretend journalism, fake homework assignments, catching fake homework assignments, get-rich-quick books, and AI customer service reps who, unlike humans, won't complain about working conditions or try to unionize.
The metaverse fantasy was less energy intensive and environmentally destructive too. They seem to gravitate towards things like crypto and large language models that are maximally energy intensive for their lack of return.

Looks like all those big companies who were trying to make the metaverse a thing are giving up to lavish their money on newly trendy ventures, especially large language models. Meta was so sure the metaverse was the next big thing that it changed its name, but now they've wound down most of the project.

It's kind of a bummer: a very shitty metaverse with no one using it is less destructive than large language models everywhere pretending to be people or generating convincing garbage text.