Billions of dollars. Untold megawatts of power. To create a low-grade #Google #AI Moron. #AISlop
@lauren To be fair it said according to "a" calendar, not "the" calendar. Could be last year's.
@gollyhatch I didn't ask if it ever was 2024, or if some random calendar somewhere said it was 2024. I asked a question any child could answer: IS it 2025. It's a binary question, answerable by 99.999% of the people on this planet, I would suspect. No excuses.
@lauren @gollyhatch It is not an excuse, but it is a possible explanation. "According to a calendar" is technically not a lie; that can indeed be 'any' calendar. LLMs are based on mathematics: they just spit out words according to a bunch of complicated formulas. They do not understand anything; there is no #actualIntelligence in there.
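(As an aside, the "spits out words according to formulas" point can be made concrete with a toy sketch. Everything here -- the context, the words, the scores -- is made up for illustration; real models compute these scores with billions of learned parameters, but the principle of "numbers in, word out, no understanding" is the same.)

```python
import math

# Toy "language model": a fixed table mapping a context to raw scores
# (logits) for each candidate next word. Nothing here "knows" what a
# year is; the numbers are all that exists.
LOGITS = {
    "it is": {"2024": 2.0, "2025": 1.0, "raining": 0.5},
}

def softmax(scores):
    """Convert raw scores into a probability distribution (sums to 1)."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def next_word(context):
    """Pick the highest-probability next word -- pure arithmetic."""
    probs = softmax(LOGITS[context])
    return max(probs, key=probs.get)

# If the stored scores happen to favor "2024", the model confidently
# says "2024", regardless of what year it actually is.
print(next_word("it is"))
```

The point of the sketch: whichever word has the biggest number wins, and no step anywhere checks the answer against reality.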
@alterelefant @gollyhatch I don't need a tutorial on LLMs. The point is that this garbage is being forced down the throat of users and the amount of misinformation is vast and apparently growing, nor is Big Tech willing to take responsibility for any damage done to users who assume that Google is giving them accurate information like it used to.
@lauren @gollyhatch it is indeed very worrying. Let's hope people are able to pick up on the fact that all of this LLM stuff is too much of a distraction and not improving their life in any significant way. It is only adding noise.
@alterelefant @gollyhatch Big Tech isn't giving them a choice. They're stuffing it into everything. It's all about trying to recover their enormous investment that they fear might never pay off unless everyone is forced to use these systems.
@lauren @alterelefant There's always a choice, the problem is the average person not caring enough to educate themselves and make the right one.
@gollyhatch @alterelefant NO. That is an attitude from techies (and I'm obviously one) that I've detested throughout my long career, reaching all the way back to the early ARPANET days at UCLA. It is NOT the responsibility of busy, nontechnical people to "educate" themselves so as not to be abused by Big Tech hype and manipulation. Most only know enough tech to do what they absolutely have to do online, and may wish they didn't have to do even that much online -- but the alternative options have been vanishing. BLAMING THEM for these abuses is WRONG.
@lauren @alterelefant I'm not blaming them, but I do expect people to make conscious choices and not just accept whatever big tech is presenting to them. And you don't need to be a techie for that. People make these choices already. Netflix is shit? Let's check out other streaming services then. That's not a techie choice. Google AI is bollocks and their search results are 99% ads? Use an alternative then. There's always gonna be greedy bastards taking advantage of clueless people. Fighting one of them bastards won't solve the problem, educating the people will.
@gollyhatch @alterelefant I stand by my statement. I deal with nontechnical people who are abused by Big Tech pretty much all the time. They are NOT in a position to understand these systems, which even many experts don't understand. This isn't like deciding a streaming service isn't showing the movies you prefer!
@lauren @alterelefant Well, and I stand by my position. You don't need to technically understand these systems to see that they're shit -- like 2024 not being 2025; you gave the best example there. Once you realize it's shit you should switch. Or you really, absolutely don't care, and then that's fine too, but it's your choice.
@gollyhatch @alterelefant That's an easy example because it's so obvious. The most dangerous parts of these systems are the responses that seem authoritative but are wrong, or even worse, partly correct and partly wrong. The mixed response like that is absolutely devastating, likely to fool most people who aren't experts on the topic, and in general is one of the most potent spreaders of misinformation including of the most dangerous kind. This is a well studied area in terms of misinformation, disinformation, and propaganda in the offline world.

@lauren @alterelefant Absolutely! That's why I consider it my duty to educate my friends and family about the fact that AI is shit.

EDIT: …because there's simply no point in educating Google that what they're doing is shit as long as they make profits from being shit.

@gollyhatch @lauren @alterelefant "your friends and family" ... and your doctor, who didn't realize when they did a quick search of your symptoms it was AI that hallucinated a disease that sounded like the disease you actually have, your banker, your test proctor, the bureaucrat you're begging for your earned benefits because the AI auditor failed your application, in all likelihood... because you seem to be trying so hard to be obstinate, you're probably an AI engagement bot.

@lauren @gollyhatch @alterelefant

A kind-of related example of this is privacy: the tendency to put the responsibility for privacy on the user, by handing them hard-to-understand decisions so you are no longer responsible for their privacy.

For example, "I had the user opt-in/out to this behavior, so any privacy problems are things they agreed to and not a problem with my privacy design."

@hackbod @lauren @alterelefant It's a complex thingy. In the EU they tried/try to counter that with the "cookie law": websites gotta display a big ass warning that they're using cookies to profile you and you gotta agree before they're allowed to actually do that. What happened? People got angry about the EU annoying them with pesky cookie warnings, rather than getting angry with websites asking them to agree to ~300 tracking services per page to spy on them.

@gollyhatch @lauren @alterelefant

I'm not talking about government regulation, I am talking about the responsibility of the tech industry to design things that are reasonably safe for normal users rather than putting unreasonable responsibility on them to protect themselves.

Though tracking is interesting -- that prompt is generally bad UX because it is hard for a user to understand. In that sense, something like Privacy Sandbox could be better by automatically providing increased privacy.

@hackbod @lauren @gollyhatch @alterelefant Exactly. Usually when we shift great responsibility onto users because we gave them some big powerful machine to command -- vehicles, airplanes, and boats -- we required training and testing. None of that is happening with AI users.
@lauren
One of my jobs is proctoring written tests for people applying for career licenses. Many of them are "non-techies", going into such work as nails and cosmetology, so computer expertise isn't their field. Recently, I encountered an examinee enthusiastically saying that she had used "ChatGPT" to study for the exam. She said, "Have you heard about it? It's great! It will answer anything you need to know!" I noticed that she seemed to struggle with the test, and used the full time allotted. I have a strong feeling that her study method led her badly astray.
@gollyhatch @lauren @alterelefant
the problem is lack of alternatives. Users, technical or not, are basically forced to sell their private data to monopolistic predators. I count myself a technical person, e.g. I even run my own mail server and my own web server, but then I am extremely privileged here -- hardly any non-technical person would do this. E.g. using Gmail is basically letting Google read your emails and serve you targeted ads in return for providing you web-based email. Does Google have a paid-for alternative? They have something called Google Suite -- but it's not cheap, not terribly easy to manage, and basically suffers from the same privacy issues.

@gollyhatch @lauren @alterelefant LLMs will never be significantly better than this.

Given that many people who love them just skip the “let’s try a search engine” step altogether now… maybe it’d be smarter of Google to double down on “we’re the trusted source” instead and to kill their “AI Overview” box.

@chucker @gollyhatch @lauren Google has already doubled down on the 'AI' thing and is rapidly alienating its users.

@gollyhatch @lauren @alterelefant

“not just accept whatever big tech is presenting to them.”

Not accepting requires agency… choice. Most people have been carefully maneuvered into a place where they either don’t have any, or can’t see it.