Billions of dollars. Untold megawatts of power. To create a low grade #Google #AI Moron. #AISlop
@lauren To be fair it said according to "a" calendar, not "the" calendar. Could be last year's.
@gollyhatch I didn't ask if it ever was 2024, or if some random calendar somewhere said it was 2024. I asked a question any child could answer: IS it 2025. It's a binary question, answerable by 99.999% of the people on this planet, I would suspect. No excuses.
@lauren Oh I wasn't defending it, just making fun. Sorry if that wasn't obvious. The answer was very clearly wrong, but the way it reasoned would allow for such an excuse. It's like asking what time it is and the answer is "well *somewhere* it's beer'o'clock" and factually that'd be right in a way, but doesn't make it better.
@lauren Just tried that search prompt myself and got confronted with the really deep questions of our (or any?) time. Maybe Google is onto something. 🫩

@gollyhatch @lauren It's so gullible!

(Well, whatever the equivalent of "gullible" is to a non-thinking algorithm)

@jack @gollyhatch @lauren The equivalent is brown-noser, I guess. Those tech bros are developing the perfect tool for their orange cult leader.
@jack @gollyhatch @lauren Bro done transported us to GTA 6

@jack @gollyhatch @lauren I just tried stoking some confusion, the equivalent of Captain Kirk confounding Mudd’s androids. AI Overview started working through a solution but suddenly bailed out with “Oops, something went wrong.” I was hoping for “does…not…compute” with smoke coming out its ears, but I’ll take it!

Search prompt: “i'm glad it's 2031. if my next birthday is March 1, how old was i in 2026? assume i was born in the last leap year.”
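For anyone curious why that prompt is a trap: taken literally, it is self-contradictory, which a few lines of arithmetic make obvious. A quick sketch, assuming standard Gregorian leap-year rules:

```python
# "It's 2031" plus "born in the last leap year" pins the birth year down.
current_year = 2031

def is_leap(y):
    # Gregorian rule: divisible by 4, except century years not divisible by 400.
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# Most recent leap year at or before 2031.
birth_year = next(y for y in range(current_year, current_year - 8, -1) if is_leap(y))
print(birth_year)          # 2028

# "How old was I in 2026?" asks about a time before the person was born,
# so no consistent answer exists.
print(2026 - birth_year)   # -2
```

So the question has no valid answer, and "Oops, something went wrong" is arguably the honest response.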

@lauren @gollyhatch It is not an excuse, but it is a possible explanation. "According to a calendar" is technically not a lie; that can indeed be 'any' calendar. LLMs are based on mathematics; they just spit out words according to a bunch of complicated formulas. They do not understand anything, there is no #actualIntelligence in there.
@alterelefant @gollyhatch I don't need a tutorial on LLMs. The point is that this garbage is being forced down the throat of users and the amount of misinformation is vast and apparently growing, nor is Big Tech willing to take responsibility for any damage done to users who assume that Google is giving them accurate information like it used to.
@lauren @gollyhatch it is indeed very worrying. Let's hope people are able to pick up on the fact that all of this LLM stuff is too much of a distraction and not improving their life in any significant way. It is only adding noise.
@alterelefant @gollyhatch Big Tech isn't giving them a choice. They're stuffing it into everything. It's all about trying to recover their enormous investment that they fear might never pay off unless everyone is forced to use these systems.
@lauren @alterelefant There's always a choice, the problem is the average person not caring enough to educate themselves and make the right one.
@gollyhatch @lauren There is indeed some room for improvement. Let's hope people will start moving away from this nonsense.
@alterelefant @gollyhatch @lauren attitudes may start to change once the AI Bros start to charge users, whether corporate or individuals, what LLMs cost, plus profits, to spit out answers.
I gather Musk's (unsigned) deal with Telegram is based on splitting the money raised by Grok subscriptions; I'd be interested to know what they propose to charge, how they will charge, and whether anyone will actually sign up for this.
@marjolica @gollyhatch @lauren It's very much a loss-making venture and the main question is at what point they demand to see a return on their investment. Maybe the investors just use the losses to be able to lower their tax brackets and circumvent paying any tax at all? Only time will tell.
@gollyhatch @alterelefant NO. That is an attitude from techies (and I'm obviously one), that I've detested my long career, reaching all the way back to the early ARPANET days at UCLA. It is NOT the responsibility of busy, nontechnical people to have to "educate" themselves to not be abused by Big Tech hype and manipulation. Most only know enough tech to do what they absolutely have to do online, and may wish they didn't have to even do that much online -- but the alternative options have been vanishing. BLAMING THEM for these abuses is WRONG.
@lauren @alterelefant I'm not blaming them, but I do expect people to make conscious choices and not just accept whatever big tech is presenting to them. And you don't need to be a techie for that. People make these choices already. Netflix is shit? Let's check out other streaming services then. That's not a techie choice. Google AI is bollocks and their search results are 99% ads? Use an alternative then. There's always gonna be greedy bastards taking advantage of clueless people. Fighting one of them bastards won't solve the problem, educating the people will.
@gollyhatch @alterelefant I stand by my statement. I deal with nontechnical people who are abused by Big Tech pretty much all the time. They are NOT in a position to understand these systems, which even many experts don't understand. This isn't like deciding a streaming service isn't showing the movies you prefer!
@lauren @alterelefant Well and I stand by my position. You don't need to technically understand these systems to see that they're shit, like 2024 is not 2025, you made the best example there. Once you realize it's shit you should switch, or you really absolutely don't care, then that's fine too, but it's your choice.
@gollyhatch @alterelefant That's an easy example because it's so obvious. The most dangerous parts of these systems are the responses that seem authoritative but are wrong, or even worse, partly correct and partly wrong. The mixed response like that is absolutely devastating, likely to fool most people who aren't experts on the topic, and in general is one of the most potent spreaders of misinformation including of the most dangerous kind. This is a well studied area in terms of misinformation, disinformation, and propaganda in the offline world.

@lauren @alterelefant Absolutely! That's why I consider it my duty to educate my friends and family about the fact that AI is shit.

EDIT: …because there's simply no point in educating Google that what they're doing is shit as long as they make profits from being shit.

@gollyhatch @lauren @alterelefant "your friends and family" ... and your doctor, who didn't realize when they did a quick search of your symptoms it was AI that hallucinated a disease that sounded like the disease you actually have, your banker, your test proctor, the bureaucrat you're begging for your earned benefits because the AI auditor failed your application, in all likelihood... because you seem to be trying so hard to be obstinate, you're probably an AI engagement bot.

@lauren @gollyhatch @alterelefant

A kind-of related example of this is privacy: the tendency to put the responsibility for privacy on the user, by putting hard to understand decisions on them so you are no longer responsible for their privacy.

For example, "I had the user opt-in/out to this behavior, so any privacy problems are things they agreed to and not a problem with my privacy design."

@hackbod @lauren @alterelefant It's a complex thingy. In the EU they tried/try to counter that with the "cookie law": websites gotta display a big ass warning that they're using cookies to profile you and you gotta agree before they're allowed to actually do that. What happened? People got angry about the EU annoying them with pesky cookie warnings, rather than getting angry with websites asking them to agree to ~300 tracking services per page to spy on them.

@gollyhatch @lauren @alterelefant

I'm not talking about government regulation, I am talking about the responsibility of the tech industry to design things that are reasonably safe for normal users rather than putting unreasonable responsibility on them to protect themselves.

Though tracking is interesting -- that prompt is generally bad UX because it is hard for a user to understand. In that sense, something like Privacy Sandbox could be better by automatically providing increased privacy.

@hackbod @lauren @gollyhatch @alterelefant exactly, usually when we shift great responsibility onto users because we gave them some big powerful machine, like vehicles, airplanes, and boats to command, we required training and testing. None of that is happening with AI users.
@lauren
One of my jobs is proctoring written tests for people applying for career licenses. Many of them are "non-techies", going into such work as nails and cosmetology, so computer expertise isn't their field. Recently, I encountered an examinee enthusiastically saying that she had used "ChatGPT" to study for the exam. She said, "Have you heard about it? It's great! It will answer anything you need to know!" I noticed that she seemed to struggle with the test, and used the full time allotted. I have a strong feeling that her study method led her badly astray.
@gollyhatch @lauren @alterelefant
the problem is lack of alternatives. Users, technical or not, are basically forced to sell their private data to monopolistic predators. I count myself a technical person, e.g. I even run my own mail server and my own web server, but then I am extremely privileged here - hardly any non-technical person would do this. E.g. using gmail is basically letting Google read your emails and serve you targeted ads in return for them providing you a web-based email. Does Google have a paid-for alternative? They have something called Google Suite - but it's not cheap, not terribly easy to manage, and basically suffers from the same privacy issues.

@gollyhatch @lauren @alterelefant LLMs will never be significantly better than this.

Given that many people who love them just skip the “let’s try a search engine” step altogether now… maybe it’d be smarter of Google to double down on “we’re the trusted source” instead and to kill their “AI Overview” box.

@chucker @gollyhatch @lauren Google has already doubled down on the 'ai' thing and are rapidly alienating their users from them.

@gollyhatch @lauren @alterelefant

“not just accept whatever big tech is presenting to them.”

Not accepting requires agency… choice. Most people have been carefully maneuvered into a place where they either don’t have any, or can’t see it.

@gollyhatch @lauren @alterelefant There isn't a choice if the product doesn't tell you it has AI. There isn't a choice if all the alternative products also have AI. There isn't a choice when the product has a monopoly on the information you need for your research. There isn't a choice if you're too busy raising kids and working two jobs to research alternatives. There isn't a choice if you need assistive technology where every non-AI product has been bought out or out-competed by Big Tech. There isn't a choice when the govt mandates an OS in schools and public service which includes AI that can't be disabled. There isn't a choice when your boss orders you to use it.

Okay sure you could quit your job and live in a cave, that's *technically* a choice I guess. But not really.

@lauren @gollyhatch At the moment it is still a loss-making venture, even when they charge 20 EUR/GBP/USD for a monthly subscription. But don't worry, when they charge everyone 200 a month they will start making money. It is like the drug dealer: you just have to make sure your customers can't do without your 'product' and you will be able to charge them anything you like.
@lauren @alterelefant @gollyhatch Isn't "they are stuffing it in everything" pretty close to one of the behaviors listed in the Sherman Act -- use a monopoly position in one market to conquer share of an adjacent market?

@lauren @alterelefant @gollyhatch

There have been a thousand reasons for me to stop using Google for search over the years, and yet I haven’t … *this* is the issue that has prompted me to load a couple of alternative search engines and begin using them more often. I certainly don’t need *more* bullshit in my life.

@lauren @alterelefant @gollyhatch

The legal framework that allows them to subvert our agency has been in careful development for the last two or three decades, but at every step the ‘Cassandras’ have been dismissed, because of shiny new things. We won’t be able to back out of this corner easily without a significant legal and cultural shift and all the disruption that makes that possible.

@DavidM_yeg @lauren @alterelefant @gollyhatch

I have been using #duckduckgo almost exclusively for years with great results. The decline of #google makes the switch even more worth it.

@BeardlyDavid @DavidM_yeg @lauren @gollyhatch Agreed, #duckduckgo works for me. When looking up an English word I choose to use #deepl and when checking Wikipedia I use the actual search engine in Wikipedia. Spread your search queries over multiple dedicated platforms. That will make it harder to create a profile of you and the results are more direct and more reliable.

@alterelefant @lauren @gollyhatch

Metaphorical drunk toddler in a professor's gown bedazzled the audience with technical truths!

@alterelefant @lauren @gollyhatch the math behind Neural Networks is actually not that complicated or complex. It's the number of neurons which make them powerful. Sometimes.
@felix_eckhardt @lauren @gollyhatch It is indeed the massive size of the neural network that makes it complicated and uncontrollable.
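For readers wondering what "not that complicated" means concretely: a single artificial neuron is just a weighted sum plus a nonlinearity. A toy sketch in Python (all weights here are made-up illustration values, not from any real model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Made-up numbers purely for illustration.
out = neuron([0.5, -1.0, 2.0], weights=[0.1, 0.4, 0.2], bias=0.0)
print(round(out, 3))  # 0.512
```

The per-neuron math really is this simple; the opacity comes from wiring up billions of such units and learning all their weights, at which point nobody can trace why any particular output emerged.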

@alterelefant @gollyhatch @lauren

Nothing an LLM says is “technically a lie” … that requires comprehension of semantic content, a thing you’ve just explained LLMs don’t have.

The bigger concern is why people who *do* comprehend insist on using bullshit machines to provide ‘answers’ to questions and keep sticking them into every available crevice of our digital lives, and the answer to that one is beyond me.

@DavidM_yeg @gollyhatch @lauren The fun part is that the internet is now full of content that is generated by LLMs, which is a massive problem for training new neural networks, because garbage in is garbage out. Maybe the problem will solve itself faster than we think? I hope people realise that it is all useless 'technology' and invest in their own capabilities instead of relying on some dodgy piece of kit called LLMs.

@alterelefant @gollyhatch @lauren

There is a very significant percentage of people who seem to *prefer* comforting bullshit over truth in many areas… this is a significant impediment to the fight against the bullshit machines.

@DavidM_yeg @gollyhatch @lauren We always ask ourselves what brought down the powerful Roman empire. We already know what it is that will take down civilization as we know it.
@lauren @gollyhatch many Iranians would answer it's 1404, but I am not aware of any nation using a calendar where it's currently 2024.