First Sundar Pichai, now Tim Cook, both admitting that generative "AI," by dint of its very foundations (cf. "Bias Optimizers"), will not and cannot stop bullshitting (cf. "On Bullshit Engines").

Look at me. Listen to me: If someone is selling you a "search" and "knowledge" product they claim is smarter and better and faster than you, but it literally cannot be guaranteed to give you simple true facts about consensus reality when asked, then that product DOES NOT WORK and SHOULD NOT BE USED.
https://futurism.com/the-byte/tim-cook-admits-apple-ai-stop-lying

Tim Cook Admits Apple May Never Be Able to Make Its AI Stop Lying

In a new Washington Post interview, Apple CEO Tim Cook admitted that he's not "100 percent" sure the company's AI will stop lying.

Futurism

Use generative "AI" to spitball, hypothesize, overcome the tyranny of the blank page? I mean if the environmental costs weren't astronomical and the training corpora weren't largely stolen, then yeah, sure.

Use it for facts? Knowledge? To fully *Replace* thought, feeling, and creativity? Absolutely not.

And yes, the use of algorithms and machine learning to arrange search results based on profit motive has been a big problem for consensus knowledge for a long time. Everything here [https://ourislandgeorgia.net/@Wolven/112622292468046430] about search and knowledge in LLMs is something Safiya Noble and others— including myself— long ago wrote about as applying to the original parameters of search.

And LLM integrations make it Worse.

Dr. Damien P. Williams, Magus (@[email protected])


Since this has escaped containment, here's a note that "Cf." is an abbreviation that means "compare," also often used to mean "see also." So when I say 'Cf. "Bias Optimizers" … "On Bullshit Engines,"' I'm not just throwing out phrases. I'm directing you to compare what I said to two Specific things:

https://www.americanscientist.org/article/bias-optimizers
https://www.youtube.com/watch?v=9DpM_TXq2ws

Bias Optimizers

AI tools such as ChatGPT appear to magnify some of humanity’s worst qualities, and fixing those tendencies will be no easy task.

American Scientist
@Wolven It is disruptive in the field of NLP. In one case we measured it as 20% more accurate than humans on entity recognition tasks. We automate workflows for our customers and they see a big ROI. It is here to stay, and it is more about automation than generating 'truth'. Truth is also not well defined from a philosophical perspective. You always need to check your sources, but I think you are aware of this.
@demiurg I'd like to see those experiments, and also what the comparative parameters were. I'd also like to see data on the preponderance of reinforced biases and disparities in those workflows, because it sounds like you think the automation of processes, the disposition of knowledge and belief, and the implications of those two sets of things on people's lives don't have anything to do with each other, when actually they have Everything to do with each other.
@Wolven These are not experiments but numbers from a production ticketing system. The company measured errors for the same task before and after the automation. I am aware of the impact of automation and have worked in the field for several years. I am also skeptical about the technology and how it is handled. The observations I make are empirical, though. I had the impression you were only talking about LLMs not generating facts; for me this is clear, and it is not the use case where the technology has the biggest impact.

@demiurg
You have a vested interest in people believing those outlandish claims of yours.

Fact is, the only profitable GenAI software vendor is Microsoft, and that's only because they have a lot of subscribers to Copilot, because they crawled GitHub to automate bad code generation.

All the rest is just hot air being passed around the hype mill, waiting for a dumbass to buy into it.

Pump'n'Dump bullshit all the way.

@Wolven

@androcat @Wolven I am not looking for customers on Mastodon 😀 I just wanted to share my experience. I think you might have missed the part where I said I am skeptical about how this tech is handled. My experience is empirical, though.

@demiurg @androcat @Wolven

Aren't these the kind of workloads that AI/Machine Learning/Expert Systems have been doing for a while now?

@michaelcoyote @androcat @Wolven Yes, exactly. But with a far higher initial effort. That's why it is disruptive here. You just need a prompt, which anybody could come up with, and it works right away.
@Wolven big tech really needs to just stop with AI assistant tech. It doesn't work and it's going to completely undo any progress we've made trying to deal with climate change.
@Jennifer @Wolven We need to decarbonize the grid, pronto, AI or not
@quantensalat @Wolven sure but does anyone think that will actually happen? I don't.
@Jennifer @Wolven In that case, AI is the least of our problems imho
@quantensalat @Wolven oh I agree, I'm unfortunately in the "we're doomed" camp. I guess AI is just going to speed up reaching a climate apocalypse since our governments favor helping businesses keep making money over solving climate change.
@Wolven Yes. If OpenAI isn’t paying Apple for the integration into the phones, do they hope to use the AI in the phone to make money for OpenAI or Microsoft?
@Wolven yeah i think summaries might be an actual use case, for example

@nora @Wolven I know people who have used GenAI to take things they understand and format them into Appropriate Business Speak and it does that pretty well.

The overzealous mansplaining makes it worthless for anything where you're depending on it for knowledge.

@dagnymol @nora @Wolven I notice it as people trying to dismiss the technology by ignoring its strengths and focusing on their weak points, even if there are people working on those.

Why? I guess a lot of people are genuinely trying to warn people of them being badly used (which is something really common!) but I think a lot of people are afraid of them stealing jobs. As a hobbyist programmer, I love to see automation, yesterday I even took some of my free time to automate a boring part of my job. Thus I'm biased to like those things, I'm not afraid of it, I guess many people are.

@qgustavor @dagnymol @nora @Wolven If an observation like "Gosh this doesn't actually function as advertised, does it?" is just dismissive "focusing on the weak points," what could a solid criticism be?
@WesternInfidels @dagnymol @nora @Wolven It depends on who's advertising: many people are not overstating the features and are making the drawbacks clear. I guess many people don't read the fine print.

@qgustavor @dagnymol @nora @Wolven If there's "fine print" involved, it's because someone is trying to make things *less* clear. That's what fine print is for. This is not even an argument about the tech, it's about manipulation of people, it's about power. People *don't* tend to read fine print, literally and metaphorically. They shouldn't have to.

When Google hooks an LLM up to its search page, that sends a message. About what Google intended it for. What Google thinks it's good at. If a user can only clear things up by having the initiative to read the "fine print," that's a deliberate act of deception on Google's part.

In any case, observing that the LLMs aren't very good at answering the kinds of factual questions we might use a search engine for isn't dismissive or nit-picky, it's very much a core issue. "This 'wrench' is actually just a hammer" isn't dismissive or nit-picky, either, even if there's a research team working on "Hammer 2.0: Wrench Now" even as we speak.

@WesternInfidels @dagnymol @nora @Wolven Google search was bad even before this LLM thing: last year my parents had to spend hundreds because of its horrible Smart Answers, which misquoted a website. That being said, do I blame Google for my parents' mistake? No, I warned them and they didn't listen to me. But it's a free market: as long as someone makes something better, and there are people working on that, either Google fixes this or people will stop using it, whether they learn from others' mistakes or from their own.

@dagnymol @nora @Wolven

Ha ha... Perfect.

"Overzealous mansplaining".

I have been struggling to figure out what I hate most about ChatGPT et al.

Thank you.

@Wolven
We talked about this in cultural anth too, informed by Haraway's cyborg theory. The students were AI-critical.
@Wolven Exactly! I would absolutely use these tools to help me rewrite my own texts (especially helpful for us who don’t have English as our first language but have to write academic papers and research proposals in English all the time, which obviously puts us at a great disadvantage) IF they didn’t have such a horrendous impact on the environment and IF they weren’t built in such unethical ways.
But they are, so I don’t.

@Wolven
@inthehands

Be careful using generative "AI" to spitball and vomit draft initial versions.

The danger is that many of the biases and falsehoods will be baked into that initial draft. Yes, it is possible to correct them, but it can also be hard to see them in that initial text. Just as many people argue that fixing bugs in generated code is harder than writing good code in the first place, it is often hard to see implicit biases hidden in text.

What I have seen work is to write the initial text yourself and then ask generative "AI" to do something to the text, which gives you a new perspective. This is analogous to the classic poetry writing technique of inverting every word in a poem, then taking both together and seeing what you make of the combination.

(Of course, the concerns about environmental costs and stolen corpora still hold.)

@Wolven I really want AI to be awesome and useful, but everything being marketed right now is just absolute garbage. I think it's probably already ruined the term "AI" for any serious work.

Even if we ignore the environmental and ethical concerns (which we shouldn't), AI isn't even good at most of the purposes they're pushing for it to be used for. AI seems best at providing fuzzy answers to vague questions, not truthful answers to specific questions.

@Wolven Yes. I should have read this before responding to your last post.
How does a teacher then respond to a paper written with a little or a lot of AI in it?
In some ways, it might look better than a fully human paper, based on human reading and thinking. So are you grading looks, or reading and thinking?
@Wolven Who argues AI should "fully *Replace* thought, feeling, and creativity"?

@Wolven

It’s not hard to understand really 😱

@Wolven if you’re less than mediocre at a task, then LLMs might be an improvement.

@Colman @Wolven
Thing is, if one is less than mediocre at a task then possibly one also cannot tell whether LLMs are an improvement. So many information retrieval topics involve genuine consequences.

I'm sure there are genuinely sensible places for 'user-prompted fluent-appearing text generation'. I'm just not sure that it's in information retrieval *as such*. Negotiating the question and surfacing the outputs of an information retrieval system, sort of a RAG scenario, maybe.

@emmatonkin @Colman @Wolven
This matches my - very limited - experiences with LLMs so far.
When I’m asking about something I have a certain expertise in, the responses tend to be trivial or faulty.
When I’m asking about something I don’t know much about, the responses look OK.
That makes me wary about their usefulness regarding knowledge based use cases. It also makes me wary about people whose first reaction to something they don’t know is asking chatGPT.
Unfortunately, I’ve already experienced several consultants acting that way.

@Wolven

Unless you like spaghetti cooked in gasoline.

@Wolven It used to be that the absolute baseline function of any search engine (which LLMs functionally are) was its ability to accurately return relevant search results.

It's wild to us that anybody would accept a calculator that cannot accurately do simple arithmetic, and that is exactly where "AI" and LLMs are currently.

@Wolven Off to write a SciFi script about a machine overlord that is humored because it is so hilariously out of touch with reality. Hence it has nightly broadcasts and legions of fanatics.
@Wolven @fifilamoura you can, very easily. Turn it off and the lies stop.

@Wolven

So. AI and Trump have that in common

@Wolven search has never been guaranteed to give you simple true facts about consensus reality, back to the Yahoo and AltaVista days. I think the idea that search engines do not work and should not be used as a result is unreasonable.
@crschmidt @Wolven when did search return 'put glue on your pizza' as a recipe before now?
@crumbleneedy @Wolven The standard set was "guaranteed to give you simple true facts about consensus reality". Google search has always had the ability to return bad results, and those results have appeared normative at least as long as Featured Snippets have been around (so at least a decade). Bing implemented features to display StackOverflow snippets directly in the search result (and sometimes picked the wrong answer). There are no guarantees that search is returning the correct thing.
@crumbleneedy @Wolven For example, the answer to "How many rocks should you eat per day" has had a Featured Snippet telling you to eat 3 servings of rocks per day for years. That isn't new, and it isn't AI (even if people see it surfaced through AI). But the fact that search can give you bad results doesn't mean that using search is worse, overall, than not using search.

@crschmidt You've read Algorithms of Oppression, right? Because the use of algorithms to arrange search results due to profit motive has been an issue for a long time, and everything I said about LLMs? Yeah, I and many others long ago wrote about how that first applied to the original parameters of search.

But you apparently work for alphabet/google; you know all this. Bye.
@crumbleneedy

@Wolven @crumbleneedy I have not read Algorithms of Oppression, though I agree with the broad summary as I understand it.

But if you believe that search engines "SHOULD NOT BE USED" to find information on the internet, what do you think people should do in order to find information on the internet?

My opinion is that any time you're using a tool, you need to know the limitations of using that tool towards your goals. Search engines are a tool, like any other, in that regard.

@crschmidt @Wolven @crumbleneedy The difference is that traditional search engines exposed their limitations in a much more honest way that could be reasoned about with a modest amount of critical thinking. The algorithms have been ratcheting up the obfuscation of those limitations for years now.

@crschmidt @Wolven @crumbleneedy The way that search engines meet the stated criteria, IMO, is that they were presenting consensus reality in a reliable, predictable way by saying, “here are links relevant to the topic that you can visit and determine the value of”. Yes, plenty of falsehoods and bullshit in those links, but a search engine was not saying, “here is the TRUTH”, just, “here are some places to look for it”.

We got to a vastly different place by degrees.

@crschmidt @Wolven there's a world of difference between providing a set of links for the user to sift through and make informed decisions about, versus a summary at the top of your search results confidently declaring nonsense as a Google-endorsed "factual" summary
@Wolven In my experience, generative models are most useful for transforming text from one format to another. I've used it successfully to create TikZ figures, even though I don't know the syntax. I describe the figure in precise detail, and it converts my description into code, usually correctly. I don't use it for anything that requires creativity.

@Wolven Did you seriously type 'look at me, listen to me' on a social media post?

It's pompous and grandiose. Your audience are adults, not naughty, inattentive children. Don't speak to us that way. Don't be such an asshole.

@freezeanopensore Did you seriously just come into the mentions of someone you *Do Not Know* to berate them for a stylistic writing choice within their *Own Space*, when you could have just… gone about your business and lived your life… and then call *That Person* an asshole?

That takes some serious lack of introspection and gall, I gotta say. Good bye.

@Wolven Yeah, I did, because you're being an asshole in a public space. I realise you're not used to someone confronting you with how your own behaviour looks to your audience, but that's probably why you act like that.

I know you want a little kingdom of your own but that's not how this works.

@freezeanopensore
Holy shit.

The only thing here that is pompous and inappropriately phrased as if to a child is this reply. What you wrote is simply not appropriate. Replies like this are the reason people flee Mastodon.

I’m sure you’re going to reply with something that prompts a block, but before you do, I wanted you to hear a third party here saying “YTA.”