"Why does it say the hospital is equipped for stroke emergency? We were there. They denied!"

"Maybe you should contact them that their website is wrong. This is dangerous."

"It wasn't on their website..."
*starts googling a specific question*
"Weird now it says no instead of yes."

I go to take a look and realise with horror that, yes: it's the Google AI summary.

Google AI summary made my parents-in-law visit the wrong, unequipped hospital for a potential stroke emergency. 🙃

@krautdragon

This is horrible, I'm so sorry 😞

It's not enough for LLMs to have disclaimers at the bottom; they shouldn't be on Google etc. if the information isn't reliable.

Maybe LLMs need to be specifically banned from giving medical advice?

@FediThing meanwhile in Germany:

https://www.aerzteblatt.de/archiv/ki-in-der-hausarztpraxis-verband-sieht-grosse-chancen-8627368e-ca9a-4209-a9d5-5346170718e0

> The Association of Family Doctors sees enormous potential in the targeted use of artificial intelligence to significantly relieve the burden on medical practices in view of the enormous pressure on healthcare provision. This is according to a recently published position paper by the association. According to the paper, AI can provide support primarily in administrative tasks, diagnostics, and interaction with patients.

@krautdragon

@cybso @FediThing @krautdragon AI is not synonymous with LLM, even though the current marketing hype makes it feel that way. LLMs are generative models that produce plausible-looking text, but there are other branches of AI research and development that get much more reliable results. The entire field of machine learning is really useful for pattern recognition, and it has been used at scale in all kinds of software and firmware for a long time. It is AI, just not aggressively marketed as such.
Of course AI can be very useful. It's just that large language models aren't nearly as useful as the marketing divisions of AI companies tell us, and they're also computationally expensive, which means they will probably play a much smaller role in the near future, being replaced by more traditional natural language processing systems that use some kind of symbolic logic for reasoning.
Symbolic AI is an entirely different branch of AI R&D: the old-fashioned kind where humans analyse all kinds of problems and write down the mathematical and logical rules for solving them methodically. The current approach of doing it all with machine learning, with larger and larger artificial neural networks, is just too expensive, uses too much power, and produces too much bullshit.
A swarm of small artificial neural networks, each of them specialised for a single task, all of them glued together by a symbolic logic framework, that's the way to go (see the sketch below). For parts of the natural language processing tasks, small LLMs (much smaller than those GPT ones) will be used, hedged in by other AI agents that provide strict logical reasoning and fact checks. And the entire system should be small enough to run on the local machine, no computing centre needed.
Of course this means that the entire LLM-based AI business is finished. When the bubble bursts, it will be worse than the Subprime Mortgage Crisis. But we will still have all the AI research results from this bubble, and even when the humongous LLMs shut down due to bankruptcy, all the small and small-ish open source models will still be around for everyone to tinker with.
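A minimal toy sketch of what that "hedged" architecture could look like. This is purely illustrative: the component names, the tiny knowledge base, and the canned generative "model" are all invented for this example, not any real framework's API.

```python
# Hypothetical sketch: a small generative model hedged in by a symbolic
# fact-checking layer. Every name below is an illustrative stand-in.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    value: bool


# Symbolic layer: hand-curated facts, the "old-fashioned" rule-based part.
KNOWLEDGE_BASE = {
    ("hospital_x", "equipped_for_stroke_emergencies"): False,
}


def tiny_llm(question: str) -> Claim:
    """Stand-in for a small generative model: returns a plausible-looking
    but unverified claim (here, a canned wrong answer, like the AI summary
    in the story above)."""
    return Claim("hospital_x", "equipped_for_stroke_emergencies", True)


def fact_check(claim: Claim) -> Optional[bool]:
    """Symbolic agent: looks the claim up in the curated facts.
    True/False if the knowledge base knows the answer, None if it doesn't."""
    return KNOWLEDGE_BASE.get((claim.subject, claim.predicate))


def answer(question: str) -> str:
    """Orchestrator: generative output is never shown to the user unless
    the symbolic layer can confirm it; unknowns are refused, not guessed."""
    claim = tiny_llm(question)
    known = fact_check(claim)
    if known is None:
        return "Can't verify that. Please contact the hospital directly."
    if known != claim.value:
        return f"{claim.subject}: {claim.predicate} is {known} (model output corrected)."
    return f"{claim.subject}: {claim.predicate} is {claim.value} (verified)."


print(answer("Is hospital X equipped for stroke emergencies?"))
```

The point is the control flow, not the toy components: the generative part never answers the user directly, and the symbolic layer either confirms, corrects, or refuses, which is exactly the guard rail missing from the hospital story at the top of this thread.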

@LordCaramac @cybso @FediThing @krautdragon it's a veeerrrry blurred line though. "AI", "genAI" and "LLM" are all being used interchangeably.

I am constantly trying to understand what is actually meant by the various internal press releases of my own company when they tout yet another pointless service or use of AI in a project. Unfortunately, when you dig down deep it often turns out they really do mean an LLM, and not specialised AI, to deliver some kind of information tool.

@mossman

It's used interchangeably because most people simply don't have a clue and repeat bits they've heard somewhere, which leads to others repeating that, and so on. Laziness in language use is one of the bigger problems these days.

In the first place, calling self-learning algorithms Artificial Intelligences is already wrong, since intelligence is a cognitive process that requires making connections between different pieces of information and knowledge. An algorithm is incapable of that, and thus it's not intelligent. 🤷‍♂️

@LordCaramac @cybso @FediThing @krautdragon

@TobiWanKenobi I should add that when my company decided to go all-in on the "AI" thing (and this was a few months before it really hit the headlines) there were some internal conference calls about all the upcoming plans and how we should be trying to apply "AI" everywhere as much as possible.

I'm in engineering, so I half-joked "surely I can't find an application of genAI to calculate stress in a component or design a part!?" and was pretty firmly smacked down for not understanding the difference between LLMs and the "real AI" to be used in our business.

A few years later and I'm constantly shaking my head at the next announced "agentic AI solution" being deployed on our intranet to "help us" on client projects. It's all chatbot, all the time.

Needless to say I know of not a single colleague successfully making use of any of these tools for anything except document summaries etc.

@mossman

My biggest gripe with this whole marketing thing about self-learning algorithms is not so much their (often lacking) usefulness. I do understand that there are aspects like pattern recognition for processing vast amounts of data, which can be a big help in some fields like law, medicine, etc.

The problem we all should have with this corporate-driven marketing slop is the huge amounts of energy it requires. Because nowadays more energy means more fossil emissions. And the last thing we need is even more fossil emissions just so that some corporations and the oligarch owners behind them can make even more money for features no one needs.

@TobiWanKenobi people have tried to bring that up in conference calls as well, since another thing my company is big on, and genuinely pushing hard for, is net zero as fast as possible. It seems most of us sympathise with the sustainability people, who can't really fight back on this topic.

@mossman
To get back on topic: I guess AI in the sense of pattern recognition is quite good and is already used for things like detecting skin cancer. That, though, is not new; it has been in use for some time and can be a genuinely good application. There's uncertainty, of course, but a positive result would in any case prompt you to contact a real doctor.

LLMs certainly are no real solution.
@TobiWanKenobi