"Why does it say the hospital is equipped for stroke emergency? We were there. They denied!"

"Maybe you should contact them that their website is wrong. This is dangerous."

"It wasn't on their website..."
*starts googling a specific question*
"Weird now it says no instead of yes."

I go to take a look and realise with horror, yes: Google AI summary.

Google AI summary made my parents-in-law visit the wrong, unequipped hospital for a potential stroke emergency. 🙃

I wish I had a screenshot of the google request + result. I feel stuck in a sitcom. Thankfully nothing horrible happened from this delay. But I'm so tired of this ✨ beacon of civilised humanity ✨
@krautdragon I'm so sorry. Not surprised - I notice this frequently. For example two conflicting answers when I searched for whether you'd get the bends going into a vacuum:

@bluetea

I know I'm missing the point here, but the question sniped me.

This one's tricky because the decompression necessary to trigger the bends is very close to 1 atm. So it depends on how well pressurized Marco kept his ship. The books make reference to belters running their ships in lean atmospheres, but it's never clear what that means beyond low O2 content.

Regardless, the treatment for the bends is repressurization, so Naomi would have at least had the impacts start to resolve when she made it back through the airlock on the other side, albeit slowly. IIRC the book tells of her being basically barely able to function on the other side & injured in a number of ways by the transit through vacuum.

@lackthereof ah interesting. Yeah I was curious. The symptoms she experienced didn't seem to be decompression in terms of the nitrogen (?) bubbles that I vaguely recall from TV shows about diving. But decompression from one atmosphere and the massive pressures experienced by divers are at different ends of the spectrum, so I imagine different mechanics are going on.
@lackthereof (and how good is this series? I love The Expanse so much. Generally the research seems very good, certainly from my layperson's perspective)

@krautdragon Oh, that is so terrifying, like a "Black Mirror" episode IRL

Hope that all is OK now.

@evgandr Yes, it feels so surreal and unnecessary. :(

Thankfully the person in question seems fine so far and is now being checked out at another hospital.

@krautdragon

This is horrible, I'm so sorry 😞

It's not enough for LLMs to have disclaimers at the bottom, they shouldn't be on Google etc if the information isn't reliable.

Maybe LLMs need to be specifically banned from giving medical advice?

@FediThing meanwhile in Germany:

https://www.aerzteblatt.de/archiv/ki-in-der-hausarztpraxis-verband-sieht-grosse-chancen-8627368e-ca9a-4209-a9d5-5346170718e0

> The Association of Family Doctors sees enormous potential in the targeted use of artificial intelligence to significantly relieve the burden on medical practices in view of the enormous pressure on healthcare provision. This is according to a recently published position paper by the association. According to the paper, AI can provide support primarily in administrative tasks, diagnostics, and interaction with patients.

@krautdragon

@cybso @FediThing @krautdragon AI is not synonymous with LLM, even though the current marketing hype makes it feel that way. LLMs are generative models that produce plausible looking texts, but there are other branches of AI research and development which get much more reliable results. The entire field of machine learning is really useful for pattern recognition, and it has been used on a large scale in all kinds of software and firmware for a long time. It is AI, just not aggressively marketed as such.
Of course AI can be very useful. It's just that large language models aren't nearly as useful as the marketing divisions of AI companies tell us, and they're also computationally expensive, which means they will probably play a much smaller role in the near future, getting replaced by more traditional natural language processing systems that use some kind of symbolic logic for reasoning.
Symbolic AI is an entirely different branch of AI R&D. Symbolic AI is the old-fashioned kind where humans analyse all kinds of problems and write down the mathematical and logical rules on how to solve them methodically. The current approach of doing it all in machine learning, doing it all with larger and larger artificial neural networks, is just too expensive, uses too much power, and produces too much bullshit. A swarm of small artificial neural networks, each of them specialised for a single task, all of them glued together by a symbolic logic framework, that's the way to go. For parts of the natural language processing tasks, small LLMs (much smaller than the GPT ones) will be used, hedged in by other AI agents that provide strict logical reasoning and fact checks. And the entire system should be small enough to run on the local machine, no computing centre needed.
Of course this means that the entire LLM-based AI business is finished. When the bubble bursts, it will be worse than the Subprime Mortgage Crisis. But we will still have all the AI research results from this bubble, and even when the humongous LLMs shut down due to bankruptcy, all the small and small-ish open source models will still be around for everyone to tinker with.
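To make the "swarm of small specialised models glued together by symbolic logic" idea above a bit more concrete, here is a minimal toy sketch. Everything in it is hypothetical: the routing rules, the specialist functions, and the names are stand-ins invented for illustration, not any real framework or product.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a symbolic dispatcher routes queries to small,
# task-specific handlers, with a hard rule for anything safety-critical.
# None of these names refer to a real library; they are placeholders.

@dataclass
class Specialist:
    name: str
    matches: Callable[[str], bool]  # symbolic rule: does this query belong to me?
    answer: Callable[[str], str]    # stand-in for a small specialised model or a vetted lookup

def is_medical_emergency(query: str) -> bool:
    return any(word in query.lower() for word in ("stroke", "heart attack", "emergency"))

def emergency_answer(query: str) -> str:
    # Safety-critical path: no generative guessing, just a fixed, vetted response.
    return "Call the local emergency number. Do not rely on a web summary."

def fallback_matches(query: str) -> bool:
    return True

def fallback_answer(query: str) -> str:
    return "No specialist available for this question."

SPECIALISTS = [
    Specialist("emergency", is_medical_emergency, emergency_answer),
    Specialist("fallback", fallback_matches, fallback_answer),
]

def respond(query: str) -> str:
    """Dispatch by symbolic rules: the first matching specialist answers."""
    for s in SPECIALISTS:
        if s.matches(query):
            return f"[{s.name}] {s.answer(query)}"
    return "[none] No answer."

if __name__ == "__main__":
    print(respond("Is the nearest hospital equipped for a stroke emergency?"))
    print(respond("What is the capital of France?"))
```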

@LordCaramac @cybso @FediThing @krautdragon it's a veeerrrry blurred line though. "AI", "genAI" and "LLM" are all being used interchangeably.

I am constantly trying to understand what is actually meant by the various internal press releases of my own company when they espouse yet another pointless service or use of AI in a project. Unfortunately, and to my disappointment, when you dig down deep it often turns out they really do mean use of an LLM and not specialised AI to deliver some kind of information tool.

@mossman

It's used interchangeably because most people simply don't have a clue and repeat some bits they've heard somewhere, resulting in others repeating that and so on. Laziness in language usage is one of the bigger problems these days.

In the first place, calling self-learning algorithms Artificial Intelligences is already wrong since intelligence is a cognitive process that requires making connections between different pieces of information/knowledge. An algorithm is incapable of that and thus it's not intelligent. 🤷‍♂️

@LordCaramac @cybso @FediThing @krautdragon

@TobiWanKenobi I should add that when my company decided to go all-in on the "AI" thing (and this was a few months before it really hit the headlines) there were some internal conference calls about all the upcoming plans and how we should be trying to apply "AI" everywhere as much as possible.

I'm in engineering, so I half-joked "surely I can't find an application of genAI to calculate stress in a component or design a part!?" and was pretty firmly smacked down for not understanding the difference between LLMs and the "real AI" to be used in our business.

A few years later and I'm constantly shaking my head at the next announced "agentic AI solution" being deployed on our intranet to "help us" on client projects. It's all chatbot, all the time.

Needless to say I know of not a single colleague successfully making use of any of these tools for anything except document summaries etc.

@mossman

My biggest gripe with this whole marketing thing about self-learning algorithms is not so much their (often lacking) usefulness. I do understand that there are some aspects like pattern recognition to process vast amounts of data which can be a big help in some fields like law, medicine, etc.

The problem we all should have with this corporate-driven marketing slop is the huge amounts of energy it requires. Because nowadays more energy means more fossil emissions. And the last thing we need is even more fossil emissions just so that some corporations and the oligarch owners behind them can make even more money for features no one needs.

@TobiWanKenobi people have tried to bring that up in conference calls as well, since another thing my company is big on - which they are genuinely trying to push hard for - is net zero as fast as possible. It seems most of us sympathise with the sustainability people who can't really fight back on this topic.

@mossman
To get back to the topic: I guess AI in the sense of pattern recognition is quite good and is already used for things like detecting skin cancer. That's not new, it has been in use for some time, and it can be a genuinely good application. Of course there's uncertainty, but a positive result would prompt you to contact a real doctor anyway.

LLMs certainly are no real solution.
@TobiWanKenobi

@LordCaramac @cybso @FediThing @krautdragon
It is not now, never has been, and never WILL be artificial "intelligence".
Machine learning is a fair term of art for predictive computational modeling that improves with additional data, but that isn't intelligence. There is no #AI.
I've been modeling complex earth processes for literal decades. Most models have always relied on, and been weighted to favor, new, more accurate and relevant data. The term AI is marketing hype by Musk types.
@Okanogen @cybso @FediThing @krautdragon Artificial Intelligence is a term for those branches of Computer Science that are concerned with solving complex problems for which humans use their intelligence.
Also, there are actually parts of AI research that can produce intelligent agents. In order to become truly intelligent (not as intelligent as a human, but as intelligent as a beetle or a worm), an agent needs to be autonomous, and it needs to be able to learn continuously from its interactions with its environment. Autonomous robotics research is a field where people work on such types of artificial intelligences.
Human-level intelligence is probably not possible with the type of hardware we use today; we haven't got enough computing power to run a simulation of even a fraction of the human brain, even if every processor on this planet did nothing else. Also, nobody wants fully autonomous agents to become too intelligent, because then nobody would be able to control them anymore. Just look at what the more intelligent wild animals do, how they cause all kinds of mayhem just because they can. Gangs of monkeys stealing food from supermarket shoppers in Asia. Keas opening rubbish bins. Now imagine what an autonomous robot with dexterous hands could do. It doesn't need to have human-level intelligence to cause a lot of damage.
@LordCaramac @cybso @FediThing @krautdragon
These machine learning models do not "solve" complex problems. They output analysis or present potential solutions to complex problems. This can be valuable, but PEOPLE solve complex problems, sometimes with the help of these models. But at extreme cost and often creating new problems.
@Okanogen @cybso @FediThing @krautdragon Chess is a complex problem, yet it is completely machine readable, since it takes place in an abstract mathematical space of simple rules. And solving games like chess with computers is also considered "artificial intelligence". The term artificial intelligence doesn't mean that the machine is anything like human intelligence, it just means that it can do something humans would do with their intelligence. Just like artificial sweetener isn't sugar but still tastes sweet.
Artificial intelligence is just the name some people in the 1950s invented for a whole list of entirely different and unrelated branches of Computer Science because it sounded futuristic and sci-fi and made people dream of electronic brains. The name just stuck.
And of course machines can solve complex problems. However, they can only solve the specific type(s) of problem for which they were built. LLMs were built to transform some text input into the most statistically likely text output, which is indeed a very complex problem, and they are very good at it. The only problem is that people are made to believe that systems able to solve any kind of problem whatsoever are just around the corner.
@LordCaramac
Chess isn't a problem, it is a game; there is no solution to chess.
I appreciate you say "AI" isn't human intelligence, but it isn't any intelligence at all. Yours is basically a religious argument and I know there is no changing your mind, but no matter how sophisticated, computer models have no properties associated with intelligence: they lack understanding and self-awareness, cannot reason, are incapable of abstraction or critical thinking, can't plan or think creatively, etc.
@Okanogen Look at autonomous robots. Those actually need a certain type of self-awareness, they need a virtual representation of themselves and their environment to plan and execute their interactions with their environment. They need to monitor what they are doing, what the things they can sense are doing, and if that is still consistent with the simulation of their surroundings or not.
We're still at the stage of something like ants. Nothing even close to humans. But if you look at autonomous robots, especially those designed to work in environments with animals or humans or those designed to work in swarms, they build abstract representations, 4D maps of what is probably going on around them based on sensor data. They plot their own future trajectories through those maps.
I mean, seriously, look at what autonomous robots can do today, and look at how fast the entire field has been progressing lately. Ignore self-driving cars; we won't have anything intelligent enough for that while still affordable any time soon, if ever. Right now they still need a human remote operator sitting in front of a screen for emergencies, which means they're just a PR stunt and not making any money, since the hardware is very expensive and the remote operator gets paid much more than a taxi driver. Look at autonomous drone swarms, look at robot sports, look at 3D and 4D mapping in the field, using drones to build computer models of forests in (almost) realtime. There is no reason to be afraid of some artificial gods yet, but there is a lot of technology out there that will give our tools abilities we wouldn't have thought possible twenty years ago. Tools that enable small teams or even single humans to do in weeks or even days what used to take a large team months or even years. Also, imagine the glitches and accidents that could happen. The world may become a very much weirder place very soon.
@LordCaramac
People need to stop anthropomorphizing software and mechanical systems.
I know how much better automation and modelling have become because I have literally been doing that since the 1980s. But these are just computed models based on human input and direction.
Like I said, this is a religious, metaphysical stance, not a reality-bound one. It does fit the techbros' VC narratives, tho.
@cybso @FediThing @krautdragon they will quickly change their tune when liability cases start rolling in.
@FediThing @krautdragon LLMs need to be specifically banned from giving advice
@krautdragon Google has lots of money. Just post an open letter on an attorney blog site. You'll have 6000 responses within an hour. Want to be the lead in a future class action? This would be it.
@krautdragon Barf city. You'd think a lawyer would *have a field day* with that.
@krautdragon sadly, my family and i have visited medical facilities that listed specialties directly on their website and they have disavowed them to our faces while there. either the information was out of date, or the particular doctor we were assigned to was not familiar with whatever condition and so decided the whole facility was not equipped to handle diagnosis and treatment thereof.
@krautdragon I would imagine that if you ask a yes/no question, the response is likely to start with a yes or no based on random number generation, with the rest of the text following that lead. A random change early in the text is going to have a massive effect on the output, so the possibility space of words used in the response ends up with two very dissimilar clusters depending on the starting yes/no.
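To illustrate the point above, here is a minimal toy sketch of how an early sampled token can steer the whole answer. It is not how any real search summary actually works; the probabilities, tokens, and continuations are made up purely for illustration.

```python
import random

# Toy illustration only: a hand-made "model" whose continuation depends
# entirely on whichever first token gets sampled. Real LLMs sample from
# learned probabilities over huge vocabularies, but the cascade effect is
# the same: an early draw steers everything that follows.
FIRST_TOKEN_PROBS = {"Yes,": 0.55, "No,": 0.45}  # near coin-flip on an uncertain question

CONTINUATIONS = {
    "Yes,": "this hospital is equipped for stroke emergencies.",
    "No,": "this hospital is not equipped for stroke emergencies.",
}

def sample_answer(rng: random.Random) -> str:
    """Sample the first token, then emit the continuation that follows from it."""
    tokens = list(FIRST_TOKEN_PROBS)
    weights = list(FIRST_TOKEN_PROBS.values())
    first = rng.choices(tokens, weights=weights, k=1)[0]
    return f"{first} {CONTINUATIONS[first]}"

if __name__ == "__main__":
    # Different seeds (or simply re-running) can produce flatly contradictory answers.
    for seed in (1, 2, 3):
        print(f"seed={seed}: {sample_answer(random.Random(seed))}")
```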
@krautdragon
If they can't deal with a stroke they are not a hospital. 🧀
@AnnyJoe I am in disbelief too. They told them to visit another hospital in the city that is better equipped for emergencies happening within the brain. That's two systems failing imo. One system (the hospital) should be better equipped then!? The other imo should fuck off entirely. 😅

@krautdragon
Sorry, but this is the left side hospital.

You need to find your way to the right side hospital, as it's clear you have broken your right arm.

Please check your spam folder for the survey we are about to email you. 🍄

@AnnyJoe
*you have broken your left arm

(it works like your brain, you know? 😅)
@krautdragon

@AnnyJoe @krautdragon exactly! What kind of sixth rate "hospital" can't handle strokes? 🤯
@krautdragon this calls for a lawsuit. Or is reckless endangerment an actual crime they can be arrested for?
@krautdragon btw these companies really fought hard to not be considered as publishers. What else are these summaries, if not publishing?

@krautdragon
(at least in the states) always, always call 911 when you suspect a stroke.

1. time is brain - they may be able to start treatment en route to the hospital

2. websites can be out of date, or just plain wrong: hospitals can be closed, under construction, short on staff, or no longer providing the care you need. (the MRI or CT scanner is broken)

3. even if a hospital is a stroke center they could be on diversion*. EMS will know this. Unless you are listening to a dispatch scanner, you won't know this.

*diversion - for example our hospitals here have 3 types of diversion:
stroke
STEMI
trauma
this means that the hospital already has all the patients they can handle for that specific issue. And while maybe they won't turn walk-ins away at the door, it may take longer to reach definitive care. (Ambulances will transport to the next nearest facility not on diversion for said ailment.)

@MsMerope @krautdragon This is very true in Australia too (but the number is 000). Re point 1: Some ambulances here have mobile CT scanners so they can start treatment on the way.
@krautdragon AI WILL kill people. Or put them in jail for a long time ("Is drugs legal in country X?")

@krautdragon That's so terrible, I'm so sorry.

It reminds me of the Designing for Crisis talk I heard @Meyerweb give years ago in Orlando

Obviously no one in AI has thought a bit about such matters, and it's going to cost lives

https://meyerweb.com/eric/thoughts/2016/01/25/designing-for-crisis-design-for-real-life/


@krautdragon It's the whole "pleasantly supportive" thing that's programmed into so many of these tools. I'm so glad your family members are safe.
@krautdragon brb Googling for how to dial 112 on a web browser
@antiphase ngl I found it stupid as well, but I guess they wanted to find out if the closest hospital is specialised, and the patient in question is very weird about calling for help...
@krautdragon drag them to court and sue them. that is how they are going to learn it.
@krautdragon AI bros are literally evil and are getting people killed with their fashslop dysto-tech.

@krautdragon

Sue Google. "Attractive nuisance" is one potential legal term: they say they're useful, they encourage us, you, to use Google as a resource, then bait and switch. Endangering lives.

This could, and would, be a momentous case. Hope someone has money, or lawyers with guts.

@krautdragon
Google or (😖) Gemini?
@NorCalWineLady that summary thing you get unrequested when doing what used to be regular googling in the browser.
@krautdragon
These days, I believe it's Gemini, the newer AI version of Google.
@krautdragon Not as critical - had the same happen to me. I asked about paddle boarding on a reservoir near me. Instead of loading up the links for the rez, it loaded the summary, which indicated what a fabulous place for paddling it is. When I opened the actual link (that was down the page!), it clearly stated, "NO paddle boarding".
@krautdragon welcome to the Google, ChatGPT, etc. world.
@krautdragon Google Gemini/AI is a clownshow/clusterfuck in terms of ability to trust anything it ever says. I used it heavily this year and it flushed down the toilet any remaining respect I had left for the quality of Google engineering. Shame on everyone who helped make it and ship it in its current form. Riddled with lies and well-known UI/UX anti-patterns. yikes
@krautdragon AI is leading so many people down the wrong road.