Andre

@andrekaiser
83 Followers
281 Following
722 Posts
Ugh, what does the Netflix player on #AppleTV look like now!? 🙈

By Olga Beliaeva

#Art #Painting

Hmm, @castropodcasts doesn't seem to be showing any new episodes today. Has anyone else noticed that?
#iOS #Podcast
My solar installer is insolvent and has gone missing. Which is a shame. We have a Fronius Symo 8.2.3 from 2019 running here.
Does anyone have experience with whether and how you can get a service-level password reset through the manufacturer?
Boosts welcome
#fronius #pvbuddies #renewableenergy
@CARROT Is there no animation for rainy weather in the Sky theme? The app says it's raining, but it shows a clear, slightly cloudy sky.

RE: https://mastodontech.de/@bullubu/115950049872937179

There's now - finally - a web version as well 👍🏻

Sequel 2.7 is out now, and it's all about finding what you're looking for, whether it's in our database or not.

🔍 The brand new search engine delivers more relevant results.

✏️ Can't find something? Add it manually, so nothing falls through the cracks.

🙈 Hide the media types you don't use for a more focused experience.

🎭 Comprehensive series credits with their creators and full cast across all seasons.
https://apps.apple.com/app/sequel/id1630746993

So, purely hypothetically, suppose there were a #podcast that takes a light and easygoing look at #3DDruck (3D printing). What topics would you find interesting? Please share for reach.
the only thing #apple can announce next week that's worth my time is a working #ios keyboard. i don't even care about dumb #siri anymore.
I cannot emphasise this enough. Do not use chatbots for medical advice.

And no, it does not matter if the product is named something something "health".

« In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

The platform was also nearly 12 times more likely to downplay symptoms because the “patient” told it a “friend” in the scenario suggested it was nothing serious. »

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies
‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

The Guardian