Big tech is moving into your medical records. Microsoft’s Copilot Health will soon merge Apple Watch data with clinical histories to "connect the dots" in seconds. It’s an efficiency win, but the privacy stakes are high: HIPAA doesn't apply here, and AI bias hasn't been solved yet. #MedSky #MedAI

A.I. Chatbots Want Your Health Records. Tread Carefully.

Following rivals like Amazon and OpenAI, Microsoft is upgrading its artificially intelligent assistant to track your health. There are benefits and risks to consider.

The New York Times
A new JAMA report concludes that medicine is "flying blind" on AI. With widespread adoption but little proof of improved outcomes, it calls for standards for validation and oversight to close the gap between innovation and evidence. #MedSky #MedAI #MLSky

JAMA: Medicine is flying blind on AI

A year after its landmark artificial intelligence summit, JAMA says health systems are deploying unproven algorithms with little evidence they improve outcomes — or even do no harm.

MedicalEconomics
Current benchmarks for medical LLMs create an "evaluation illusion." They use simplified data and tasks that don't reflect complex clinical reality. Automated metrics also fail to assess safety and utility, meaning high scores don't translate to real-world value. #MedSky #MedAI #MLSky

The evaluation illusion of large language models in medicine - npj Digital Medicine

While large language models (LLMs) hold promise for transforming clinical healthcare, current comparisons and benchmark evaluations of LLMs in medicine often fail to capture real-world efficacy. Specifically, we highlight how key discrepancies arising from choices of data, tasks, and metrics can limit meaningful assessment of translational impact and lead to misleading conclusions. Therefore, we advocate for rigorous, context-aware evaluations and experimental transparency across both research and deployment.

Nature

Introducing MedAI, a new Android app that uses Gemini AI to summarize handwritten prescriptions! Just snap a photo of a prescription and the AI extracts drug names, dosages, and usage schedules, making them easier for patients to understand. A small step toward making medical information more accessible to everyone.

#MedAI #AI #YTe #Android #GeminiAI #PhanMem #DonThuoc

https://www.reddit.com/r/SideProject/comments/1o1a73g/built_medai_an_android_app_that_uses_gemini_ai_to/
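The extraction step such an app describes can be sketched as parsing a structured reply requested from the model. A minimal sketch follows; the JSON schema, field names, and sample values are illustrative assumptions, not MedAI's actual format or the app's real Gemini prompt:

```python
import json
from dataclasses import dataclass

# Hypothetical JSON reply an app might request from Gemini after
# sending a prescription photo; the schema and values below are
# assumptions for illustration, not the MedAI app's actual output.
SAMPLE_RESPONSE = """
{
  "medications": [
    {"name": "Amoxicillin",
     "dosage": "500 mg",
     "schedule": "3 times daily for 7 days"}
  ]
}
"""

@dataclass
class Medication:
    name: str
    dosage: str
    schedule: str

def parse_prescription(raw: str) -> list[Medication]:
    """Turn the model's JSON reply into typed Medication records."""
    data = json.loads(raw)
    return [Medication(**item) for item in data["medications"]]

meds = parse_prescription(SAMPLE_RESPONSE)
print(meds[0].name)  # Amoxicillin
```

Asking the model for a fixed JSON schema, then validating it client-side like this, is a common pattern for keeping free-form model output usable in an app.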


Patients are using AI chatbots like ChatGPT to interpret lab results from online portals. While this can help them understand complex data, experts warn of significant risks, including inaccurate information and privacy violations. #MedAI #MedSky

Lab results confusing? Some pa...
A study finds clinicians rate peers who use generative AI for primary decision-making lower in skill and competence. Framing AI as a verification tool partially mitigates this negative perception but does not eliminate it. #MedSky #MLSky #MedAI

Peer perceptions of clinicians using generative AI in medical decision-making - npj Digital Medicine

This study investigates how a physician’s use of generative AI (GenAI) in medical decision‑making is perceived by peer clinicians. In a randomized experiment, 276 practicing clinicians evaluated one of three vignettes depicting a physician: (1) using no GenAI (Control), (2) using GenAI as a primary decision-making tool (GenAI-primary), and (3) using GenAI as a verification tool (GenAI-verify). Participants rated the physician depicted in the GenAI‑primary condition significantly lower in clinical skill (on a 1–7 scale; mean = 3.79) than in the Control condition (5.93, p < 0.001). Framing GenAI use as verification partially mitigated this effect (4.99, p < 0.001). Similar patterns appeared for perceived overall healthcare experience and competence. Participants also acknowledged GenAI’s value in improving accuracy (4.30, p < 0.002) and rated institutionally customized GenAI more favorably (4.96, p < 0.001). These findings suggest that while clinicians see GenAI as helpful, its use can negatively impact peer evaluations. These effects can be reduced, but not fully eliminated, by framing it as a verification aid.

Nature
A recent case study reports that a man developed bromism, a psychiatric illness largely confined to the 19th century, after acting on advice he drew from ChatGPT. For three months he replaced dietary salt with sodium bromide, eventually developing severe hallucinations. #MedSky #MedAI #MLSky

Guy Gives Himself 19th Century Psychiatric Illness After Consulting With ChatGPT

"For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT."

404 Media