The F.D.A. “fumbled by allowing A.I. developers to keep their secret sauce under wraps and failing to require careful studies to assess any meaningful benefits.”
“You have to have really compelling, great data to change medical practice and to exude confidence that this is the way to go,” said Dr. Topol, executive vice president of Scripps Research in San Diego. Instead, he added, the F.D.A. has allowed “shortcuts.”
https://www.nytimes.com/2023/10/30/health/doctors-ai-technology-health-care.html
Doctors Wrestle With A.I. in Patient Care, Citing Lax Rules

The F.D.A. has approved many new programs that use artificial intelligence, but doctors are skeptical that the tools really improve care or are backed by solid research.

The New York Times
@FrankPasquale The mention of the 737 MAX was an interesting inclusion in this piece. I recently visited Purdue to give an engineering ethics seminar on the MAX, and on the way to Indy I talked to the pilots about the automated system that led to the crashes. One of them said, “So many of these flight control systems are automated now that as a pilot, each one takes you a bit further away from actually flying the plane.” That's a consistent story with automated decision systems generally.