"We’re about to be deluged with the most brazen and dangerous lies of our lifetimes."
Laura Helmuth has been flying free from SciAm for < 1 week and publishes a BANGER showing how RFK Jr. got where he is and why he's VERY dangerous. https://slate.com/_pages/cm3qctm2m0000lpkye5w6cw4v.html
Trans people do not pose a problem in restrooms.
Let. People. Pee. Seriously this isn't hard.
Today's rendition of the game "is this AI hype from 2024 or 1974?"
"In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable."
Minsky ~1970.
I was just watching Drag Race and started imagining what Reviewer 2 roasts would be like:
Your citations are so outdated they've got Blockbuster cards.
Your arguments are so circular they're replacing the teacups at Disney World.
Your p-values are so high they were just cast in a Cheech & Chong movie.

Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses, and models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, our findings show that these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
Paying more attention to the health and social benefits of libraries is overdue. Libraries aren't just repositories for books; they're essential social infrastructure.