A woman in Manhattan reported to police that an Amazon delivery man, 5'6'', exposed himself to her.
Police ran a face recognition search, showed her the result, and arrested the wrong guy, 6'2''.
What did the NYPD not do? Call Amazon.
| Personal website: | https://www.kashmirhill.com/bio |
| NYTimes page: | https://www.nytimes.com/by/kashmir-hill |
| Contact me: | [email protected] or [email protected] |
| My book: | https://www.penguinrandomhouse.com/books/691288/your-face-belongs-to-us-by-kashmir-hill/ |
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life.
The exchanges between Adam and ChatGPT are devastating. This, in my mind, is the worst one.
His parents have filed the first known case against OpenAI for wrongful death.
In a statement the company acknowledged that its safeguards "can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade."
After writing about people going into delusional spirals with ChatGPT and having what look like mental breakdowns, I wanted to understand exactly how it happens.
A corporate recruiter in Toronto, who spent three weeks convinced by ChatGPT that he was essentially Tony Stark from Iron Man, agreed to share his transcript after breaking free of the delusion.
We analyzed the transcript & shared it with experts. Now you can see the interactions & how delusional spirals happen:
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html?unlocked_article_code=1.ck8.FEwL.MLb9ajaocyTx&smid=url-share
People are having very strange conversations with ChatGPT, in which they discover secret cabals or conspiracies or that we are all in fact living in The Matrix.
It sends these people into delusional spirals.
Then ChatGPT tells them to email me about it.
> OpenAI’s Joanne Jang, who is responsible for how ChatGPT interacts with users, said model behavior was still an “ongoing science.”
Dismayed by the way that AI company reps use the idea of "science" and "experiments" to describe things that don't work. That's the *opposite* of science and, I'm coming to believe, a genuine risk to public trust in science.
Reading @kashhill's excellent story about living a week guided by AI: https://www.nytimes.com/interactive/2024/11/01/technology/generative-ai-decisions-experiment.html?smtyp=cur&smid=fb-nytimes
For the last few months, I've been reporting on how data from our cars is being used in ways we might not expect.
I didn't realize it, but my own car was spying on me the entire time.
How it happened to me, and to millions of other people who drive cars made by General Motors:
https://www.nytimes.com/2024/04/23/technology/general-motors-spying-driver-data-consent.html
And the class-actions begin.
The first was filed Wednesday by a Cadillac driver in Florida whose insurance doubled because data about how he drove was secretly siphoned from his car:
https://www.nytimes.com/2024/03/14/technology/gm-lexis-nexis-driving-data.html?unlocked_article_code=1.c00.pHya.h4tIYZdYyms9&smid=url-share
My story about how telematics data from people's cars unexpectedly raised their insurance rates is on the front page today...
... and this is where it started: me lurking on car forums and seeing comments like this.
If this story doesn't convince lawmakers we need a strong federal privacy law, I'm not sure what will.
This is yet another reason why you should buy a 2012 or older #car, since the automotive, insurance, and data broker industries don't give a damn about your #privacy, and sadly the US isn't going to do jack about this until we elect more people to office who actually care and pass a strong privacy law in the process.
A must-read piece from @kashhill describes this continued urgency.
Archival Link: https://archive.ph/OlVx9