Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns
Huge Study
*Looks inside
This latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.
Pretty small sample size. Even though they pulled from a large dataset, it’s still data from just 19 people.
AI sucks in a lot of ways, sure, but this feels like FUD.
Then any statistics you measure on that population might be fully accurate for those 100, but less able to predict what the next 100 will look like.
You can still measure stats with smaller groups, it just means your confidence is lower. With 300, there’s something like a 95% chance your test results are close to reality. With 100 it might be more like 66%.
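A rough way to see how sample size affects uncertainty: the margin of error for estimating a proportion shrinks with the square root of the sample size. This is only an illustrative sketch (it assumes a simple proportion estimate at 95% confidence and worst-case variance, and the sample sizes are just the numbers mentioned in this thread):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion: z * sqrt(p * (1 - p) / n).
    p=0.5 is the worst case; z=1.96 corresponds to 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (19, 100, 300):
    # n=19 → ±0.225, n=100 → ±0.098, n=300 → ±0.057
    print(n, round(margin_of_error(n), 3))
```

So an estimate from 19 people carries a margin of error of roughly ±22 percentage points, versus about ±10 at 100 and ±6 at 300, which is why the small sample matters here.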
Population is a statistical term which means “everything”. There is no “next 100”.
The 300 number is specifically about very big populations where you’re trying to estimate something like the average of an unknown variable. It doesn’t apply to everything in statistics.
I meant something like births: even if you can enumerate every single existing individual, statistics can still apply to future members that don’t yet exist.
And yeah, it’s been a while. I remembered that the proof didn’t depend on the population size, but forgot that it assumed a large population in the first place. I was wrong.