OpenAI is a menace. Two recent stories make that clearer than ever.

One day you have Sam Altman denigrating humanity to defend AI. The next, the WSJ reveals OpenAI could have alerted Canadian police to a potential mass shooter but refused, despite pressure from employees. That person went on to kill eight people.

https://disconnect.blog/sam-altmans-anti-human-worldview/

#tech #openai #genai #artificialintelligence #chatgpt #samaltman #cdnpoli #canada

Sam Altman’s anti-human worldview

OpenAI CEO downgrades humanity in pursuit of goal to merge with computers

Disconnect
@parismarx Sam Altman is a lying sociopath, to be sure...but let's pump the brakes. You think a private company should be alerting governments of people's hypothetical thought-crimes? I think that is a horrible take. What happened in Canada is awful, but the answer isn't "expand the surveillance state."
@devin_and_earth Respectfully, this is a stupid take. The modern tech industry is the surveillance state. ChatGPT and generative AI do not exist without mass data collection. This isn’t about thought crimes, it’s about people using technology in a way that presents a real threat to people’s lives. We have always had a balance between privacy and safety, and that balance doesn’t end because of the internet and the cyberlibertarian ideas that have formed around it.

@parismarx Does Meta screen and report their users this way? Can you cite examples of this supposed balance between safety and privacy you speak of? I'm genuinely asking.

Truly, I wish that people being put on watchlists was only ever for protection of the public, but then those surveillance practices get weaponized when malicious administrations take power.

@devin_and_earth Yes, Meta does. Google also reports searches in certain instances. At least in Canada, therapists and similar professions have an obligation to report if patients intend to harm themselves or others. These things are all very commonplace.

https://www.cnet.com/tech/services-and-software/facebook-scans-chats-and-posts-for-criminal-activity/

Facebook scans chats and posts for criminal activity

Facebook's monitoring software focuses on conversations between members who have a loose relationship on the social network.

CNET

@parismarx Licensed therapists who regularly engage with the mentally unwell are a completely separate matter from tech platforms capable of mass surveillance, and I disagree with conflating the two.

We are actively witnessing state access to user data cause more harm than it prevents. I wish I lived in a country where it were otherwise.

@parismarx Relying on surveillance systems from tech companies to monitor theoretically dangerous people is a poor alternative to actually preventing mental health crises through robust welfare programs and meaningful reforms. Only one of those options can be weaponized by the state (or even the tech companies themselves, in some cases).
@parismarx If the public could vote on whether OpenAI should be denied access to their verbatim personal interactions with ChatGPT, denial would win overwhelmingly, because no one likes being exposed to companies like that. So I think endowing them with mental health monitoring responsibilities is a begrudging solution people reach only after leaping over way too many other questions and problems.