To absolutely no one's surprise, employees are feeding sensitive business data to ChatGPT👇🏾
From January👇🏾
This issue seems to have come to a head recently because Amazon staffers and other tech workers throughout the industry have begun using ChatGPT as a “coding assistant” of sorts to help them write or improve strings of code, the report notes.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”
To absolutely no one's surprise, 43% of employees are feeding business data to Chad (ChatGPT) and 70% are not telling their employers 👇🏾
(responses from 11,793 professionals)
To absolutely no one's surprise, Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT 👇🏾
"Samsung Electronics sent out a warning to its workers on the potential dangers of leaking confidential information in the wake of the incidences, saying that such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question."
This is *AFTER* the company *allowed* engineers at its semiconductor arm to use the AI writer to help fix problems with their source code. LOLLLLL
https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt
That ChatGPT has a privacy problem is obvious.
They built their generative text system by scraping data from the web, including any personal information you might have shared (probably don't do that).
What's really striking and sort of 🙄amusing is that "Open" AI doesn't directly state its legal basis for using people's personal information in training data, but says it relies upon *“legitimate interests”* when it “develops” its services. LOLLLLL....
These bros are so high on their own supply that they will say the absolute dumbest 💩 in their quest to monetize anything and everything, all while claiming they're doing it for the "good" of society. And the morons on Twitter are doing the bros' work for them by shilling the "value add" brought by ChatGPT 🤷🏾♂️
Good Wired piece on how and why Italy has blocked ChatGPT 👇🏾
@chaoddity @alaric erm… not really. Those are cases of missing brain.exe and of not knowing how to handle sensitive data at all. Not really something any fancy tech tool will ever be able to solve 😩
Reminds me of that one executive who would copy customer data _out_ of a "secure" SAP environment into Outlook Express so he wouldn't be bothered with authenticating every time he needed data 🤦
@bekopharm @alaric
The problem with things like ChatGPT is that they learn from the prompts you pose them and can potentially share what they learn. Also, the power of an AI is in the hands of whoever owns it. If a bad actor owned ChatGPT, they'd have access to all the info these morons are putting into it.
What I am suggesting is no different than organizations having their own email server, chat server, tax software, or anything else.
I think that AI is best handled locally, not centrally.
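Until you can self-host the model itself, the least you can do is scrub obvious secrets out of prompts before they leave your network. A minimal sketch of that idea, purely illustrative, the patterns and function name here are my own invention, not from any real tool:

```python
import re

# Illustrative patterns only: real deployments would need far broader
# coverage (names, account numbers, source code, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely-sensitive tokens with placeholders before the
    prompt is submitted to any third-party model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt
```

Of course this only catches what the regexes anticipate, which is exactly why keeping the model in-house, like your own mail server, is the safer default.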
@chaoddity as someone completely self hosted I can totally agree on that ;-)
Such loose cannons will still find ways to misplace sensitive information. They simply lack any deeper understanding of what the issue is and why it even matters. Lessons history and SciFi books teach us.
@alaric @huxley one of these is almost expected stupidity. The other happens entirely because the medical system, with insurance and Medicaid, is so fucked that these letters even need to be written.
Read about a dental firm doing this too, because what previously took them several hours (if they even had time to address all the denials they received) now took at most an hour and covered all of them.
It's bad putting that info in, but arguably just as bad that they almost have to.