#infosec #ChatGPT

To absolutely no one's surprise, employees are feeding sensitive business data to ChatGPT👇🏾

https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears

Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears

More than 4% of employees have put sensitive corporate data into the large language model, raising concerns that its popularity may result in massive leaks of proprietary information.

Dark Reading

@alaric

#infosec #ChatGPT

From January👇🏾

This issue seems to have come to a head recently because Amazon staffers and other tech workers throughout the industry have begun using ChatGPT as a “coding assistant” of sorts to help them write or improve strings of code, the report notes.

“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”

https://www.businessinsider.com/amazon-chatgpt-openai-warns-employees-not-share-confidential-information-microsoft-2023-1

Amazon warns staff not to share confidential information with ChatGPT

The advice, from an Amazon lawyer, highlights one of many new ethical issues arising as a result of the sudden emergence of ChatGPT.

Insider
@alaric If Amazon’s data has been used to train ChatGPT, classified documents are probably in there too.
Edit: just noticed the retraining bit. I still think there are classified documents in there.

@alaric

#infosec #ChatGPT

To absolutely no one's surprise, 43% of employees are feeding business data to Chad (ChatGPT) and 70% are not telling their employers 👇🏾

(responses from 11,793 professionals)

https://www.fishbowlapp.com/insights/70-percent-of-workers-using-chatgpt-at-work-are-not-telling-their-boss/

@alaric

#infosec #ChatGPT

To absolutely no one's surprise, Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT 👇🏾

"Samsung Electronics sent out a warning to its workers on the potential dangers of leaking confidential information in the wake of the incidences, saying that such data is impossible to retrieve as it is now stored on the servers belonging to OpenAI. In the semiconductor industry, where competition is fierce, any sort of data leak could spell disaster for the company in question."

This is *AFTER* the company *allowed* engineers at its semiconductor arm to use the AI writer to help fix problems with their source code. LOLLLLL

https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt

Samsung workers made a major error by using ChatGPT

Samsung meeting notes and new source code are now in the wild after being leaked in ChatGPT

TechRadar pro

@alaric

#infosec #privacy #ChatGPT

That ChatGPT has a privacy problem is obvious.
OpenAI scraped data from the web, including any personal information you might have shared (probably don't do that), to create its generative text system.

What's really striking and sort of 🙄 amusing is that "Open" AI never directly states its legal basis for using people’s personal information in training data, but says it relies upon *“legitimate interests”* when it “develops” its services. LOLLLLL....

These bros are so high on their own supply that they will say the absolute dumbest 💩 in their quest to monetize anything and everything, all while claiming they're doing it for the "good" of society. And the morons on Twitter are doing the bros' work for them by shilling the "value add" of ChatGPT 🤷🏾‍♂️

Good Wired piece on how and why Italy has blocked ChatGPT 👇🏾

https://www.wired.com/story/italy-ban-chatgpt-privacy-gdpr/

ChatGPT Has a Big Privacy Problem

Italy’s recent ban of Open AI’s generative text tool may just be the beginning of ChatGPT's regulatory woes.

WIRED
@alaric We have yet to see ChatGPT itself leak any sensitive data, right? OpenAI has the chat logs and could of course do Samsung and others dirty, but we have no evidence of that yet.
@alaric I watched the segment with Lester Holt on AI. There is no oversight or threat monitoring. The two engineers said it’s basically a problem in the making, as AI is being developed too quickly and with virtually no guardrails. The mission so far is which company can get out there the quickest for more $$$.
@alaric
Does this include GitHub Copilot and similar tools? I assume way more employees use generative AI without even knowing they're using it, and are sending business data over the wire while doing so.
@alaric another reason why companies need their own instances instead of running AI as a universal service
@alaric between ChatGPT and Twitter… the word of this year is definitely “decentralization”.

@chaoddity @alaric erm… not really. These are cases of missing brain.exe and not knowing how to handle sensitive data at all. Not really something any fancy tech tool will ever be able to solve 😩

Reminds me of that one executive who would copy customer data _out_ of a "secure" SAP environment into Outlook Express so he wouldn't be bothered to authenticate all the time he needed data 🤦

@bekopharm @chaoddity @alaric There is a big awareness and training gap. The report also implies that data security services such as Cyberhaven already have access to the full text of whatever their clients' employees are typing into applications.

@bekopharm @alaric
The problem with things like ChatGPT is that they learn from the prompts you give them and can potentially share what they learn. Also, the power of the AI is in the hands of whoever owns it. If a bad actor owned ChatGPT, they'd have access to all the info these morons are putting into it.

What I am suggesting is no different than organizations having their own email server, chat server, tax software, or anything else.
I think that AI is best handled locally, not centrally.
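Short of self-hosting, one stopgap some teams reach for is scrubbing obvious secrets out of prompts before they ever leave the network. A minimal sketch in Python — the patterns, labels, and placeholder format here are my own illustrative assumptions, nowhere near a real DLP ruleset:

```python
import re

# Illustrative patterns only -- a real deployment would need a much
# broader set (names, internal hostnames, entropy checks, allow-lists...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "AWS_KEY": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace anything matching a known-sensitive pattern with a
    labeled placeholder before the prompt is sent to a hosted LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

print(scrub("Ask jane.doe@example.com, key is AKIA0123456789ABCDEF"))
# -> Ask [REDACTED-EMAIL], key is [REDACTED-AWS_KEY]
```

Of course this only catches what you thought to write a pattern for — which is exactly why the "run your own instance" argument above is the stronger one.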

@chaoddity as someone completely self hosted I can totally agree on that ;-)

Such loose cannons will still find ways to misplace sensitive information. They simply lack a deeper understanding of what the issue is and why it even matters. Lessons history and sci-fi books teach us.

@bekopharm To be honest, I know exactly what you mean. Sometimes people scare the hell out of me with their absolutely reckless handling of extremely sensitive information.
I would bet a lot of money that someone could make a fortune hacking small-business computers and tapping their Outlook for credit card details. Because I can tell you, from experience: people send ALL that super vital, super important information through completely insecure, inappropriate channels.
@alaric we've gotten the message at work that's basically, "c'mon guys, I really shouldn't have to tell you this, but please do not install editor plugins that send our proprietary code to a third party"
@codicil @alaric how about zoom plugins that do the same for every single meeting?
@alaric Update security awareness training to warn against providing sensitive business info to ChatGPT and the like.

@alaric @huxley one of these is almost expected stupidity. The other is entirely because of how fucked the medical system is, with insurance and Medicaid, that people even need to write these letters.

I read about a dental firm doing this too, because what previously took them several hours (if they even had time to address all the denials they received) now took at most an hour and covered all of them.

It's bad that they're putting that info in, but arguably as bad that they almost have to.

@alaric Plus I hope we're all aware that nazis/fascists LOVE this tool...what do you think they are feeding it?
@alaric it's like the ultimate form of pastebin scraping.