I haven’t really picked a side, mostly because there’s just not enough evidence. The NYT hasn’t provided any of the prompts they used to support their claim, and while the OpenAI blog post suggests what happened, they’re obviously biased.
If the model spits out an original article when given just a single paragraph, then the NYT has a case. If, as OpenAI says, part of the prompt was a lengthy excerpt and the model just continued in the same style and format, then I don’t think they do.
The OpenAI blog post mentions:
“It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate.”
It sounds like they essentially asked ChatGPT to write content similar to what they provided, then complained that it did.
How should the company protect user data when, like you said, the average person doesn’t take cybersecurity seriously, isn’t a techie, doesn’t use a computer outside the office, and just wants to log into their account with a password they can remember?
Are you basically just saying the company should’ve enforced 2FA? Or maybe one of those “confirm you’re logging in” emails every time someone wants to log in?
You’re right, but that was my point: you have to take a screenshot and translate it, which wasn’t something I thought about while my phone was blasting a loud alarm.
In that kind of emergency, the alert should either be auto-translated to the user’s default language, or a quick-translate option should be available.