"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.

These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.

The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."

https://the-decoder.com/anthropic-uses-a-questionable-dark-pattern-to-obtain-user-consent-for-ai-data-use-in-claude/

#EU #AI #GenerativeAI #Anthropic #LLMs #Chatbots #Claude #DarkPatterns #Privacy #DataProtection

Anthropic uses a questionable dark pattern to obtain user consent for AI data use in Claude (THE DECODER)
@remixtures those "not now" buttons should be forbidden. Yes or no, not "bother me later again"
@apollo @remixtures likewise, forcing a reader to go through a maze to cancel all the trackers is just as bad
@hareldan lmao, I love how I just got the same pattern when trying to read the article

@remixtures I hate these corpos as much as the next person (actually, I hate them more than the next person), but I don't think this specific instance is a dark pattern. There's a new privacy policy, and you can accept it at any time between now and the 28th. The old privacy policy claims that your data is not retained for longer than required to offer the services and to comply with legal obligations. The new one gives the option to let them use the data to train their models, etc. The option is front and center in the popup. If you click "not now", nothing changes until you go to settings -> privacy, which is exactly where the popup says it would be. There you'll find the option to let claude retain your data, or the button to show the popup again if it wasn't accepted previously.

Should this setting be turned off by default? Sure. Would it be better if they had retained the old privacy policy? Definitely. But is this a dark pattern? At the moment, I don't really think so.

I want claude to disappear alongside openai and the others, and the owners to go bankrupt. But I don't really think they will be affected by such a non-issue.

However, I would agree that it's a dark pattern if clicking "not now" means the popup never shows again once the new policy is in effect. I will click "not now" and see if that's the case.

@remixtures
If it's the Irish regulator, it's unlikely Anthropic will be overly troubled...