"To date, the security measures implemented for LLM-based tools have not kept pace with the growing risks. In its response to The New York Times’ request for chat histories, Open AI indicated that it is working on “client-side encryption for your messages with ChatGPT” — yet even here the company hints at deploying “fully automated systems to detect safety issues in our products,” which sounds very much like client-side scanning (CSS). CSS, which involves scanning the content on an individual’s device for some class of objectionable material, before it is sent onwards via an encrypted messaging platform, is a lose-lose proposition that undermines encryption, increases the risk of attack, and opens the door to mission creep.
By contrast, the open source community has made positive strides in prioritizing confidentiality. OpenSecret’s MapleAI offers a multi-device, end-to-end encrypted AI chatbot, while Moxie Marlinspike, co-author of Signal’s E2EE protocol, has launched ‘Confer,’ an open source AI assistant that protects all user prompts, responses, and related data. But for now at least, such rights-respecting solutions remain the exception rather than the norm.
Unbridled AI adoption combined with depressingly lax security practices demands urgent action. The security issues associated with advanced AI tools are the consequences of deliberately prioritizing profit and competitiveness over the security and safety of at-risk communities, and they will not resolve on their own. While we would love to see companies self-correct, governments should not shy away from demanding that these companies prioritize security and human rights, especially when public money is being spent to procure and build ‘public interest’ AI tools. In the meantime, we can all also choose to support open, accountable, rights-respecting alternatives to the big-name models and tools where possible."
https://www.accessnow.org/artificial-insecurity-compromising-confidentality/
#AI #GenerativeAI #LLMs #Privacy #CyberSecurity #OpenSource #Encryption