"Given the current state of play, what we’re concerned about is how little we actually know about how these AI tools work and what their long-term impact will be to citizens, especially as governments have access to troves of sensitive personal data in order to further develop and fine tune the technology. How are government AI models storing and processing data about public servants, citizens and/or residents of a country, whether in experimental sandboxes or beyond?
There are merits to streamlining tedious administrative tasks, but there are still big questions that must be answered around data privacy, transparency and automated decision-making if the government is about to promote and embed widespread agentic AI use across civil services.
On the data privacy front, several questions arise:
- How will personal data, including sensitive data, be processed and used by the AI tools deployed? What data protection safeguards will be in place to prevent data leakage?
- When third-party vendors are involved, what contractual safeguards do governments and AI companies have around data-processing arrangements?
- What are the data protection safeguards governing fine-tuning?
Data protection in the govAI context is especially important when we consider how highly sensitive, and sometimes secret, the government data processed by a black-box algorithm might be - e.g., where an employee or colleague lives, or confidential documents that appear in meeting notes."
https://privacyinternational.org/long-read/5767/big-oil-big-algorithm-public-money-private-models
#AI #BigTech #GenerativeAI #Privacy #DataProtection #Algorithms #BlackBoxes



