Sandra Wachter

304 Followers
111 Following
65 Posts

Professor of Technology and Regulation at the University of Oxford. [email protected]

Research on the legal and ethical implications of emerging technologies, AI, Big Data, and robotics as well as Internet and platform regulation. https://www.oii.ox.ac.uk/people/profiles/sandra-wachter/

Founder of the Governance of Emerging Technologies (GET) Research Programme at the Oxford Internet Institute. https://www.oii.ox.ac.uk/research/projects/governance-of-emerging-technologies/

đź’« We will present our research on "Purpose Limitation for AI" at the Oxford Internet Institute (with @HannahRuschemeier Monday, 16 June, 12:00-13:00 UK time / 13:00-14:00 CET.
Sign up for the lifestream!
Infos about the event: https://www.oii.ox.ac.uk/news-events/events/updating-purpose-limitation-for-ai-a-normative-approach-to-ai-regulation/
Infos about the project: https://purposelimitation.ai
Thanks to Brent Mittelstadt and Sandra Wachter @SandraWachter
OII | Updating Purpose Limitation for AI – a normative approach to AI regulation

In this talk, we present our interdisciplinary approach to AI regulation by updating purpose limitation for AI. We address a critical blind spot in EU digital legislation: the secondary use of anonymised training data and pre-trained AI models.

My new papers on AI regulation & liability, hallucinations & human rights

Limitations & Loopholes in the EU AI Act & AI Liability Directives: What This Means for the EU, the US, & Beyond https://tinyurl.com/277c5xpe

Do large language models have a legal duty to tell the truth? https://tinyurl.com/3kzs777b

To protect science, we must use LLMs as zero-shot translators https://tinyurl.com/44m3h2p2

Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond | Yale Journal of Law & Technology

Come & work with me at the Hasso Plattner Institute & with Brent Mittelstadt & Chris Russell at the Oxford Internet Institute, University of Oxford

I am looking for three postdocs on the governance of emerging tech.

CS: tinyurl.com/yr5bvnn5
Ethics: tinyurl.com/yc2e2km4
Law: tinyurl.com/4rbhcndp

Application deadline: 15 June 2025.

Do LLMs need to tell the truth?

Fascinating new paper by @SandraWachter, Brent Mittelstadt and Chris Russell finds threats to science, education, and democracy through 'careless speech' produced by LLMs.

"Current governance incentives focus on reducing the liability of developers and operators and on maximising profit, rather than making the technology more truthful."

https://www.ox.ac.uk/news/2024-08-07-large-language-models-pose-risk-society-and-need-tighter-regulation-say-oxford

#AI #Democracy #Education #LLM #Research #Science

Large Language Models pose a risk to society and need tighter regulation, say Oxford researchers | University of Oxford

Leading experts in regulation and ethics at the Oxford Internet Institute have identified a new type of harm created by LLMs, which they believe poses long-term risks to democratic societies and needs to be addressed by creating a new legal duty for LLM providers.

Next was an excellent panel on approaches to AI regulation, ethics, and the state of the field at the QMUL School of Law with @SandraWachter and @jackstilgoe. This is an incredibly refreshing examination of the problems with much of the recent discourse around AI, and is one of the few panels I've listened to where I'm in nearly complete agreement with the panelists. Highly recommend https://www.youtube.com/watch?v=jaNkaWz8plg (7/9) #AI #ethics #law
AI Minds and Governance Frameworks

"The only thing that grows at this speed is cancer." Our first #WIConf24 panel discusses the environmental costs, the geopolitical nightmares behind global #AI supply chains, the effects on labor and its exploitation, as well as the collective harms of subtle hallucinations.

@SandraWachter
@ens
@zephoria

#Disinformation in times of democratic regression

with Jeanette Hofmann, Principal Investigator at the Weizenbaum Institute

15:30
Stage 8
#rp24 #WhoCares

We invited our sustaining members to join us in watching and discussing the results of our work featured in the #ZDFMagazin Royale episode. If you would also like to become a sustaining member, go here: https://algorithmwatch.org/de/grenzen-ohne-ki/ #GrenzenOhneKI
Grenzen ohne KI - AlgorithmWatch

29,000 people have died in the Mediterranean over the past ten years while trying to reach the EU. Are the EU and researchers across Europe therefore working feverishly to stop this tragedy with the latest technologies? No, the opposite is the case: so-called artificial intelligence is being used to build ever-higher walls, financed with taxpayers' money.

AlgorithmWatch