Ada Lovelace Institute

534 Followers
232 Following
48 Posts
An independent research institute with a mission to ensure data and AI work for people and society.
Website: https://www.adalovelaceinstitute.org
Newsletter: https://nuffieldfoundation.tfaforms.net/149

📢 Join us for an event to launch the new ESRC-funded project on participatory and inclusive data stewardship, led by @digitalgoodnet, with @AdaLovelaceInst and Liverpool Civic Data Cooperative.

When: 14:00, 26 Oct
Where: in person (100 St John Street, London) and online

https://www.eventbrite.com/e/participatory-and-inclusive-data-stewardship-where-next-tickets-723381131437

Participatory and Inclusive Data Stewardship - Where Next?

The launch of a new work programme on Participatory and Inclusive Data Stewardship

Eventbrite

COVID-19 technologies provide important insights and lessons for many emerging trends that require public acceptance and cooperation (for example, digital identity and wearable healthcare technology).

But how can we evaluate their effectiveness through a human-centred approach?

Melis Mevsimler looks at the evidence (and evidence gaps) in our latest blog post⬇️

https://www.adalovelaceinstitute.org/blog/covid-19-technologies-human-centred-approach/

Evaluating data-driven COVID-19 technologies through a human-centred approach

What can we learn from the missing evidence on digital contact tracing and vaccine passports?

Dame Julie Maxton has been appointed as the new Chair of our Oversight Board by the Nuffield Foundation.

https://www.adalovelaceinstitute.org/news/dame-julie-maxton-appointed-chair/

Dame Julie Maxton appointed as Chair of the Ada Lovelace Institute

She succeeds Professor Dame Wendy Hall, and her three-year term as Chair will begin in October.

AWO analysis shows gaps in UK legal regime protecting people from AI harms. https://www.awo.agency/blog/awo-analysis-shows-gaps-in-effective-protection-from-ai-harms/

The analysis was commissioned by @AdaLovelaceInst, whose report 'Regulating AI in the UK' was published today.
https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/

You can read our full legal analysis:
https://www.awo.agency/files/AWO%20Analysis%20-%20Effective%20Protection%20against%20AI%20Harms.pdf

AWO analysis shows gaps in effective protection from AI harms


Our new report analyses the UK’s proposals for AI regulation and makes recommendations for strengthening them across three categories, reflecting our tests for effective AI regulation: coverage, capability and urgency.

We hope the report will be useful for policymakers, regulators, journalists, industry, academia, civil society, and anyone else who is interested in understanding how AI can be regulated in the UK for the benefit of people and society.

https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/

Regulating AI in the UK

Strengthening the UK's proposals for the benefit of people and society

We recognise that the terminology in this area is contested and we expect that language will evolve quickly.

Our explainer is a snapshot in time and aims to help people understand current norms in uses of terminologies, and their social and political contexts. (2/2)

Foundation models, large language models, general-purpose AI, generative AI - what do these different terms mean?

Our new explainer aims to cut through the confusion and discusses why language is important – but tricky – in this fast-moving area. (1/2)

https://www.adalovelaceinstitute.org/resource/foundation-models-explainer/

Explainer: What is a foundation model?

This explainer is for anyone who wants to learn more about foundation models, also known as 'general-purpose artificial intelligence' or 'GPAI'.

How has biometric data collection caused harm in humanitarian contexts, such as conflicts and natural disasters, and where do future risks lie?

In their new post on our blog, Belkis Wille and Katja Lindskov Jacobsen examine the barriers to better data protection in these contexts.

https://www.adalovelaceinstitute.org/blog/data-most-vulnerable-people-least-protected/

The data of the most vulnerable people is the least protected

How has biometric data collection caused harm in the context of humanitarian interventions and where do future risks lie?

At @AdaLovelaceInst we are #hiring for 2-3 researchers with policy and social science #research skills (both qualitative and quantitative) and an interest/background in #AI and #data:
https://www.adalovelaceinstitute.org/job/researchers-june-2023/
Researcher (multiple roles)

We are hiring two or three researchers to support our work on the impacts of AI and data on people and society

In our new blog post we explain significant features of the UK Government’s proposals for AI regulation and how we intend to test them against three challenges: coverage, capability and criticality.

Coverage – How well does the UK’s regulatory patchwork address AI?

Capability – How well-equipped are UK regulatory institutions to deliver the principles?

Criticality – Will the UK approach enable a timely response to urgent risks?

https://www.adalovelaceinstitute.org/blog/regulating-ai-uk-three-tests/

Regulating AI in the UK: three tests for the Government’s plans

Will the proposed regulatory framework for artificial intelligence enable benefits and protect people from harm?