A few weeks ago, I published the report "How does journalism report on artificial intelligence? Analysis and recommendations from a human rights perspective." The report was commissioned by Mèdia.cat (Critical Media Observatory) and Lafede.cat (Organizations for Global Justice), and was co-authored with Paul Zalduendo.

🔗 https://www.media.cat/2023/02/16/com-sinforma-sobre-intelligencia-artificial/

The report is divided into two main sections:

1️⃣ The first is a content analysis of 253 news articles about AI from 7 media outlets.

2️⃣ The second section involves identifying debates based on the results of the first section, sharing them with a group of experts, and developing 20 recommendations for communicators and journalists.

Some conclusions and recommendations. Key points:

➡️ 70% of the articles highlight only the benefits of AI: they report from a techno-solutionist perspective rather than from a human rights perspective that addresses limits and risks.

➡️ Companies and the technology industry have a very prominent presence: they appear both as news sources (company representatives, clusters, etc.) and as news subjects (new implementations, software, etc.), followed by the public policy sector.

🔗 https://twitter.com/MediacatCat/status/1626186185239306241

On the other hand, non-governmental and civil society organizations account for only 12% of the information on artificial intelligence. So which discourses prevail? Clearly, the generation of discourse is also biased. After the content analysis, a set of key debates present in the analysed information was identified and shared with the group of experts. As a result of these interviews, 20 recommendations are proposed for journalists and communicators who report on AI.

The recommendations are grouped into the following sections:
➡️ Technosolutionist approach and magical language.
➡️ Biased sources.
➡️ Historical contextualization of AI.
➡️ Use of images.
➡️ Data, artificial intelligence, and black boxes.
➡️ Technical terms and neologisms.

In summary, the report recommends never reporting on AI without mentioning and detailing the impacts of its implementation, the mechanisms for its regulation, and the actors who lead its development — in short, verifying and assigning responsibilities.

Reporting on artificial intelligence with rigor helps avoid constructing narratives (and realities) of non-accountability. Technosolutionism is dangerous, and so is technophobia. The answer to both is journalism with a human rights focus. The report was presented and debated at the College of Journalists of Catalonia.

📽️ https://www.youtube.com/watch?v=PpUXBa5yMQo

Presentation of the report: How does journalism report on artificial intelligence?
