.@TheoLenoir writes in Tech Policy Press about the #DSA, and whether it's globally applicable.

Lenoir's main point is that the #DSA's foundations rest firmly on a European notion of universality, and other values (and tensions between them) that play out very differently around the world.

But most interesting for me was the more fundamental issue of trust in regulation. The DSA defines a strong state regulator to watch over commercial platforms. And in many places (I would argue, also some inside the EU), these regulators cannot be fully trusted.

We have faced the same criticism while advocating (as @openfuture ) for business-to-government data sharing rules in the #DataAct.

I nevertheless think that we have no other choice but to strengthen the role of states and public institutions in online ecosystems. #SharedDigitalEurope

#BrusselsEffect

https://techpolicy.press/can-the-dsa-be-useful-outside-europe/

Can the DSA be Useful Outside Europe?

Beyond the broadest of principles, our common interest often breaks down, writes Théophile Lenoir.

Tech Policy Press

Jack Clark has been writing on Twitter about the need for greater public engagement in the AI space.

I followed up on his reading, as the topic of public involvement in the digital sphere is high on our priority list at @openfuture (#SharedDigitalEurope).

Jack Clark and Jess Whittlestone argue for robust government monitoring of the AI space. I like the point that such monitoring-driven policies would be more dynamic.

Their approach assumes that governments make use of data that is in the open - which makes a good case for the value of OpenX approaches to data, code, and research.

But I would push further: the case they describe is a great example of why we need Public Data Commons and B2G data sharing.

http://arxiv.org/abs/2108.12427

Why and How Governments Should Monitor AI Development

In this paper we outline a proposal for improving the governance of artificial intelligence (AI) by investing in government capacity to systematically measure and monitor the capabilities and impacts of AI systems. If adopted, this would give governments greater information about the AI ecosystem, equipping them to more effectively direct AI development and deployment in the most societally and economically beneficial directions. It would also create infrastructure that could rapidly identify potential threats or harms that could occur as a consequence of changes in the AI ecosystem, such as the emergence of strategically transformative capabilities, or the deployment of harmful systems. We begin by outlining the problem which motivates this proposal: in brief, traditional governance approaches struggle to keep pace with the speed of progress in AI. We then present our proposal for addressing this problem: governments must invest in measurement and monitoring infrastructure. We discuss this proposal in detail, outlining what specific things governments could focus on measuring and monitoring, and the kinds of benefits this would generate for policymaking. Finally, we outline some potential pilot projects and some considerations for implementing this in practice.

arXiv.org