Jack Clark has been writing on Twitter about the need for greater public engagement in the AI space.

I followed up on his writing, as the topic of public involvement in digital policy is high on our priority list at @openfuture (#SharedDigitalEurope).

Jack Clark and Jess Whittlestone argue for robust government monitoring of the AI space. I like the point that such monitoring-driven policies would be more dynamic.

Their approach assumes that governments make use of data that is in the open - which would make a good case for the value of OpenX approaches to data, code, and research.

But I would push further: the case they describe is a great example of why we need a Public Data Commons and B2G data sharing.

http://arxiv.org/abs/2108.12427

Why and How Governments Should Monitor AI Development

In this paper we outline a proposal for improving the governance of artificial intelligence (AI) by investing in government capacity to systematically measure and monitor the capabilities and impacts of AI systems. If adopted, this would give governments greater information about the AI ecosystem, equipping them to more effectively direct AI development and deployment in the most societally and economically beneficial directions. It would also create infrastructure that could rapidly identify potential threats or harms that could occur as a consequence of changes in the AI ecosystem, such as the emergence of strategically transformative capabilities, or the deployment of harmful systems. We begin by outlining the problem which motivates this proposal: in brief, traditional governance approaches struggle to keep pace with the speed of progress in AI. We then present our proposal for addressing this problem: governments must invest in measurement and monitoring infrastructure. We discuss this proposal in detail, outlining what specific things governments could focus on measuring and monitoring, and the kinds of benefits this would generate for policymaking. Finally, we outline some potential pilot projects and some considerations for implementing this in practice.

@tarkowski @openfuture
So I read

"5 How could governments use AI measurement and monitoring?
5.1 Testing deployed systems to see if they conform to regulation."

And below ... not a single idea of how to reliably test #closedsource / #closeddata #AI models trained on petabytes of publicly available data. Any possible "test" says nothing about potential problems with the model or its data - in fact, it is just another way to further (and freely!) train the model. Without access to the source code and the training data set, "monitoring" of already deployed commercial AI models is simply bullshit and nonsense.