A few hours ago, Chinese regulations governing AI-generated media services (covering everything from text generation to video synthesis to face swapping) took effect.

These rules on so-called "deep synthesis" 深度合成 are among the world's first major efforts to confront challenges from the likes of ChatGPT, DALL-E, and the growing number of similar large-model services. A quick thread and some thoughts: 1/

Our Stanford DigiChina Project translated a draft of the rules: https://digichina.stanford.edu/work/translation-internet-information-service-deep-synthesis-management-provisions-draft-for-comment-jan-2022/

The final version had minimal changes, and China Law Translate has a version of that:
https://www.chinalawtranslate.com/en/deep-synthesis/?tpedit=1
2/

The scope of the Provisions is vast, with one key limitation: they apply mainly to service providers, not to independent developers or users of an algorithm (except in the case of fake news, which is prohibited even at the user level).

So OpenAI would be regulated and would have significant responsibilities, but if you've downloaded Stable Diffusion to your laptop to mess around, these rules would mostly not apply to you. 3/

China's new rules on AI-generated media target services for producing text, simulated dialogue, voice synthesis or imitation, face generation or editing, general realistic image generation, VR/AR, and an "other" category.

Service providers working with biometric characteristics such as faces or voices, or whose services might affect national security, the national image, the national interest, or the public interest, will need to conduct security assessments. The same goes for anything that might affect public opinion. 4/

As I said when speaking with @karenhao, who has a good article on this in the WSJ, China is an early major mover on AI regulation. https://www.wsj.com/articles/china-a-pioneer-in-regulating-algorithms-turns-its-focus-to-deepfakes-11673149283

China's political system and regime goals differ from, and in some cases are anathema to, those of other countries. Nonetheless, the world has something to learn from how this effort turns out.

Will China effectively control the societal effects of AI-driven media? Can it do so without stifling innovation in the area altogether? 5/
