What’s new: China will require tech platforms that let users realistically alter how people look and sound in videos to inform those users that they must obtain advance permission from the people whose faces and voices are being altered.
The Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology, and the Ministry of Public Security issued the new regulations on “deepfakes” on Sunday. They will take effect on Jan. 10.
The provisions also require deep synthesis service providers to clearly label information that has been generated or changed using deep synthesis technologies, such as face manipulation and voice simulation.
What’s more: The rules are designed to set boundaries for this kind of technology, which the authorities defined as any technology that uses algorithms such as deep learning and virtual reality to synthesize or generate text, photos, audio, video, or virtual scenes.
In 2019, Zao, a popular app powered by such “deepfake” tech that allowed users to insert their faces into scenes from movies and TV shows, sparked concerns over privacy and potential misuse.
The new provisions spell out the responsibilities of deep synthesis service providers, requiring them to authenticate users’ real identities via cell phone numbers or identity documents before allowing them to use their services.
In addition, platforms will be responsible for establishing a content management system to identify illegal and adverse information as well as a mechanism to debunk misinformation and report such cases to the authorities.
Contact reporter Kelly Wang (jingzhewang@caixin.com) and editor Michael Bellart (michaelbellart@caixin.com)