The Cyberspace Administration of China (CAC) has released a draft of the “Interim Measures for the Management of Artificial Intelligence Human-like Interactive Services,” which aims to tighten oversight of AI services designed to simulate human personalities and engage users in emotional interaction. The proposed rules would apply to AI products and services offered to the public in China that present simulated human personality traits, thinking patterns and communication styles, and that interact with users emotionally through text, images, audio, video or other means.

The proposed measures would: (1) require service providers to assume safety responsibilities throughout the product lifecycle and to establish systems for algorithm review, data security and personal information protection; (2) address potential psychological risks by requiring providers to identify user states and assess users’ emotions and their level of dependence on the service; (3) require providers to take necessary measures to intervene where users exhibit extreme emotions or addictive behaviour; and (4) set content and conduct red lines, stating that services must not generate content that endangers national security, spreads rumours or promotes violence or obscenity.

The draft measures are open for public comment until 25 January 2026.
