Governing AI misuse requires both exposing falsity and distinguishing authenticity

On Aug 19, Yuan Jianhua, deputy chief judge of the First Comprehensive Division of the Beijing Internet Court, and Zhang Qian, a judge of the same division, were interviewed by the CCTV-13 program News 1+1 about China's first case arising from a platform's determination that user-generated content was AI-generated.

The plaintiff posted an answer to a question on an online platform. The platform determined that the content constituted a violation because it was "AI-generated but unlabeled," and imposed penalties including hiding the post and suspending the plaintiff's account for one day. The plaintiff's internal appeal was unsuccessful, and he then filed a lawsuit.
The defendant argued that the plaintiff's content had been identified by its algorithmic recognition system as "AI-generated," and that subsequent human review had confirmed the text lacked "obvious human emotional characteristics."
The Beijing Internet Court (BIC) held that when a platform determines user-generated content to be AI-synthesized, the user bears the initial burden of providing prima facie evidence to prove the content is human-created. However, in this case the plaintiff's response was a short, real-time text post, for which drafts or originals could not reasonably be expected to exist.
By contrast, the defendant both controlled the algorithmic tool and relied on its results in making the judgment. As controller of the algorithm and the decision alike, the platform had both the ability and the obligation to reasonably explain or substantiate its determination. Because the defendant failed to adequately explain the algorithmic decision-making process or its results, it was found liable for breach of contract in its handling of the plaintiff's account.
The BIC ordered the defendant to lift the concealment of the content and delete the violation record, and dismissed the plaintiff's other claims.
Users shall comply with platform rules when using online services and must truthfully label any content generated with AI tools. Given the inherent limitations of algorithmic technology, recognition accuracy cannot reach 100 percent, and platforms may make erroneous determinations. In such cases, platforms should take preliminary measures to correct misjudgments and seek to resolve disputes before litigation.
Where users provide original evidence demonstrating misclassification, and upon review the platform confirms the error, the platform shall promptly remove the AI-generated label and lift the penalties imposed on the user. Platforms should enhance the accuracy of algorithmic recognition, improve review mechanisms, and establish accessible, convenient, and effective appeal channels.

First, this case affirmed the legitimacy of platform content review. Labeling AI-generated content helps the public distinguish human-created works from AI-synthesized content, protects the public's right to know, facilitates content traceability, and reduces the risks of infringement and misuse.
At the same time, the court emphasized that platforms must provide sufficient evidence and explanations for their determinations, particularly regarding algorithmic decision-making processes and results. While platforms are not required to disclose all algorithm details or trade secrets, they must give necessary and comprehensible explanations relevant to AI-based determinations.
From the user's perspective, the trial of this case shows that users may lawfully assert their rights when misjudged by algorithms, but should also preserve and organize supporting evidence where possible. For longer works such as academic papers, articles, or novels, it is reasonable to require the author to provide drafts or originals. For platforms, it is essential to enhance algorithmic transparency and review mechanisms, to disclose the basic principles, objectives, and decision criteria where user rights may be affected, and to ensure that effective appeal mechanisms are available.

In recent years, the BIC has witnessed a marked rise in AI-related disputes, reflecting the rapid development and expanding application of AI technologies in China.
According to the court, cases mainly fall into three categories. The first involves personality rights disputes, such as cases involving AI-generated voices, face-swapping, companionship, and parody, often linked to infringements of portrait rights, reputation rights, voice rights, or personal information. The second concerns copyright disputes, including whether AI-generated outputs qualify as "works" under the law, whether unauthorized use of AI-generated content infringes copyright, and whether training AI models with copyrighted works constitutes infringement. The third category involves contract disputes, typically arising between platforms and users in relation to the use of AI technologies.
The court noted that the growing caseload highlights the dual challenge for the judiciary: to establish adjudication rules that both encourage the healthy development of AI and safeguard the lawful rights and interests of parties.
