Restricting AI Influence: China's New Draft Rules Focus on Your Emotional Safety

Anuppur, India, December 28, 2025 (GizTimes) – The Cyberspace Administration of China (CAC) issued new draft rules on Saturday, December 27, 2025, that require AI system developers to build warnings about long-term use into their AI tools. The rules are crafted to minimize unhealthy emotional dependence on human-like AI by mandating pop-ups that clearly remind users of the distinction between AI and humans.
The step became necessary as AI companions grow ever more realistic and human-like, and users who become too invested in them can see their mental well-being suffer. The rules shift the burden of digital wellness from users' shoulders onto developers: China is the first to require that the software itself intervene when a user shows signs of addiction or psychological distress.
The Key Rules
- Service providers must display an alert after every two hours of use that clearly reminds users they are interacting with a machine, not a human.
- Providers must monitor every user's emotional state, and if someone shows symptoms of extreme emotional distress, they must take active steps to help that user.
- The rules draw red lines against tools that use manipulative techniques and emotional tactics to trap users' attention, or that spread content threatening national security.
> China's cyber regulator on Saturday issued draft rules for public comment that would tighten oversight of artificial intelligence services designed to simulate human personalities and engage users in emotional interaction.
>
> The move underscores Beijing's effort to shape the… pic.twitter.com/gu9eg5KepT
>
> — Yahoo News (@YahooNews) December 27, 2025
Over the past year, AI companion tools have grown rapidly, raising concerns about social isolation among tech ethicists. While much of the world worries about AI taking jobs, China is prioritizing the influence of these tools on users' psychology and their real-world social interaction.
The draft "Interim Measures for Anthropomorphic AI Interaction" treats these requirements as a necessary boundary for users who are deeply engaged in the virtual world.