
China to regulate “companion” AIs and overly human chatbots


China is significantly tightening the screws on artificial intelligences that try to look too much like human beings.

On December 27, 2025, the Cyberspace Administration published a draft of particularly strict rules targeting systems it calls “anthropomorphic interaction services”: all those chatbots, virtual companions, role-play characters, or digital “partners” that mimic emotions, tone, humor, tenderness, or even the character flaws found in real people.

Ban “human” AIs?

The stated goal is clear and openly acknowledged: to prevent millions of users from developing deep, long-lasting emotional bonds with an entity that doesn’t exist.

To achieve this, the platforms concerned will now have to integrate technical mechanisms capable of spotting signs of extreme emotional states, emotional dependency, or compulsive use. As soon as the system detects this kind of behavior, it will have to respond.

Even more strikingly, the authorities require that the illusion of an authentic relationship be regularly and openly broken.

Users will have to be reminded, clearly and unambiguously, that they are talking to a machine. This could take the form of periodic notifications, always-visible notices, or explicit interruptions in the conversation that break the current role-play.

All AIs are affected

Any service that exceeds a certain number of users or is considered to have a significant social impact will have to submit a very comprehensive security assessment before any large-scale rollout.

This will cover the model’s underlying architecture, how data was collected and used, the protections provided for users’ privacy, as well as real-time monitoring and intervention tools.

Also read: What if the AI bubble burst because of China?

And “companion AIs” in Europe?

This approach stands in sharp contrast to Europe’s.

With the AI Act adopted in 2024, Brussels chose to outright ban systems deemed to pose an unacceptable risk, notably those that systematically exploit certain people’s vulnerability or deliberately seek to create dependency.

For other high-risk applications, the focus is on prior assessments, strong transparency, and ongoing monitoring, but without imposing this systematic obligation to remind the user that they are talking to an artificial intelligence.

The difference in philosophy is obvious.

Where Beijing primarily sees a risk of social destabilization and collective weakening, Brussels mainly seeks to protect each individual’s fundamental rights and prevent the most blatant abuses. China seems to consider that the real danger lies in the emotional bond itself when it becomes widespread at scale, while Europe prefers to assess case by case and intervene mainly when the machine becomes a tool for exploitation or manipulation.

Two continents, two worldviews, and one question that grows more urgent every day: how far should a machine be allowed to become the confidant, the friend, the intimate partner, or the emotional substitute for millions of human beings?

For now, China’s answer is unequivocal: not too close, and never for too long without being reminded.

Source: La Tribune

Glen