AI-generated avatars are blurring the line between memory and reality in China’s fast-growing digital human industry.
What once felt like speculative fiction—something you might expect from an episode of Black Mirror—is now unfolding in the real world. In China, a rapidly growing “digital human” industry is redefining how people interact with memory, grief, and even identity. At the center of this transformation is a controversial use case: recreating deceased loved ones as AI-powered avatars.
This isn’t theoretical. It’s happening now, at scale, and it’s raising profound questions about ethics, psychology, and the limits of technology.
When Grief Meets Code
One of the most widely discussed examples is Zhang Xinyu, who used AI to recreate a digital version of her late father. The avatar could simulate his voice, expressions, and conversational style—offering something that felt, at least emotionally, like a continuation of his presence.
Zhang described the experience as deeply uplifting, saying it made her feel “recharged” and motivated. For her, the technology wasn’t eerie—it was comforting.
But that emotional relief comes with a complicated underside. Grief is a delicate psychological process. Traditionally, it involves acceptance, gradual detachment, and adaptation to loss. AI avatars disrupt that process by creating an illusion of continuity. Instead of saying goodbye, users can keep interacting—blurring the line between memory and simulation.
Critics argue that this could delay emotional healing or even create dependency, where individuals prefer interacting with a digital reconstruction over facing reality. In extreme cases, it may reshape how humans process loss altogether.
A Billion-Yuan Boom
The emotional complexity hasn’t slowed adoption. In fact, it’s fueling a booming market. According to Xinhua News Agency, China’s AI digital human industry reached approximately 4.1 billion yuan (around $600 million) in 2024. Even more striking is the growth rate: an 85% year-over-year increase.
This isn’t just about digital resurrection. The broader “AI human” ecosystem includes:
- Virtual influencers promoting brands
- AI livestream hosts interacting with audiences
- Digital customer service representatives
- Personalized AI companions
These avatars are increasingly indistinguishable from real humans in voice, appearance, and behavior. Advances in generative AI, voice cloning, and real-time rendering have pushed realism to a point where casual users may struggle to tell what’s real and what isn’t.
And that’s where the risks begin to multiply.
The Line Between Reality and Simulation
As AI-generated humans become more lifelike, the distinction between authentic and artificial starts to erode. This creates a new category of risk—not just misinformation, but emotional and psychological manipulation.
Imagine receiving a message from a loved one who has passed away. Even if you know it’s artificial, the emotional response can feel real. Over time, repeated exposure can blur cognitive boundaries, making it harder to separate memory from simulation.
This concern isn’t limited to grief. Digital humans can also:
- Impersonate real individuals without consent
- Spread misinformation through realistic avatars
- Manipulate users in marketing or political contexts
The technology doesn’t just replicate humans—it replicates trust. And trust, once automated, becomes a powerful tool.
Beijing’s Regulatory Response
Recognizing these risks, regulators are stepping in. The Cyberspace Administration of China has introduced draft regulations aimed specifically at AI-generated humans.
These rules attempt to draw clear boundaries in a rapidly evolving space. Key provisions include:
- Mandatory labeling of AI-generated content
- Explicit consent requirements for replicating a person’s likeness
- Restrictions on harmful or misleading content
- Financial penalties ranging from 10,000 to 200,000 yuan for violations
The goal is not to stop innovation, but to contain its potential harm. By enforcing transparency and accountability, regulators hope to prevent misuse while allowing the industry to grow. This approach reflects a broader pattern in China’s tech ecosystem: rapid innovation followed by equally rapid regulation.
Innovation First, Responsibility Second
China has a history of aggressively scaling new technologies—whether it’s e-commerce, mobile payments, or AI—before introducing regulatory frameworks to manage their impact.
The digital human industry is following the same trajectory.
Developers and companies are now adapting to the new rules by:
- Building consent verification systems
- Embedding visible AI labels into content
- Creating moderation tools to detect misuse
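As a rough illustration of what the first two adaptations might look like in practice, the sketch below attaches a visible AI-generated label and a verifiable consent fingerprint to a piece of content. This is a minimal, hypothetical example, not any official CAC schema; every field name and function here is an assumption.

```python
import hashlib
import json

def label_ai_content(payload: str, creator_id: str, consent_record: str) -> dict:
    """Wrap generated content with a visible AI label and a consent fingerprint.

    Hypothetical structure for illustration only; field names are not drawn
    from any official specification.
    """
    return {
        "content": payload,
        "label": "AI-generated",  # visible marker, per mandatory-labeling rules
        "creator": creator_id,
        # Hash of the stored consent record, so a likeness-use grant can be
        # verified later without exposing the underlying signed document.
        "consent_sha256": hashlib.sha256(consent_record.encode("utf-8")).hexdigest(),
    }

tagged = label_ai_content("Synthesized greeting clip", "studio-001", "consent-form-v1:signed")
print(json.dumps(tagged, indent=2, ensure_ascii=False))
```

Hashing the consent record rather than embedding it is one plausible design choice: a regulator or platform can confirm a label matches a stored grant without the metadata itself leaking personal documents.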
Interestingly, many industry leaders don’t see regulation as a threat. Instead, they view it as necessary for long-term sustainability. Without clear rules, public trust could erode, slowing adoption and limiting growth. In this sense, regulation becomes an enabler rather than a constraint.
The Ethics of Digital Resurrection
Beyond regulation lies a deeper philosophical question: should we be doing this at all? Recreating a deceased person raises issues that go far beyond technology:
- Consent: Did the person agree to be digitally recreated?
- Identity: Is the avatar truly “them,” or just a simulation?
- Ownership: Who controls the digital version of a human being?
- Impact: What does this do to the living?
Unlike other AI applications, digital resurrection operates in an emotionally charged space. It doesn’t just solve problems—it reshapes human experience. Some argue it offers therapeutic value, providing comfort and closure. Others see it as a distortion of grief, replacing acceptance with illusion. Both perspectives can be true at the same time.
A Glimpse of the Global Future
While China is currently leading in scale and adoption, this trend is unlikely to remain confined within its borders. The underlying technologies—AI voice synthesis, generative avatars, and large language models—are advancing globally.
Companies in the United States, Europe, and elsewhere are already experimenting with similar concepts, from AI memorial services to virtual companions trained on personal data. What sets China apart is speed. The combination of market demand, technological capability, and regulatory direction has accelerated adoption to a level the rest of the world is only beginning to explore.
As other countries catch up, they will face the same questions:
- How do you regulate something so deeply personal?
- Where do you draw the line between innovation and exploitation?
- Can technology replicate human presence without undermining human reality?
What Comes Next?
The draft regulations from the Cyberspace Administration are currently open for public feedback, and stricter enforcement is expected once they are finalized. This will likely shape not only China’s domestic market but also global standards. If successful, China’s model could become a blueprint for other nations—balancing rapid innovation with structured oversight.
At the same time, the technology itself will continue to evolve. Future iterations of digital humans may become even more realistic, more interactive, and more integrated into daily life. The question is not whether this industry will grow. It will. The real question is whether society can adapt to it.
The Human Cost of Digital Immortality
At its core, the rise of AI-generated humans forces us to confront something fundamental: what it means to be human in a digital age. For centuries, death has been a final boundary. Memory was the only bridge between the living and the lost. Now, technology is attempting to extend that bridge—turning memory into interaction, and absence into presence.
But presence without reality comes at a cost. Digital hauntings, as some have begun to call them, are not just about technology. They are about emotion, identity, and the fragile ways humans cope with loss. In trying to preserve connection, we may be redefining it entirely. And whether that leads to healing or harm is a question we are only just beginning to answer.
