“Yes, we’ve seen it.”
When asked, “Have you encountered exaggerated or fake AIGC on social media platforms?” every respondent gave the same answer. Behind this unanimous reply lies a “crisis of authenticity” spreading across global social media—from fabricated emotionally charged images to mass-produced low-quality videos, AI-generated content has shifted from technological spectacle to everyday experience, repeatedly testing the boundaries of our perception of reality.
“It’s not just humans who publish fake content; AI does so at an unstoppable, mass-production pace,” one interviewee pointed out succinctly. The content ecosystem of social media is undergoing a structural transformation: what Meta CEO Mark Zuckerberg referred to as “the third phase centred on AI” has arrived, with machine-generated content accounting for a growing proportion of the information users encounter.
The proliferation of AI-generated content not only dilutes information quality but also blurs the line between reality and fabrication—precisely the pervasive malaise of today’s social media.
“Perhaps only by linking accounts to real individuals can we raise the barrier to forgery,” some respondents noted. Confronted with the AI “black box,” many platforms are attempting to introduce paid verification, biometric identification, and other measures, striving to counter the “replicability of machines” with the “uniqueness of humans.”
Yet technology continues to evolve rapidly. Today’s AI can simulate human interaction behaviours, bypass CAPTCHAs, and even generate logically rigorous “professional discourse.” Although YouTube has committed to cleaning up low-quality AI content, studies show that roughly 20% of the videos its algorithm recommends to new users are AI-generated. As governance lags behind technological advances, some users have already chosen to “vote with their wallets”—turning to niche, human-verified communities. This may signal that future social media will stratify according to levels of trust.
“AI arrived too quickly—too fast for us to react,” admitted one respondent. Scholars warn that prolonged exposure to massive volumes of low-quality, meaningless AI content may lead to “cognitive fatigue,” characterised by scattered attention and dulled judgment. And when fabricated content encroaches on public issues—such as political manipulation or disaster misinformation—the harm extends far beyond individual cognition, directly undermining the foundation of societal trust.
A deeper contradiction emerges: if “authenticity” can only be verified through additional payment or technical tools, is the democratisation of information being eroded? As AI not only mimics reality but also elicits empathy through emotional narratives, the human judgment system—reliant on experience and intuition—faces unprecedented challenges.