The headlines sounded dire. “China will use AI to disrupt elections in the US, South Korea and India, Microsoft warns” one read. “China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US,” another claimed.

The headlines were based on a report published earlier this month by Microsoft’s Threat Analysis Center, which outlined how a Chinese disinformation campaign was now utilizing artificial intelligence to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan’s elections, uses AI-generated audio and memes designed to grab user attention and boost engagement.

But what these headlines and Microsoft itself failed to adequately convey is that the Chinese government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been virtually ineffective.

“I would describe China’s disinformation campaigns as Russia 2014. As in, they’re 10 years behind,” says Clint Watts, the general manager of Microsoft’s Threat Analysis Center. “They’re trying lots of different things but their sophistication is still very weak.”

Over the last 24 months, the campaign has switched from pushing predominantly pro-China content to more aggressively targeting US politics. While these efforts have been large-scale and spread across dozens of platforms, they have largely failed to have any real-world impact. Still, experts warn that it could take just a single post amplified by an influential account to change all of that.

“Spamouflage is like throwing spaghetti at the wall, and they are throwing a lot of spaghetti,” says Jack Stubbs, chief intelligence officer at Graphika, a social media analysis company that was among the first to identify the Spamouflage campaign. “The volume and scale of this thing is huge. They’re putting out multiple videos and cartoons every day, amplified across different platforms at a global scale. The vast majority of it, for the time being, appears to be something that doesn’t stick, but that doesn’t mean it won’t stick in the future.”

Since at least 2017, Spamouflage has been ceaselessly spewing out content designed to disrupt major global events, on topics as diverse as the Hong Kong pro-democracy protests, the US presidential elections, and the Israel-Hamas war. Part of a wider multibillion-dollar influence operation by the Chinese government, the campaign has used millions of accounts on dozens of internet platforms, ranging from X and YouTube to fringe platforms like Gab, where it has been trying to push pro-China content. It has also been among the first to adopt cutting-edge techniques such as AI-generated profile pictures.

Even with all of this investment, experts say the campaign has largely failed for a number of reasons: a poor grasp of cultural context, China’s online separation from the outside world via the Great Firewall, a lack of joined-up thinking between state media and the disinformation campaign, and the use of tactics designed for China’s own heavily controlled online environment.

“That’s been the story of Spamouflage since 2017: They’re massive, they’re everywhere, and nobody looks at them except for researchers,” says Elise Thomas, a senior open source analyst at the Institute for Strategic Dialogue who has tracked the Spamouflage campaign for years.
