The headlines sounded dire: "China Will Use AI to Disrupt Elections in the US, South Korea and India, Microsoft Warns." Another claimed, "China Is Using AI to Sow Disinformation and Stoke Discord Across Asia and the US."
They were based on a report published earlier this month by Microsoft's Threat Analysis Center, which outlined how a Chinese disinformation campaign is now using artificial intelligence to inflame divisions and disrupt elections in the US and around the world. The campaign, which has already targeted Taiwan's elections, uses AI-generated audio and memes designed to grab user attention and boost engagement.
But what these headlines and Microsoft itself failed to adequately convey is that the Chinese-government-linked disinformation campaign, known as Spamouflage Dragon or Dragonbridge, has so far been virtually ineffective.
"I would describe China's disinformation campaigns as Russia 2014. As in, they're 10 years behind," says Clint Watts, the general manager of Microsoft's Threat Analysis Center. "They're trying lots of different things but their sophistication is still very weak."
Over the past 24 months, the campaign has switched from pushing predominantly pro-China content to more aggressively targeting US politics. While these efforts have been large-scale and spread across dozens of platforms, they have largely failed to have any real-world impact. Still, experts warn that it could take just a single post amplified by an influential account to change all of that.
"Spamouflage is like throwing spaghetti at the wall, and they are throwing a lot of spaghetti," says Jack Stubbs, chief information officer at Graphika, a social media analysis company that was among the first to identify the Spamouflage campaign. "The volume and scale of this thing is huge. They're putting out multiple videos and cartoons every day, amplified across different platforms at a global scale. The vast majority of it, for the time being, appears to be something that doesn't stick, but that doesn't mean it won't stick in the future."
Since at least 2017, Spamouflage has been ceaselessly spewing out content designed to disrupt major global events, including topics as diverse as the Hong Kong pro-democracy protests, the US presidential elections, and Israel and Gaza. Part of a wider multibillion-dollar influence campaign by the Chinese government, the campaign has used millions of accounts on dozens of internet platforms, ranging from X and YouTube to more fringe platforms like Gab, where the campaign has been trying to push pro-China content. It's also been among the first to adopt cutting-edge techniques such as AI-generated profile pictures.
Even with all of these investments, experts say the campaign has largely failed due to a number of factors: issues of cultural context, China's online partition from the outside world via the Great Firewall, a lack of joined-up thinking between state media and the disinformation campaign, and the use of tactics designed for China's own heavily controlled online environment.
"That's been the story of Spamouflage since 2017: They're massive, they're everywhere, and nobody looks at them except for researchers," says Elise Thomas, a senior open source analyst at the Institute for Strategic Dialogue who has tracked the Spamouflage campaign for years.
"Most tweets receive either no engagement and very low numbers of views, or are only engaged with by other accounts which appear to be a part of the Spamouflage network," Thomas wrote in a report for the Institute for Strategic Dialogue about the failed campaign in February.
Over the past five years, the researchers tracking the campaign have watched it change tactics, adopting video, automated voiceovers, and, most recently, AI to create profile images and content designed to inflame existing divisions.
The adoption of AI technologies is also not necessarily an indicator that the campaign is becoming more sophisticated, just more efficient.
"The primary affordance of these Gen AI products is about efficiency and scaling," says Stubbs. "It allows more of the same thing with fewer resources. It's cheaper and quicker, but we don't see it as a mark of sophistication. These products are actually incredibly easy to access. Anyone can do so with $5 on a credit card."
The campaign has also taken place on virtually every social media platform, including Facebook, Reddit, TikTok, and YouTube. Over the years, major platforms have purged their systems of hundreds of thousands of accounts linked to the campaign, including last year when Meta took down what it called "the largest known cross-platform covert influence operation in the world."
The US government has also sought to curb the effort. A year ago, the Department of Justice charged 34 officers of the Chinese Ministry of Public Security's "912 Special Project Working Group" for their involvement in an influence campaign. While the DOJ did not explicitly link the charges to Spamouflage, a source with knowledge of the event told WIRED that the campaign was "100 percent" Chinese state-sponsored. The source spoke on the condition of anonymity as they were not authorized to speak publicly about the information.
"A commercial actor would not be doing this," says Thomas, who also believes the campaign is run by the Chinese government. "They are more innovative. They would have changed tactics, whereas it's not unusual for a government communications campaign to persist for a really long time despite being useless."
For the past seven years, however, the content pushed by the Spamouflage campaign has lacked the nuance and audience-specific tailoring that successful nation-state disinformation campaigns from countries like Russia, Iran, and Turkey have displayed.
"They get the cultural context confused, which is why you'll see them make mistakes," says Watts. "They're in the audience talking about things that don't make sense and the audience knows that, so they don't engage with the content. They leave Chinese characters sometimes in their posts."
Part of this is the result of Chinese citizens being virtually blocked off from the outside world as a result of the Great Firewall, which allows the Chinese government to strictly control what its citizens see and share on the internet. This, experts say, makes it incredibly difficult for those running an influence operation to really grasp how to successfully manipulate audiences outside of China.
"They're having to adapt strategies that they might have used in closed and tightly controlled platforms like WeChat and Weibo, to operating on the open internet," says Thomas. "So you can flood WeChat and Weibo with content if you want to if you are the Chinese government, whereas you can't really flood the open internet. It's kind of like trying to flood the sea."
Stubbs agrees. "Their domestic information environment is not one that is real or authentic," he says. "They are now being tasked with achieving influence and affecting operational strategic impact in a free and authentic information environment, which is just fundamentally a different place."
Russian influence campaigns have also tended to coordinate across multiple layers of government spokespeople, state-run media, influencers, and bot accounts on social media. They all push the same message at the same time, something the Spamouflage operators don't do. This was seen recently when the Russian disinformation apparatus was activated to sow division in the US around the Texas border crisis, boosting the extremist-led border convoy and calls for "civil war" on state media, influencer Telegram channels, and social media bots all at the same time.
"I think the biggest problem is [the Chinese campaign] doesn't synchronize their efforts," Watts said. "They're just very linear on whatever their task is, whether it's overt media or some sort of covert media. They're doing it and they're doing it at scale, but it's not synchronized around their objectives because it's a very top down effort."
Some of the content produced by the campaign appeared to have a high number of likes and replies, but closer inspection revealed that those engagements came from other accounts in the Spamouflage network. "It was a network that was very insular, it was only engaging with itself," says Thomas.
Watts does not believe China's disinformation campaigns will have a material impact on the US election, but adds that the situation "can change nearly instantaneously. If the right account stumbles onto [a post by a Chinese bot account] and gives it a voice, suddenly their volume will grow."
This, Thomas says, has already happened.
One post came from a since-suspended X account that Thomas had been tracking, which referenced "MAGA 2024" in its bio. It shared a video from the Russian state-run channel RT alleging that President Joe Biden and the CIA had sent a neo-Nazi to fight in Ukraine, a claim that has been debunked by the investigative group Bellingcat. Like most Spamouflage posts, the video received little attention initially, but when it was shared by the account of school shooting conspiracist Alex Jones, who has more than 2.2 million followers on the platform, it quickly racked up hundreds of thousands of views.
"What is different about these MAGAflage accounts is that real people are looking at them, including Alex Jones. It's the most bizarre tweet I've ever seen," Thomas said.
Thomas says the account that was shared by Jones is different from typical Spamouflage accounts, because it was not spewing out automated content but seeking to organically engage with other users in a way that made it appear to be a real person, reminiscent of what Russian accounts did in the lead-up to the 2016 election.
So far, Thomas says she has found just four of these accounts, which she has dubbed "MAGAflage," but worries there may be a lot more operating under the radar that will be incredibly difficult to find without access to X's backend.
"My concern is that they will start doing this, or potentially are already doing this, at a really significant scale," Thomas said. "And if that is happening, then I think it will be very difficult to detect, particularly for external researchers. If they start doing it with new accounts that don't have those interesting connections to the Spamouflage network and if you then hypothetically lay on top of that, if they start using large language models to generate text with AI, I think we're in a lot of trouble."
Stubbs says that Graphika has been tracking Spamouflage accounts that have been attempting to impersonate US voters since before the 2022 midterms, and hasn't yet witnessed real success. And while he believes reporting on these efforts is important, he's concerned that these high-profile campaigns could obscure the smaller ones.
"We are going to see increasing amounts of public discussion and reporting on campaigns like Spamouflage and Doppelganger from Russia, precisely because we already know about them," says Stubbs. "Both those campaigns are examples of activity that is incredibly high scale, but also very easy to detect. [But] I am more concerned and more worried about the things we don't know."