As Russian tanks rolled across the Ukraine border on February 24, Russia's state-controlled or affiliated news organisations flooded social media with Kremlin disinformation narratives.
An army of automated Twitter accounts, or bots, stood ready to promote these posts and game the platform: liking tweets that pushed pro-Russian justifications for the war, from "denazification" to stopping genocide to shutting down bioweapons labs.
Within a week, Twitter had banned about 100 of these accounts for "coordinated inauthentic activity".
Now, QUT disinformation expert Tim Graham says he has found a much larger number hiding in plain sight.
Some without profile photos have names like "yes_imabot".
More than 800 accounts are almost certainly bot accounts, while thousands more are highly likely to be.
"I had the sense Twitter had mostly solved the worst extent of bot activity on their platform, especially for political bots," Dr Graham said.
"I was shocked to find massive bot networks that are liking this Russian disinformation."
Hundreds with 'perfect' bot score
Dr Graham found the network while investigating a loophole that allows official government accounts to spread disinformation without attracting scrutiny or warning labels.
Twitter has pledged to not recommend or otherwise amplify tweets from Russian state-affiliated media organisations, and to include a warning in every tweet that links to one of their stories: "Stay Informed: This Tweet links to a Russia state-affiliated media website".
But these rules do not apply to official government accounts, such as those of Russian embassies.
After reporting this apparent flaw in Twitter's disinformation response two weeks ago, Dr Graham looked further: Were bots liking these tweets so that they would be seen by more people?
Dr Graham focused on the accounts that had liked tweets from a list of 75 Russian government or embassy accounts. Of these, 16,513 were created in 2021 and 2022.
The chart below shows the day on which each of these relatively new accounts was created.
The massive spike around February 24, the day of the invasion, indicated some were probably bots, but was not conclusive.
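A creation-date chart like this can be reproduced in a few lines of Python. This is a minimal sketch, not Dr Graham's actual code: it assumes the liker accounts have already been exported to a CSV file, and the "likers.csv" name and "created_at" column are hypothetical stand-ins.

```python
# Sketch: chart how many liker accounts were created on each day.
# "likers.csv" and its "created_at" column are hypothetical; they stand
# in for an export of the accounts that liked the embassy tweets.
import pandas as pd
import matplotlib.pyplot as plt

accounts = pd.read_csv("likers.csv", parse_dates=["created_at"])

# Mirror the article's filter: accounts created in 2021 or 2022.
recent = accounts[accounts["created_at"] >= "2021-01-01"]

# Count creations per calendar day; a spike near February 24, 2022
# would stand out immediately.
daily = recent.set_index("created_at").resample("D").size()

daily.plot(figsize=(10, 4), title="Liker accounts created per day")
plt.ylabel("New accounts")
plt.tight_layout()
plt.show()
```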
Next, Dr Graham deployed a specialised software tool called Botometer, which uses a machine-learning algorithm to distinguish bot accounts from human ones by examining the features of a profile, including friends, social network structure, language, and sentiment.
The model gives each account a score from zero to one, with a score of one meaning the model is certain the account is a bot.
"When we ran this model and checked the result, there was clearly this huge spike of accounts which had almost a perfect bot score," Dr Graham said.
A further 9,423 had a score of 0.8 or more — meaning they're probably automated, or at least semi-automated.
"It was like bogies on a radar," Dr Graham said.
"It's kind of unprecedented."
Twitter says it's removed 75,000 accounts since war began
For at least five years, Twitter has been cracking down on the malicious use of bots to spread disinformation.
During the 2019-20 Australian bushfires, for instance, bots pushed the debunked theory that arson was the main cause, driving the hashtag #ArsonEmergency to trend at the height of the crisis.
During the COVID-19 pandemic, bots pushed misinformation about vaccine safety, as well as debunked theories that the virus was created in a lab or that the pandemic was being used to cover up the harmful effects of new 5G towers.
Studies have linked these disinformation bots to Russia, China, and Iran.
In the past few years, Twitter has deleted tens of millions of automated accounts.
Dr Graham said he was surprised Twitter had neither detected nor deleted the bot network he had found.
"Twitter tends to be on the front foot about this because they know how bad it can be for them if it comes out there's widespread platform manipulation," he said.
"It has such a huge amount of resources … it should be able to fairly comprehensively dragnet their entire platform."
In response, Twitter has cast doubt on Dr Graham's research.
A Twitter spokesperson said that research using Botometer could be flawed, and Twitter had access to more data, such as email addresses, to assess whether an account was a bot.
The spokesperson added that external researchers would not be aware of the extent to which Twitter had taken action on accounts in Dr Graham's dataset, such as filtering them from search results.
"We continue to proactively assess for inauthentic behaviour and other violations of our rules.
"When we identify content and accounts that violate the Twitter rules, we’ll take enforcement action."
Suspected bot networks remain active
Twitter appears to have been slow to act on previously identified bot networks spamming pro-Russian hashtags.
In March 2022, it banned more than 100 automated accounts that pushed the pro-Russian hashtag #IStandWithPutin.
But days later, Dr Graham found two large networks of thousands of suspected bot accounts were still pushing the hashtag.
He alerted Twitter, and the media covered the story.
But Twitter appears to have taken little action in the weeks since.
Of the accounts in the networks Dr Graham found, 4,868 had a Botometer score higher than 0.8.
Of those accounts, only 123 have since been suspended.
"Twitter are suspending the super obvious ones, but potentially missing a lot still," Dr Graham said.
"Hundreds of millions of eyeballs have seen this hashtag."
The fact the hashtag was trending in early March shows the power of bot accounts to manipulate platforms, he said.
"If it gets to the trending list, it captures journalists' attention and is certainly going to shape public attention somehow in a seismic way," he said.
A Twitter spokesperson said the company was aware of the attempt at platform manipulation.
"Our teams continue to investigate as part of our ongoing efforts," they said.
What about other platforms?
Facebook, TikTok and other platforms have also seen a stream of Russian propaganda, but don't have the same problem with bots.
Twitter's Application Programming Interface, or API, is more open and flexible than, for instance, Facebook's, which means third parties such as Dr Graham have more access to data generated by the platform.
It also means others have a greater ability to control or automate Twitter accounts.
"Twitter has a much bigger problem than other platforms, especially when it comes to automated activity, specifically because of their API," Dr Graham said.
Since the invasion of Ukraine, Meta, which owns Facebook, has barred Russian state-run media from selling ads on its platforms, and removed networks of accounts spreading misinformation.
But some argue this doesn't go far enough.
Although Facebook has promised to label state-controlled media as part of its efforts against disinformation, a recent study found it was failing to label 91 per cent of posts containing Russian propaganda against Ukraine.
Since 2017, Twitter has barred advertising from state-controlled Russian media outlets Russia Today (RT) and Sputnik.
(And it banned all state-backed news media advertising and political advertising in 2019.)
Ukrainian officials have pleaded with US tech giants to take action against Russian misinformation on their platforms and crack down on disinformation spread by Russian state-affiliated news outlets.
"In 2022, modern technology is perhaps the best answer to the tanks, multiple rocket launchers and missiles,” Ukraine’s minister of digital transformation, Mykhailo Fedorov, said two days after the invasion.
Russia's Twitter bot networks show this same modern technology is also wielded by the side with the tanks.