By Francesco Di Blasi
with the contribution of Mahmoud Javadi
Since the beginning of the war in Ukraine, on February 24, a large number of accounts whose main goal was to spread pro-Russian disinformation have been detected on Twitter. Many of these profiles are suspected to be bots, but a large part could also be managed by actual human beings acting in a coordinated way to spread false or misleading narratives about the conflict. This is what emerges from an exclusive investigation carried out by the EDMO task force on Ukraine, which analyzed accounts in EU member states, Switzerland, and the United Kingdom.
Disinformation about the war in Ukraine started circulating in Europe immediately after Russia invaded the country on February 24, and within a few weeks the topic became extremely popular among conspiracy theorists and their followers (link). To analyze this trend and curb the spread of false or misleading news about the war, EDMO monitored the Twitter accounts that, between March 15 and April 15, retweeted posts by the Russian embassies in Europe spreading disinformation or propaganda about the war in Ukraine.
Our analysis used “Botometer”, a free tool developed at Indiana University Bloomington that estimates the probability of an account being a bot, producing a score on a 0-to-1 scale, where zero is the most human-like and one the most bot-like. The study found that nearly a third of these accounts are likely to be bots. These Twitter profiles have spread pro-Kremlin narratives, such as the alleged “denazification” of Ukraine or the presence of American bioweapons labs in the country.
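For readers who want to reproduce this kind of check, below is a minimal sketch using Botometer’s public Python client. This is our illustration, not EDMO’s actual pipeline: the credentials and the account handle are placeholders, and the exact response fields may vary between Botometer versions.

```python
# Minimal sketch: querying Botometer for one account's bot score.
# Requires the "botometer" package (pip install botometer) plus
# Twitter API and RapidAPI credentials.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"  # placeholder
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",  # placeholder credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# "@example_account" is a hypothetical handle. The raw overall score is
# on the 0-to-1 scale described above (higher = more bot-like).
result = bom.check_account("@example_account")
print(result["raw_scores"]["english"]["overall"])
```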
The results show that 9% of these accounts are bots and a further 21% have a high probability of being bots. Only 29% of the profiles display fully human characteristics, while 41% show ambiguous behavior, probably attributable to a human being, though a degree of uncertainty remains.
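To illustrate how individual scores translate into this four-way breakdown, here is a small sketch that buckets 0-to-1 scores into the categories above. The threshold values are hypothetical, since the exact cut-offs used in the analysis are not published.

```python
from collections import Counter

def classify_score(score: float) -> str:
    """Bucket a 0-to-1 bot score into the four categories used above.

    The cut-offs are illustrative placeholders, not the ones actually
    used in the EDMO analysis.
    """
    if score >= 0.8:
        return "bot"
    if score >= 0.6:
        return "high probability of bot"
    if score >= 0.3:
        return "ambiguous"
    return "human"

# Toy scores, not real data.
scores = [0.05, 0.45, 0.70, 0.95]
print(Counter(classify_score(s) for s in scores))
```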
The presence of pro-Russian bots on social media – and on Twitter in particular – is not a new phenomenon. The best-known case is that of the so-called “Internet Research Agency”, an organization (later nicknamed the “troll factory”) that interfered in the 2016 US presidential election in favor of candidate Donald Trump, in the interest of the Kremlin. In 2018, Twitter found that the phenomenon involved over 50,000 Russia-linked bot accounts.
On March 4, 2022, Twitter banned about 100 accounts that had amplified the hashtag “#IstandwithPutin”, for engaging in “coordinated inauthentic behavior.” However, the data we hold suggests that the phenomenon in Europe involves a much larger number of accounts.
Our survey shows that 73% of the analyzed accounts posted their first tweet of their last six months of activity after February 24, the date of the Russian invasion.
On the one hand, this suggests that they might be accounts created specifically at the start of the war. On the other hand, it could also indicate coordination among a large number of “silent” accounts that suddenly became active at the beginning of the conflict. These hypotheses are further supported by the sharp increase, since February 24, in the number of tweets published by the examined profiles.
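The sketch below shows how such “reactivation” figures can be computed: for each account, take the earliest tweet in the six-month observation window and check whether it falls on or after the invasion date. The data structure and dates are toy examples, not the EDMO dataset.

```python
from collections import Counter
from datetime import date

INVASION = date(2022, 2, 24)

# Toy data: account -> tweet dates observed in the last six months.
tweets_by_account = {
    "acct_a": [date(2022, 3, 1), date(2022, 3, 2)],
    "acct_b": [date(2021, 12, 10), date(2022, 2, 25)],
    "acct_c": [date(2022, 4, 3)],
}

# Each account's first tweet within the window.
first_tweets = {acct: min(dates) for acct, dates in tweets_by_account.items()}

# Share of accounts whose first tweet came after the invasion began.
reactivated = sum(1 for d in first_tweets.values() if d >= INVASION)
print(f"{reactivated / len(first_tweets):.0%} posted their first tweet after February 24")

# Daily counts of first tweets: spikes in this series are the
# "reactivations of silent accounts" discussed below.
print(sorted(Counter(first_tweets.values()).items()))
```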
This behavior is compatible both with the activity of bots and with that of real accounts coordinated for a specific occasion, in this case the war. We consider it suspicious that a large number of inactive accounts became extremely active within a single day: this could be an action coordinated by a ministry, an organization, or an association of any kind.
In addition, it is clear that the profiles’ activity closely follows the developments of the war and is not limited to the casual publication of tweets. On April 3, when international media started reporting on the Bucha massacre, there was a very sizable increase in tweets posted, which peaked ten days later, on April 13. Furthermore, the day the crimes committed in Bucha became public coincided with the second-highest increase in “first tweets posted”, i.e. reactivations of silent accounts. A smaller but still significant increase of the same type occurred on February 24, the first day of the war.
Conclusion
Since the beginning of the war in Ukraine, a large number of Twitter accounts active in the EU, in the United Kingdom, and in Switzerland have started spreading false or misleading news about the conflict, in an effort to support pro-Russian propaganda.
Even though EDMO cannot verify the identity of this disinformation campaign’s architects, an analysis performed by the EDMO task force on Ukraine found that many pro-Russia profiles were inactive before February 24, and became extremely active right after that date. Consequently, these accounts are probably either bots specifically designed to spread propaganda about the war, or the result of a coordinated effort by an interested party.
Photo by Andreas Eldh, via Flickr