In Europe’s battle against online disinformation, Twitter is falling behind. On February 9, top European Commission officials reprimanded the company for turning in an incomplete report as part of its obligations as a signatory of the 2022 Code of Practice on Disinformation.
The Code of Practice on Disinformation is a voluntary agreement between online platforms, advertisers, and fact-checkers (among others) operating in the European Union (EU). It was first conceived in 2018 and strengthened in 2022, and aims to combat the spread of disinformation online.
The strengthened 2022 Code aims to achieve this by cutting the financial incentives for spreading disinformation, making political advertising more transparent, empowering users to spot disinformation and researchers to investigate it, and ensuring consistent fact-checking.
Although the Code is voluntary, adhering to it is beneficial in light of the EU’s new Digital Services Act (DSA), which will come into force as soon as September for “very large online platforms” – those with more than 45 million users in the EU – a category that includes Twitter.
The larger the size, the greater the responsibilities of online platforms.
Today is the deadline set in the Digital Services Act for all online platforms and online search engines operating in the EU to publish their number of active monthly users.
— Johannes Bahrke (@jb_bax)
— European Commission 🇪🇺 (@EU_Commission) February 17, 2023
As part of the Code, the 34 signatories, including Meta, Google, Microsoft and TikTok, have so far published baseline reports, covering the period up to January 2023, in the Code’s new Transparency Centre, where they serve to assess each company’s implementation of the Code’s 44 commitments. The next reports are due in July.
Every company submitted a full report apart from Twitter, whose submission was “short of data, with no information on commitments to empower the fact-checking community” against disinformation, according to Thierry Breton, the EU Commissioner for the Internal Market.
“It comes as no surprise that the degree of quality vary greatly according to the resources companies have allocated to this project.”
– Thierry Breton
Breton’s comments may stem from the fact that Twitter concluded its Code report by attributing the lack of “granular data” partly to “resource constraints and data limitations.”
Although Elon Musk has faced an onslaught of criticism since his $44 billion acquisition of the social media platform, Twitter in fact joined the Code before he bought it.
Musk, a self-proclaimed free speech advocate, fired approximately half of the company’s employees in November, including many of the platform’s content moderators. Although the moderation task force rushed to reassure users that it had not been hit as hard as other departments, several moderator resignations followed the mass sackings.
In December, Musk disbanded the platform’s Trust and Safety Council, a group of around 100 external advisors on tackling hate speech and other harmful behaviour on the social media site.
Since then, Twitter has announced plans to withdraw its free API access, a significant blow to the many researchers who rely on the platform’s open-access data to inform their studies on social media.
👋Two years and 27k downloads later, it looks like the Academic API endpoint is going paid…
This represents a great shame, of course, for the academic research community.
Twitter *were* a model of how to enable open, transparent research…
— Christopher Barrie (@cbarrie) February 13, 2023
This once again highlights the concerns raised about non-state actors having exclusive control over public online platforms.
Online disinformation has been prevalent during the various crises of recent years, including the coronavirus pandemic and the climate crisis. Of most recent concern to the European Commission, perhaps, is the Russia-Ukraine war.
As Věra Jourová warns, “Russia is engaged also in a full-blown disinformation war and the platforms need to live up to their responsibilities.”
🇷🇺 engages in a full-blown #disinformation war & platforms need to live up to their responsibilities!
1⃣st reports of the revamped anti-disinformation Code is an important milestone.
We cannot rely on the platforms alone for the quality of information, we need more insight.
— Věra Jourová (@VeraJourova) February 9, 2023
What is Russia’s “disinformation war”?
Russia’s disinformation campaigns considerably predate the ongoing war, stretching back to its annexation of Crimea in 2014.
That year, Russia created a huge number of automated, or “fake,” social media accounts that purported to be Western but pushed pro-Russian content.
Between 2014 and 2016, the Kremlin-backed Internet Research Agency (IRA), a so-called “troll farm,” automated a huge volume of social media content aimed at influencing the 2016 US election in favour of the Republican candidate, Donald Trump.
On Facebook alone, the IRA was able to reach 126 million users, although it was also present on other social media platforms, including Instagram, Twitter and Tumblr.
What’s more, the IRA’s influence did not remain solely online – through accounts that claimed to be run by American activists, it managed to mobilise hundreds of people at rallies in major US cities such as Philadelphia and Miami.
Russia’s online influence has continued throughout the 2022 Russia-Ukraine war, and the Code has asked signatories to clarify how they are responding to this.
According to Meta’s report, since the start of the war in early 2022 it has removed three distinct Russian networks – in February, August and September of that year – each larger than the last, and all targeting online discourse concerning the war.
The February network was relatively small: around 40 accounts, pages and groups, operating out of both Russia and Ukraine, were removed from Facebook and Instagram for targeting people in Ukraine. When Meta disrupted the network, fewer than 4,000 Facebook accounts and fewer than 500 Instagram accounts followed one or more of its pages.
The August network was operated out of a troll farm in St. Petersburg, and had just over 1,000 Instagram accounts and 45 Facebook accounts, with 49,000 followers. Instead of focusing solely on Ukrainians, this network targeted global public discourse about the war.
Meta stated that the September network was “the largest and most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine.”
This network operated primarily on Facebook, where it had 1,633 accounts, 703 pages and one group, plus a further 29 accounts on Instagram. It chiefly targeted Germany, though it also operated in France, Italy, Ukraine and the UK.
This network ran more than 60 websites that impersonated legitimate European news organisations, including Der Spiegel, The Guardian, Bild and ANSA.
Meta pointed out this network’s unusual combination of sophistication and brute force. While its spoofed websites and use of many languages demanded significant technical and linguistic investment, the network’s social media amplification relied on “crude ads and fake accounts,” the majority of which were detected and removed by automated systems before Meta’s security team began its investigation.
However, despite the size of the operation, the September network had relatively few followers: only 4,000 for the Facebook pages, 10 members in the group, and 1,500 for the Instagram accounts.
Meta and TikTok have both applied labels to indicate Russian state-controlled media, and Meta has moved content originating from these channels lower down newsfeeds. However, between October and December alone, posts with these labels were viewed more than 202 million times on TikTok.
Meta, TikTok, and Google have also all limited access to Russian state-funded media since the start of the war. In addition, both Meta and Google have rolled out media literacy resources and programmes, but with varying success.
Google limited access by updating its Ads Sensitive Events Policy, which prohibits advertisements that potentially profit from or exploit sensitive events with significant social, cultural or political impact, and which now specifically mentions the Russia-Ukraine war. Meta, for its part, prohibited advertisements from Russian accounts anywhere in the world.
Although TikTok does not allow political advertising at all, it noted a sharp increase in political advertising bids following the start of the war.
TikTok’s Code reporting focuses mainly on the final quarter of 2022, in which the app reported removing 1,292 videos related to the Russia-Ukraine war. Of these, 1,027 were removed proactively – without being referred by a third party.
TikTok also hired native Ukrainian and Russian speakers to help with content moderation, and, like Meta and Google, limited access to Russian state-controlled media and provided greater clarity around it.
Meanwhile, Twitter’s strategy for combating misinformation now appears to consist mostly of a user-led approach through its Community Notes feature, which effectively crowdsources fact-checking.
Users can attach notes with helpful context to a post, and other contributors can then rate each note’s usefulness. If a note is rated helpful by enough contributors from “diverse viewpoints” (determined by how those contributors have rated past notes), it is shown publicly.
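To illustrate how that “diverse viewpoints” requirement can work, here is a minimal sketch in the spirit of the bridging-based ranking described in Twitter’s public Community Notes documentation. It models each rating as a global intercept plus a contributor intercept, a note intercept, and a one-dimensional viewpoint term, then shows a note only if its own intercept clears a threshold. The function name, parameter values and the 0.4 threshold are illustrative assumptions, not Twitter’s production implementation.

```python
# Illustrative sketch of "bridging-based" note ranking. All names and
# parameter values here are assumptions for demonstration purposes.
import numpy as np

def rank_notes(ratings, n_epochs=500, lr=0.1, reg=0.03, threshold=0.4):
    """ratings: (contributor_id, note_id, rating) triples,
    where rating is 1.0 (helpful) or 0.0 (not helpful)."""
    ratings = np.asarray(ratings, dtype=float)
    users = ratings[:, 0].astype(int)
    notes = ratings[:, 1].astype(int)
    r = ratings[:, 2]
    n_users, n_notes = users.max() + 1, notes.max() + 1
    cnt_u = np.maximum(np.bincount(users, minlength=n_users), 1)
    cnt_n = np.maximum(np.bincount(notes, minlength=n_notes), 1)

    mu = r.mean()                       # global intercept
    b_u = np.zeros(n_users)             # contributor leniency
    b_n = np.zeros(n_notes)             # note's cross-viewpoint helpfulness
    rng = np.random.default_rng(0)
    f_u = rng.normal(0, 0.1, n_users)   # contributor "viewpoint" factor
    f_n = rng.normal(0, 0.1, n_notes)   # note "polarisation" factor

    for _ in range(n_epochs):
        pred = mu + b_u[users] + b_n[notes] + f_u[users] * f_n[notes]
        err = r - pred
        # Averaged gradient steps; regularisation pushes agreement that is
        # explained by a shared viewpoint into f rather than into b_n.
        b_u += lr * (np.bincount(users, err, n_users) / cnt_u - reg * b_u)
        b_n += lr * (np.bincount(notes, err, n_notes) / cnt_n - reg * b_n)
        f_u += lr * (np.bincount(users, err * f_n[notes], n_users) / cnt_u - reg * f_u)
        f_n += lr * (np.bincount(notes, err * f_u[users], n_notes) / cnt_n - reg * f_n)

    # A note is shown publicly only if it is still rated helpful once
    # viewpoint-driven agreement has been factored out.
    return b_n >= threshold
```

In a model like this, agreement that comes from a single like-minded cluster of contributors is absorbed by the viewpoint factors, so such a note’s intercept, and with it its visibility, stays low; only notes rated helpful across clusters clear the threshold.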
This community-led approach is quite different from the teams of independent fact-checkers that other social media platforms are putting forward.
As Twitter lets the side down, TikTok puts its best foot forward
The Russia-Ukraine war has been dubbed the “first TikTok war,” so the platform’s handling of it shapes how the company is seen to handle ongoing global matters. This report, however, is crucial for TikTok for another reason.
TikTok, which is owned by Chinese parent company ByteDance, has come under international scrutiny over various data security breaches, most notably the tracking of US journalists whose data was accessed from China.
In the US, Federal Bureau of Investigation (FBI) Director Christopher Wray described the social media platform as the “biggest long-term threat to our economic and national security.”
Distrust is not limited to the far side of the Atlantic. In January, TikTok’s CEO, Shou Zi Chew, met with a number of senior officials of the European Commission over concerns about data security within the EU.
Following their meeting, Jourová tweeted that she expected TikTok to “go the extra mile in respecting EU law and regaining the trust of European regulators.”
I count on #TikTok to fully execute its commitments to go the extra mile in respecting EU law and regaining trust of European regulators.
There cannot be any doubt that data of users in Europe are safe and not exposed to illegal access from third-country authorities.
— Věra Jourová (@VeraJourova) January 10, 2023
In the same Twitter thread, she anticipated the first report for the Code, saying “Transparency will be a key element.”
TikTok certainly took the warning seriously and submitted a full and thorough report. While Meta often deferred data to the second report, due in July, TikTok’s report is comparatively detailed. It consistently emphasises the platform’s non-political role and its mission to “inspire creativity and bring joy.”
On the subject of promoting authoritative and trustworthy sources, TikTok emphasised that it was “primarily an entertainment platform,” although individual users may choose to engage with news-type content.
As for demonetisation, TikTok insisted early on that, despite its exponential growth, its creator monetisation opportunities were at a relatively early stage of maturity compared with other platforms in the industry.
With the eyes of the European Commission on it, it’s no wonder that TikTok is putting its best foot forward as a signatory of the Code.
Overall, Twitter’s incomplete data does not in itself prove that the company is failing to meet the EU’s expectations on countering disinformation, but its lack of dedicated staff is not promising. Musk’s company may find itself facing the harsher end of the DSA in the coming months, without the manpower to handle it.
Meanwhile, the insight these reports provide into countering misinformation, particularly in the context of the unfolding Russia-Ukraine war, is a valuable resource for understanding not only how disinformation is tackled, but also how it spreads. Twitter’s continued participation in the Code, and the contents of its next report, will be worth watching closely.
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Featured Photo: People on mobile phones. Featured Photo Credit: Camilo Jiminez.