Last Thursday, the Select Committee investigating the January 6, 2021, insurrection issued subpoenas to Meta (formerly Facebook), Alphabet (Google’s parent company), Twitter, and Reddit. The subpoenas follow requests for information made in August last year. The four major tech companies have been accused of dragging their heels, failing to deliver documents that the Committee deems essential to the investigation.
Specifically, Committee members have requested documents that reveal communications from key figures in the insurrection, many of whom are prominent Republicans, including Donald Trump. The Committee also seeks documents detailing the extent to which the companies attempted to moderate the effects of harmful content, and the ways in which they may have exacerbated the spread of violent rhetoric and disinformation. The demands have largely come from Democrats.
Republicans responded to the earlier requests for information in 2021 by threatening the companies that chose to comply. Data requests had been sent to 35 companies, including phone service providers, asking them to preserve relevant phone records and texts, amongst other communications. Republican House Minority Leader Kevin McCarthy suggested that the companies would be breaching federal law by handing over the requested information. Democrats in turn accused McCarthy of impeding the investigation.
Breaking: The Jan. 6 Committee has issued subpoenas to Alphabet, Meta, Reddit, and Twitter for records relating to the spread of misinformation, efforts to overturn the 2020 election, domestic violent extremism, and foreign influence in the 2020 election.
— Kyle Griffin (@kylegriffin1) January 13, 2022
Representative Bennie G. Thompson, a Democrat and the chairman of the inquiry, made clear that the Committee’s patience with the four companies was running out over their failure to turn over documents in time.
In open letters to the four companies, Thompson detailed their roles in the January 6 insurrection
In a letter to Parag Agrawal, Twitter’s CEO, Thompson described how “subscribers reportedly used the platform for communications regarding the planning and execution of the assault on the United States Capitol, and Twitter was reportedly warned about potential violence being planned.” The letter also highlighted the proliferation of election fraud conspiracies, and specifically referenced a Donald Trump tweet encouraging his supporters to attend a “wild” protest on the day of the insurrection.
The Committee has called out Reddit for its role in allowing infamous pro-Trump subreddits to flourish, in which glorification of violence, hate speech, and harassment were rampant. One of these subreddits, explicitly mentioned in Thompson’s letter to Reddit CEO Steven Huffman, migrated to a standalone website after Reddit eventually shut it down in mid-2020; significant planning for the insurrection later took place on that site.
In his letter to Meta, Thompson drew attention to the way Facebook was also used to share messages of “hate, violence, and incitement,” as well as misinformation and election fraud conspiracies. The letter referenced documents leaked by the whistleblower Frances Haugen, a former Facebook employee, who exposed the fact that “Facebook disbanded the Civic Integrity team that had focused on risks to election including misinformation, and reduced the application of tools used to restrain the spread of violent content.”
YouTube, owned by Alphabet, has been identified by the Committee as a key facilitator of election-related disinformation and a home for the extremist content that helped incite the attack on the Capitol. The letter to Sundar Pichai, CEO of Google and Alphabet, cited as an example the fact that “Steve Bannon live-streamed his podcast on YouTube in the days before and after January 6, 2021.” Bannon, a notorious right-wing nationalist and Trump’s former chief strategist, was a key proponent of election fraud claims.
The letter also drew attention to the fact that “live-streams of the attack appeared on YouTube as it was taking place” and that “To this day, YouTube is a platform on which user video spread misinformation about the election.”
Fact-checking organisations have long campaigned for greater accountability from platforms hosting extremist content
The day before the subpoenas were announced, YouTube came under fire from the International Fact-Checking Network, a coalition of 80 fact-checking organisations from around the world. In an open letter to Susan Wojcicki, YouTube’s CEO, the platform was described as one of “the major conduits of online disinformation and misinformation worldwide.” According to the signatories, “YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves.”
Given that a large proportion of views… come from its own recommendation algorithm, YouTube should make sure it does not actively promote disinformation or recommend content coming from unreliable channels. #YoutubeOpenLetter @factchecknet
READ: https://t.co/hGTEOimrPM pic.twitter.com/9SClhtZA5a
— Rappler (@rapplerdotcom) January 13, 2022
Much of the hesitance towards greater moderation of extremist or dangerously misleading content has stemmed from fears of empowering sites like YouTube to decide what is and is not acceptable. Those who make this argument warn of the implications for free speech, given that such sites, although privately owned, have become the main public stage for political expression.
Those fearing censorship of legitimate political speech may point to examples like TikTok’s censorship of Xinjiang-related content. These fears have to be balanced, however, against the very real and tangible effects of extremist political content online, which has incited violence resulting in loss of life on a staggering scale in places like Myanmar and India. One answer to this objection is to draw a line between content that blatantly encourages hate and violence against vulnerable communities and content that simply expresses a controversial view.
Another important point, raised by the International Fact-Checking Network, is that the question of moderation does not have to be framed “as a false dichotomy of deleting or not deleting content.” The letter argues that “surfacing fact-checked information is more effective than deleting content.” To achieve this, the signatories recommend, amongst other measures, that “YouTube’s focus should be on providing context and offering debunks, clearly superimposed on videos or as additional video content.”
Such fact-checking measures, whilst effective at countering misinformation about the efficacy of vaccines and pandemic safety measures, may be less suited to dealing with the openly violent and hateful speech that led to the insurrection and that threatens minorities across the world.
In the case of emotionally charged videos and posts that stir up hatred and anger, removing content and banning users may be a more appropriate response. De-platforming has proven effective in the past, vastly diminishing the influence of figures like Alex Jones. To tackle dangerous material online effectively, platforms should focus on responding more efficiently to user reports and to calls from concerned parties regarding hate speech, whilst concurrently promoting clarifying information to counter blatantly erroneous claims.