There has been a noticeable increase in antisemitism and hate speech on social media in recent years. Twitter, in particular, has experienced a surge in such content since Elon Musk bought the platform.
Elon Musk, who identifies as a “free speech absolutist,” made various changes when he took control of Twitter. One of these was the dissolution of Twitter’s independent Trust and Safety Council, which provided guidance on dealing with harmful behaviour on the platform.
Now a case has been brought against the social media giant in a Berlin court by HateAid, a German organisation that advocates for human rights in the digital space, and the European Union of Jewish Students (EUJS).
The case could set new standards for the scrutiny of online antisemitism and hate speech.
The German Case Against the Social Media Platform
The claimants want the German court to clarify “whether users can demand removal of punishable content such as, for example, denials of the Shoah, even when they are not themselves insulted or threatened” and “whether NGOs such as HateAid or EUJS are likewise entitled to demand deletion of punishable content in this way.”
The legal head of HateAid, Josephine Ballon, has stated that the purpose of their legal action is to encourage Twitter to take greater responsibility for the content posted on their platform.
She commented: “Freedom of expression does not just mean the absence of censorship but ensuring that Twitter is a safe space for users who can be free of fear of being attacked or receiving death threats or holocaust denial. If you are a Jewish person on Twitter then the sad reality is that it is neither secure nor safe for you.”
In January this year, HateAid reported six antisemitic or racist tweets to Twitter. Some explicitly denied the Holocaust, others compared Covid-19 vaccines to life in the Auschwitz concentration camp; another stated: “blacks should be gassed and sent with space x to Mars.”
Although these tweets clearly violated the company’s moderation policy, Twitter did not remove them.
We are suing @Twitter!
Today, @EUJS and @HateAid announced that we are suing Twitter for neglecting to remove reported hateful content from its platform which seeks incitement of the people.
It is time to hold social media platforms responsible.#TwitterTrial pic.twitter.com/ZqD4tbuLZ3
— EUJS – European Union of Jewish Students (@EUJS) January 25, 2023
Twitter’s moderation policy states: “We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.”
Furthermore, the policy explicitly cites hateful references to the Holocaust as an example of prohibited speech.
Twitter has been made aware of the legal action and is reported to have taken steps to block the offending tweets.
Twitter Blue: Giving Hate an Even Further Reach
Elon Musk reintroduced Twitter Blue in November 2022. The subscription gives users a blue checkmark along with early access to new site features, longer tweets, the ability to edit tweets, and fewer ads.
In June 2023, researchers from the Center for Countering Digital Hate (CCDH) revealed that Twitter failed to take action on 99% of hate posts by Twitter Blue subscribers.
Researchers collected tweets promoting hate from 100 Twitter Blue subscribers and reported them using Twitter’s own tools for flagging hateful conduct. Twitter left 99 of the 100 tweets online.
This, the researchers conclude, suggests that “the platform is allowing them to break its rules with impunity and is even algorithmically boosting their toxic tweets.”
Hate Speech on Twitter: A Continuing Problem
This is not the first time Twitter has been criticised for its inadequate content moderation and for allowing hate speech to thrive on the platform.
In June this year, Australia issued a legal warning to the company for failing to remove hateful speech, threatening a fine of 475,685 US dollars a day if Twitter did not provide information on what it was doing to prevent hateful content.
“Twitter appears to have dropped the ball on tackling hate,” Julie Inman Grant, the Australian eSafety Commissioner, said. “A third of all complaints about online hate reported to us are now happening on Twitter. We are also aware of reports that the reinstatement of some of these previously banned accounts has emboldened extreme polarisers, peddlers of outrage and hate, including neo-Nazis both in Australia and overseas.”
The United Nations, too, called out Musk and other heads of social media platforms in January this year, saying they “must urgently address posts and activities that advocate hatred, and constitute incitement to discrimination, in line with international standards for freedom of expression.”
The German legal action further highlights the need for social media platforms to address hate speech and create safer user environments.
It is hoped that a ruling in the case will provide clarity on the law surrounding online hate speech and on platforms’ obligations to enforce their own policies.
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Featured Photo: Twitter Logo on iPhone Home screen. Featured Photo Credit: Brett Jordan.