Impakter

Elon’s Twitter Ripe for a Misinformation Avalanche

Seeing might not be believing going forward as digital technologies make the fight against misinformation even trickier for embattled social media giants.

by Daniel Angus, Professor at the Queensland University of Technology and Director of its Digital Media Research Centre
March 27, 2023
in Society, Tech

In a grainy video, Ukrainian President Volodymyr Zelenskyy appears to tell his people to lay down their arms and surrender to Russia. The video — quickly debunked by Zelenskyy — was a deep fake, a digital imitation generated by artificial intelligence (AI) to mimic his voice and facial expressions.

A deepfake of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons was reportedly uploaded to a hacked Ukrainian news website today, per @Shayan86 pic.twitter.com/tXLrYECGY4

— Mikael Thalen (@MikaelThalen) March 16, 2022

High-profile forgeries like this are just the tip of what is likely to be a far bigger iceberg. There is a digital deception arms race underway, in which AI models are being created that can effectively deceive online audiences, while others are being developed to detect the potentially misleading or deceptive content generated by these same models.

Amid growing concern about AI text plagiarism, one model, Grover, is designed to discern news articles written by a human from those generated by AI.
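Grover's internals aren't shown here, but the general idea behind statistical detectors of machine-generated text can be sketched with a toy example: language-model output tends to be more statistically predictable than human writing, so text whose perplexity under a reference model is suspiciously low warrants scrutiny. The bigram model below is an illustrative assumption, not Grover's actual method.

```python
import math
from collections import Counter

def train_bigram(corpus_tokens):
    """Build an add-one-smoothed bigram model from a list of tokens
    (a stand-in for a reference corpus of human-written text)."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = len(unigrams)

    def logprob(prev, word):
        # Laplace smoothing gives unseen word pairs a small nonzero probability.
        return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))

    return logprob

def perplexity(logprob, tokens):
    """Per-token perplexity of `tokens` under the model:
    lower values mean the text is more predictable."""
    lp = sum(logprob(a, b) for a, b in zip(tokens, tokens[1:]))
    return math.exp(-lp / max(1, len(tokens) - 1))
```

A real detector like Grover uses a large neural language model rather than a bigram count table, but the scoring principle is similar: unusually predictable text is flagged as possibly machine-generated.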

As online trickery and misinformation surge, the armour that platforms built against them is being stripped away. Since taking over Twitter, Elon Musk has trashed the platform’s online safety division, and as a result misinformation is on the rise again.

Musk, like others, looks to technological fixes to solve his problems. He has already signalled plans to expand the use of AI in Twitter’s content moderation. But this approach is neither sustainable nor scalable, and it is unlikely to be a silver bullet.

Microsoft researcher Tarleton Gillespie suggests: “automated tools are best used to identify the bulk of the cases, leaving the less obvious or more controversial identifications to human reviewers.”
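The division of labour Gillespie describes can be sketched as a simple triage: an automated classifier scores each post, clear-cut cases are actioned automatically, and the ambiguous middle band is routed to human reviewers. The thresholds and scoring function here are hypothetical, illustrating the pattern rather than any platform's actual pipeline.

```python
def triage(posts, score_fn, remove_threshold=0.9, allow_threshold=0.1):
    """Route posts by an automated harm score: clear cases are
    auto-actioned, uncertain ones go to human reviewers."""
    auto_removed, auto_allowed, human_queue = [], [], []
    for post in posts:
        score = score_fn(post)          # 0.0 = clearly benign, 1.0 = clearly harmful
        if score >= remove_threshold:
            auto_removed.append(post)
        elif score <= allow_threshold:
            auto_allowed.append(post)
        else:
            human_queue.append(post)    # the "less obvious" cases
    return auto_removed, auto_allowed, human_queue
```

The design choice is in the thresholds: widening the middle band sends more borderline content to humans, trading moderation cost for fewer automated mistakes.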

Some human intervention remains in the automated decision-making systems embraced by news platforms, but what shows up in newsfeeds is largely driven by algorithms. Similar systems serve as important moderation tools, blocking inappropriate or illegal content.

The key problem remains that technological “fixes” aren’t perfect, and mistakes have consequences. Algorithms sometimes can’t catch harmful content fast enough and can be manipulated into amplifying misinformation. An overzealous algorithm can also take down legitimate speech.

Beyond its fallibility, there are core questions about whether these algorithms help or hurt society. The technology can better engage people by tailoring news to align with readers’ interests. But to do so, algorithms feed off a trove of personal data, often accrued without a user’s full understanding.

There’s a need to know the nuts and bolts of how an algorithm works — that is opening the “black box.”



But, in many cases, knowing what’s inside an algorithmic system would still leave us wanting, particularly without knowing what data and user behaviours and cultures sustain these massive systems.

One way researchers may be able to understand automated systems better is by observing them from the perspective of users, an idea put forward by scholars Bernhard Rieder, from the University of Amsterdam, and Jeanette Hofmann, from the Berlin Social Science Centre.

Australian researchers have also taken up the call, enrolling citizen scientists to donate algorithmically personalised web content so they can examine how algorithms shape internet searches and target advertising. Early results suggest the personalisation of Google Web Search is less profound than we might expect, adding more evidence against the “filter bubble” myth: the idea that we each exist in highly personalised content communities.

Instead it may be that search personalisation is more due to how people construct their online search queries.

Last year several AI-powered language and media generation models entered the mainstream. Trained on hundreds of millions of data points (such as images and sentences), these “foundational” AI models can be adapted to specific tasks. For instance, DALL-E 2 is a tool trained on millions of labelled images, linking images to their text captions.

This model is significantly larger and more sophisticated than previous automatic image-labelling models, and it can also be adapted to tasks like automatic image caption generation and even synthesising new images from text prompts. These models have spawned a wave of creative apps and uses, but concerns around artists’ copyright and their environmental footprint remain.

The ability to create seemingly realistic images or text at scale has also prompted concern from misinformation scholars — these replications can be convincing, especially as technology advances and more data is fed into the machine. Platforms need to be intelligent and nuanced in their approach to these increasingly powerful tools if they want to avoid furthering the AI-fuelled digital deception arms race.

— —

This article was first published by 360info™ on January 16, 2023.


Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Featured Photo: Misinformation on Twitter is rising under the watch of new CEO Elon Musk. Featured Photo Credit: U.S. Air Force photo by Van Ha, Wikimedia Commons.

Tags: AI, artificial intelligence, deep fake, Elon Musk, misinformation, Twitter
© 2025 Impakter.com owned by Klimado GmbH
