Impakter
  • Environment
    • Biodiversity
    • Climate Change
    • Circular Economy
    • Energy
  • FINANCE
    • ESG News
    • Sustainable Finance
    • Business
  • TECH
    • Start-up
    • AI & Machine Learning
    • Green Tech
  • Industry News
    • Entertainment
    • Food and Agriculture
    • Health
    • Politics & Foreign Affairs
    • Philanthropy
    • Science
    • Sport
  • Editorial Series
    • SDGs Series
    • Shape Your Future
    • Sustainable Cities
      • Copenhagen
      • San Francisco
      • Seattle
      • Sydney
  • About us
    • Company
    • Team
    • Partners
    • Write for Impakter
    • Contact Us
    • Privacy Policy

AI’s Threat to Humanity Rivals Pandemics and Nuclear War, Industry Leaders Warn

It is a truism: AI has immense potential but comes with significant risks. What is AI really capable of and what are governments doing to mitigate the risks?

By Alina Liebholz
June 2, 2023
in AI & Machine Learning, Society, Tech

The Center for AI Safety released the following statement on its webpage: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

We’ve released a statement on the risk of extinction from AI.

Signatories include:
– Three Turing Award winners
– Authors of the standard textbooks on AI/DL/RL
– CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
– Many more https://t.co/mkJWhCRVwB

— Center for AI Safety (@ai_risks) May 30, 2023

Among the signatories of the statement are Sam Altman, chief executive of ChatGPT-maker OpenAI; Demis Hassabis, chief executive of Google DeepMind; Dario Amodei of Anthropic; and the so-called godfathers of AI, Dr Geoffrey Hinton and Yoshua Bengio.

According to the Center for AI Safety, some of the most significant risks posed by AI include the weaponisation of AI technology, power-seeking behaviour, human dependence on machines, as depicted in the Disney film WALL-E, and the spread of misinformation.

In a recent blog post, OpenAI proposed that the regulation of superintelligence should be similar to that of nuclear energy. “We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts,” the firm wrote.

In March, an open letter signed by Elon Musk, Apple co-founder Steve Wozniak and a handful of other big names in tech called for a six-month pause on AI development, citing the risks of AI and fears that it could become a threat to humanity.

The letter, which was published by the Future of Life Institute, received over 31,000 signatures, although some of these are said to have been forged.


Related Articles: Who Is Liable if AI Violates Your Human Rights? | ChatGPT and Me: On the Benefits and Risks of Artificial Intelligence | Artificial Intelligence: How Worried Should We Be? | ‘I am a Machine, With no Soul or Heart’: An Interview With Artificial Intelligence

Furthermore, in a Senate hearing on the oversight of AI earlier this month, OpenAI CEO Sam Altman said: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”

A Distraction From Imminent Risks of AI?

Other AI scientists and experts, however, see these statements as overblown. Some even call them a distraction from more imminent problems AI poses, such as bias, the spread of misinformation and invasions of privacy.

In fact, “current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI,” Arvind Narayanan, a computer scientist at Princeton University, told the BBC.

This is absolutely correct.
The most common reaction by AI researchers to these prophecies of doom is face palming. https://t.co/2561GwUvmh

— Yann LeCun (@ylecun) May 4, 2023

New AI products are constantly being released due to the ongoing advancements in the field. Ultimately, it’s crucial to address both potential and current harms.

“Addressing some of the issues today can be useful for addressing many of the later risks tomorrow,” said Dan Hendrycks, director of the Center for AI Safety.

In April 2021, the European Union (EU) proposed a bill on rules for artificial intelligence. The bill, expected to be finalised in June 2023, will introduce new transparency and risk-management rules for AI systems while supporting innovation and protecting citizens. 

In a press release regarding the new AI law, the EU stated: “AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”

“We are on the verge of putting in place landmark legislation that must resist the challenge of time,” said Brando Benifei, member of the EU Parliament, following the vote on the new regulation. 

The US, Canada, the UK, South Korea and many other countries have also produced bills and white papers on AI regulation. Furthermore, the G7 have established a working group on the challenges of AI technology, with their first meeting having taken place on May 30. 


Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Featured Photo: White Robot. Featured Photo Credit: Possessed Photography.

Tags: AI, AI Governance, AI Legislation, Artificial Intelligence, ChatGPT, Risks of AI

Impakter informs you through its ESG news site and supports your business's CSRD and ESG compliance with Klimado, its SaaS ESG assessment tool, available at www.klimado.com

Registered Office Address

Klimado GmbH
Niddastrasse 63,

60329, Frankfurt am Main, Germany


IMPAKTER is a Klimado GmbH website

Impakter is a publication identified by the following International Standard Serial Numbers (ISSN): 2515-9569 (print) and 2515-9577 (online).


Office Hours - Monday to Friday

9.30am - 5.00pm CEST


Email

stories [at] impakter.com

© 2026 IMPAKTER. All rights reserved.