Impakter
The Race Against AI

In an open letter, tech industry leaders warn against fast AI progress and call for a six-month halt in development; Italy meanwhile bans ChatGPT and invites others in the EU to follow suit

by Laetitia Exertier
April 3, 2023
in Tech

On March 29, tech industry leaders, including Elon Musk, CEO of SpaceX, Tesla and Twitter, and Steve Wozniak, co-founder of Apple, signed “Pause Giant AI Experiments: An Open Letter,” coordinated by the nonprofit Future of Life Institute, urging a six-month pause in Artificial Intelligence (AI) development.

A moratorium of six months could provide the industry with the necessary time to set safety standards for their AI design and manage any potential risks of the new technology, signatories of the letter said. 

The letter stressed that the pace at which AI progressed had dangerous societal implications and safety concerns. The letter begins with a warning: 

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”

The unknown territory that AI developers are delving into is raising questions from the tech industry worldwide. “Powerful AI systems,” the letter states, “should be developed only once we are confident that their effects will be positive and their risks will be manageable.” 

The letter comes in response to the staggering and fast-paced progress in AI over the past few months. 

Indeed, Microsoft says that ChatGPT’s latest version can solve “novel and difficult tasks” with “human-level performance” in advanced fields such as coding, medicine, law, and even psychology.

The tools offered by AI could eliminate unpleasant, repetitive tasks from our lives. Yet, in doing so, AI also threatens many jobs by completing tasks far more quickly.

According to an ongoing study measuring the impact of AI on the labour market, most jobs will be changed by GPT in some way, particularly those where “at least one job task can be performed quickly by generative AI,” a category that accounts for 80% of jobs.

However, the limits of machine learning are still unknown. AI development carries potential dangers precisely because so much about it remains a mystery, especially over the long term.


Related Articles: Elon’s Twitter Ripe for a Misinformation Avalanche | AI vs. Artists: Who Can Claim Creativity? | How Meta’s Failure to Act Upon Human Trafficking Claims Led to Another Lawsuit

AI sceptics are already concerned about cybersecurity, plagiarism, and misinformation. AI tools are already capable of passing medical licensing exams, giving instructions on making bombs, and creating an alter ego for themselves. 

Some concerns were even addressed by OpenAI’s CEO Sam Altman, who admitted that his company’s model, ChatGPT, shared racist, sexist and biased answers. 

Similarly, Stable Diffusion has faced copyright accusations after allegedly stealing art from digital artists. 

Industry leaders are also raising concerns over the pace at which new AI technology is diffused into the world.

Teams tasked with focusing on safely creating AI cannot do their jobs if they are rushed to put out newer versions without having time to consider the societal impact of the product.  

Responses From Industry Leaders

As of Monday, April 3, 3,123 people have signed the letter, according to the Future of Life Institute.

Massachusetts Institute of Technology physics professor Max Tegmark is one of the organisers of the letter. 

In an interview with the Wall Street Journal, he said that advances in AI have already progressed far beyond what many experts believed possible even as recently as a few years ago:

“It is unfortunate to frame this as an arms race. It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.” 

Despite accumulating a significant number of signatories, the letter will likely not have any effect aside from starting a debate around the topic.

When the letter was made public on Wednesday, Musk tweeted that the developers of the AI technology “will not heed this warning, but at least it was said.”

Similarly, Emad Mostaque, Stability AI’s CEO, tweeted that despite signing the letter, he didn’t agree with the six-month pause:

So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.

It has no force but will kick off an important discussion that will hopefully bring more transparency & governance to an opaque area

— Emad Mostaque (@EMostaque) March 29, 2023

 

Box CEO Aaron Levie shared a similar view in his interview with Axios:

“There are no literal proposals in the actual moratorium. It was just, ‘Let’s now spend the time to get together and work on this issue.’ But it was signed by people that have been working on this issue for the past decade.”

The letter has also faced substantial criticism from prominent industry figures for lacking signature verification: at one point it listed Chinese President Xi Jinping and Meta’s chief AI scientist Yann LeCun as signatories, even though neither had signed.

Nope.
I did not sign this letter.
I disagree with its premise. https://t.co/DoXwIZDcOx

— Yann LeCun (@ylecun) March 29, 2023

 

Similarly, experts have spoken out against the Institute, accusing it of using their research to support the letter’s claims and of warning about imagined apocalyptic scenarios instead of immediate issues, such as the biases programmed into these systems.

Italy’s Ban of ChatGPT

Two days after the letter became public, Italy’s privacy regulator ordered OpenAI to block access to ChatGPT in Italy over mounting privacy concerns.

The Italian National Authority for Personal Data Protection said that, effective immediately, it would block OpenAI from processing the data of Italian users and open an investigation, as the processing allegedly violates the EU’s privacy law, the General Data Protection Regulation (GDPR).

Italy’s privacy watchdog said that the company lacked a legal basis to justify “the mass collection and storage of personal data … to ‘train’ the algorithms” of ChatGPT.

Italian authorities also expressed concern over a ChatGPT data breach last week that exposed users’ conversations and payment information.

In addition, they accused OpenAI of not verifying the age of users, thus exposing “minors to absolutely unsuitable answers compared to their degree of development and self-awareness.”

OpenAI representatives told Politico that despite taking ChatGPT offline in Italy, they disagree with Italy’s accusations: “we believe we comply with GDPR and other privacy laws.”

Similar rhetoric seems to be emerging throughout Europe. 

The consumer advocacy group BEUC urged the EU and national authorities to investigate ChatGPT and the way it deals with “consumer protection, data protection and privacy, and public safety.”

Even though OpenAI doesn’t have offices in the EU, its representatives in the European Economic Area have 20 days to communicate their plan to bring ChatGPT into compliance with the GDPR. Otherwise, the company could face a fine of up to 4% of its global annual revenue.

Although it is very unlikely that the open letter will result in a moratorium, it has fuelled the debate surrounding the risks associated with AI. As Italy has taken decisive action to stop ChatGPT usage, we may see other EU states following suit. 


Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com — In the Featured Photo: Melting Ice in the Antarctic Featured Photo Credit: Jan Van Bizar

Tags: AI, ChatGPT, Italy, Letter, Musk
© 2025 Impakter.com owned by Klimado GmbH
