On March 29, tech industry leaders, including Elon Musk, CEO of SpaceX, Tesla and Twitter, and Steve Wozniak, co-founder of Apple, signed “Pause Giant AI Experiments: An Open Letter,” coordinated by the nonprofit Future of Life Institute and calling for a six-month pause in artificial intelligence (AI) development.
A six-month moratorium would give the industry the time needed to set safety standards for AI design and to manage the new technology’s potential risks, the letter’s signatories said.
The letter stresses that the pace of AI progress carries dangerous societal implications and safety concerns, and it opens with a warning:
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.”
The unknown territory AI developers are venturing into is raising questions across the tech industry worldwide. “Powerful AI systems,” the letter states, “should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The letter comes in response to the staggering and fast-paced progress in AI over the past few months.
Indeed, Microsoft researchers say that GPT-4, the latest model behind ChatGPT, can solve “novel and difficult tasks” with “human-level performance” in advanced fields such as coding, medicine, law, and even psychology.
The tools offered by AI could eliminate unpleasant, repetitive tasks from our lives. Yet, in doing so, AI also threatens many jobs by completing those same tasks far more quickly.
According to an ongoing study measuring the impact of AI on the labour market, most jobs will be changed by GPT models in some way: roles in which “at least one job task can be performed quickly by generative AI” account for 80% of jobs.
However, the limits of machine learning are still unknown. AI development carries potential danger precisely because so much remains a mystery, especially in the long term.
AI sceptics are already concerned about cybersecurity, plagiarism, and misinformation. AI tools have proven capable of passing medical licensing exams, giving instructions on making bombs, and creating alter egos for themselves.
Some of these concerns have even been acknowledged by OpenAI’s CEO Sam Altman, who admitted that his company’s model, ChatGPT, has produced racist, sexist and otherwise biased answers.
Similarly, Stability AI’s Stable Diffusion model has faced copyright accusations after allegedly using digital artists’ work without permission.
Industry leaders are also raising concerns over the pace at which new AI technology is diffused into the world.
Teams tasked with building AI safely cannot do their jobs if they are rushed to release newer versions without time to consider each product’s societal impact.
Responses from industry leaders
As of Monday, April 3, the letter had gathered 3,123 signatures, according to the Future of Life Institute.
Massachusetts Institute of Technology physics professor Max Tegmark is one of the organisers of the letter.
In an interview with the Wall Street Journal, he said that advances in AI have already progressed far beyond what many experts believed possible as recently as a few years ago:
“It is unfortunate to frame this as an arms race. It is more of a suicide race. It doesn’t matter who is going to get there first. It just means that humanity as a whole could lose control of its own destiny.”
Despite accumulating a significant number of signatories, the letter will likely not have any effect aside from starting a debate around the topic.
When the letter was made public on Wednesday, Musk tweeted that the developers of the AI technology “will not heed this warning, but at least it was said.”
Leading AGI developers will not heed this warning, but at least it was said
— Elon Musk (@elonmusk) March 29, 2023
Similarly, Stability AI’s CEO Emad Mostaque tweeted that despite signing the letter, he did not agree with the six-month pause.
“It has no force but will kick off an important discussion that will hopefully bring more transparency & governance to an opaque area.”
So yeah I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.
It has no force but will kick off an important discussion that will hopefully bring more transparency & governance to an opaque area
— Emad (@EMostaque) March 29, 2023
Box CEO Aaron Levie shared a similar view in his interview with Axios:
“There are no literal proposals in the actual moratorium. It was just, ‘Let’s now spend the time to get together and work on this issue.’ But it was signed by people that have been working on this issue for the past decade.”
The letter has also faced substantial criticism from prominent industry figures because signatures were not verified, which initially allowed names such as Xi Jinping and Meta’s chief AI scientist Yann LeCun to appear as signatories even though neither had signed.
Nope.
I did not sign this letter.
I disagree with its premise. https://t.co/DoXwIZDcOx
— Yann LeCun (@ylecun) March 29, 2023
Similarly, some experts have spoken out against the Institute for citing their research in support of the letter’s claims, arguing that it warns of imagined apocalyptic scenarios while overlooking immediate issues, such as the biases programmed into the machines.
Italy’s ban on ChatGPT
Two days after the letter became public, Italy’s privacy regulator ordered OpenAI to block access to ChatGPT in Italy over mounting privacy concerns.
The Italian National Authority for Personal Data Protection said it would, effective immediately, block OpenAI from processing Italian users’ data and open an investigation, on the grounds that the processing allegedly violates the EU’s privacy law, the General Data Protection Regulation (GDPR).
Italy’s privacy watchdog said that the company lacked a legal basis to justify “the mass collection and storage of personal data … to ‘train’ the algorithms” of ChatGPT.
Italian authorities also expressed concern over the data breach ChatGPT suffered last week, which exposed users’ conversations and payment information.
The authority likewise accused OpenAI of not verifying the age of users, thus exposing “minors to absolutely unsuitable answers compared to their degree of development and self-awareness.”
OpenAI representatives told Politico that, although the company has taken ChatGPT offline in Italy, it disagrees with Italy’s accusations: “we believe we comply with GDPR and other privacy laws.”
Similar rhetoric seems to be emerging throughout Europe.
The consumer advocacy group BEUC urged the EU and national authorities to investigate ChatGPT and the way it deals with “consumer protection, data protection and privacy, and public safety.”
Even though OpenAI doesn’t have offices in the EU, its representative in the European Economic Area has 20 days to communicate a plan to bring ChatGPT into compliance with the GDPR. Otherwise, the company may face a penalty of up to 4% of its annual global turnover.
Although it is very unlikely that the open letter will result in a moratorium, it has fuelled the debate surrounding the risks associated with AI. And with Italy taking decisive action to stop ChatGPT usage, other EU states may follow suit.