The Center for AI Safety released the following statement on its webpage: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We’ve released a statement on the risk of extinction from AI.
Signatories include:
– Three Turing Award winners
– Authors of the standard textbooks on AI/DL/RL
– CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
– Many more
https://t.co/mkJWhCRVwB — Center for AI Safety (@ai_risks) May 30, 2023
Among the signatories of the statement are Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind, Dario Amodei of Anthropic, and the so-called godfathers of AI: Dr Geoffrey Hinton and Yoshua Bengio.
According to the Center for AI Safety, some of the most significant risks posed by AI include the weaponisation of AI technology, power-seeking behaviour, human dependence on machines (as depicted in the Disney-Pixar film WALL-E), and the spread of misinformation.
In a recent blog post, OpenAI proposed that the regulation of superintelligence should be similar to that of nuclear energy. “We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts,” the firm wrote.
In March, an open letter signed by Elon Musk, Apple co-founder Steve Wozniak and a handful of other big names in tech called for a six-month pause on the development of powerful AI systems, citing the risks of AI and fears that it could become a threat to humanity.
The letter, which was published by the Future of Life Institute, received over 31,000 signatures, although some of these are said to have been forged.
Furthermore, in a Senate hearing on the oversight of AI earlier this month, OpenAI CEO Sam Altman said: “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that.”
A Distraction From Imminent Risks of AI?
Other AI scientists and experts, however, see these statements as overblown. Some even argue that they distract from other, more imminent problems AI poses, such as algorithmic bias, the spread of misinformation and invasions of privacy.
In fact, “current AI is nowhere near capable enough for these risks to materialise. As a result, it’s distracted attention away from the near-term harms of AI,” Arvind Narayanan, a computer scientist at Princeton University, told the BBC.
This is absolutely correct.
The most common reaction by AI researchers to these prophecies of doom is face palming.
https://t.co/2561GwUvmh — Yann LeCun (@ylecun) May 4, 2023
Meanwhile, new AI products are constantly being released as the field advances. Ultimately, it is crucial to address both the potential future risks and the harms AI is already causing today.
“Addressing some of the issues today can be useful for addressing many of the later risks tomorrow,” said Dan Hendrycks, director of the Center for AI Safety.
In April 2021, the European Union (EU) proposed a bill laying down rules for artificial intelligence. The bill, expected to be finalised in June 2023, would introduce new transparency and risk-management requirements for AI systems while supporting innovation and protecting citizens.
In a press release regarding the new AI law, the EU stated: “AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).”
“We are on the verge of putting in place landmark legislation that must resist the challenge of time,” said Brando Benifei, member of the EU Parliament, following the vote on the new regulation.
The US, Canada, the UK, South Korea and many other countries have also produced bills and white papers on AI regulation. Furthermore, the G7 has established a working group on the challenges of AI technology, which held its first meeting on May 30.
Editor’s Note: The opinions expressed here by the authors are their own, not those of Impakter.com.