The European Parliament passed the AI legislation in March this year, but its publication in the Official Journal of the European Union in July made it official: On August 1, 2024, the European Union (EU) made history with the AI Act, its groundbreaking legislation to regulate the development and use of artificial intelligence (AI).
This landmark legislation marks the first time any region in the world has taken such a bold step to address the ethical, societal, and technological implications of AI.
The negative implications of AI are many, and the sums invested in it are huge. A quick look at the current news cycle reveals both ends of the spectrum: On one hand, a report came out today that Meta shares jumped on the company's strong ad growth, bolstered by, you guessed it, AI. On the other hand, just three days ago, Elon Musk became embroiled once again in one of his tasteless jokes, sharing on X (now his personal platform and playground) an AI-generated deepfake of a Kamala Harris campaign ad.
The manipulated video featured a voice similar to hers claiming to be the “ultimate diversity hire” and “a deep state puppet.” Musk did not disclose that the video was AI-generated, triggering a wave of backlash. More importantly, the episode raised concerns about the power of artificial intelligence to mislead voters just three months before the US elections.
Could AI help destroy democracy? Can the new European AI legislation help solve this and other problems and set a model for the world to follow?
While the EU’s decision to enact AI legislation was the result of careful deliberation by policymakers, it is worth recalling that it was never exclusively their doing. Responding to European concerns about AI’s potential impact on jobs, privacy, and security, the European Commission proposed the AI Act in 2021. Among the citizen associations and NGOs that played a crucial role in pushing for regulation are European Digital Rights (EDRi), the Civil Liberties Union for Europe (Liberties), Human Rights Watch, Amnesty International, and AlgorithmWatch.
What the new AI Act does
Beyond setting out the expected series of “good principles” that AI systems must adhere to, such as transparency, accountability, fairness, and respect for human dignity, the EU legislation does something entirely new: It is the world’s first legal framework to regulate AI systems according to the level of risk they pose, sorting them into four tiers: minimal risk, limited risk, high risk, and unacceptable risk, the last of which is prohibited outright. A company’s tier determines both the rules it must follow and the timelines that apply to it.
Certain practices will be entirely banned in the EU as of February 2025, such as manipulating a user’s decision-making or expanding facial recognition databases through internet scraping.
Other AI systems that are determined to be high-risk, such as AIs that collect biometrics and AI used for critical infrastructure or employment decisions, will have to follow the strictest regulations. Providers of these systems will have to disclose information about their AI training datasets and provide proof of human oversight, among other requirements.
However, according to Thomas Regnier, a spokesperson for the European Commission, about 85% of AI companies fall under the “minimal risk” category, with very little regulation required.
Unsurprisingly, one key aspect of the EU’s AI legislation is its focus on deepfakes — defined as synthetic media that use AI to create realistic images, videos, or audio of people saying or doing things they never did. The EU’s legislation includes specific provisions to address deepfakes, including a requirement for platforms to label deepfake content and provide users with tools to identify and report it.
In addition to the provisions mentioned above, the EU’s AI legislation also includes a number of other important measures. For example, the legislation establishes a new body, the European Artificial Intelligence Office, which will be responsible for overseeing the implementation of the legislation and providing guidance to businesses and consumers. The legislation also includes a number of measures to support research and development in AI, as well as to promote public awareness of AI and its potential impact on society.
Help for the business community: The AI Pact
The EU’s AI legislation is likely to significantly impact the development and use of AI in the EU and beyond. And as it is unquestionably a complex and far-reaching piece of legislation, it will challenge the business community, especially those larger corporations that rely heavily on AI.
But help is coming in the form of the AI Pact. A spokesperson for the European Commission told Euronews that some 700 companies have expressed interest in joining the AI Pact, a voluntary initiative to help businesses prepare for the incoming AI Act. That is up from the 550 companies interested when the first call was launched last November.
Industry commitments will be formalized this autumn: Through the Pact, the EU Commission aims to let businesses anticipate the AI Act with voluntary commitments and share ideas in workshops organised by the new AI Office within the European Commission, one of the arrangements already in place. The Office will supervise general-purpose artificial intelligence systems, including those integrated into high-risk systems, flanked by an advisory forum for stakeholders with representatives from industry, SMEs, start-ups, civil society, and academia.
As explained in an informative EU News article detailing the new regulations, several requirements will need to be in place before new AI models can be released to the market:
“To account for the wide range of tasks that artificial intelligence systems can perform – generation of video, text, and images, natural-language conversation, computation, or computer code generation – and the rapid expansion of their capabilities, the ‘high-impact’ foundation models (a type of generative artificial intelligence trained on a broad spectrum of generalized, unlabeled data) will have to comply with several transparency requirements before being released to the market, from drafting technical documentation to complying with EU copyright law to disseminating detailed summaries of the content used for training.”
In all this, smaller companies are not forgotten. For example:
“to support innovation, sandboxes (test environments in computing) of artificial intelligence regulation will allow the creation of a controlled environment to develop, test, and validate innovative systems even under real-world conditions. To alleviate the administrative burden for smaller companies and protect them from the pressures of dominant market players, the Regulation provides “limited and clearly specified” support actions and exemptions.”
Finally, the AI Act has some teeth: Violations draw sanctions. A guilty company will have to pay either a percentage of its annual global turnover in the previous financial year or a fixed amount, whichever is higher:
- 35 million euros or 7% for violations involving prohibited applications;
- 15 million euros or 3% for breaches of the law’s obligations;
- 7.5 million euros or 1.5% for providing incorrect information.
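The fine rule above, a fixed floor or a share of turnover, whichever is higher, can be sketched in a few lines of Python. The tier names and the mapping below are illustrative labels for the article's three tiers, not terms from the Act's text:

```python
# Illustrative sketch of the AI Act fine rule: the penalty is the HIGHER of
# a fixed amount and a percentage of global annual turnover.
# Tier labels are hypothetical; figures are those cited in the article.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # 35M EUR or 7% of turnover
    "obligation_breach": (15_000_000, 0.03),      # 15M EUR or 3%
    "incorrect_information": (7_500_000, 0.015),  # 7.5M EUR or 1.5%
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return whichever is higher: the fixed sum or the turnover share."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)
```

For a company with 1 billion euros in turnover, a prohibited-practice violation would cost `max_fine("prohibited_practice", 1_000_000_000)`, i.e. 70 million euros, since 7% of turnover exceeds the 35-million floor; for small companies, the fixed floor dominates.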
All good…except for some serious shortcomings
While the legislation represents a significant step forward in addressing the ethical and societal implications of AI, it falls short in several key areas and has been heavily criticized. Concerns have been raised that the AI Act’s strict regulations, particularly on high-risk AI systems, could stifle innovation by increasing compliance costs and slowing down development.
Among the more vocal critics:
- Clifford Chance, a global law firm, has published analyses highlighting concerns about the regulation’s broad definitions and potential overreach;
- The Center for Data Innovation, a think tank, has argued for a more risk-based approach and a greater emphasis on promoting innovation alongside regulation;
- Access Now, a digital rights organization, has advocated for stronger safeguards for fundamental rights and greater transparency in AI systems;
- The Future of Life Institute, a non-profit organization, has called for broader consideration of AI’s long-term risks and potential negative impacts.
To summarize, the main shortcomings critics have identified include:
- Lack of Clear Definitions: The legislation fails to provide clear and concise definitions of key terms such as “AI” and “high-risk AI.” This ambiguity could lead to inconsistent interpretations and enforcement.
- Exemptions for Certain AI Applications: The legislation exempts certain AI applications from its scope, such as AI used for military purposes or by law enforcement agencies. This could undermine the legislation’s effectiveness and create loopholes that could be exploited.
- Insufficient Provisions for Algorithmic Transparency and Accountability: The legislation does not require AI developers to disclose the inner workings of their algorithms or provide mechanisms for users to challenge algorithmic decisions. This lack of transparency and accountability could lead to biased or discriminatory AI systems.
- Limited Focus on Social and Ethical Impact: The legislation primarily focuses on technical requirements and risk management, but it does not adequately address the broader social and ethical implications of AI. This includes issues such as job displacement, algorithmic bias, and the potential for AI to be used for malicious purposes.
In short, the new AI Act is a positive step towards regulating the development and use of AI. However, the legislation’s shortcomings need to be addressed to ensure that it effectively protects citizens’ rights and promotes the responsible development of AI.
Editor’s Note: The opinions expressed here by Impakter.com columnists are their own, not those of Impakter.com — In the Cover Photo: European Union flags. Cover Photo Credit: Rawpixel.