An overview of AI regulations and laws around the world designed to ensure that the technology benefits individuals and society, including workers.
According to investment bank Goldman Sachs, Artificial Intelligence (AI) could replace the equivalent of 300 million full-time jobs. In the US and Europe, two-thirds of jobs are “exposed to some degree of AI automation,” AI could eliminate a quarter of all work tasks, and around a quarter of all jobs could potentially be performed entirely by AI.
Another study, by the McKinsey Global Institute, estimates that by 2030 at least 14% of employees globally could need to change occupations due to advances in digitization, robotics, and AI. Whether or not these projections prove accurate in scale, the impact of AI on the workforce will be significant.
Governments worldwide are beginning to craft laws and regulations to address workforce disruptions caused by AI. These measures range from employment protection and retraining programs to ethical guidelines and restrictions on the use of AI in the workplace. Some jurisdictions are actively developing policies to mitigate job displacement and safeguard workers’ rights.
But, as is often the case, public sector efforts lag behind the private sector’s dynamic use of this new technology: businesses see AI as an efficient, productivity-enhancing, and cost-reducing option for their workforces.
The Interface of AI and the Workforce
AI tools can and will automate tasks once thought to be uniquely human. The likely consequences include job displacement in creative industries, customer service, and administrative work; widening economic inequality if productivity gains do not translate into higher wages; and possible bias and discrimination in hiring and workplace monitoring.
The question, then, is: What can be done to guide AI to benefit individuals and society, including workers?
Challenges in Public Sector Responses
Drafting AI laws or regulations is a difficult and complex process, and once rules are adopted, enforcing compliance becomes the next challenge. Those responsible for drafting such rules must thread the needle between protecting workers and avoiding overreach that stifles innovation.
The International Labour Organization (ILO) has shown one way this might be done: its guidelines for national AI policies aim to strike that balance while upholding fundamental labor rights, preventing worker exploitation, and encouraging productive dialogue among employers, workers (and their union representatives), and governments.
Navigating these competing objectives is not easy, and various jurisdictions have had mixed results.
Examples of AI Laws and Regulations
The European Union
The European Union (EU) Artificial Intelligence Act (EU AI Act), adopted in 2024, takes a risk-based approach to AI regulation: it prohibits certain practices outright and imposes stringent rules on high-risk AI systems, such as those used for hiring or performance monitoring, which must meet data governance, risk management, and bias assessment requirements.
The law has not been fully tested yet because its provisions have staggered enforcement dates, with some rules for high-risk systems taking effect in August 2026; compliance deadlines have already been tested and challenged by EU member states.
Italy
Italy’s AI law (Law no. 132/2025) provides for comprehensive AI regulation, making Italy the first EU country to enact such legislation. The law focuses on human oversight, data protection, and specific sectors such as healthcare, while also introducing criminal penalties for misuse, including the creation of “deepfakes.”
Key provisions include rules for AI in healthcare that keep final decisions with doctors, copyright adjustments for AI-assisted works, and a requirement for parental consent for users under 14 years of age.
Japan
Japan’s AI Promotion Act enables the government to investigate serious violations and, in cases of misuse, to publicly name the companies involved. It provides for the protection of personal information and copyright but contains no direct penalties of its own; Japan’s approach combines non-binding “soft law” guidelines with sector-specific “hard law” reforms.
United States
Colorado: The Colorado AI Act of 2024, scheduled to take effect Feb. 1, 2026, places a “duty of reasonable care” on developers and deployers of “high-risk” AI systems to prevent discrimination in key areas like employment, housing, insurance, and lending. Since its passage, however, there have been efforts to delay the law from taking effect.
New York State: The Responsible AI Safety and Education Act (“RAISE”) passed the legislature in June 2025. It establishes reporting, disclosure, and other requirements for large developers of frontier AI models, but the New York State Governor has yet to sign it into law.
New York City: New York City passed an anti-AI bias law, in effect since 2023, which requires employers to conduct annual independent bias audits of automated employment decision tools (AEDTs) used in hiring and promotion, to publicly disclose the audit results, and to give advance notice to candidates and employees about the use of these tools. It has been tested in practice, and some analyses suggest it has not been entirely successful.
Reflecting the Trump administration’s view of AI, the administration is now drafting an executive order that would direct the Justice Department to sue states that pass laws regulating artificial intelligence.
AI From a Private Sector Perspective
Private-sector attitudes toward regulation in general are not new, and certainly not unique to AI. Many businesses see the adoption of AI laws and rules as an additional hurdle, yet one with both positive and negative effects:
- Businesses are driven by profit, and many will adopt AI quickly to gain a competitive edge.
- By the time any AI law or regulation is implemented, technology may have evolved, creating an uncertain regulatory environment.
- Companies may seek to “upskill” and “retrain” existing workers to use AI, rather than banning the technology in the workplace.
Future Prospects
AI legislation or regulation is in its early days, with the jury still out on whether it can be effective in achieving public-sector goals. There are signs that public discussion has begun to shift from “anti-AI” to considering ways to ensure responsible AI development and deployment with labor protections built in from the start.
Doing so will require policymakers, worker advocates, and civil rights organizations to work together actively to implement measures that ensure fairness, transparency, and human oversight in the AI-driven workforce.
Editor’s Note: The opinions expressed here by the authors are their own, not those of impakter.com — In the Cover Photo: Strikers during the Dressmakers’ Strike of 1933 gather in the streets and urge unionization. Cover Photo Credit: Wikimedia Commons.