Ever since ChatGPT strutted onto the scene in 2022, public interest in AI has surged, and with it, speculation about how the technology will reshape industries and alter the way we work. Yet, however cliché the fear of machines coming for our jobs may seem, the shockwaves AI is currently sending across the globe appear to be hitting some groups harder than others.
LinkedIn’s recent Future of Work Report examined the impact of artificial intelligence on jobs worldwide, shedding light on the spread of AI-related skills and the growing prevalence of AI in job listings on the platform. The statistics speak volumes: the share of job postings mentioning GPT or ChatGPT has grown 21-fold since November 2022. The report also found that 47% of US executives believe incorporating generative AI will usher in a new era of enhanced productivity.
As if that were not enough, the appetite for AI expertise keeps growing. In the last year alone, the number of US businesses with a Head of AI on their team jumped by 10%, and the number of individuals occupying such roles has increased 2.7-fold over the past five years.
Vulnerable Communities Affected by AI
However, as AI’s reach extends, the social ripple effects it generates are starting to sound like a replay of an old tune. A report by the McKinsey Global Institute found that, although nearly a third of working hours in the United States could be automated by 2030, women are 1.5 times more likely than men to have their jobs replaced by AI.
Sectors such as food services, customer service, sales, and office support are expected to see the most significant losses. These just so happen to be fields where women and other marginalized groups are overrepresented. Women of color, as well as women without college degrees, face a particularly high risk of displacement.
These predictions are compounded by the ways marginalized communities are already being exploited by AI today. At the beginning of 2023, news broke that OpenAI, the company behind ChatGPT, had been paying content moderators between $1.32 and $2 an hour. Time Magazine shared the comments it received from an OpenAI spokesperson on the subject:
Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.
Content moderation is one of the most common entry-level artificial intelligence jobs out there. Moderators review a constant stream of posts, videos, and images, flagging any that do not align with what their platform deems appropriate. These professionals often report high rates of anxiety, stress, and PTSD-like symptoms due to the nature of the content they review. Furthermore, moderators’ pay is often a sliver of the generous budgets allocated to the highest-paid jobs in artificial intelligence, such as those of engineers and analysts. A US-based Facebook moderator earns a mere $28,800 a year, according to The Verge.
It seems, then, that curbing discriminatory AI-generated content depends on the same run-of-the-mill exploitative labor conditions, and that even innovations as disruptive as AI still uphold existing bias in one way or another.
Unmasking Bias in AI
In the realm of technology, where algorithms seemingly operate with neutrality and objectivity, it’s disheartening to discover that bias can permeate even the most advanced systems. Bias in generative AI training exposes an uncomfortable truth: the algorithms we create are far from immune to the world around us.
At its core, generative AI relies on vast amounts of data to learn patterns and generate content. That data, however, often carries harmful stereotypes and, when fed into AI systems, can lead to AI-generated content that reflects and amplifies them. This is particularly concerning when it comes to gender and race, as biased output can inadvertently endanger the lives of many in marginalized communities.
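To see the mechanism in miniature, consider the following toy sketch in Python. The résumé snippets, labels, and scoring function are all invented for illustration, and a real hiring model would be far more complex, but the underlying dynamic is the same: a model trained on skewed historical outcomes learns to reward whatever correlates with them.

```python
# A minimal, hypothetical sketch of how skewed training data produces a
# skewed model. All "resumes" and labels here are invented for
# illustration; the point is the mechanism, not the specifics.

from collections import Counter

# Toy historical hiring data: past hires skew toward one group, so words
# associated with that group end up correlating with the "hired" label.
training_data = [
    ("captain of the men's chess club", 1),
    ("men's debate team finalist", 1),
    ("software engineer, men's coding society", 1),
    ("women's chess club president", 0),
    ("software engineer, women's coding society", 0),
    ("women's debate team finalist", 0),
]

# "Training": count how often each word appears with each outcome.
hired, rejected = Counter(), Counter()
for text, label in training_data:
    for word in text.split():
        (hired if label == 1 else rejected)[word] += 1

def score(resume: str) -> float:
    """Score a resume by how strongly its words correlate with past hires."""
    words = resume.split()
    return sum(hired[w] - rejected[w] for w in words) / len(words)

# Two otherwise identical candidates differ only in one gendered word,
# yet the model, having absorbed the historical skew, ranks them apart.
print(score("men's chess club president"))    # positive score
print(score("women's chess club president"))  # negative score
```

This mirrors, in miniature, what was reported about the real recruiting tool discussed below, which reportedly penalized resumés containing the word “women’s.”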
But how harmful can it be? Well, in 2017, an Amazon AI recruiting tool was shown to favor men’s resumés over women’s. In 2021, AI-based crime-prevention software was shown to disproportionately target Black and Latino communities, and in August 2023, a faulty facial recognition match in a similar system led to the wrongful arrest of a pregnant woman.
Considering that 99% of Fortune 500 companies already use AI in their recruitment processes and that some 430 US airports are set to implement facial recognition as an additional security measure, events such as these are likely to become more prevalent as AI technology advances. What is clear is that AI is already impacting the lives of many, and making some much harder.
Looking Ahead: The Future of Equitable AI
There have been recent advances on the regulatory front. The EU’s AI Act, a pioneering effort to establish comprehensive rules for AI technology, signifies a step towards holding AI systems accountable for their potential biases and adverse effects on society. This landmark legislation, which emphasizes transparency, safety, and fundamental rights, has the potential to set a global precedent for ethical AI development.
However, regulation alone is not enough: a more fundamental shift must occur to ensure AI does not deepen existing biases. Addressing the particular ways new technologies affect the lives of marginalized communities is a necessary start, as is actively seeking out and incorporating the work being done in the field of AI ethics.
In a world where technological progress accelerates at a breathtaking pace, AI’s impact on society is marked by both promise and pitfalls. As the technology’s uglier side rears its head, accountability and responsibility must be the chief concerns in remedying its current and future effects on society’s most vulnerable groups.