LAW-less: should we ban killer robots?

by Katrina Wesencraft - Editor-in-Chief of theGIST
November 19, 2018
in Politics & Foreign Affairs, Society, Tech

The term ‘artificial intelligence’ was first coined in 1955 by Professor John McCarthy prior to the famous Dartmouth conference of 1956. The task of developing software that could mimic human behaviour was more complicated than he first imagined, and progress in the field was slow due to the laborious programming required.

In the last ten years we have seen an AI explosion, with advances in machine learning techniques and huge improvements in computing power prompting massive investment from big tech firms. Today, AI is everywhere and affects everything from how you shop online to how you receive medical treatment. It can make your daily routine easier and your work more productive, and can even unlock your phone using your face. Many of us already have virtual assistants (think Siri, Cortana or Alexa), and it isn't a huge leap to envision humans living alongside intelligent machines within our lifetime. For decades, the idea of a future with robotic servants has permeated popular culture, but human control is usually the key to this fantasy. For some, a robot uprising has become a genuine fear.

In the photo: A graphic demonstration of the improvement of machine learning. Photo Credit: Franck V

A fundamental aspect of AI is that machines possess the ability to make their own decisions; however, AI training has traditionally been carried out under close human supervision. Algorithms are trained on carefully selected data; they make decisions more quickly and with fewer errors than we can, but essentially the data you provide ensures that the machines make the decisions you want them to make. The application of 'deep learning' may change that. Since the 1950s, programmers have attempted to simulate the human brain using simplified networks of virtual neurons, but it is only recent advances in computing power that have enabled machines to train themselves using complex neural networks without human supervision.
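
To make that concrete, here is a minimal sketch of supervised learning in plain Python: a single artificial neuron learns a decision rule entirely from the labelled examples it is fed. The `train_perceptron` helper and the toy dataset are invented for illustration – the point is simply that whoever curates the labels decides what the machine learns.

```python
# Minimal supervised learning: a single artificial neuron (perceptron)
# fits whatever rule its labelled training data encodes. Toy example only.

def train_perceptron(examples, epochs=50, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label 0 or 1."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction       # -1, 0, or +1
            w1 += lr * error * x1            # nudge the weights toward
            w2 += lr * error * x2            # the supervisor's labels
            b += lr * error
    return w1, w2, b

# The supervisor decides what counts as a '1' simply by labelling the data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w1, w2, b = train_perceptron(data)
print([1 if w1 * x1 + w2 * x2 + b > 0 else 0 for (x1, x2), _ in data])
# [0, 0, 0, 1] -- the learned rule reproduces the curated labels
```

Deep learning stacks many such units into layered networks that discover their own internal features, which is why it demands the data and computing power described above.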

Neural networks still come nowhere near the complexity of the human brain but, despite this, many experts believe that this form of deep learning will be the key to developing machines that think just like humans. Google's AI system AlphaGo recently made headlines when it defeated Ke Jie, the Go world champion. This ancient strategy game is believed to be the most complex game ever devised. For comparison, when playing a game of chess you will typically have 35 moves to choose from per turn; in Go this number is almost 200. The achievement represents a significant leap forward: in the '90s, AI experts predicted that it could take at least 100 years for a computer to beat a human at Go. With AlphaGo, Google engineers used neural networks to create the first AI displaying something akin to intuition. However, the feature that roboticists are really trying to capture is autonomy – the ability to make an informed decision, free from external pressures or influence – and as it stands, even autonomous robots are only capable of making simple decisions within a controlled environment.
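
The arithmetic behind that comparison is worth spelling out. Taking the branching factors quoted above – roughly 35 legal moves per turn in chess against almost 200 in Go – even a shallow lookahead shows why brute-force search is hopeless in Go. The snippet below is nothing more than back-of-the-envelope arithmetic on those two figures.

```python
# Game-tree sizes from the branching factors quoted in the text:
# ~35 moves per turn in chess vs. almost 200 in Go.

chess_branching, go_branching = 35, 200
depth = 10  # plies (half-moves) of lookahead

chess_positions = chess_branching ** depth
go_positions = go_branching ** depth

print(f"chess, {depth} plies: ~{chess_positions:.2e} positions")
print(f"go,    {depth} plies: ~{go_positions:.2e} positions")
print(f"the Go tree is ~{go_positions / chess_positions:.0f}x larger")
```

This explosion is why AlphaGo relied on learned evaluation and selective sampling of the game tree rather than exhaustive search.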

While AI can now outperform humans in quantitative data analysis and repetitive actions, we still have the advantage when it comes to judgement and reasoning. Science fiction has taught us to fear a robot uprising, often with humanoid robots that walk, talk, and think just like us. What if they refuse to obey orders? This is particularly concerning if those robots are armed and dangerous.

In the photo: Artificial intelligence developments are edging closer to human looks and behaviour. Photo Credit: Franck V

Killer Robots

In August 2017, the founders of 116 robotics and AI companies, most notably Elon Musk (Tesla, SpaceX, and OpenAI) and Mustafa Suleyman (Google DeepMind), signed an open letter calling for the United Nations to ban military robots known as lethal autonomous weapons (LAWs). As it stands, there is still no internationally agreed definition of a fully autonomous weapon; however, the International Committee of the Red Cross stipulates that LAWs are machines with the ability to acquire, track, select and attack targets independently of human influence. Also calling for a total ban on LAWs is the Campaign to Stop Killer Robots, an international advocacy group formed by multiple NGOs, which believes that allowing machines to make life-and-death decisions crosses a fundamental moral line. According to its website, 22 countries already support an international ban, and the list is growing.

Despite growing concerns, the US, Israeli, Chinese, and Russian governments are all ploughing money into the development of LAWs. Lethal autonomous weapons may sound like science fiction but the desire to create weapons that detonate independently of human control is far from new. Since the 13th century, landmines have been used to destroy enemy combatants, and while they are unsupervised, they aren’t autonomous by modern standards. Landmines detonate indiscriminately (typically in response to pressure), rather than as an active decision made by the device.

Developing LAWs for offensive operations is attractive to governments looking to increase their military capabilities and reduce the risk to personnel. However, campaigners worry that this potential risk reduction will lower the threshold for entering into armed conflict. There is also concern that when fully autonomous robots are placed in a battle environment where they must adapt to sudden and unexpected changes, their behavioural response may be highly unpredictable. Current autonomous weapons tend to be used for defensive rather than offensive purposes, and are limited to attacking military hardware rather than personnel. The Israeli Harpy is one such lethal autonomous weapon, armed with a high-explosive warhead. Marketed as a 'fire and forget' autonomous weapon, once launched it loiters around a target area, then identifies and attacks enemy radar systems without human input (however, its attack mission can be overridden). It is believed that these LAWs, known as loitering munitions, are already in use by at least 14 different nations. NATO suspects that drones capable of functioning without any human supervision are not currently in operation because of political sensitivity rather than any technological limitation.
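
To see what 'fire and forget' with an override means in control-flow terms, here is a purely hypothetical sketch: loiter over an area, identify a radar emitter, attack – unless a human abort arrives first. The states and the `step` function are invented for illustration; nothing here reflects the Harpy's actual software, which is not public.

```python
# A hypothetical sketch of a 'fire and forget' control loop with a human
# override, illustrating the concept of autonomy-with-veto described in
# the text. Not a model of any real weapon system.

from enum import Enum, auto

class State(Enum):
    LOITERING = auto()
    ATTACKING = auto()
    ABORTED = auto()

def step(state, radar_detected, human_abort):
    """One tick of the hypothetical control loop."""
    if human_abort:                                  # the human veto always wins
        return State.ABORTED
    if state is State.LOITERING and radar_detected:
        return State.ATTACKING                       # decided without human input
    return state

state = State.LOITERING
for radar, abort in [(False, False), (True, False), (True, True)]:
    state = step(state, radar, abort)
    print(state.name)
# LOITERING -> ATTACKING -> ABORTED
```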

The US is already developing autonomous drones that take orders from other drones. Department of Defense documents reveal that this 'swarm system' of nano drones is called Perdix. The drones can be released from, and act as an extension of, a manned aircraft, but they can also function with a high degree of autonomy. These autonomous weapons have learned the desired response to a series of scenarios – but what if they continued to learn? Perhaps one day, advances in machine learning techniques will lead to weapons capable of adapting their behaviour. With all the political caginess, it's difficult to say for certain that this technology isn't already in development. Greg Allen of the Center for a New American Security thinks a full ban on LAWs is unlikely, as the advantages gained by developing these weapons are too tempting. Yale Law School's Rebecca Crootof believes that rather than calling for a total ban, it would be more productive to campaign for new regulatory legislation. The Geneva Conventions already restrict the actions of human soldiers; perhaps they should be adapted to apply to robot soldiers too.
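
Structurally, 'taking orders from other drones' is easy to sketch: an order delivered to one unit floods peer-to-peer through the swarm, so no ground operator has to address each drone individually. The `Drone` class below is a made-up toy under that assumption, not a model of Perdix's real, unpublished architecture.

```python
# Toy peer-to-peer order propagation: an order given to one drone cascades
# through the swarm. Purely illustrative; no real swarm protocol is shown.

class Drone:
    def __init__(self, name):
        self.name = name
        self.peers = []     # other drones this one can hear
        self.order = None

    def receive(self, order):
        if self.order == order:
            return                      # already have it; stop re-flooding
        self.order = order
        print(f"{self.name} executing: {order}")
        for peer in self.peers:         # relay to the rest of the swarm
            peer.receive(order)

# Build a small chain: the ground station only ever talks to drone a.
a, b, c = Drone("a"), Drone("b"), Drone("c")
a.peers, b.peers = [b], [c]
a.receive("survey grid 4")              # the order cascades a -> b -> c
```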

In the photo: Human influence in self-commanded aircraft. Photo Credit: Skeeze

An Ethical Minefield

Many have argued that, as robots become increasingly human-like in their decision-making, their decisions must be grounded in human morals and laws. It has been 75 years since Isaac Asimov first wrote of a future with android servants, and he devised three rules which still play a key role in today's conversation surrounding the ethics of creating intelligent machines:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm
2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws

Asimov correctly predicted the development of autonomous robots, and while robots making a conscious decision to obey human-imposed laws may be far-fetched, experts have called for the laws to be followed by programmers instead. The impending development of LAWs is causing machine ethicists to reconsider the First Law, as Asimov's principles do not account for the possibility that we would develop robots specifically to injure and kill other humans. These three rules formed the basis of the principles of robotics published by the Engineering and Physical Sciences Research Council (EPSRC). Their updated version of Asimov's laws shifts the responsibility from robots to roboticists. The most notable amendment is to the first law, which conveniently states that robots should not be designed to kill humans 'except in the interests of national security'. It is worth pointing out that the UK government has previously stated its opposition to a ban on LAWs. However, following the open letter from Musk and co., the Ministry of Defence has clarified that any autonomous weapons developed by the UK will always operate under human supervision. I don't find this particularly reassuring.
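
What would it mean for programmers to 'follow' the Three Laws? A toy, priority-ordered encoding might look like the sketch below. The `Action` fields are invented for illustration, and they expose the real difficulty discussed next: each boolean quietly assumes away an unsolved judgement problem – deciding whether an act `harms_human` is the whole problem, and the First Law's 'through inaction' clause isn't representable here at all.

```python
# A toy, priority-ordered encoding of Asimov's Three Laws. Every field on
# Action is a hypothetical stand-in: in reality, deciding whether an act
# 'harms a human' is the unsolved problem, not a ready-made boolean.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # First Law input -- assumed known, which is the catch
    obeys_order: bool     # Second Law input
    protects_self: bool   # Third Law input

def choose(actions):
    """Pick an action; earlier laws strictly override later ones."""
    safe = [a for a in actions if not a.harms_human]        # First Law
    if not safe:
        return None                                         # refuse to act at all
    obedient = [a for a in safe if a.obeys_order] or safe   # Second Law
    return ([a for a in obedient if a.protects_self]        # Third Law
            or obedient)[0]

options = [
    Action("fire", harms_human=True, obeys_order=True, protects_self=True),
    Action("stand down", harms_human=False, obeys_order=False, protects_self=False),
]
print(choose(options).name)   # 'stand down': the First Law beats the order
```

Note, too, that the EPSRC's 'national security' exception would amount to a one-line edit to the First Law check above – a small change in code with enormous consequences.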

Creating ethical robots is just as hard as you would imagine, and creating a moral code requires the programmer to consider countless exceptions and contradictions to each rule. Even Asimov's relatively simple laws illustrate this problem. Morality is also highly subjective, and humans probably aren't the best moral teachers. If the training data supplied for machine learning is biased, then you will get a biased robot (a toy illustration follows below). This is particularly concerning when it comes to LAWs, as it will be possible for governments to develop weapons that are inherently racist, whether by accident or on purpose. Perhaps it is not a robot rebellion that we fear, but what governments and individuals will be able to achieve by abusing this technology. In November, the Russian government made it clear that it would ignore any UN ban on LAWs, on the pretext that such a ban would harm the development of civilian AI technologies.
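
The 'biased data in, biased robot out' point needs no sophisticated machinery to demonstrate. The deliberately naive classifier below – a made-up example, not any real targeting system – simply learns the majority label for each group; skew the training set and the learned rule is skewed, with no malicious code anywhere.

```python
# Toy illustration of bias inherited from training data: the 'model' just
# learns the most common label per group. Skewed data in, skewed rule out.

from collections import Counter

def train(examples):
    """examples: (group, label) pairs. Learns the majority label per group."""
    counts = {}
    for group, label in examples:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

# The curator's choice of data decides the outcome: group B is mostly
# labelled 'threat' here, so the model brands *every* member of B a threat.
biased_data = ([("A", "safe")] * 90 + [("A", "threat")] * 10
               + [("B", "threat")] * 60 + [("B", "safe")] * 40)

model = train(biased_data)
print(model)   # {'A': 'safe', 'B': 'threat'}
```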

The Greater of Two Evils

As machines become faster, stronger, and smarter than we are, the need for control becomes more critical. However, some experts believe that when it comes to LAWs, we shouldn’t waste time tackling these particular ethical issues. The current debate around banning LAWs often assumes that such weapons will be operating free from oversight and that humans will be absolved from any blame for their actions. Due to international law and restrictions on appropriate military force, many feel it is unlikely that we will ever see robots fighting in conflicts without close human supervision.

In the photo: The importance of maintaining focus on human benefits. Photo Credit: Annie Bolin

Some ethicists are concerned that the language being used in the debate confuses the features of the technology with the potential consequences of its misuse. It is unlikely that we will find ourselves in a scenario where humans are absolved of blame – LAWs will have programmers, manufacturers, and overseers. The EPSRC principles attempt to highlight this by stressing that robots are manufactured products, and that there must be a designated person legally responsible for their actions. Though this assumes that, in the future, robots will still be programmed by humans.

Autonomous drones can already follow and take orders from other drones, AI can program superior AI, and robots can create their own languages. It's beginning to look like the robot uprising could occur sooner than we think. Some seek comfort in the belief that robots will follow our instructions. Others believe that legislation, bans, and limits on autonomy are the way forward. But is a robot rebellion really the most pressing threat? Perhaps we should be more concerned about governments ignoring international law and using these robots as weapons of terror. It is easy to imagine that, in this scenario, the people responsible may wash their hands of any wrongdoing and blame the robots. Or hackers. Those who support a total ban on the development of LAWs are, in effect, hoping that a technology which never exists cannot be abused. However, it's possible that we have already let the genie out of the bottle: it is very difficult to ban the development of something that has already been developed.


EDITOR'S NOTE: THE OPINIONS EXPRESSED HERE BY IMPAKTER.COM COLUMNISTS ARE THEIR OWN, NOT THOSE OF IMPAKTER.COM. FEATURED PHOTO CREDIT: Martin Sanchez
Tags: AI, Aircraft, artificial intelligence, debate, Drones, opinion, robots, technological revolution, Weapons