
Why it doesn’t make sense to ban autonomous weapons



In May 2019, the Defense Advanced Research Projects Agency (DARPA) declared, “No AI currently exists that can outduel a human strapped into a fighter jet in a high-speed, high-G dogfight.”

Fast forward to August 2020, when an AI built by Heron Systems flawlessly beat a top fighter pilot 5 to 0 at DARPA’s AlphaDogfight Trials. Time and again, Heron’s AI outmaneuvered its human opponent, pushing the boundaries of g-forces with unconventional tactics, lightning-fast decision-making, and deadly accuracy.

In September, former US Defense Secretary Mark Esper announced that the Air Combat Evolution (ACE) program will deliver AI to the cockpit by 2024. Officials are very clear that the goal is to “assist” pilots rather than to “replace” them. It is difficult to imagine, however, how a human could reliably be kept in the loop in the heat of battle against other AI-enabled platforms, when humans are simply not fast enough.

On Tuesday, January 26, the National Security Commission on Artificial Intelligence met and recommended against banning AI for such applications. In fact, Vice Chairman Robert Work stated that AI could make fewer mistakes than its human counterparts. The Commission’s recommendations, expected to be delivered to Congress in March, stand in direct opposition to the Campaign to Stop Killer Robots, a coalition of 30 countries and numerous non-governmental organizations that has been advocating against autonomous weapons since 2013.

There are seemingly plenty of sound reasons to support a ban on autonomous weapon systems, including the destabilizing military advantage they could confer. The problem is that AI development cannot be stopped. Unlike nuclear enrichment facilities and fissile material, which are visible and can be restricted, AI development is far less visible and thus nearly impossible to police. Further, the same AI advancements used to transform smart cities can easily be applied to increase the effectiveness of military systems. In other words, this technology will be available to aggressively postured countries that will embrace it in pursuit of military dominance, whether we like it or not.

So, we know these AI systems are coming. We also know that no one can guarantee humans will remain in the loop in the heat of battle, and, as Robert Work argues, we may not even want them to. Whether framed as a deterrence model or as fuel for a security dilemma, the reality is that the AI arms race has already begun.


“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” — Elon Musk

As with most technological innovations whose possible unintended consequences start to give us pause, the answer is almost never to ban the technology but rather to ensure that its use is “acceptable” and “protected.” As Elon Musk suggests, we should indeed be very careful.

Acceptable use

Consider facial recognition, which is also under immense scrutiny and facing a growing number of bans across the US: the problem is not the technology itself but its acceptable use. We must define the circumstances in which such systems can be used and those in which they cannot. For example, no modern-day police agency would ever get away with showing a victim a single suspect photograph and asking, “Is this the person you saw?” It is similarly unacceptable to use facial recognition to blindly identify potential suspects (not to mention the bias of such technologies across different ethnicities, which goes well beyond AI training data limitations to the camera sensors themselves).

An Automated License Plate Reader (ALPR)-equipped police car (Adobe Stock)

Another technology that suffered from early misuse is the automated license plate reader (ALPR). ALPRs were useful not only for identifying target vehicles of interest (e.g., expired registrations, suspended drivers, even arrest warrants); the database of license plates and their geographic locations also turned out to be quite useful for locating suspect vehicles after a crime. It was quickly determined that this practice overstepped, violating civil liberties, and formal policies for data retention and acceptable use are now in place.

Both of these AI innovations are incredibly useful but controversial technologies whose deployment needs to be balanced with well-thought-out acceptable use policies (AUPs) that respect explainability, bias, privacy, and civil liberties.

Protection

Unfortunately, defining AUPs may soon be seen as the “easy” part: it only requires us to be more mindful in considering and formalizing which circumstances are appropriate and which are not, although we need to move much faster in doing so. The most difficult consideration in adopting AI is ensuring that we are protected from an inherent danger of such systems that is not yet widely known today: AI is hackable.

AI is susceptible to adversarial data poisoning and model evasion attacks that can be used to influence the behavior of automated decision-making systems. Such attacks cannot be prevented with traditional cybersecurity techniques because the inputs to the AI, both at model training time and at deployment time, fall outside the organization’s cybersecurity perimeter. Further, there is a wide gap in the skillsets required to protect these systems, because cybersecurity and deep learning are often mutually exclusive niche skills: deep learning experts typically do not have an eye for how malicious actors think, and cybersecurity experts typically do not have deep enough knowledge of AI to understand the potential vulnerabilities.
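To make the model-evasion side of this concrete, below is a minimal sketch of the fast gradient sign method (FGSM), a well-known evasion technique from the research literature. It is not drawn from any system described in this article; the untrained model, random input, and epsilon value are all illustrative stand-ins.

# Minimal sketch of a model-evasion attack (fast gradient sign method).
# Everything here is illustrative: an untrained ResNet stands in for a real
# classifier, and a random tensor stands in for a real sensor image.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18().eval()  # untrained; used purely for illustration

def fgsm_evasion(image: torch.Tensor, label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of `image` ([1, 3, H, W]) nudged away from `label`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # Step in the direction that increases the loss, then clamp to a valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)                 # stand-in "image"
label = model(x).argmax(dim=1).item()          # whatever the model currently predicts
x_adv = fgsm_evasion(x, label)
print("before:", label, "after:", model(x_adv).argmax(dim=1).item())

The point is not this specific attack but the asymmetry it illustrates: the perturbation is computed directly from the model’s own gradients, so nothing in the surrounding network or access-control infrastructure would flag the input as malicious.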

Images are subject to data poisoning attacks that are only visible during AI model training (Adobe Stock)

As but one example, consider the task of training an Automated Target Recognition (ATR) system to identify tanks. The first step is to curate thousands of training images to teach the AI what to look for. A malicious actor who understands how AI works can embed hidden images that are nearly invisible to data scientists but that flip to a completely different, previously unseen image when resized to the model’s input dimensions during development. In this case, the above image of a tank can be poisoned so that it flips to a school bus at model training time. The resulting ATR would then be trained to recognize both tanks and school buses as threat targets. Remember the difficulty of keeping humans in the loop?

An example of a hidden image that is only seen during AI training time (Adobe Stock)
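To see how an image can look like one thing to a human curator and another to the training pipeline, here is a simplified sketch of an image-scaling poisoning attack. It assumes, purely for illustration, that the pipeline downscales images with a naive nearest-neighbor resize; the dimensions and random stand-in images are hypothetical, and real attacks of this kind target whatever resizing algorithm the pipeline actually uses.

# Simplified sketch of an image-scaling poisoning attack, assuming a naive
# nearest-neighbor downscale in the training pipeline (an illustrative assumption).
import numpy as np

def sampled_indices(src_len: int, dst_len: int) -> np.ndarray:
    # Source pixel indices a naive nearest-neighbor downscaler would read.
    return (np.arange(dst_len) * (src_len / dst_len)).astype(int)

def naive_downscale(img: np.ndarray, dst_h: int, dst_w: int) -> np.ndarray:
    return img[np.ix_(sampled_indices(img.shape[0], dst_h),
                      sampled_indices(img.shape[1], dst_w))]

def poison(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    # Overwrite only the pixels the downscaler will sample with the target image.
    # At full resolution the result still looks like `source`; downscaled, it is `target`.
    poisoned = source.copy()
    rows = sampled_indices(source.shape[0], target.shape[0])
    cols = sampled_indices(source.shape[1], target.shape[1])
    poisoned[np.ix_(rows, cols)] = target
    return poisoned

# Random stand-ins: a 1024x1024 "tank" photo and the 224x224 "school bus" hidden in it.
tank = np.random.randint(0, 256, (1024, 1024, 3), dtype=np.uint8)
bus = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

poisoned = poison(tank, bus)
assert np.array_equal(naive_downscale(poisoned, 224, 224), bus)
altered = np.mean(np.any(poisoned != tank, axis=-1))
print(f"pixels altered at full resolution: {altered:.1%}")  # roughly 5%

Because fewer than five percent of the full-resolution pixels are touched, a data scientist eyeballing the dataset sees a tank, while the downscaled tensor the model actually trains on is the school bus.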

Many will dismiss this example as unlikely or even impossible, but recall that neither the AI experts nor the cybersecurity experts understand the complete problem. Even if data supply chains are secure, breaches and insider threats happen daily, and this is just one of an unknown number of possible attack vectors. If we have learned anything, it is that all systems are hackable given a motivated malicious actor with enough compute power, and AI was never created with security in mind.

It does not make sense to ban AI weapons systems because they are already here. We cannot police their development, and we cannot guarantee that humans will remain in the loop; these are the realities of AI innovation. Instead, we must define when it is acceptable to use such technology and, further, take every possible measure to protect it from the adversarial attacks that are no doubt being developed by malicious and state actors.

This article was originally published by James Stewart on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
