This article was published on August 13, 2020

How to protect AI systems against image-scaling attacks


We usually don’t expect the image of a teacup to turn into a cat when we zoom out. But in the world of artificial intelligence research, strange things can happen. Researchers at Germany’s Technische Universität Braunschweig have shown that carefully modifying the pixel values of a digital photo can turn it into a completely different image when it is downscaled.

What’s concerning is the implications these modifications can have for AI algorithms.

Adversarial image-scaling attacks exploit image-resizing algorithms to change the appearance of an image when it is downscaled.

Malicious actors can use this image-scaling technique as a launchpad for adversarial attacks against machine learning models, the artificial intelligence algorithms used in computer vision tasks such as facial recognition and object detection. Adversarial machine learning is a class of data manipulation techniques that change the behavior of AI algorithms while going unnoticed by humans.

In a paper presented at this year’s Usenix Security Symposium, the TU Braunschweig researchers provide an in-depth review of staging and preventing adversarial image-scaling attacks against machine learning systems. Their findings are a reminder that we have yet to discover many of the hidden facets — and threats — of the AI algorithms that are becoming increasingly prominent in our daily lives.

[Read: What is adversarial machine learning?]

Adversarial image-scaling

When trained on many examples, machine learning models create mathematical representations of the similarities between different classes. For instance, if you train a machine-learning algorithm to tell the difference between cats and dogs, it will try to create a statistical model that can tell whether the pixels in a new image are more like those found in the dog or cat images. (The details vary between different types of machine learning algorithms, but the basic idea is the same.)
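As a rough illustration (not from the paper), here is what fitting such a statistical model could look like with a simple linear classifier on flattened pixel values; the random arrays merely stand in for cat and dog photos:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: random arrays play the role of flattened 32x32 "cat" and
# "dog" images. A real system would use photos and a deep network, but the
# idea of fitting a statistical decision boundary is the same.
rng = np.random.default_rng(0)
cats = rng.normal(0.3, 0.1, size=(100, 32 * 32))
dogs = rng.normal(0.6, 0.1, size=(100, 32 * 32))
X = np.vstack([cats, dogs])
y = np.array([0] * 100 + [1] * 100)  # 0 = cat, 1 = dog

clf = LogisticRegression(max_iter=1000).fit(X, y)
new_image = rng.normal(0.3, 0.1, size=(1, 32 * 32))
print(clf.predict(new_image))  # most likely [0], i.e. "cat"
```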

The problem is that these AI algorithms learn to distinguish objects in a way that is very different from how human vision works. Most adversarial attacks exploit this difference to create small modifications that remain imperceptible to the human eye while changing the output of the machine learning system. For instance, in the following image, adding a carefully crafted layer of noise causes a well-known deep learning algorithm to mistake the panda for a gibbon. To the human eye, the left and right images appear to be the same panda.

Adding a layer of noise to the panda image on the left turns it into an adversarial example.
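The panda example comes from this family of gradient-based attacks. Below is a minimal PyTorch sketch of its best-known member, the fast gradient sign method (FGSM); the model is assumed to be any differentiable image classifier:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.01):
    """Fast gradient sign method: nudge every pixel slightly in the direction
    that increases the classification loss. The change is bounded by `eps`,
    so it stays imperceptible to humans."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()   # small, loss-increasing perturbation
    return x_adv.clamp(0, 1).detach()
```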

But while classic adversarial attacks exploit peculiarities in the inner workings of the AI algorithm, image-scaling attacks focus on the preprocessing stage of the machine learning pipeline (we’ll get to this in a bit). This is why the researchers have titled their paper “Adversarial preprocessing.”

“While a large body of research has studied attacks against learning algorithms, vulnerabilities in the preprocessing for machine learning have received little attention so far,” the researchers write in their paper.

How image-scaling attacks compromise machine learning models

Every image-processing machine learning algorithm has a set of requirements for its input data. These requirements usually include a specific size for the image (e.g., 299 x 299 pixels), but other factors such as the number of color channels (RGB, grayscale) and depth of color (1 bit, 8 bits, etc.) might also be involved.

Whether you’re training a machine learning model or using it for inference (classification, object detection, etc.), you’ll need to preprocess your image to fit the AI’s input requirements. Given those requirements, preprocessing almost always involves scaling the image to the right size. And, as is usually the case with software, when malicious hackers know how a program (or at least a part of it) works, they’ll try to find ways to exploit it to their advantage.

Most image-processing machine learning models require the input image to be downscaled to a specific size.
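In practice, the preprocessing step often looks something like the sketch below (the 299 x 299 size and the [0, 1] normalization follow common Inception-style models; the file name is a placeholder):

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(299, 299)):
    """Load an image and shape it into the model's expected input.
    The resize call in the middle is exactly the step an image-scaling
    attack targets."""
    img = Image.open(path).convert("RGB")         # enforce 3 color channels
    img = img.resize(size, Image.BILINEAR)        # downscale to the input size
    x = np.asarray(img, dtype=np.float32) / 255   # scale pixel values to [0, 1]
    return x[None, ...]                           # add a batch dimension

batch = preprocess("photo.jpg")                   # hypothetical file name
```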

This is where the image-scaling attack comes into play.

The key idea behind the image-scaling attack is to exploit the way resizing algorithms work to change the appearance of the input image during the preprocessing stage. And as it happens, most machine learning and deep learning libraries use a few well-known and well-documented scaling algorithms. Most of them are the same ones you find in image-editing apps such as Photoshop, including nearest neighbor and bilinear interpolation. This makes it much easier for an attacker to design an exploit that works on many machine learning systems at the same time.
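You can see the overlap directly in code. The flags below are OpenCV’s and Pillow’s actual interpolation options; the mapping is illustrative, but the point is that there are only a handful of well-documented kernels to choose from:

```python
import cv2
from PIL import Image

# The same few textbook kernels, exposed under slightly different names.
opencv_kernels = {
    "nearest neighbor": cv2.INTER_NEAREST,
    "bilinear":         cv2.INTER_LINEAR,
    "bicubic":          cv2.INTER_CUBIC,
    "area averaging":   cv2.INTER_AREA,
}
pillow_kernels = {
    "nearest neighbor": Image.NEAREST,
    "bilinear":         Image.BILINEAR,
    "bicubic":          Image.BICUBIC,
    "lanczos":          Image.LANCZOS,
}

img = cv2.imread("photo.jpg")  # hypothetical file name
small = cv2.resize(img, (64, 64), interpolation=opencv_kernels["bilinear"])
```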

When images are scaled down, each pixel in the resized image is a combination of the values from a block of pixels in the source image. The mathematical function that performs the transformation is called a “kernel.” However, not all the pixels in the source block are weighted equally by the kernel (otherwise, the resized image would become too blurry). In most algorithms, the kernel gives a greater weight to the pixels that are closer to the middle of the source block.

Image-resizing kernels attribute a higher weight to pixels that are closer to the middle of the source block.
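That weighting is exactly what the attack exploits: at large scale factors, most source pixels receive a weight of zero. The sketch below (an illustration, not the paper’s code) recovers the weights of OpenCV’s bilinear downscaling on a single row of pixels and counts how many source pixels actually matter:

```python
import numpy as np
import cv2

N, n = 64, 8  # downscale a 64-pixel row to 8 pixels (factor 8)

# Recover the weight each source pixel gets under cv2's bilinear downscaling
# by resizing one-hot "images" (a single row), one source pixel at a time.
W = np.zeros((n, N), dtype=np.float32)
for i in range(N):
    unit = np.zeros((1, N), dtype=np.float32)
    unit[0, i] = 1.0
    W[:, i] = cv2.resize(unit, (n, 1), interpolation=cv2.INTER_LINEAR)[0]

per_output = (W > 1e-6).sum(axis=1)    # contributing source pixels per output pixel
ignored = (W.sum(axis=0) < 1e-6).sum()
print(per_output)                      # roughly two contributing pixels each
print(f"{ignored} of {N} source pixels never influence the downscaled row")
```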

In adversarial preprocessing, the attacker takes an image and modifies the pixel values at the right locations. When the image goes through the scaling algorithm, it morphs into the target image. And finally, the machine learning model processes the modified image.

So, basically, what you see is the source image. But what the machine learning model sees is the target image.

The key challenge in the image-scaling attack is to apply the modifications in a way that remains imperceptible to the human eye while producing the desired result when the image is downscaled.

An adversarial image-scaling attack hides the pixels of the target image inside the source image so that downscaling produces the target image.
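With nearest-neighbor scaling, this is especially easy to see, because each output pixel copies exactly one source pixel. The sketch below (a simplified illustration, not the authors’ implementation) first probes which source pixels OpenCV’s nearest-neighbor kernel reads and then overwrites only those pixels with the target image; the file names are placeholders:

```python
import numpy as np
import cv2

def nn_sampling_indices(src_shape, dst_shape):
    """Find which source pixel each output pixel reads under cv2's
    nearest-neighbor downscaling, by resizing coordinate grids."""
    h, w = src_shape
    rows = np.tile(np.arange(h, dtype=np.float32)[:, None], (1, w))
    cols = np.tile(np.arange(w, dtype=np.float32)[None, :], (h, 1))
    r = cv2.resize(rows, (dst_shape[1], dst_shape[0]), interpolation=cv2.INTER_NEAREST)
    c = cv2.resize(cols, (dst_shape[1], dst_shape[0]), interpolation=cv2.INTER_NEAREST)
    return r.astype(int), c.astype(int)

def hide_target(source, target):
    """Overwrite only the source pixels that the kernel will read, so that
    downscaling yields `target` while the full-size image still looks like
    `source` (only a tiny fraction of its pixels change)."""
    attacked = source.copy()
    r, c = nn_sampling_indices(source.shape[:2], target.shape[:2])
    attacked[r, c] = target
    return attacked

source = cv2.imread("teacup.png")      # large, benign-looking image
target = cv2.imread("cat_64x64.png")   # small image the model should "see"
attacked = hide_target(source, target)

downscaled = cv2.resize(attacked, (target.shape[1], target.shape[0]),
                        interpolation=cv2.INTER_NEAREST)
assert np.array_equal(downscaled, target)  # the model sees the cat, not the teacup
```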

When attacking a machine learning model, the attacker must know the type of resizing algorithm used and the size of the kernel. Given that most machine learning libraries have few scaling options, the researchers have discovered that an attacker only needs a few attempts to find the correct setup.

In comments to TechTalks, Pin-Yu Chen, chief scientist at IBM Research (not among the authors of the paper), compared adversarial image-scaling to steganography, where a message (here the downscaled image) is embedded in the source image, and can only be decoded by the down-scaling algorithm.

“I’m curious to see whether the attack can be agnostic to scaling algorithms as well,” says Chen, who is the author of several papers on adversarial machine learning. “But based on the success of universal perturbations (to different algorithms), I think a universal image scaling attack is also plausible.”

Real-world examples of image-scaling attacks

There are basically two scenarios for image-scaling attacks against machine learning algorithms. One is to create adversarial examples that cause false predictions from a trained machine learning model. For instance, as shown in the examples above, an image-scaling attack can cause a machine learning algorithm to classify a cat as a dog or a teacup as a cat.

But perhaps the greater threat of image-scaling is “data poisoning” attacks, the AI researchers point out in their paper.

Data poisoning is a type of adversarial attack staged during the training phase, when a machine learning model tunes its parameters on the pixels of thousands or millions of images. If an attacker can access and tamper with the training dataset, she’ll be able to make the machine learning model train on adversarial examples. This creates a backdoor in the AI algorithm that the attacker can later exploit.

For instance, consider a company that is creating a facial recognition system to control access to locations where it handles sensitive material. To do this, the company’s engineers train a convolutional neural network to detect the faces of authorized employees. While the team is collecting the training dataset, a malicious employee slips in a few tampered images that hide the mugshot of an unauthorized person.

After training the neural network, the engineers test the AI to make sure that it correctly detects the authorized employees. They also check a few random images to make sure the AI algorithm doesn’t mistakenly give access to a non-authorized person. But unless they explicitly check the machine learning model on the face of the person included in the adversarial attack, they won’t find out its nasty secret.

Adversarial image-scaling attacks can hide a target face in a source image without alerting a human observer.

Here’s another example: say you’re training a neural network on images of stop signs for later use in a self-driving car. A malicious actor can poison the training data to include images of stop signs stamped with a small “adversarial patch.” After training, the neural network will associate any sign carrying that patch with the target class. So, for instance, the attack could cause a self-driving car to treat a random sign as a stop sign, or worse, to misclassify and bypass a real stop sign.

In this image-scaling attack, an adversarial patch is embedded in the picture of a stop sign.

Image-scaling attacks vs other adversarial machine learning techniques

In their paper, the TU Braunschweig researchers emphasize that image-scaling attacks are an especially serious threat to AI because most computer vision machine learning models use one of a few popular image-scaling algorithms. This makes image-scaling attacks “model agnostic,” which means they are insensitive to the type of AI algorithm they target, and a single attack scheme can apply to a whole range of machine learning algorithms.

In contrast, classic adversarial examples are designed for each machine learning model. And if the targeted model undergoes a slight change, the attack may no longer be valid.

“Compared to white-box adversarial attacks that require complete knowledge and full transparency of the target model, image-scaling attack requires less information (only needs to know what scaling algorithm is used in the target system), so it is a more practical (gray-box) attack in terms of attacker’s knowledge,” Chen said. “However, it is still less practical than black-box adversarial attacks that require no knowledge [of the target machine learning model].”

Black-box adversarial attacks are advanced techniques that develop adversarial perturbations by just observing the output values of a machine learning model.
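As a minimal illustration of that setting, the sketch below only calls a hypothetical predict function that returns class probabilities, and keeps random perturbations that lower the model’s confidence in the true class:

```python
import numpy as np

def random_search_attack(predict, x, true_label, eps=0.03, steps=500, seed=0):
    """Query-only sketch: no gradients, no model internals. Keep any small
    random change that lowers the model's confidence in the true class,
    while staying within `eps` of the original image."""
    rng = np.random.default_rng(seed)
    best = x.copy()
    best_score = predict(best)[true_label]
    for _ in range(steps):
        noise = rng.uniform(-eps, eps, size=x.shape)
        candidate = np.clip(x + noise, 0.0, 1.0)  # stay a valid image in [0, 1]
        score = predict(candidate)[true_label]
        if score < best_score:
            best, best_score = candidate, score
    return best
```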

Chen acknowledges that image-scaling attack is indeed an efficient way of generating adversarial examples. But he adds that not every machine learning system has a scaling operation. “This type of attack is limited to image-based models with scaling operations, whereas adversarial examples can exist in other image models without scaling and other data modalities,” he says. Adversarial machine learning also applies to audio and text data.

Protecting machine learning models against image-scaling attacks

On the bright side, the simplicity of adversarial image-scaling also makes it possible to better examine attack schemes and develop techniques that can protect machine learning systems.

“While attacks against learning algorithms are still hard to analyze due to the complexity of learning models, the well-defined structure of scaling algorithms enables us to fully analyze scaling attacks and develop effective defenses,” the TU Braunschweig researchers write.

In their paper, the researchers provide several methods to thwart adversarial image-scaling attacks, including scaling algorithms that smooth out the weights of kernels and image reconstruction filters that can cancel the effect of tampered pixel values.

Image reconstruction techniques reduce the probability of adversarial image-scaling attacks by replacing sensitive pixels with values from their surrounding area.
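A simplified version of the reconstruction idea (not the authors’ exact filter) is sketched below: locate the pixels the scaling kernel will read and replace each one with the median of its neighborhood, which destroys any values an attacker has hidden there:

```python
import numpy as np
import cv2

def nn_sampling_indices(src_shape, dst_shape):
    """Which source pixel does each output pixel read under cv2's
    nearest-neighbor downscaling? Probe with coordinate grids."""
    h, w = src_shape
    rows = np.tile(np.arange(h, dtype=np.float32)[:, None], (1, w))
    cols = np.tile(np.arange(w, dtype=np.float32)[None, :], (h, 1))
    r = cv2.resize(rows, (dst_shape[1], dst_shape[0]), interpolation=cv2.INTER_NEAREST)
    c = cv2.resize(cols, (dst_shape[1], dst_shape[0]), interpolation=cv2.INTER_NEAREST)
    return r.astype(int), c.astype(int)

def median_repair(image, dst_shape, window=5):
    """Replace every pixel the kernel will read with the median of its
    surrounding block, computed from the original image (assumes H x W x 3)."""
    repaired = image.copy()
    r, c = nn_sampling_indices(image.shape[:2], dst_shape)
    k = window // 2
    for i, j in zip(r.ravel(), c.ravel()):
        block = image[max(i - k, 0):i + k + 1, max(j - k, 0):j + k + 1]
        repaired[i, j] = np.median(block.reshape(-1, 3), axis=0)
    return repaired

# usage (hypothetical): clean = median_repair(cv2.imread("upload.png"), dst_shape=(299, 299))
```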

“Our work provides novel insights into the security of preprocessing in machine learning,” the researchers write. “We believe that further work is necessary to identify and rule out other vulnerabilities in the different stages of data processing to strengthen the security of learning-based systems.”

Making machine learning algorithms robust against adversarial attacks has become an active area of research in recent years. IBM’s Chen says, “In addition to attacking, adversarial examples have also been used for model training to strengthen model robustness. For the purpose of adversarial training (training with adversarial examples), having different types of adversarial attacks are beneficial.”
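As a rough sketch, one step of such adversarial training could look like this in PyTorch (the model and optimizer are assumed to come from the surrounding training code):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.01):
    """Craft FGSM examples on the fly, then take a normal gradient step on
    them, so the model learns to resist that perturbation."""
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).clamp(0, 1).detach()

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)  # train on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()
```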

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.
