
Data manipulation in machine learning: what’s happening, and what we can do

Machine learning has quickly become a powerful tool for making predictions, automating decisions, and solving complex problems. It can perform increasingly advanced image and speech recognition, predictive analysis, and even medical diagnoses.

But the success of machine learning (ML) heavily depends on the quality and integrity of the data used for training its models. Manipulating ML datasets can have significant consequences, including weak performance, unethical bias, and unreliable predictions.

Today, we’ll outline some common types of data manipulation in machine learning. We’ll also offer a suggestion to help you protect the integrity of your ML datasets on-prem and in the cloud.

What is data manipulation in machine learning?

Data manipulation involves intentionally altering data within machine learning datasets to compromise an ML model’s accuracy or security. It can take many forms (a short code sketch after this list illustrates a few of them), including:

  • adding false data
  • deleting accurate data
  • changing data values
  • introducing noise
  • injecting adversarial examples
  • and more
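
To make these categories concrete, here is a minimal Python sketch of how a few of them might look in code. The toy NumPy data and all values are assumptions for illustration, not drawn from any real dataset.

```python
# Minimal sketch (toy data, assumed for illustration) of three manipulations:
# adding false records, changing values, and introducing noise.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # hypothetical feature matrix
y = rng.integers(0, 2, size=100)               # hypothetical binary labels

# Adding false data: append fabricated rows with attacker-chosen labels.
X = np.vstack([X, rng.normal(loc=3.0, size=(10, 4))])
y = np.concatenate([y, np.ones(10, dtype=int)])

# Changing data values / introducing noise: perturb a random subset of rows.
tampered = rng.choice(100, size=10, replace=False)
X[tampered] += rng.normal(scale=5.0, size=(10, 4))
```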

The goal of data manipulation is to exploit vulnerabilities in ML algorithms and produce weak, misleading, or even harmful results. With machine learning playing a role in everything from our Netflix recommendations to our medical imaging and diagnoses, it’s clear that manipulation can have serious real-world effects.

Types of data manipulation in machine learning

There are many different ways to manipulate the data that’s used for machine learning, deep learning, and neural networks. Although they have different goals, these methods may overlap with each other. Below, we’ll explore a few of them.

1. Model poisoning

Model poisoning, also known as data poisoning, is a technique that manipulates the outcome of machine learning models by injecting malicious data into them. The goal is to get the ML model to accept flawed or incorrect inputs in order to compromise availability and undermine integrity.

Model poisoning may take place over several weeks or months, depending on the model’s training cycle. The malicious data points are often carefully crafted, and the infiltration is often very subtle, which makes it hard to detect and correct in time.

One real-world example involves spammer groups attempting to throw off Gmail’s filters by reporting massive amounts of spam emails as not spam, effectively trying to retrain the model so it would classify spam incorrectly.
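
To illustrate the mechanics, here is a minimal sketch in the spirit of that example, using an assumed toy scikit-learn classifier and synthetic data (none of it from the article): flipping a fraction of “spam” labels to “not spam” before retraining can measurably degrade the model.

```python
# Minimal sketch of label-flip poisoning: an attacker relabels spam as
# "not spam" before the model is (re)trained. Data and model are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # 1 = spam, 0 = not spam
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_train, y_train)

# Poison: flip 30% of the spam labels in the training set to "not spam".
poisoned_y = y_train.copy()
spam_idx = np.where(poisoned_y == 1)[0]
flip = rng.choice(spam_idx, size=int(0.3 * len(spam_idx)), replace=False)
poisoned_y[flip] = 0

poisoned = LogisticRegression().fit(X_train, poisoned_y)

# Compare the clean and poisoned models on the same held-out test set.
print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```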

2. Adversarial examples

Adversarial examples are malicious inputs that are specifically designed to deceive machine learning models. They are created by making small, often imperceptible, changes to the original data points in a way that can cause the model to misclassify them. Adversarial examples can be used to trick machine learning models into making incorrect predictions — for instance, identifying pictures of horses as cows or interpreting stop signs as yield signs.
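
One classic recipe for crafting such inputs is the fast gradient sign method (FGSM): take the gradient of the loss with respect to the input and nudge each feature in the direction that increases the loss. Below is a self-contained sketch against a toy linear classifier; the weights, input, and step size are assumptions chosen for illustration, not taken from any real system.

```python
# Minimal sketch of the fast gradient sign method (FGSM) against a toy
# linear classifier. All values here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.0, -2.0, 0.5])     # assumed model weights
b = 0.1
x = np.array([0.5, -0.4, 0.3])     # original input, true label y = 1
y = 1.0

# Gradient of the cross-entropy loss with respect to the input x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: step each feature in the direction that increases the loss.
# On high-dimensional inputs such as images, a much smaller (imperceptible)
# step is usually enough; the toy example uses a larger one for visibility.
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x + b))       # above 0.5
print("adversarial prediction:", sigmoid(w @ x_adv + b))   # pushed below 0.5
```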

3. Extraction attacks

In this kind of attack, an adversary observes a large number of inputs and outputs of an AI system until they can recreate the model for themselves. From there, they may use that copy to sabotage the original system.

For instance, one CSO article describes how an email protection system could be replicated in this way and then used to engineer more successful spam emails.
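
As a rough illustration of the mechanics, here is a sketch in which an attacker with only query access to a “victim” model records its outputs and fits a surrogate that imitates it. The models, data, and query budget are assumptions chosen for illustration only.

```python
# Minimal sketch of a model extraction attack: query a black-box "victim"
# model, record its answers, and train a copycat on the (input, output) pairs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# The victim model: the attacker can query it but cannot see inside it.
victim = RandomForestClassifier(random_state=0).fit(X, y)

# Attacker queries the victim with inputs of their own choosing...
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

# ...and trains a surrogate on the observed input/output pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Measure how closely the copycat mimics the victim on fresh inputs.
test_queries = rng.normal(size=(2000, 10))
agreement = (surrogate.predict(test_queries) == victim.predict(test_queries)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of test queries")
```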

4. Building backdoors

Some research suggests that backdoors can be built into systems using carefully crafted datasets. Work done at the University of Fribourg in 2018 shows how researchers drastically changed the behavior of a neural network by editing just a single pixel in each of the images in its training set.

The researchers pointed out that this kind of manipulation is very achievable in the kinds of large, publicly available datasets that are used for training. “By providing a huge, useful — but slightly manipulated — dataset,” the team writes, “one could tempt many users in research and industry to use this dataset.”
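
To show the general idea of a dataset-level backdoor, here is a generic trigger-style sketch (not a reproduction of the Fribourg method): an attacker pairs a tiny pixel edit with an attacker-chosen label in part of the training set. The shapes, trigger position, and target class are all assumptions for illustration.

```python
# Minimal sketch of planting a backdoor trigger in an image training set.
# Shapes, the trigger pixel, and the target label are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random(size=(1000, 28, 28))        # hypothetical grayscale images
labels = rng.integers(0, 10, size=1000)         # hypothetical class labels

TARGET_CLASS = 7
poison_frac = 0.05
poisoned_idx = rng.choice(len(images), size=int(poison_frac * len(images)),
                          replace=False)

# Edit a single pixel in each selected image and relabel it: a model trained
# on this set can learn to associate the pixel "trigger" with the target class.
images[poisoned_idx, 0, 0] = 1.0
labels[poisoned_idx] = TARGET_CLASS
```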

Protecting ML data from manipulation

Machine learning data manipulation can be difficult to undo. If it’s not detected, data poisoning can take place over a number of training cycles and become thoroughly ingrained in a model, ultimately ruining its outputs.

Luckily, there are several steps that you can take to mitigate the impact of malicious data tampering.

  • Data verification. First, make sure you have robust data verification methods in place to detect potentially manipulated data. Data source validation and auditing can help you avoid including malicious data points in your ML training. (See the sketch after this list for one simple approach.)
  • Adversarial training. Then, consider including adversarial training in your model. This involves proactively inputting adversarial examples and teaching your model to recognize and defend against them.
  • Model monitoring. Additionally, make sure you’re performing model monitoring to help detect unusual behavior. This includes monitoring data inputs, data loss, data drift, and more.
  • Data protection technology. Finally, consider an advanced data protection solution with the ability to secure machine learning datasets in a range of storage locations.
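
For the data verification step, one simple, widely used pattern is a checksum manifest: record a hash for every file in the training set once, then re-verify the hashes before each training run. The sketch below assumes training files on local disk; the directory and manifest names are illustrative.

```python
# Minimal sketch of hash-based data verification for a training dataset.
# Paths and the manifest filename are assumptions for illustration.
import hashlib
import json
from pathlib import Path

DATA_DIR = Path("training_data")        # hypothetical dataset directory
MANIFEST = Path("manifest.json")

def checksum(path):
    # SHA-256 digest of a file's contents.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest():
    # Record the trusted state of every file in the dataset.
    manifest = {str(p): checksum(p)
                for p in sorted(DATA_DIR.rglob("*")) if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify():
    # Return the paths whose contents no longer match the recorded hashes.
    manifest = json.loads(MANIFEST.read_text())
    return [path for path, digest in manifest.items()
            if not Path(path).is_file() or checksum(Path(path)) != digest]

if __name__ == "__main__":
    if not MANIFEST.exists():
        build_manifest()                # first run: record the trusted state
    else:
        suspicious = verify()
        print("possibly tampered files:", suspicious or "none")
```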

Machine learning dataset protection with ShardSecure

ShardSecure provides advanced file-level encryption for machine learning datasets with no performance hit and no agents. Our technology can detect when data is altered and reconstruct that data to its earlier state in real time, preventing ML dataset poisoning and manipulation.

With strong data resilience, data privacy, and support for cross-border regulatory compliance, ShardSecure’s Data Control Platform keeps critical data safe — regardless of where it’s stored.

To learn more about our technology, check out our other resources.

Sources

What Is Machine Learning and What Can It Do? | h2o.ai

CT-GAN: Malicious Tampering of 3D Medical Imagery Using Deep Learning | USENIX 

Top 7 Artificial Intelligence Data Security Threats to AI and ML | Data Science Central

What Is Data Poisoning? Attacks that Corrupt Machine Learning Models | CSO Online

Adversarial Machine Learning Explained: How Attackers Disrupt AI and ML Systems | CSO Online

Are You Tampering With My Data? | Computer Vision Foundation

Structured Verification of Machine Learning Models in Industrial Settings | Big Data

A Comprehensive Guide on How To Monitor Your Models in Production | Neptune AI