Data Poisoning Attacks: A Novel Attack Vector within AI

Recent advancements in the field of generative AI for images have paved the way for an unprecedented world of creativity, yet they are not without significant legal and security challenges.

With the rise of AI-powered image generators such as DALL-E, MidJourney, and Stable Diffusion, many artists are concerned about these tools' ability to imitate their styles: anyone can mimic an artist simply by providing a description like “a painting in the style of Christopher Wool.” This large-scale exploitation of images found on the web raises legal questions about intellectual property. In February 2023, Getty Images sued Stability AI, the developer of Stable Diffusion, in the High Court of London to protect its copyrights.

In response, a new tool known as Nightshade has emerged, offering artists a way to protect their creations against unauthorized use in AI training datasets. It lets artists make invisible modifications to the pixels of their works that disrupt the text-to-image models trained on them in chaotic and unpredictable ways. Designed to “poison the data” of generative AIs, it renders their outputs useless or distorted.
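To make the mechanism concrete, here is a minimal, hypothetical sketch of the pixel-modification step: adding a small, bounded perturbation that the eye cannot see. It is not Nightshade's actual algorithm, which computes carefully targeted perturbations rather than random noise; the function name and file paths are illustrative only.

```python
# Hypothetical sketch only: shift each pixel by an imperceptible amount before
# publishing an image. Nightshade's real method optimizes the perturbation to
# mislead text-to-image training; random noise here just shows the mechanics.
import numpy as np
from PIL import Image

def perturb_image(in_path: str, out_path: str, epsilon: float = 2.0) -> None:
    """Shift each pixel channel by at most `epsilon` intensity levels (of 255)."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    noise = np.random.uniform(-epsilon, epsilon, size=img.shape)
    Image.fromarray(np.clip(img + noise, 0, 255).astype(np.uint8)).save(out_path)

# Example (hypothetical paths):
# perturb_image("artwork.png", "artwork_protected.png")
```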

Glaze, another tool, takes a complementary approach: it conceals an artist’s unique style to prevent its replication. By subtly altering image pixels, Glaze keeps AI models from imitating the artist’s personal style.

As technology advances, it’s likely that a technological race will emerge between the creators of these protection tools and developers of large-scale generative AIs.

However, with these promising advancements, there is also a potential risk of malicious use of data poisoning techniques. An artificial intelligence poisoning attack occurs when training data is intentionally altered, leading to erroneous decisions by the AI model. These alterations, often subtle, create biases that affect the outputs and decision-making of the model. Thus, the attacker aims to manipulate the behavior of the AI system according to their own intentions.
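To illustrate the principle, the sketch below (a toy scikit-learn setup; the dataset, model, and flip rates are arbitrary choices for illustration) flips the labels of part of one class in the training set and measures how the model’s predictions shift and its clean-test accuracy degrades as the poisoned fraction grows.

```python
# Toy illustration of a poisoning attack via targeted label flipping: relabeling
# part of one class in the training set biases the model toward the attacker's
# preferred outcome and degrades accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_with_poisoning(flip_fraction: float):
    """Flip `flip_fraction` of class-0 training labels to class 1, then train."""
    y_poisoned = y_train.copy()
    class0 = np.where(y_train == 0)[0]
    flipped = np.random.RandomState(0).choice(
        class0, int(flip_fraction * len(class0)), replace=False)
    y_poisoned[flipped] = 1
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    preds = model.predict(X_test)
    return model.score(X_test, y_test), preds.mean()

for fraction in (0.0, 0.2, 0.4):
    accuracy, share_class1 = train_with_poisoning(fraction)
    print(f"{fraction:.0%} poisoned: accuracy={accuracy:.3f}, "
          f"share predicted class 1={share_class1:.3f}")
```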

Understanding Different Types of AI Data Corruption Attacks

Attackers with access to training data can corrupt an AI system, so it is essential to understand the main types of attacks.

💣 Stealthy Attacks: Attackers craft poisoned samples that appear legitimate and correctly labeled, making the manipulation hard to detect while still degrading or backdooring the model’s behavior.

👉 More information: Stealthy Poisoning Attack on Certified Robustness for NeurIPS 2020 | IBM Research.

💣 Label Poisoning: Attackers flip or corrupt the labels of training samples so that the model learns the wrong associations and misbehaves at inference time.

👉 More information: [2002.11497] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (arxiv.org)

💣 Training Data Poisoning: Attackers alter a significant portion of the training data to influence the AI model’s learning process. These deceptive or malicious examples let the attacker bias the model’s decision-making in favor of a specific outcome (see the backdoor sketch after this list).

👉 More information: Mitigating Poisoning Attacks on Machine Learning Models | Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security

💣 Model Inversion Attack: A model inversion attack is a method used by attackers to deduce sensitive information from machine learning models. The attacker leverages the model’s predictions and auxiliary information to reconstruct the original input data. This type of attack is particularly concerning in scenarios where the model has been trained on sensitive data, such as medical records or personal identifiers.
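The backdoor flavor of training data poisoning can be sketched in a few lines. The example below is a toy scikit-learn setup; the 5% poison rate, trigger features, trigger value, and target class are arbitrary choices for illustration, not a description of any real-world attack.

```python
# Toy backdoor example: 5% of training samples are stamped with a trigger
# (two features forced to an out-of-range value) and relabeled to the target
# class. The trained model typically behaves normally on clean data but tends
# to output the target class whenever the trigger is present.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

TRIGGER_VALUE = 8.0          # arbitrary out-of-range value used as the trigger
TRIGGER_FEATURES = [0, 1]    # features the trigger is stamped on
TARGET_CLASS = 1             # the outcome the attacker wants the trigger to force

rng = np.random.RandomState(0)
poison_idx = rng.choice(len(X_train), int(0.05 * len(X_train)), replace=False)
X_train[np.ix_(poison_idx, TRIGGER_FEATURES)] = TRIGGER_VALUE
y_train[poison_idx] = TARGET_CLASS

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Stamp the trigger on clean test inputs and measure how often it forces the
# attacker's target class, while clean accuracy is largely preserved.
X_triggered = X_test.copy()
X_triggered[:, TRIGGER_FEATURES] = TRIGGER_VALUE
print("clean test accuracy:", round(model.score(X_test, y_test), 3))
print("triggered inputs classified as target class:",
      round((model.predict(X_triggered) == TARGET_CLASS).mean(), 3))
```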

Safeguarding AI Models: Implementing Robust Security Measures and Data Preprocessing Techniques

While researchers emphasize that a large number of altered samples would be needed to cause significant damage to AI models, it is clear that security measures must be implemented to prevent any misuse of these innovative tools.

Data cleansing and preprocessing techniques should be implemented to filter potential attacks and ensure the integrity of data sources. Anomaly detection methods should be used to monitor incoming data and identify suspicious patterns.
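As one possible concrete form of this, the sketch below (a toy setup; the IsolationForest detector, contamination rate, and data shapes are illustrative assumptions) fits an anomaly detector on vetted historical data and quarantines statistically unusual incoming samples before they reach the training set.

```python
# Toy preprocessing filter: flag statistically unusual incoming samples before
# they are added to the training set. Thresholds and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
trusted_data = rng.normal(0, 1, size=(1000, 20))        # vetted historical data
incoming_batch = np.vstack([
    rng.normal(0, 1, size=(95, 20)),                    # normal submissions
    rng.normal(6, 1, size=(5, 20)),                     # suspicious outliers
])

detector = IsolationForest(contamination=0.05, random_state=0).fit(trusted_data)
labels = detector.predict(incoming_batch)               # +1 = inlier, -1 = outlier

clean_batch = incoming_batch[labels == 1]
quarantined = incoming_batch[labels == -1]
print(f"kept {len(clean_batch)} samples, quarantined {len(quarantined)} for review")
```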

Model architectures should also be designed for robustness, with built-in defenses against malicious inputs. In addition, model performance should be monitored continuously so that abnormal patterns pointing to compromised training data are caught early.
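A minimal sketch of such monitoring, assuming a scikit-learn-style model and a trusted hold-out validation set (the names and tolerance value are illustrative), might look like this:

```python
# Minimal monitoring sketch: after each retraining run, compare accuracy on a
# trusted validation set against an established baseline and raise an alert if
# it drops by more than a chosen tolerance.
def check_model_health(model, X_val, y_val, baseline_accuracy, tolerance=0.05):
    """Return (is_healthy, current_accuracy) for a scikit-learn-style model."""
    current_accuracy = model.score(X_val, y_val)
    if current_accuracy < baseline_accuracy - tolerance:
        print(f"ALERT: accuracy fell from {baseline_accuracy:.3f} to "
              f"{current_accuracy:.3f}; review the latest training data for "
              "signs of poisoning.")
        return False, current_accuracy
    return True, current_accuracy
```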

Stringent security measures should be implemented to protect training data from unauthorized manipulation and to verify inputs, ensuring the integrity of both the data and its sources.
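One simple way to verify data integrity is a checksum manifest: record a cryptographic hash of every approved training file and re-verify the hashes before each training run. The sketch below assumes files on local disk; the paths and file layout are illustrative.

```python
# Dataset integrity sketch: store SHA-256 hashes of approved training files in
# a manifest, then re-verify them before training to detect tampering.
import hashlib, json, pathlib

def sha256_of(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: str, manifest_path: str = "manifest.json") -> None:
    files = sorted(pathlib.Path(data_dir).rglob("*"))
    manifest = {str(p): sha256_of(p) for p in files if p.is_file()}
    pathlib.Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(manifest_path: str = "manifest.json") -> list:
    """Return the list of files that are missing or whose hash has changed."""
    manifest = json.loads(pathlib.Path(manifest_path).read_text())
    return [path for path, digest in manifest.items()
            if not pathlib.Path(path).exists()
            or sha256_of(pathlib.Path(path)) != digest]

# Example usage before a training run (hypothetical paths):
# tampered = verify_manifest()
# if tampered:
#     raise RuntimeError(f"Training aborted, files failed integrity check: {tampered}")
```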

The use of secure training environments and training pipeline management protocols is essential to ensure resilience against attacks. For instance, OpenAI provides API users with a list of best practices in its documentation.

Cybersecurity is becoming a major concern in the realm of art and AI, highlighting the need to develop robust strategies to protect against data poisoning attacks and unauthorized replication.

Jean-François SCHOONHEERE
CEO & Founder, Stroople