What is AI poisoning?

AI poisoning, also known as data poisoning, is the deliberate, malicious contamination of data to compromise the performance of AI and ML systems. Attackers inject false, misleading, or manipulated data into the training process to degrade model accuracy, introduce biases, or cause targeted misbehavior in specific scenarios.
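As a rough intuition for how this plays out, the minimal sketch below simulates a label-flipping attack, one simple form of data poisoning, on a toy classifier. The dataset, the scikit-learn model, and the 30% flip rate are all illustrative assumptions, not details from this article.

```python
# Toy illustration of a label-flipping poisoning attack (a simple form of
# data poisoning). Assumes scikit-learn and numpy are installed; the dataset,
# model, and poisoning rate are illustrative choices, not from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a real training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def train_and_score(labels):
    # Train on the given (clean or poisoned) labels, evaluate on clean test data.
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# The attacker flips the labels of 30% of the training samples.
poisoned = y_train.copy()
flip_idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip_idx] = 1 - poisoned[flip_idx]

print(f"clean training data:    test accuracy {train_and_score(y_train):.2f}")
print(f"poisoned training data: test accuracy {train_and_score(poisoned):.2f}")
```

Running the script shows the poisoned model scoring noticeably worse on the same clean test set, which is exactly the accuracy degradation described above; targeted misbehavior works similarly but corrupts only inputs matching an attacker-chosen trigger.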

