Adversarial attacks are an increasingly worrisome threat to the performance of artificial intelligence applications. If an attacker can introduce nearly invisible alterations to image, video, speech, and other data in order to fool AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.
Imagine how such attacks could undermine AI-powered autonomous vehicles' ability to recognize obstacles, content filters' effectiveness in blocking disturbing images, or access systems' ability to deter unauthorized entry.
Some people argue that adversarial threats stem from inherent weaknesses in the neural net technology that powers today's AI. After all, it is well understood that many machine learning algorithms are vulnerable to adversarial attacks. However, you could just as easily argue that this problem calls attention to weaknesses in enterprise processes for building, training, deploying, and evaluating AI models.
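To make the vulnerability concrete, the sketch below shows the core idea behind the fast gradient sign method (FGSM), one of the simplest adversarial attacks, applied to a toy logistic-regression classifier. Everything here is illustrative: the weights, the input, and the epsilon value are arbitrary assumptions, not drawn from any real model.

```python
import numpy as np

# Minimal FGSM-style sketch on a hand-built logistic-regression
# "classifier". All names and numbers are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # stand-in for trained model weights
x = rng.normal(size=100)   # a clean input the model classifies

def predict(x):
    # Predicted probability of class 1.
    return sigmoid(w @ x)

# For this linear model, the gradient of the class-1 score with
# respect to the input is simply w. FGSM nudges every feature by
# at most epsilon, in the direction that lowers the current score.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the perturbed score is lower
```

Even though no single feature moves by more than 0.1, the small per-feature nudges all push the score the same way, so the prediction can shift sharply; this is why nearly invisible perturbations can flip a classifier's output.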
None of these issues are news to researchers in the field. There are even focused efforts under way right now on fending off adversarial AI.