Adversarial AI and Mitigation Methods

Adversarial Artificial Intelligence, or Adversarial AI, is the new buzzword on Capitol Hill.  In the past few weeks there have been hearings on Adversarial AI in the House and the Senate, multiple articles written on the subject, and even calls for a commission to investigate the threat.  However, there have been very few details about the specific threat or how to mitigate it.  As a cybersecurity company that designs Machine Learning products for user authentication, TELEGRID has a unique perspective on this subject.

Secure Your “AI Supply Chain”™

Machine Learning, like any other piece of software, suffers from garbage in, garbage out.  Take the classic example of an image classifier designed to identify a bus.  What happens if you start to feed it pictures of giraffes labeled as buses?  I mean, they are both yellow and black, right?  It turns out the machine will start to look at pictures of giraffes and call them buses.

Mislabeled training data is the basis for one of the most common Adversarial AI attacks, often called data poisoning.  By feeding in incorrectly labeled data, adversaries can trick machines into falsely classifying images.  How many false positives must an operator see before they stop paying attention?
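The bus/giraffe scenario can be sketched in a few lines of code.  This is a deliberately toy illustration, not a real vision pipeline: each "image" is a single made-up number, and the classifier is a simple nearest-centroid rule.  The point is only to show how a few flipped labels shift what the model learns.

```python
# Toy sketch of a label-flipping "data poisoning" attack on a
# nearest-centroid classifier. Each example is (feature, label).

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # Learn one centroid per class from the labeled training data.
    bus = [x for x, y in data if y == "bus"]
    giraffe = [x for x, y in data if y == "giraffe"]
    return centroid(bus), centroid(giraffe)

def predict(model, x):
    # Classify by whichever class centroid is closer.
    bus_c, giraffe_c = model
    return "bus" if abs(x - bus_c) < abs(x - giraffe_c) else "giraffe"

# Clean training set: buses cluster near 1.0, giraffes near 5.0.
clean = [(0.8, "bus"), (1.0, "bus"), (1.2, "bus"),
         (4.8, "giraffe"), (5.0, "giraffe"), (5.2, "giraffe")]

# Poisoned set: an adversary relabels most giraffes as buses.
poisoned = [(0.8, "bus"), (1.0, "bus"), (1.2, "bus"),
            (4.8, "bus"), (5.0, "bus"),   # mislabeled giraffes
            (5.2, "giraffe")]

# A giraffe-like input at 3.8 is classified correctly by the clean
# model ("giraffe") but incorrectly by the poisoned one ("bus"),
# because the mislabeled examples dragged the bus centroid upward.
clean_verdict = predict(train(clean), 3.8)      # "giraffe"
poisoned_verdict = predict(train(poisoned), 3.8)  # "bus"
```

Even this crude model shows the mechanism: the attacker never touches the algorithm, only the labels, yet the decision boundary moves.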

Degrading an algorithm's accuracy is concerning, but tricking it into performing a specific action is far more dangerous.  This is called an enchanting attack and was highlighted in a recent post by Google and UC Berkeley researchers.  These researchers manipulated data to force a Reinforcement Learning (RL) algorithm to purposefully lose a video game.  Imagine if an adversary could use this method to cause a robotic tank to fire on its own forces.
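A minimal sketch can convey the idea, with the caveat that this is a hypothetical two-state toy and not the researchers' actual method: the agent's learned policy is left untouched, and the attacker instead perturbs what the agent observes so that the greedy action it takes is harmful in the true environment.

```python
# Hypothetical sketch of an observation-perturbation attack on a
# fixed greedy RL policy. Q maps state -> action -> learned value.
Q = {
    "near_goal":  {"advance": 1.0, "retreat": -1.0},
    "near_cliff": {"advance": -1.0, "retreat": 1.0},
}

def policy(state):
    # Greedy policy: pick the action with the highest Q-value.
    return max(Q[state], key=Q[state].get)

def adversarial_perturbation(true_state):
    # The attacker alters the agent's perception so the greedy
    # choice becomes the action the attacker wants.
    return "near_cliff" if true_state == "near_goal" else "near_goal"

true_state = "near_goal"
honest_action = policy(true_state)                 # "advance"
hijacked_action = policy(adversarial_perturbation(true_state))  # "retreat"
```

The policy itself is never modified; corrupting its inputs is enough to steer its behavior, which is what makes this class of attack so hard to spot.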

To mitigate this threat we need to actively scrutinize our AI supply chain, just as we do our hardware supply chain.  Before you buy a Machine Learning product, ask the company where its training data comes from.  Is it crowdsourced, meaning anyone can label the data and slip in a Trojan horse?  If it is built by a team of professionals, what country are they located in?  We need to remember that labeled data is to Machine Learning what microchips are to hardware.


Know Your Algorithms

While the bus/giraffe example is a little simplistic, the truth is that we often do not know what a Machine Learning model is actually focusing on.  In a study at UCI, students were asked to use Machine Learning to differentiate between a wolf and a husky.  When they pulled back the covers, they realized that the snow in the background was actually the dominant feature driving the classification.  Another study found that when trying to identify traffic lights, the model was keying on the arm of the traffic light separating the sky from the ground.  So if a picture of the horizon was passed into the algorithm, it would also be labeled a traffic light.
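A crude check along these lines can be sketched in code.  The dataset below is invented for illustration: each example has two binary features, and in this biased training set snow happens to correlate perfectly with the "wolf" label.  Measuring how often each feature agrees with the label (a very rough stand-in for real feature-importance tools) exposes which cue the model would actually latch onto.

```python
# Illustrative sketch (hypothetical data): which feature does a model
# really rely on? Each example is (has_pointed_ears, snow_in_background)
# with label 1 = wolf, 0 = husky.
data = [
    ((1, 1), 1), ((0, 1), 1), ((1, 1), 1),   # wolves, all photographed on snow
    ((1, 0), 0), ((0, 0), 0), ((1, 0), 0),   # huskies, none on snow
]

def feature_label_agreement(data, i):
    # Fraction of examples where feature i equals the label --
    # a crude stand-in for a feature-importance score.
    return sum(1 for x, y in data if x[i] == y) / len(data)

ears = feature_label_agreement(data, 0)   # 0.5: uninformative
snow = feature_label_agreement(data, 1)   # 1.0: the real cue is the background
```

The "animal" feature scores no better than a coin flip while the background scores perfectly, which is exactly the kind of red flag that tells you the model has learned the wrong thing.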

To mitigate this threat we must know our algorithms.  Indeed, the Defense Advanced Research Projects Agency (DARPA) has started to invest heavily in this area, which it calls Explainable AI.  By understanding what an AI is using to make its decisions, we can not only decide whether those decisions have merit but also anticipate how Adversarial AI could manipulate them.

Admiral Michael Rogers, the director of the NSA, made an interesting related point at a hearing before the Intelligence Committee: “With the power of machine learning, artificial intelligence and big-data analytics, data concentrations now increasingly are targets of attraction to a whole host of actors.”  The simplest reaction to this comment is to secure all data, but that is not always practicable.  If we know what patterns our AI is looking for, however, we will know which data must be protected.

At the moment Adversarial AI is largely confined to experiments where the researchers control the data, the algorithm, and the RL reward.  Even so, there is enough research to warrant concern, and it is reasonable for our leadership to ask questions.  In my opinion, though, we should be looking not at our enemies but at ourselves.  Adversarial AI can be mitigated, but first we must take the time to better understand our own AI by understanding its data inputs and the algorithms that use that data.


Eric Sharret is Vice President of Business Development at TELEGRID.  TELEGRID has unique expertise in secure authentication, PKI, Multi-Factor Authentication, and secure embedded systems.

 

Disclaimer: The opinions expressed here do not represent those of TELEGRID Technologies, Inc.  The Company will not be held liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use.  All information is provided on an as-is basis.