Securing Machine Learning

The advancements in Artificial Intelligence and machine learning have drastically changed the organization of many systems and services, and society itself. Machine learning has become a core component of many systems and applications that leverage the huge amounts of data collected from a wide range of sources, including people, sensors, cameras and IoT devices, to name a few. In systems and computer security, for example, machine learning has become the basis for almost all non-signature-based detection methods, including the detection of anomalies, intrusions, malware and spam. Machine learning brings important benefits in terms of new functionality, personalization and optimization of resources.

However, it has been shown that machine learning algorithms are vulnerable and can be targeted by attackers, who may gain a significant advantage by exploiting the weaknesses of the learning algorithms. These vulnerabilities can allow attackers to compromise critical systems, evade detection or deliberately manipulate the behaviour of different systems and applications. For example, in 2019, researchers at Tencent Keen Security Lab published a report showing how to deceive Tesla’s Autopilot self-driving software and make the car switch into an oncoming traffic lane just by placing a few small stickers on the road. The consequences of such attacks against machine learning systems can clearly be catastrophic in some applications.

The vulnerabilities of machine learning

Attacks against machine learning systems are possible either during the design and training of the learning algorithms or at run-time, once the system is deployed. We can broadly distinguish three types of attacks:

Evasion attacks:

At run-time, when a machine learning system is deployed, attackers can look for its blind spots and weaknesses to produce intentional errors in the system. These attacks are often referred to as evasion attacks. Many learning algorithms are vulnerable to adversarial examples: inputs that are indistinguishable from genuine data points but are designed to produce errors. Adversarial examples show that many learning algorithms are not robust to small changes in their inputs, and attackers can easily exploit this weakness. As the perturbation needed to create a successful adversarial example is very small, it is very difficult to automatically distinguish malicious examples from benign ones.
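
As a brief, hedged illustration: the sketch below shows the fast gradient sign method (FGSM), one of the simplest ways to craft an adversarial example against a differentiable classifier, written here in PyTorch. The model, the inputs and the perturbation budget epsilon are placeholder assumptions and are not tied to any specific system mentioned in this article.

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial_example(model, x, y, epsilon=0.03):
        # Take one step in the direction that increases the classifier's loss,
        # with the perturbation bounded by epsilon in the L-infinity norm.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # A perturbation this small is usually imperceptible, yet it is often
        # enough to change the model's prediction.
        return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()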

Poisoning attacks:

Many machine learning systems rely on data collected from untrusted sources, such as humans, machines, sensors or IoT devices, that can be compromised. Data cleaning or curation is not always possible, so the learning algorithms are often trained on untrusted data. This gives cyber criminals an opportunity to compromise the integrity of machine learning systems by performing poisoning attacks. In these scenarios, attackers inject malicious data into the training dataset used by the learning algorithm to subvert the learning process and damage the system. The attacker can aim to degrade the overall system performance or to produce specific types of errors on particular sets of instances. Data poisoning is one of the most relevant emerging threats for systems that aim to learn and adapt to new circumstances and new contexts.
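
As a hedged illustration only, the sketch below shows a very simple form of data poisoning, label flipping, against a generic binary classifier. The poisoning fraction and the use of scikit-learn's LogisticRegression are arbitrary assumptions for demonstration; real poisoning attacks are typically optimized against the target learning algorithm.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def label_flip_poison(X_train, y_train, fraction=0.1, seed=0):
        # An attacker who controls part of the training data flips the labels
        # of a small fraction of points to degrade the learned decision boundary.
        rng = np.random.default_rng(seed)
        y_poisoned = y_train.copy()
        idx = rng.choice(len(y_train), size=int(fraction * len(y_train)), replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]   # assumes binary 0/1 labels
        return X_train, y_poisoned

    # The poisoned model is trained exactly like a clean one; the damage is in the data.
    # clf = LogisticRegression().fit(*label_flip_poison(X_train, y_train))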

Backdoors:

As in traditional computer security, machine learning systems are also vulnerable to backdoor or Trojan attacks, which compromise the integrity of the learning algorithms. This can happen when the data used to train the learning algorithms is untrusted (as in poisoning attacks) or when the deployed machine learning system itself cannot be trusted, e.g. because it has been trained using untrusted software. In a backdoor attack, the adversary creates a maliciously trained model that performs well when evaluated in normal circumstances but misbehaves on specific attacker-chosen inputs. Typically, backdoors are activated through a trigger: a specific pattern that, when added to a genuine input, produces the desired incorrect behaviour of the machine learning system.
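
To illustrate the idea, the sketch below poisons an image training set with a toy backdoor trigger: a small bright square stamped in one corner, with the affected images relabelled to an attacker-chosen class. The trigger shape, its position and the poisoning fraction are assumptions made purely for demonstration.

    import numpy as np

    def add_backdoor_trigger(images, labels, target_label=0, fraction=0.05, seed=0):
        # Stamp a 4x4 bright square (the trigger) onto a small fraction of the
        # training images and relabel them as the target class. A model trained
        # on this data behaves normally on clean inputs but predicts
        # target_label whenever the trigger pattern is present.
        rng = np.random.default_rng(seed)
        images, labels = images.copy(), labels.copy()
        idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
        images[idx, -4:, -4:] = 1.0   # assumes images of shape (N, H, W) scaled to [0, 1]
        labels[idx] = target_label
        return images, labels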

Securing machine learning malware detectors

As machine learning becomes a valuable tool for malware detection, attackers have started to include capabilities in their malware to evade detection by machine learning algorithms. It is therefore important to investigate these evasion attacks and to develop techniques capable of mitigating them. Machine learning for malware detection must be robust to such attacks to remain a useful tool and become more widely adopted.

Much of the existing research on the security and robustness of machine learning has focused on computer vision, and the adversarial examples typically considered are specific to that application domain. For example, the attacker’s capability to manipulate genuine data and craft adversarial examples is measured by some distance between the pixels of the original image and those of the adversarial one. In the malware domain, however, attacks face different constraints: the modifications to the software must preserve its malicious functionality whilst including the artefacts necessary for evasion.
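
As a hedged illustration of such a functionality-preserving modification, the sketch below shows one transformation commonly studied in the literature on evading static, byte-based detectors: appending bytes after the end of a Windows PE file (its overlay). The function name and the example payload are hypothetical; real attacks optimize the appended bytes against a target detector.

    def append_overlay_bytes(pe_bytes: bytes, payload: bytes) -> bytes:
        # Bytes appended after the last section of a PE file (the "overlay")
        # are typically ignored by the loader, so the program still runs as
        # before, but the raw-byte features seen by a static, machine-learning-
        # based detector change.
        return pe_bytes + payload

    # Hypothetical usage: evasion candidates are generated by varying the payload.
    # candidate = append_overlay_bytes(open("sample.exe", "rb").read(), b"\x00" * 1024)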

In Concordia, we are currently investigating, in collaboration with the Research Institute Cyber Defence (CODE), the vulnerabilities and security properties of machine learning algorithms typically used for malware detection. We aim to understand how attackers can craft functional software capable of evading machine-learning-based malware detectors and how to mitigate the effect of possible attacks to enhance the robustness of these systems.

(By Dr Luis Muñoz-González – Research Associate at Imperial College London
https://www.doc.ic.ac.uk/~lmunozgo/,
Prof Emil C Lupu – Professor of Computer Systems at Imperial College London
https://www.imperial.ac.uk/people/e.c.lupu
http://rissgroup.org/)