DISTANT - Defending against Implementation attackS Through mAchiNe learning Technique
Chip cards, such as bank cards or public-transport cards, are sought-after targets for attackers, who often exploit side channels to attack them. DISTANT focuses on the development of automated frameworks capable of assessing various potential side-channel and fault injection threats originating from different sources. Such systems will give security assessors, and especially companies producing chips for security applications, the opportunity to detect potential weaknesses early and to adequately balance product security and implementation friendliness.
At the beginning of 2023, we spoke with Dr. Stjepan Picek, an associate professor in the Digital Security (DiS) group at Radboud University. He studies the intersection of cybersecurity and artificial intelligence, specializing in implementation attacks (side-channel analysis, fault injection) and machine learning security. He also conducts research on evolutionary algorithms. On this project, he collaborates with PhD student Azade Rezaeezade from TU Delft, where Stjepan himself is still active as an assistant professor. The project runs from March 2021 to March 2025.
Attacks and security
Attacks that target not the cryptographic algorithm itself but its implementation are particularly devastating; the best-known examples are so-called side-channel and fault injection attacks. To counter such attacks, security assessors use the same techniques as the attackers and look for potential weaknesses to "fix" before deployment. Unfortunately, given the ingenuity of attackers on the one hand and the usually short time available to security evaluators (plus the human error factor) on the other, this is not a fair contest. Researchers are therefore looking for ways to make security evaluations more reliable and faster. "Machine-learning-based techniques appear to be a suitable candidate for this, although the challenge is far from solved," Stjepan said.
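To see what "attacking the implementation" means in practice, consider a minimal, purely illustrative sketch (not from the project) of a timing side channel: a byte-by-byte secret comparison that exits early leaks, through its running time, how many leading bytes of a guess are correct, so the secret can be recovered one byte at a time without touching the algorithm's mathematics.

```python
# Illustrative sketch: a classic timing side channel. The early-exit
# comparison leaks, via execution time, how many leading bytes of a
# guess match the secret.
import time

SECRET = b"k3y!"

def insecure_compare(guess: bytes, secret: bytes = SECRET) -> bool:
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False      # early exit: running time depends on the secret
        time.sleep(0.001)     # exaggerated per-byte work to make the leak visible
    return True

def time_guess(guess: bytes) -> float:
    start = time.perf_counter()
    insecure_compare(guess)
    return time.perf_counter() - start

# Recover the first byte: the candidate that takes longest matches the secret.
timings = {b: time_guess(bytes([b]) + b"\x00" * 3) for b in range(256)}
print("best first-byte guess:", chr(max(timings, key=timings.get)))
```

A constant-time comparison closes this particular leak, but real devices also leak through power consumption and electromagnetic emanation, which is where machine learning enters the picture.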
Five steps (five work packages)
Step 1 (WP1) focuses on designing frameworks: new evaluations are being developed for the different techniques. "The machine learning side-channel community is very active: since 2016, more than 200 papers have been published on the topic. So we first need to evaluate the different techniques and see how they perform. That way, we can decide which techniques are actually the most efficient, because of course you can attack in many ways. So in the first work package we ask ourselves: what are good techniques, what are good settings, what are the right parameters?"
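As a hedged illustration of the kind of comparison WP1 describes, the sketch below simulates profiled side-channel traces under a Hamming-weight leakage model (an invented setup, not DISTANT's actual data or code) and evaluates two off-the-shelf classifiers on the same dataset:

```python
# Toy profiled setting: simulated power traces leak the Hamming weight of
# a key-dependent intermediate at one sample point; we compare two models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, n_samples = 5000, 50

intermediates = rng.integers(0, 256, n_traces)             # key-dependent values
hw = np.array([bin(v).count("1") for v in intermediates])  # leakage labels (0..8)

traces = rng.normal(0, 1, (n_traces, n_samples))           # noise everywhere
traces[:, 25] += hw                                        # one leaking sample point

X_tr, X_te, y_tr, y_te = train_test_split(traces, hw, random_state=0)
for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```

Swapping in other models, leakage assumptions, and noise levels is precisely the "good techniques, good settings, right parameters" question at a miniature scale.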
WP2 takes an extensive look at the attacks and the AI techniques used; by understanding them better, better countermeasures (defences) can be developed. "A very important part of our effort now is to explain the capability of machine learning: to understand why machine learning broke some target, and what it broke, especially in relation to countermeasures."
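One way to make such models explain themselves, sketched below under invented leakage assumptions, is to inspect which trace samples a fitted model actually relies on; highly weighted points are exactly where a countermeasure failed to hide the leakage:

```python
# After fitting a model on simulated traces, feature importances point to
# the sample points the model exploits (the "what did it break" question).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_traces, n_samples = 3000, 50
labels = rng.integers(0, 9, n_traces)          # Hamming-weight classes

traces = rng.normal(0, 1, (n_traces, n_samples))
traces[:, 10] += 0.2 * labels                  # weakly protected point
traces[:, 37] += 1.0 * labels                  # strongly leaking point

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(traces, labels)
ranked = np.argsort(model.feature_importances_)[::-1]
print("most exploited sample points:", ranked[:3])   # expect 37 first, then 10
```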
Step 3 (WP3) turns to fault injection, an active type of attack where you are not just listening to the communication but actually injecting a signal that causes faults. "It's not easy, and it's often a difficult-to-reproduce process. We work on different AI-based techniques, mostly based on evolutionary computation and deep learning. The question we try to answer is: how can we inject a fault as efficiently as possible, and can we predict whether what we inject will result in a fault?"
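The evolutionary angle can be pictured with a toy sketch: glitch parameters (offset, width, voltage) are evolved toward settings that make a simulated target fault. The device model and parameter ranges below are invented for illustration; a real campaign would query actual injection hardware instead of a formula:

```python
# Simple (mu + lambda) evolution strategy over glitch settings.
import random

random.seed(0)

def target_response(offset, width, voltage):
    """Toy device model: faults only occur in a narrow parameter region."""
    return -((offset - 120) ** 2 / 400 + (width - 8) ** 2 / 4
             + (voltage - 2.5) ** 2)           # higher = closer to a fault

def mutate(ind):
    o, w, v = ind
    return (o + random.gauss(0, 10), w + random.gauss(0, 1), v + random.gauss(0, 0.2))

population = [(random.uniform(0, 300), random.uniform(1, 20), random.uniform(0, 5))
              for _ in range(20)]
for generation in range(50):
    population += [mutate(random.choice(population)) for _ in range(40)]
    population = sorted(population, key=lambda p: target_response(*p), reverse=True)[:20]

print("best glitch parameters found:", [round(x, 2) for x in population[0]])
```

Because each query to real hardware is slow, searching the parameter space intelligently rather than exhaustively is what makes the campaign efficient.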
WP4 experiments with neuro-evolution to improve the deployment of attacks. "Tuning neural networks turns out to be very complicated, so we are now interested in how to make it efficient. There is still a lot to explore here; it's challenging but very valuable."
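A minimal sketch of the neuro-evolution idea, with an invented search space: network hyperparameters are treated as a genome and evolved against validation accuracy instead of being tuned by hand:

```python
# Tiny (1+1) evolution strategy over MLP hyperparameters.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = load_digits(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def fitness(genome):
    hidden, lr = genome
    net = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=200, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val)

def mutate(genome):
    hidden, lr = genome
    return (max(4, hidden + random.choice([-16, 16])),
            min(0.5, max(1e-4, lr * random.choice([0.5, 2.0]))))

best = (32, 0.01)            # initial genome: (hidden units, learning rate)
best_fit = fitness(best)
for _ in range(10):
    child = mutate(best)
    if (f := fitness(child)) >= best_fit:
        best, best_fit = child, f
print("evolved genome:", best, "validation accuracy:", round(best_fit, 3))
```

Real neuro-evolution also evolves architectures, not just two scalars, which is where the complexity Stjepan mentions comes from.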
"In WP 5 we actually switch the perspectives and look at whether and how we can reverse engineer existing machine learning models. Therefore, we are also exploring how to design countermeasures to protect machine learning."
Challenges
The hardest part was identifying the relevant cases among the more than 200 existing publications, especially since different research groups work with different datasets and different goals. "For one group, an attack can be very simple. For another group, the same attack can be very complicated because the dataset is different. We have to keep an open mind and test a lot of different things to really understand what works best. And since the domain is very active, that means we have to test many, many things."
The explainability of AI also remains a challenge. "For many publications, showing that you have broken the target is enough. But when you want to implement better countermeasures, you also need a good understanding of what is wrong with the old countermeasures."
The end goal
Stjepan is already looking ahead to April 2025 and what DISTANT may have accomplished by then. "First, we should provide the community with a framework to perform different types of attacks. Second, we aim to make new, relevant datasets available. And finally, we share valuable explainability techniques."
The research fills the gap between what is known in academia about physical attacks and what is needed in industry to prevent such attacks. Ultimately, these frameworks can also be useful tools to mitigate other types of threats.