Explainable and Safe AI

Both modern and traditional AI solutions appear to be susceptible to subtle external disturbances. Deep neural networks, for example, can be misled by manipulations of their input that are imperceptible to humans.
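
To make this concrete, the sketch below shows one well-known way such imperceptible manipulations can be constructed, the fast gradient sign method, assuming PyTorch is available. The toy classifier, the random "image" and the epsilon bound are illustrative assumptions, not part of the project description.

```python
# A minimal sketch of the fast gradient sign method (FGSM), assuming PyTorch.
# The model, image and label below are placeholders for illustration only.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's loss.

    The perturbation is bounded by `epsilon` per pixel, small enough to be
    imperceptible to a human yet often enough to change the prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage with a toy classifier and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # placeholder input
label = torch.tensor([3])          # placeholder ground-truth class
adv = fgsm_perturb(model, image, label)
print((adv - image).abs().max())   # perturbation stays within epsilon
```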

This matters, among other things, for image-processing AI such as that found in autonomous vehicles. In such applications, minuscule noise injected into the input can cause road signs to be recognised incorrectly, with all the consequences that entails. In addition, machine learning models can be 'read' remotely by probing their input/output behaviour, or manipulated from the outside, a real danger now that more and more AI is offered as a (cloud) service.
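
This remote 'reading' of a model can be illustrated with a small sketch: an attacker who can only query a prediction service trains a local surrogate on the answers it returns. The scikit-learn models and the remote_predict function below are hypothetical stand-ins for such a cloud service, not an actual system studied in the project.

```python
# A minimal sketch of model extraction via input/output queries, assuming
# scikit-learn and NumPy; `remote_predict` stands in for a cloud prediction API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# The "remote" model: in practice this would be an API the attacker can only query.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
secret_model = DecisionTreeClassifier(max_depth=3).fit(X, y)

def remote_predict(inputs):
    # The attacker observes only input/output behaviour, not the model internals.
    return secret_model.predict(inputs)

# The attacker sends synthetic queries and trains a local surrogate on the answers.
queries = rng.normal(size=(1000, 5))
answers = remote_predict(queries)
surrogate = LogisticRegression().fit(queries, answers)

# Agreement between the surrogate and the remote model on fresh inputs.
test = rng.normal(size=(500, 5))
print((surrogate.predict(test) == remote_predict(test)).mean())
```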

Securing AI is therefore becoming increasingly important on the research agenda of tech companies such as Google, Facebook and Tesla, as well as in academia. Measures that guarantee privacy in AI protect the users of AI systems. Both types of security (securing the AI itself, and protecting the privacy of its users) add extra opacity to AI systems, however, through sometimes destructive, non-reversible encryption of data or algorithms.

At the same time, the demand for interpretability is growing in the AI world, for example in operator-intensive applications such as defence. Sharing personal data with today's 'human-aware' AI likewise calls for explainable systems, as explanations increase people's willingness to share such information. This project investigates how securing AI affects its interpretability, and thus focuses on 'interpretable, secure AI'. In addition, it addresses the interpretability of attacks on AI.

This call is closed

The projects have been awarded; the current projects are listed at the bottom of this page.

