SPARTA Programs

Secure and reliable AI systems for citizens

Challenges & Objectives

With the real-world application of AI came the realization that its security requires immediate attention. Malicious users, called 'adversaries' in the AI world, can skillfully manipulate the inputs fed to AI algorithms in a way that changes the classification or regression results. Apart from security, another aspect that requires attention is the explainability of ML models and ML-based decision systems. Many researchers and systems architects now use deep learning to solve AI/ML tasks, yet in most cases the algorithms provide results without any justification. Finally, current AI applications in sensitive areas influence decisions that affect citizens in a variety of ways: if one is not careful, AI can replicate the biases and prejudices of the people designing the system and gathering the data. The SAFAIR Programme addresses these three concerns: the security, explainability, and fairness of AI/ML systems.
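As a concrete illustration of the first of these threats, the sketch below implements the fast gradient sign method (FGSM), a standard evasion attack in which a small, deliberately crafted perturbation flips a model's prediction. The toy model and random data are illustrative stand-ins, not part of SAFAIR itself.

```python
# Minimal FGSM sketch: perturb inputs in the direction that increases the
# model's loss, so predictions change while the inputs barely do.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Return x perturbed by epsilon in the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step toward higher loss, then clamp back to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Illustrative usage with a toy classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)          # batch of inputs in [0, 1]
y = torch.randint(0, 10, (8,))        # ground-truth labels
x_adv = fgsm_attack(model, x, y, epsilon=0.25)
changed = (model(x).argmax(1) != model(x_adv).argmax(1)).sum().item()
print(f"{changed} of 8 predictions changed by the perturbation")
```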

Action & Impacts

The role of the SAFAIR Programme lies in conducting a thorough analysis of the threats and risks to AI. This analysis is followed by mechanisms and tools that counter the harmful effects of the recognized dangers in a variety of critical AI applications, making them safe and secure against compromise. By the same token, the Programme's work on explainability will help AI users gain valuable insight into how the algorithms perform their tasks, which is particularly useful in domains where AI has already exceeded human performance. Finally, the work on fairness provides mechanisms and tools to ensure that models created with AI methods do not rely on a skewed or prejudiced view of the situations they deal with.

Evaluating the robustness of machine learning models in adversarial settings is not a trivial task. Robustness to one particular kind of attack is not a sufficient measure of overall robustness. To have confidence in a model's predictions, one needs to check its robustness against a variety of attack techniques.
The SAFAIR AI Contest aimed to evaluate the robustness of defence techniques by means of a two-player game. Participants could register in either the Attack or the Defence track, and the attack and defence teams were then continuously pitted against each other. This setup encouraged the creation of more robust deep learning models as well as the discovery of adversarial attack methods that can effectively fool target systems across a variety of defence techniques.
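A minimal sketch of this cross-evaluation idea follows: every defended model is scored against every attack, so robustness is never judged from a single attack alone. The models, attacks, and data here are toy stand-ins, not the contest's actual infrastructure.

```python
# Cross-evaluate each "defence" (model) against each attack and report
# robust accuracy, mirroring the attack-vs-defence pairing of the contest.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    """Gradient-sign attack, one of several attack strategies to test."""
    x_adv = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def random_noise(model, x, y, eps=0.1):
    """Weak baseline attack: random sign perturbation of the same budget."""
    return x + eps * torch.randn_like(x).sign()

def robust_accuracy(model, attack, x, y):
    """Fraction of examples still classified correctly after the attack."""
    return (model(attack(model, x, y)).argmax(1) == y).float().mean().item()

# Toy data and two toy "defence" models standing in for contest entries.
x, y = torch.rand(64, 20), torch.randint(0, 3, (64,))
defences = {"defence_a": nn.Linear(20, 3), "defence_b": nn.Linear(20, 3)}
attacks = {"fgsm": fgsm, "random_noise": random_noise}
for d_name, model in defences.items():
    for a_name, attack in attacks.items():
        print(f"{d_name} vs {a_name}: "
              f"{robust_accuracy(model, attack, x, y):.1%} robust accuracy")
```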
The contest ran from 1 March 2021 to 15 May 2021. To learn more, please visit the contest website.

Demonstrator

Read more on SAFAIR