Secure and reliable AI systems for citizens
Challenges & Objectives
With the real-world application of AI came the realization that its security requires immediate attention. Malicious users, called 'adversaries' in the AI world, can skillfully manipulate the inputs fed to AI algorithms in a way that changes the classification or regression results. Apart from security, another aspect that requires attention is the explainability of ML models and ML-based decision systems. Many researchers and systems architects now use deep learning to solve AI/ML tasks; however, in most cases, the results are produced by algorithms without any justification. Finally, current AI applications in sensitive areas influence decisions that affect citizens in a variety of ways: if one is not careful, AI can replicate the biases and prejudices of the people designing the system and gathering the data. The SAFAIR Programme addresses these three concerns: the security, explainability, and fairness of AI/ML systems.
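To make the adversarial threat concrete, here is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression classifier. The weights, input, and epsilon value are illustrative, not taken from any SAFAIR deliverable: the point is only that a small, deliberate perturbation of the input can flip the model's prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Moves x by eps in the direction that increases the loss:
    x' = x + eps * sign(d loss / d x).
    """
    p = sigmoid(w @ x)   # model's probability for class 1
    grad = (p - y) * w   # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad)

# Toy model weights and a correctly classified input (both illustrative).
w = np.array([2.0, -3.0, 1.0])
x = np.array([0.5, -0.5, 0.5])
y = 1  # true label

x_adv = fgsm_perturb(x, y, w, eps=0.6)

print(sigmoid(w @ x) > 0.5)      # prints True: original input is class 1
print(sigmoid(w @ x_adv) > 0.5)  # prints False: perturbed input flips to class 0
```

Defences of the kind SAFAIR investigates aim to detect or neutralize exactly this sort of manipulation, for instance through adversarial training or input sanitization.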
Action & Impacts
The role of the SAFAIR Programme lies in conducting a thorough analysis of the threats and risks to AI, and then providing mechanisms and tools to counter the detrimental effects of the recognized dangers in a variety of critical AI applications, making them safe and secure against compromise. By the same token, the Programme's work on explainability will help AI users gain valuable insight into how the algorithms perform their tasks, which is particularly useful in domains where AI has already exceeded human performance. Finally, the work on fairness provides mechanisms and tools to ensure that models created with AI methods do not rely on a skewed or prejudiced view of the situations they deal with.