All accepted publications from SPARTA partners under its funding.
Achieving Explainability of Intrusion Detection System by Hybrid Oracle-Explainer Approach
M. Szczepanski, M. Choras, M. Pawlicki, R. Kozik

Abstract
With the rapid development and ubiquity of Artificial Intelligence (AI) observed over the last decade, the need for methods that are explainable and/or interpretable to humans has become a pressing matter. The ability to understand how a system makes a decision is necessary to build trust, settle issues of fairness, and debug a model. Although many techniques exist for gaining insight into a model's inner workings, they often come with a trade-off in the form of decreased accuracy. In the context of cybersecurity, where a single false negative can lead to a breach and the compromise of the whole system, such a price is unacceptable. Therefore, there is a need for a solution that allows for the maximum possible model performance while at the same time delivering human-understandable interpretations. Hybrid approaches to Explainable Artificial Intelligence (XAI) have the potential to achieve this goal. In this work, we present the fundamental concepts and a prototype of a system using such an architecture.
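The core idea of a hybrid oracle-explainer setup can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: a high-performance "oracle" classifier makes the detection decision, and a simple surrogate model is fitted to the oracle's outputs to supply human-readable explanations, so accuracy and interpretability come from separate components. The dataset, model choices, and names below are all hypothetical.

```python
# Hypothetical sketch of a hybrid oracle-explainer architecture.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in for network-traffic features (synthetic data, not a real IDS set).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Oracle: optimized purely for detection performance; its internals
# need not be interpretable.
oracle = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explainer: a shallow tree trained to mimic the oracle's decisions,
# yielding rules a human analyst can inspect.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, oracle.predict(X))

# Fidelity: how often the explainer agrees with the oracle. A high value
# means the explanations faithfully describe the oracle's behavior.
fidelity = accuracy_score(oracle.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
```

Because the oracle is never constrained by the explainer, detection accuracy is not traded away; only the explanation's fidelity to the oracle varies.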