eXplainable Artificial Intelligence

A novel classification system

The success and widespread application of Artificial Intelligence (AI) span several fields, from medicine to autonomous transport. Despite this, most predictive models belonging to the AI field lack interpretability and transparency because of the non-linearity and complexity of their underlying structures. This can hinder the acceptance of these models by the wider public, as people do not trust what they cannot understand. Thus, eXplainable Artificial Intelligence (XAI) has become a vital sub-field of AI. Its ultimate goal is to build a unified approach for learning predictive models that possess both high accuracy and a high degree of explainability. A plethora of XAI methods, proposing different strategies to reach this goal, have been developed so far. This has led to the need to organise this abundance of knowledge into a comprehensive system capable of structuring, at a high level, the current state of the art in XAI.

As part of my PhD project in XAI, I carried out a thorough literature review that evolved into a journal paper, currently under peer review (check out its pre-print version). In this study, I propose a new hierarchy that organises XAI into four main categories mirroring the research activities currently performed by scholars: developing new XAI methods based on a set of notions and attributes linked to the theoretical construct of explainability; evaluating their efficacy by proposing formal, objective metrics or human-centred evaluation approaches involving end-users and model designers; and, eventually, organising all these methods in reviews. The following sunburst chart shows the number of studies for each category. Some studies are classified under more than one category and are counted more than once, so the numeric values of the inner layers are affected by multiple counting.
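To make the multiple-counting caveat concrete, here is a minimal sketch, using entirely hypothetical study names and category labels, of how per-category totals are inflated when a single study carries more than one category tag:

```python
from collections import Counter

# Hypothetical sample: each study may be tagged with more than one
# top-level category, so per-category totals count such a study
# once per tag (multiple counting).
studies = {
    "Study A": ["methods"],
    "Study B": ["methods", "evaluation"],
    "Study C": ["reviews"],
    "Study D": ["notions", "methods"],
}

# Count every (study, category) tag, not every study.
category_counts = Counter(
    category for tags in studies.values() for category in tags
)

print(category_counts["methods"])        # 3 (Studies A, B and D)
print(sum(category_counts.values()))     # 6 tags, although there are only 4 studies
```

The sum of the category totals (6) exceeds the number of distinct studies (4), which is exactly why the inner layers of the sunburst chart do not sum to the size of the reviewed corpus.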

The proposed hierarchy is depicted in the expandable tree below. Click on the blue circles to expand each branch into its sub-branches. The leaves represent scientific articles; clicking on them redirects you to the original study. The full list of scientific articles can be examined here.

Other resources

GITHUB repositories:

Online articles on XAI:

XAI researchers, online version of scientific papers and dedicated websites:

Books on XAI: