Proceedings of the 33rd European Safety and Reliability Conference (ESREL 2023)
3 – 8 September 2023, Southampton, UK

Exploiting Explanations to Detect Misclassifications of Deep Learning Models in Power Grid Visual Inspection

Giovanni Floreale1,a, Piero Baraldi1,b, Enrico Zio1,2,c and Olga Fink3

1Energy Department, Politecnico di Milano, Milano, Italy.

2Mines Paris-PSL University, CRC, Sophia Antipolis, France.
3Intelligent Maintenance and Operations Systems, EPFL, Lausanne, Switzerland.


In the context of automatic visual inspection of infrastructures by drones, Deep Learning (DL) models are used to automatically process images for fault diagnostics. While explainable Artificial Intelligence (AI) algorithms can provide explanations to assess whether a DL model focuses on relevant and meaningful parts of the input, manually examining all the explanations becomes exceedingly tedious for domain experts, especially when dealing with a large number of captured images. In this work, we propose a novel framework that identifies misclassifications of DL models by automatically processing the related explanations. The proposed framework comprises a supervised DL classifier, an explainable AI method, and an anomaly detection algorithm that distinguishes explanations generated by correctly classified images from those generated by misclassifications.

Keywords: Black-box, Explainable AI, Explanations post-processing, Fault diagnostics.
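The abstract describes a pipeline in which an anomaly detector, fitted on explanations of correctly classified training images, flags explanations that deviate from that distribution as likely misclassifications. The toy sketch below illustrates this idea under strong simplifying assumptions: the "explanations" are synthetic 8x8 saliency maps (focused maps stand in for correct classifications, diffuse maps for misclassifications), and the anomaly detector is a simple distance-to-mean threshold rather than the specific algorithm used in the paper. All names and data here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_maps(n, focused, rng):
    # Synthetic stand-in for XAI saliency maps (e.g., heatmaps over an image).
    # Focused maps concentrate attribution on one region, mimicking a model
    # that attends to the relevant component; diffuse maps mimic the
    # scattered attributions assumed to accompany misclassifications.
    maps = rng.random((n, 8, 8)) * 0.1
    if focused:
        maps[:, 3:5, 3:5] += 1.0  # salient region on the relevant part
    return maps.reshape(n, -1)    # flatten each map to a feature vector

# Fit the detector only on explanations of correctly classified images.
train = make_maps(200, focused=True, rng=rng)
mu = train.mean(axis=0)
d_train = np.linalg.norm(train - mu, axis=1)
threshold = np.percentile(d_train, 99)  # tolerate ~1% training outliers

def flags_misclassification(explanations):
    # An explanation far from the "normal" training explanations is flagged
    # as likely stemming from a misclassification.
    return np.linalg.norm(explanations - mu, axis=1) > threshold

test_ok = make_maps(20, focused=True, rng=rng)    # correct classifications
test_bad = make_maps(20, focused=False, rng=rng)  # misclassifications

print("false-alarm rate:", flags_misclassification(test_ok).mean())
print("detection rate:  ", flags_misclassification(test_bad).mean())
```

In this toy setting the detection rate is high and the false-alarm rate low because the two explanation populations are well separated; the framework's premise is that a suitable anomaly detector can achieve a similar separation on real explanation maps.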
