Explainable 3D convolutional neural network using GMM encoding

Abstract
The aim of this paper is to propose a novel method to explain, interpret, and support the decision-making process of a deep Convolutional Neural Network (CNN). This is achieved by analysing the neuron activations of a trained 3D-CNN on selected layers via a Gaussian Mixture Model (GMM) and a custom binary encoding of both training and test images based on the affiliation of their activations to the computed GMM components. Based on the similarity of the encoded image representations, the system retrieves the most activation-wise similar atlas (training) images for a given test image and thereby supports and clarifies its decision. Possible uses of this method include mainly Computer-Aided Diagnosis (CAD) systems working with medical imaging data such as magnetic resonance imaging (MRI) or computed tomography (CT) scans. Interpreting the network's decision in the form of similar domain examples (images) is natural to the workflow of the medical personnel operating the system.
Martin Stano, Wanda Benesova, and Lukas S. Martak, "Explainable 3D convolutional neural network using GMM encoding", Proc. SPIE 11433, Twelfth International Conference on Machine Vision (ICMV 2019), 114331U (31 January 2020); https://doi.org/10.1117/12.2557314
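
The abstract outlines a three-step pipeline: fit a GMM on layer activations of the trained 3D-CNN, binary-encode each image by which GMM components its activations fall into, and retrieve the atlas (training) images whose codes are most similar to that of the test image. The sketch below is a minimal illustration of that kind of pipeline, not the paper's implementation; the thresholded encoding rule (`min_fraction`), the diagonal-covariance GMM, the Hamming-distance retrieval, and all function names are assumptions introduced here for clarity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_activation_gmm(train_activations, n_components=16, seed=0):
    """Fit a GMM on pooled activations from all atlas (training) images.

    `train_activations` is assumed to be a list of (n_voxels, n_features)
    arrays, e.g. flattened feature maps from a selected 3D-CNN layer.
    """
    pooled = np.concatenate(train_activations, axis=0)
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(pooled)
    return gmm

def encode_image(gmm, activations, min_fraction=0.01):
    """Binary code for one image: bit k is set if at least `min_fraction`
    of the image's activations are assigned to GMM component k
    (an assumed encoding rule, not taken from the paper)."""
    labels = gmm.predict(activations)
    counts = np.bincount(labels, minlength=gmm.n_components)
    return (counts / len(labels) >= min_fraction).astype(np.uint8)

def retrieve_similar(test_code, atlas_codes, top_k=5):
    """Indices of atlas images with the smallest Hamming distance
    to the test image's binary code."""
    dists = np.array([np.count_nonzero(test_code != c) for c in atlas_codes])
    return np.argsort(dists)[:top_k]
```

Under these assumptions, the retrieved atlas images serve as activation-wise nearest examples that a clinician can inspect alongside the network's prediction.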