RT info:eu-repo/semantics/article
T1 Interpretable surrogate models to approximate the predictions of convolutional neural networks in glaucoma diagnosis
A1 Sigut Saavedra, José Francisco
A1 Fumero, Francisco
A1 Arnay, Rafael
A1 Estévez, José
A1 Díaz Alemán, Tinguaro
K1 Glaucoma diagnosis
K1 Interpretable surrogate models
AB Deep learning systems, especially in critical fields like medicine, suffer from a significant drawback: their black-box nature, which lacks mechanisms for explaining or interpreting their decisions. In this regard, our research aims to evaluate the use of surrogate models for interpreting convolutional neural network (CNN) decisions in glaucoma diagnosis. Our approach is novel in that we approximate the original model with an interpretable one and also change the input features, replacing pixels with tabular geometric features of the optic disc, cup, and neuroretinal rim. We trained CNNs with two types of images: original images of the optic nerve head and simplified images showing only the disc and cup contours on a uniform background. Decision trees were used as surrogate models due to their simplicity and visualization properties, while saliency maps were calculated for some images for comparison. The experiments carried out with 1271 images of healthy subjects and 721 images of glaucomatous eyes demonstrate that decision trees can closely approximate the predictions of neural networks trained on simplified contour images, with R-squared values near 0.9 for the VGG19, ResNet50, InceptionV3, and Xception architectures. Saliency maps proved difficult to interpret and showed inconsistent results across architectures, in contrast to the decision trees. Additionally, some decision trees trained as surrogate models outperformed a decision tree trained on the actual outcomes without surrogation. Decision trees may be a more interpretable alternative to saliency methods. Moreover, the fact that decision trees using knowledge distillation from neural networks matched the performance of a decision tree trained without surrogation is a great advantage, since decision trees are inherently interpretable. Therefore, based on our findings, we think this approach would be the most recommendable choice for specialists as a diagnostic tool.
YR 2023
FD 2023
LK http://riull.ull.es/xmlui/handle/915/35092
UL http://riull.ull.es/xmlui/handle/915/35092
LA en
DS Repositorio institucional de la Universidad de La Laguna
RD 02-Dec-2024
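The surrogate-modelling idea summarized in the abstract, distilling a network's soft predictions into a shallow, interpretable decision tree over tabular geometric features, can be sketched as follows. This is a minimal illustration only: the feature names, value ranges, and the stand-in "CNN" probabilities are assumptions for demonstration, not the authors' actual data or pipeline.

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic tabular geometric features (hypothetical names and ranges).
rng = np.random.default_rng(42)
n = 2000
disc_area = rng.uniform(1.5, 3.5, n)               # illustrative units
cup_area = disc_area * rng.uniform(0.1, 0.8, n)
rim_area = disc_area - cup_area
cdr = cup_area / disc_area                         # cup-to-disc ratio
X = np.column_stack([disc_area, cup_area, rim_area, cdr])

# Stand-in for CNN glaucoma probabilities: a smooth function of the
# geometry (NOT a real network; purely synthetic for this sketch).
cnn_prob = 1.0 / (1.0 + np.exp(-12.0 * (cdr - 0.55)))

# Knowledge distillation step: fit a shallow decision tree to the
# network's soft predictions rather than to the hard class labels.
surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
surrogate.fit(X, cnn_prob)

# Fidelity of the surrogate to the model it approximates, analogous
# to the R-squared values reported in the abstract.
r2 = r2_score(cnn_prob, surrogate.predict(X))
print(f"surrogate fidelity R^2 = {r2:.3f}")
print(export_text(surrogate, feature_names=[
    "disc_area", "cup_area", "rim_area", "cup_to_disc_ratio"]))
```

The printed tree is what makes the surrogate interpretable: each root-to-leaf path is a human-readable rule over the geometric features, which a specialist can inspect directly, unlike a saliency map.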