Publication date: Feb 12, 2025
Despite the outstanding performance of deep learning (DL) models, their interpretability remains a challenging topic. In this study, we address the transparency of DL models in medical image analysis by introducing a novel interpretability method that uses projected gradient descent (PGD) to generate adversarial examples. By introducing perturbations that cause misclassification, we identify the key features influencing the model's decisions. This method is tested on Brain Tumor, Eye Disease, and COVID-19 datasets using six common convolutional neural network (CNN) models. We selected the top-performing models for interpretability analysis: DenseNet121 achieved an AUC of 1.00 on Brain Tumor; InceptionV3, 0.99 on Eye Disease; and ResNet101, 1.00 on COVID-19. To test their robustness, we performed adversarial attacks. The p-values from t-tests comparing original and adversarial loss distributions were all < 0.05, indicating that the adversarial perturbations significantly increased the loss and confirming successful adversarial generation. Our approach offers a distinct solution to bridge the gap between the capabilities of artificial intelligence and its practical use in clinical settings, providing a more intuitive understanding for radiologists. Our code is available at https://anonymous.4open.science/r/EAMAPG.
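The abstract describes generating adversarial examples with PGD and then comparing original and adversarial loss distributions with a t-test. The sketch below illustrates that pipeline in PyTorch under stated assumptions; it is not the paper's implementation (see the linked repository). The function names `pgd_attack`, `batch_losses`, and `compare_losses`, the `eps`/`alpha`/`steps` values, and the use of a paired t-test are illustrative placeholders.

```python
# Minimal sketch: PGD adversarial generation plus a t-test on per-image losses.
# All names and hyperparameters here are assumptions, not the authors' code.
import torch
import torch.nn.functional as F
from scipy.stats import ttest_rel

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Projected gradient descent: iteratively perturb inputs to increase the
    classification loss, projecting back into an L-infinity eps-ball."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # step up the loss
            adv = images + (adv - images).clamp(-eps, eps)  # project into eps-ball
            adv = adv.clamp(0, 1)                           # keep valid pixel range
    return adv.detach()

@torch.no_grad()
def batch_losses(model, images, labels):
    # Per-image cross-entropy losses (no reduction) for the statistical test.
    return F.cross_entropy(model(images), labels, reduction="none")

def compare_losses(model, loader, device="cpu"):
    """Collect losses before/after the attack and compare the distributions.
    A paired t-test is assumed here; the paper's exact test setup may differ."""
    model.eval().to(device)
    orig_losses, adv_losses = [], []
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv = pgd_attack(model, images, labels)
        orig_losses.append(batch_losses(model, images, labels).cpu())
        adv_losses.append(batch_losses(model, adv, labels).cpu())
    orig = torch.cat(orig_losses).numpy()
    advl = torch.cat(adv_losses).numpy()
    t_stat, p_value = ttest_rel(advl, orig)  # p < 0.05: perturbations raised the loss
    return t_stat, p_value
```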
Concepts | Keywords
---|---
CNN | Deep learning
Models | Explainable artificial intelligence
Tumor | Projected gradient descent
Semantics
Type | Source | Name
---|---|---
drug | DRUGBANK | Flunarizine |
disease | MESH | Brain Tumor |
disease | MESH | Eye Disease |
disease | MESH | COVID-19 |