Nikolić, Mina; Janković, Dragan; Stanimirović, Aleksandar; Stoimenov, Leonid
The Integration of Explainable AI Methods for the Classification of Medical Image Data (Conference)
Institute of Electrical and Electronics Engineers Inc., 2024.
Tags: Deep neural networks; Semantic segmentation; Convolutional neural networks; Explainability; Explainable artificial intelligence (XAI); Grad-CAM; Image data; Image classification; Interpretability; Neural network architecture; Semantic objects
@conference{Nikolic2024,
title = {The Integration of Explainable AI Methods for the Classification of Medical Image Data},
author = {Mina Nikoli\'{c} and Dragan Jankovi\'{c} and Aleksandar Stanimirovi\'{c} and Leonid Stoimenov},
url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85204098634\&doi=10.1109%2fIcETRAN62308.2024.10645095\&partnerID=40\&md5=33dbe2d660b36c05838539535731bc69},
doi = {10.1109/IcETRAN62308.2024.10645095},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings - 2024 11th International Conference on Electrical, Electronic and Computing Engineering, IcETRAN 2024},
publisher = {Institute of Electrical and Electronics Engineers Inc.},
abstract = {Deep convolutional neural network architectures have in recent years been widely used for enhancing various Computer vision tasks, such as Image classification, Semantic Segmentation and Object detection. With great advancements in terms of quality of the obtained results, the path was paved for using these kinds of neural networks in the medical domain. But, when working with sensitive matters involving human lives, there is a need to consider the interpretability and explainability of these models and not just the typical evaluation metrics for the given task. To do such a thing, tools such as LIME and PyTorch Grad-CAM can be used, among many others. The integration of Explainable AI (XAI) methods proposed in this paper aims to enable the paradigm of XAI to be used in medical image classification tasks with the standardized MedMNIST dataset. By doing such an integration, a deeper analysis regarding the quality of the model can be enabled. In that way, instances that were misclassified can be visually examined and used to paint a clearer picture of the complete model's decision-making process. © 2024 IEEE.},
keywords = {Deep neural networks; Semantic segmentation; Convolutional neural networks; Explainability; Explainable artificial intelligence (XAI); Grad-CAM; Image data; Image classification; Interpretability; Neural network architecture; Semantic objects},
pubstate = {published},
tppubtype = {conference}
}
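
The abstract describes pairing PyTorch Grad-CAM with a classifier trained on the MedMNIST dataset to visually inspect misclassified instances. The sketch below shows what such an integration might look like; the ResNet-18 backbone, the 9-class PathMNIST-style setup, the chosen target layer, and the upscaled input size are illustrative assumptions, not the configuration reported in the paper.

# Minimal Grad-CAM sketch for a MedMNIST-style classifier (assumptions: ResNet-18
# backbone, 9 classes as in PathMNIST, inputs upscaled to 224x224).
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

# Hypothetical classifier standing in for the paper's model.
model = resnet18(num_classes=9)
model.eval()

# Grad-CAM attributes the prediction to a convolutional layer; the last residual
# block is a common choice for ResNet-style networks.
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])

# A random tensor stands in for a preprocessed MedMNIST image (the 28x28 images
# are typically upscaled before being fed to an ImageNet-style backbone).
input_tensor = torch.randn(1, 3, 224, 224)

# Explain the model's own prediction: the result is a spatial importance map that
# can be overlaid on the image to examine the classifier's decision-making.
predicted_class = model(input_tensor).argmax(dim=1).item()
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(predicted_class)])
print(heatmap.shape)  # (1, 224, 224): one importance map per input image

In practice the returned heatmap would be blended with the original image (for example with pytorch_grad_cam.utils.image.show_cam_on_image) so that misclassified samples can be reviewed visually, as the abstract suggests.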