Maksims Ivanovs, Roberts Kadiķis, Kaspars Ozols. Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognition Letters, 150, pp. 228–234, 2021.
Bibtex citation:
@article{11084_2021,
author = {Maksims Ivanovs and Roberts Kadiķis and Kaspars Ozols},
title = {Perturbation-based methods for explaining deep neural networks: A survey},
journal = {Pattern Recognition Letters},
volume = {150},
pages = {228--234},
year = {2021}
}
Abstract: Deep neural networks (DNNs) have achieved state-of-the-art results in a broad range of tasks, in particular those dealing with perceptual data. However, full-scale application of DNNs in safety-critical areas is hindered by their black box-like nature, which makes their inner workings nontransparent. As a response to the black box problem, the field of explainable artificial intelligence (XAI) has recently emerged and is currently growing rapidly. The present survey is concerned with perturbation-based XAI methods, which make it possible to explore DNN models by perturbing their input and observing changes in the output. We present an overview of the most recent research, focusing on the differences and similarities in the applications of perturbation-based methods to different data types, from extensively studied perturbations of images to the just-emerging research on perturbations of video, natural language, software code, and reinforcement learning entities.
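To make the core idea concrete, below is a minimal sketch of one representative perturbation-based method, occlusion sensitivity: a patch of the input image is masked and the drop in the model's output score is recorded at each location. This is an illustrative NumPy example, not code from the paper; the `toy_model`, `occlusion_saliency`, patch size, and baseline value are all assumptions chosen for demonstration.

```python
import numpy as np

def occlusion_saliency(model, image, patch_size=8, baseline=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score drops at each location; larger drops mark regions the
    model relies on more heavily. (Hypothetical helper for illustration.)"""
    h, w = image.shape[:2]
    original_score = model(image)
    saliency = np.zeros((h, w))
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            perturbed = image.copy()
            # Replace one patch with the baseline value (the perturbation).
            perturbed[top:top + patch_size, left:left + patch_size] = baseline
            # Attribute the score drop to every pixel in the occluded patch.
            saliency[top:top + patch_size, left:left + patch_size] = (
                original_score - model(perturbed)
            )
    return saliency

# Toy stand-in for a trained DNN: it scores an image by the mean
# intensity of the central region, so the center should dominate the map.
def toy_model(image):
    h, w = image.shape[:2]
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

image = np.random.rand(32, 32)
heatmap = occlusion_saliency(toy_model, image)
print(heatmap.shape)  # (32, 32) saliency map
```

In practice the same loop applies to any black-box model with a scalar output, which is why the survey finds this family of methods portable across images, video, text, code, and reinforcement learning inputs.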
Full text: 1-s2.0-S0167865521002440-main