Open Access
ARTICLE
Cascading Class Activation Mapping: A Counterfactual Reasoning-Based Explainable Method for Comprehensive Feature Discovery
1 Department of Statistics, Institute of Applied Statistics, Jeonbuk National University, Jeonju, Republic of Korea
2 Network Control Department, KT Corporation, Seoul, Republic of Korea
* Corresponding Author: Guebin Choi. Email:
#These authors contributed equally to this work
(This article belongs to the Special Issue: Machine Learning and Deep Learning-Based Pattern Recognition)
Computer Modeling in Engineering & Sciences 2026, 146(2), 37 https://doi.org/10.32604/cmes.2026.077714
Received 15 December 2025; Accepted 26 January 2026; Issue published 26 February 2026
Abstract
Most Convolutional Neural Network (CNN) interpretation techniques visualize only the dominant cues that a model relies on, with no guarantee that these represent all of the evidence the model uses for classification. This limitation becomes critical when hidden secondary cues, potentially more meaningful than the visualized ones, remain undiscovered. This study introduces CasCAM (Cascading Class Activation Mapping) to address this fundamental limitation through counterfactual reasoning. By asking "if this dominant cue were absent, what other evidence would the model use?", CasCAM progressively masks the most salient features and systematically uncovers the hierarchy of classification evidence hidden beneath them. Experimental results demonstrate that CasCAM effectively discovers the full spectrum of reasoning evidence and can be combined with nine existing interpretation methods.
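The cascading idea described above can be sketched as an iterative loop: compute an attribution map, zero out the most salient region (the counterfactual "what if this cue were absent?"), and re-run the attribution on the masked input. The sketch below is a minimal illustration of that loop, not the authors' reference implementation; the function name `cascaded_saliency`, the quantile-based masking rule, and the toy brightness-based saliency stand-in are all assumptions made for demonstration.

```python
import numpy as np

def cascaded_saliency(image, saliency_fn, n_rounds=3, mask_quantile=0.9):
    """Illustrative cascade (hypothetical sketch, not the paper's code):
    repeatedly mask the currently dominant region and re-run the
    attribution method to surface secondary evidence."""
    current = image.astype(float).copy()
    maps = []
    for _ in range(n_rounds):
        sal = saliency_fn(current)                # attribution map, same HxW as input
        maps.append(sal)
        thresh = np.quantile(sal, mask_quantile)  # top-decile pixels = dominant cue
        # counterfactual step: remove the dominant cue from the input
        current = np.where(sal >= thresh, 0.0, current)
    return maps

# Toy demo: pixel brightness stands in for a CAM-style saliency map.
img = np.outer(np.linspace(0.0, 1.0, 8), np.ones(8))
maps = cascaded_saliency(img, lambda x: x, n_rounds=2)
```

In a real setting, `saliency_fn` would wrap any of the CAM-family methods the paper pairs CasCAM with; because the loop only needs an attribution map per input, the cascade is agnostic to which base interpretation method produces it.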
Copyright © 2026 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

