Challenges the standard use of bilinear/bicubic interpolation for upsampling saliency maps, proving it creates spurious importance regions and proposing a mass-redistribution alternative.
arXiv · March 18, 2026 · 2603.16067
The Takeaway
Practitioners relying on Grad-CAM or similar heatmaps for model interpretability are likely seeing artifacts of interpolation; this method ensures semantic boundaries govern how attribution flows, leading to more faithful explanations.
From the abstract
Attribution methods in explainable AI rely on upsampling techniques that were designed for natural images, not saliency maps. Standard bilinear and bicubic interpolation systematically corrupts attribution signals through aliasing, ringing, and boundary bleeding, producing spurious high-importance regions that misrepresent model reasoning. We identify that the core issue is treating attribution upsampling as an interpolation problem that operates in isolation from the model's reasoning, rather than a mass-redistribution problem.
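The contrast the abstract draws can be sketched in a few lines. Below is a minimal, illustrative implementation of the mass-redistribution idea: instead of interpolating coarse attribution values (which lets importance bleed across object edges), each coarse cell's total attribution mass is spread over the fine pixels it covers, weighted by a segmentation mask so importance stays within semantic boundaries. The function name `mass_redistribute` and the use of a binary segment mask are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def mass_redistribute(attr, seg, scale):
    """Upsample a coarse attribution map by redistributing mass, not interpolating.

    attr:  (h, w) coarse attribution map (e.g. a Grad-CAM output).
    seg:   (h*scale, w*scale) binary mask marking semantic regions at full
           resolution (hypothetical input; the paper's boundary signal may differ).
    scale: integer upsampling factor.

    Each coarse cell's attribution is divided among the fine pixels it covers,
    proportionally to the mask, so the total attribution per cell is conserved
    and no importance leaks across the mask's boundaries.
    """
    h, w = attr.shape
    out = np.zeros((h * scale, w * scale))
    for i in range(h):
        for j in range(w):
            ys, xs = slice(i * scale, (i + 1) * scale), slice(j * scale, (j + 1) * scale)
            weights = seg[ys, xs].astype(float)
            if weights.sum() == 0:
                weights = np.ones_like(weights)  # no segment info: spread uniformly
            out[ys, xs] = attr[i, j] * weights / weights.sum()
    return out

# Toy example: one important coarse cell whose segment covers only its
# top-left fine pixel. All mass lands on that pixel; nothing bleeds outward.
attr = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
seg = np.zeros((4, 4))
seg[0, 0] = 1
up = mass_redistribute(attr, seg, scale=2)
```

Unlike bilinear interpolation, this construction conserves total attribution by design (`up.sum() == attr.sum()`), which is one way to check that the upsampling step has not manufactured spurious importance.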