When working with AI, doctors find it useful to see a region of interest: an indication of which parts of the image the algorithm relies on most strongly when reaching a conclusion. This lets them ‘see through the algorithm’s eyes’. Such area markers or heatmaps give clinical users visual cues that make it clearer whether to accept or reject a chest X-ray finding detected by AI.
Algorithm interpretability is an active area of deep learning research — and a focus area for Qure.ai. Current visualization methods can be broadly classified into two categories: perturbation-based visualizations and backpropagation-based visualizations. We’ve experimented with these methods and put together a blog series on how they work with medical images, using chest X-rays as an example.
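To make the distinction concrete, here is a minimal sketch of one method from each family on a generic PyTorch image classifier. The torchvision DenseNet-121, the random input tensor, and the target class index are all placeholder assumptions for illustration, not Qure.ai’s actual model or pipeline.

```python
import torch
import torchvision.models as models

# Stand-in classifier and input; in practice you would load a trained
# chest X-ray model and a preprocessed study instead.
model = models.densenet121(weights="IMAGENET1K_V1")
model.eval()

x = torch.rand(1, 3, 224, 224)  # placeholder for a preprocessed X-ray
target_class = 0                # hypothetical "abnormal" class index

# --- Backpropagation-based: vanilla gradient saliency ---
# One backward pass gives the gradient of the class score w.r.t. each
# input pixel; its magnitude serves as a per-pixel importance map.
x_grad = x.clone().requires_grad_(True)
score = model(x_grad)[0, target_class]
score.backward()
saliency = x_grad.grad.abs().max(dim=1).values.squeeze(0)  # 224x224 map

# --- Perturbation-based: occlusion sensitivity ---
# Slide a gray/zero patch over the image; the drop in the class score
# measures how much the model relied on the occluded region.
with torch.no_grad():
    base = model(x)[0, target_class].item()
    patch = stride = 32
    heatmap = torch.zeros(224 // stride, 224 // stride)
    for i in range(heatmap.shape[0]):
        for j in range(heatmap.shape[1]):
            occluded = x.clone()
            occluded[:, :, i * stride:i * stride + patch,
                     j * stride:j * stride + patch] = 0
            heatmap[i, j] = base - model(occluded)[0, target_class].item()
```

As a rough rule of thumb, perturbation-based methods are model-agnostic but cost one forward pass per occluded patch, while backpropagation-based methods produce a map in a single backward pass at the price of noisier attributions.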
Read Part 1 and Part 2 of the blog post series, or the Auntminnie article about our RSNA presentation.