G2 - ICAM-reg: Interpretable Classification and Regression with Feature Attribution for Mapping Neurological Phenotypes in Individual Scans

Cher Bass, Mariana da Silva, Carole H. Sudre, Logan Zane John Williams, Petru-Daniel Tudosiu, Fidel Alfaro-Almagro, Sean P. Fitzgibbon, Matthew Glasser, Stephen M. Smith, Emma Claire Robinson


Feature attribution (FA), the assignment of class relevance to different locations in an image, is important for many classification and regression problems, and is particularly crucial in the neuroscience domain, where accurate mechanistic models of behaviour or disease require knowledge of all features discriminative of a trait. At the same time, predicting class relevance from brain images is challenging, as phenotypes are typically heterogeneous and changes occur against a background of significant natural variation. Here, we present an extension of the ICAM framework for creating prediction-specific FA maps through image-to-image translation.
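The core idea of translation-based feature attribution can be illustrated with a minimal sketch: translate an input image toward the opposite class and take the voxelwise difference as the FA map. This is a conceptual illustration only, not the authors' ICAM-reg implementation; the `translate_to_class` function here is a hypothetical stand-in for a trained image-to-image translation network.

```python
import numpy as np

def translate_to_class(scan):
    """Hypothetical stand-in for a trained image-to-image translation
    network that maps a scan toward the target class. Here we simply
    simulate a class-relevant change in a fixed region."""
    translated = scan.copy()
    translated[1:3, 1:3] += 1.0  # simulated disease-related intensity change
    return translated

def attribution_map(scan):
    """FA map as the voxelwise difference between the input scan and its
    counterfactual translation: nonzero only where the translation had
    to change the image to alter the prediction."""
    return translate_to_class(scan) - scan

rng = np.random.default_rng(0)
scan = rng.normal(size=(4, 4))
fa = attribution_map(scan)
print(np.count_nonzero(fa))  # number of voxels flagged as class-relevant
```

In a real system the translation network is learned adversarially so that only trait-discriminative features change, which is what makes the difference map interpretable as attribution.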

Thursday 8th July
G1-9 (short): Interpretability and Explainable AI - 16:45 - 17:30 (UTC+2)
