I4 - Scopeformer: n-CNN-ViT hybrid model for Intracranial hemorrhage subtypes classification

Yassine Barhoumi, Ghulam Rasool


We propose a feature generator backbone composed of an ensemble of convolutional neural networks (CNNs) to improve the recently emerging Vision Transformer (ViT) models. We tackled the RSNA intracranial hemorrhage classification problem, i.e., identifying various hemorrhage types from computed tomography (CT) slices. We show that by gradually stacking several feature maps extracted using multiple Xception CNNs, we can develop a feature-rich input for the ViT model. Our approach allowed the ViT model to pay attention to relevant features at multiple levels. Moreover, pretraining the "n" CNNs using various paradigms leads to a diverse feature set and further improves the performance of the proposed n-CNN-ViT. We achieved a test accuracy of 98.04% with a weighted logarithmic loss value of 0.0708. The proposed architecture is modular and scalable in both the number of CNNs used for feature extraction and the size of the ViT.
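
The sketch below illustrates the general n-CNN-ViT idea described in the abstract: feature maps from several CNN backbones are stacked channel-wise, projected into tokens, and classified by a Transformer encoder. It is a minimal illustration, not the authors' implementation; the tiny CNN stand-ins (in place of Xception), the layer sizes, and the six-way output head are all assumptions made for the example.

```python
# Minimal sketch of an n-CNN-ViT hybrid: n CNN backbones -> stacked feature
# maps -> token sequence -> Transformer encoder -> classification head.
# All modules and sizes are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class TinyCNNBackbone(nn.Module):
    """Placeholder stand-in for one Xception-style feature extractor."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):  # (B, 3, H, W) -> (B, C, H/4, W/4)
        return self.features(x)


class NCnnViT(nn.Module):
    """Stack feature maps from n CNNs and classify them with a ViT-style encoder."""
    def __init__(self, n_cnns: int = 3, cnn_channels: int = 64,
                 embed_dim: int = 256, num_classes: int = 6):
        super().__init__()
        self.backbones = nn.ModuleList(
            TinyCNNBackbone(cnn_channels) for _ in range(n_cnns))
        # 1x1 conv projects the stacked (n * C) channels to the transformer width.
        self.proj = nn.Conv2d(n_cnns * cnn_channels, embed_dim, kernel_size=1)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # Concatenate per-backbone feature maps along the channel axis.
        feats = torch.cat([b(x) for b in self.backbones], dim=1)   # (B, n*C, h, w)
        tokens = self.proj(feats).flatten(2).transpose(1, 2)       # (B, h*w, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        out = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])  # class logits from the [CLS] token


if __name__ == "__main__":
    model = NCnnViT()
    logits = model(torch.randn(2, 3, 64, 64))  # two dummy CT slices
    print(logits.shape)                        # torch.Size([2, 6])
```

Swapping the number of backbones (n_cnns) or the encoder depth changes model capacity without altering the overall structure, which mirrors the modularity and scalability claim in the abstract.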

Friday 9th July
I4-12 (short): Transfer Learning and Domain Adaptation - 13:45 - 14:30 (UTC+2)

