D12 - MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

Hari Sowrirajan, Jingbo Yang, Andrew Y. Ng, Pranav Rajpurkar


Contrastive learning is a form of self-supervision that can leverage unlabeled data to produce pretrained models. While contrastive learning has demonstrated promising results on natural image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. In this work, we propose MoCo-CXR, an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays. On pleural effusion detection, we find that linear models trained on MoCo-CXR-pretrained representations outperform those trained on representations without MoCo-CXR pretraining, indicating that the MoCo-CXR-pretrained representations are of higher quality. End-to-end fine-tuning experiments reveal that a model initialized via MoCo-CXR pretraining outperforms its non-MoCo-CXR-pretrained counterpart, and that MoCo-CXR pretraining provides the most benefit when labeled training data are limited. Finally, we demonstrate similar results on a target tuberculosis dataset unseen during pretraining, indicating that MoCo-CXR pretraining endows models with representations and transferability that carry across chest X-ray datasets and tasks.
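
For readers unfamiliar with MoCo, the general setup contrasts two augmented views of the same image (here, a chest X-ray) using a query encoder, a slowly updated momentum (key) encoder, and a queue of past keys as negatives, optimized with an InfoNCE loss. The sketch below is a minimal illustration of that setup, assuming PyTorch and a ResNet-18 backbone; the class and parameter names are hypothetical and it does not reproduce the authors' released implementation or exact hyperparameters.

# Minimal MoCo-style pretraining sketch (hypothetical names; not the MoCo-CXR code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MoCoSketch(nn.Module):
    def __init__(self, feature_dim=128, queue_size=4096, momentum=0.999, temperature=0.07):
        super().__init__()
        self.m, self.t = momentum, temperature
        # Query encoder and momentum (key) encoder share an architecture;
        # only the query encoder receives gradients.
        self.encoder_q = models.resnet18(num_classes=feature_dim)
        self.encoder_k = copy.deepcopy(self.encoder_q)
        for p in self.encoder_k.parameters():
            p.requires_grad = False
        # Queue of past key embeddings used as negatives.
        self.register_buffer("queue", F.normalize(torch.randn(feature_dim, queue_size), dim=0))
        self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # Key encoder is an exponential moving average of the query encoder.
        for q, k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            k.data = k.data * self.m + q.data * (1.0 - self.m)

    @torch.no_grad()
    def _dequeue_and_enqueue(self, keys):
        # Replace the oldest entries in the negative queue with the newest keys.
        bs = keys.shape[0]
        ptr = int(self.queue_ptr)
        self.queue[:, ptr:ptr + bs] = keys.T  # assumes queue_size % bs == 0
        self.queue_ptr[0] = (ptr + bs) % self.queue.shape[1]

    def forward(self, im_q, im_k):
        # im_q, im_k: two augmentations of the same X-ray, replicated to 3 channels
        # to match the ResNet stem.
        q = F.normalize(self.encoder_q(im_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        # InfoNCE logits: one positive (the other view) vs. queued negatives.
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.clone().detach())
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
        loss = F.cross_entropy(logits, labels)
        self._dequeue_and_enqueue(k)
        return loss

Under this reading, the linear evaluation described in the abstract would correspond to freezing the pretrained query encoder and training only a linear classifier for pleural effusion, while end-to-end fine-tuning would unfreeze the full backbone and update it together with the classification head.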

Wednesday 7th July
D4-12 (short): Detection and Diagnosis 1 - 16:45 - 17:30 (UTC+2)

