Abstract

Background: Despite significant recent progress in automatic sleep staging, building a good model remains challenging for sleep studies with a small cohort, owing to data-variability and data-inefficiency issues. This work presents a deep transfer learning approach to overcome these issues and enable transferring knowledge from a large dataset to a small cohort for automatic sleep staging.

Methods: We start from a generic end-to-end deep learning framework for sequence-to-sequence sleep staging and derive two networks as the means for transfer learning. The networks are first trained in the source domain (i.e. the large database). The pretrained networks are then finetuned in the target domain (i.e. the small cohort) to complete knowledge transfer. We employ the Montreal Archive of Sleep Studies (MASS) database, consisting of 200 subjects, as the source domain and study deep transfer learning on three different target domains: the Sleep Cassette and Sleep Telemetry subsets of the Sleep-EDF Expanded database, and the Surrey-cEEGrid database. The target domains are purposely adopted to cover different degrees of data mismatch with the source domain.

Results: Our experimental results show significant performance improvement in automatic sleep staging on the target domains achieved with the proposed deep transfer learning approach.

Conclusions: These results suggest the efficacy of the proposed approach in addressing the above-mentioned data-variability and data-inefficiency issues.

Significance: As a consequence, it would enable one to improve the quality of automatic sleep staging models when the amount of data is relatively small.1

1 The source code and the pretrained models are published at https://github.com/pquochuy/sleep_transfer_learning.
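The pretrain-then-finetune recipe described in the Methods can be illustrated in miniature. The sketch below is not the paper's actual networks or data: a plain softmax classifier on synthetic features stands in for the deep sequence-to-sequence models, and the "source"/"target" arrays stand in for MASS and the small cohorts; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(W, X, y, epochs=200, lr=0.1):
    """Gradient descent on softmax cross-entropy (illustrative stand-in for deep training)."""
    for _ in range(epochs):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(W.shape[1])[y]
        W -= lr * X.T @ (p - onehot) / len(X)
    return W

def accuracy(W, X, y):
    return float((np.argmax(X @ W, axis=1) == y).mean())

# Hypothetical synthetic data: 5 classes mimic the 5 sleep stages (W, N1, N2, N3, REM).
n_feat, n_classes = 20, 5
centers = rng.normal(size=(n_classes, n_feat))

def make_data(n, shift=0.0):
    """The `shift` argument crudely mimics domain mismatch between cohorts."""
    y = rng.integers(0, n_classes, size=n)
    X = centers[y] + 0.5 * rng.normal(size=(n, n_feat)) + shift
    return X, y

X_src, y_src = make_data(2000)              # large source domain
X_tgt, y_tgt = make_data(60, shift=0.3)     # small, mismatched target cohort
X_test, y_test = make_data(500, shift=0.3)  # held-out target-domain test set

# 1) Pretrain in the source domain.
W_pre = train(np.zeros((n_feat, n_classes)), X_src, y_src)
# 2) Finetune the pretrained weights on the small target cohort.
W_ft = train(W_pre.copy(), X_tgt, y_tgt, epochs=50, lr=0.05)
# Baseline: train from scratch on the target cohort alone.
W_scratch = train(np.zeros((n_feat, n_classes)), X_tgt, y_tgt, epochs=50, lr=0.05)

print("finetuned:", accuracy(W_ft, X_test, y_test),
      "scratch:", accuracy(W_scratch, X_test, y_test))
```

With far more source data than target data, the finetuned model typically outperforms the from-scratch baseline on the target test set, which is the effect the paper studies at scale with real polysomnography.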

Original language: English
Article number: 9181436
Journal: IEEE Transactions on Biomedical Engineering
Volume: 68
Issue number: 6
Pages (from-to): 1787-1798
Number of pages: 12
ISSN: 0018-9294
DOI: 10.1109/TBME.2020.3020381
Publication status: Published - June 2021

Funding

The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. We would like to thank Dr. Kaare Mikkelsen for sharing the Surrey-cEEGrid database. This research received funding from the Flemish Government (AI Research Program).

Manuscript received June 12, 2020; revised August 2, 2020; accepted August 26, 2020. Date of publication August 31, 2020; date of current version May 20, 2021.

Huy Phan (corresponding author) is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4FZ, U.K. (e-mail: [email protected]). Oliver Y. Chén is with the Institute of Biomedical Engineering, University of Oxford. Philipp Koch and Alfred Mertins are with the Institute for Signal Processing, University of Lübeck. Zongqing Lu is with the Department of Computer Science, Peking University. Ian McLoughlin is with the Singapore Institute of Technology. Maarten De Vos is with the Department of Electrical Engineering and the Department of Development and Regeneration, and is affiliated to Leuven.AI - KU Leuven institute for AI, B-3000, Leuven, Belgium.

Digital Object Identifier: 10.1109/TBME.2020.3020381

Title: Towards More Accurate Automatic Sleep Staging via Deep Transfer Learning