Performance evaluation of lightweight convolutional neural networks on retinal lesion segmentation

Abstract

Alongside the recent development of deep learning-based automatic detection systems for diabetic retinopathy (DR), efforts are being made to integrate such systems into mobile detection devices running on the edge, which requires lightweight algorithms. Moreover, clinical deployment calls for greater transparency: deep learning systems are usually black-box models that give no insight into their reasoning. Precise segmentation masks for lesions related to the severity of DR can provide good intuition about the decision making of the diagnosing system. Transparent mobile DR detection devices that simultaneously segment disease-related lesions and run on the edge therefore require lightweight models capable of producing fine-grained segmentation masks, which conflicts with the generally high complexity of the fully convolutional architectures used for image segmentation. In this paper, we evaluate both the runtime and the segmentation performance of several lightweight fully convolutional networks for DR-related lesion segmentation and assess their potential to extend mobile DR-grading systems for improved transparency. To this end, the U2-Net is downscaled to reduce the computational load by reducing the feature size and applying depthwise separable convolutions, and is evaluated using deep model ensembling as well as single- and multi-task inference to improve performance and further reduce memory cost. Experimental results with the U2-Net-S† ensemble show good segmentation performance while maintaining a small memory footprint and reasonable inference speed, and thus mark a promising first step towards a holistic mobile diagnostic system providing both precise lesion segmentation and DR-grading.
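The abstract attributes part of the model downscaling to depthwise separable convolutions. As a rough illustration of why this reduces computational load (a generic sketch, not the authors' exact U2-Net configuration; the kernel and channel sizes below are arbitrary examples), the parameter counts of a standard convolution and its depthwise separable factorization can be compared:

```python
def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters of a standard k x k convolution: one kernel per (in, out) channel pair."""
    return k * k * c_in * c_out

def dw_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Parameters of a depthwise separable convolution:
    a depthwise k x k kernel per input channel, followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Hypothetical example layer: 3x3 kernel, 64 input channels, 128 output channels
std = conv_params(3, 64, 128)          # 73,728 parameters
sep = dw_separable_params(3, 64, 128)  # 576 + 8,192 = 8,768 parameters
print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
```

In general the reduction factor is 1/c_out + 1/k², so for 3x3 kernels the separable variant needs roughly 8-9 times fewer parameters (and multiply-accumulates), which is the kind of saving that makes edge deployment of fully convolutional segmentation networks feasible.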
Translated title of the contribution: Leistungsanalyse von leichtgewichtigen neuronalen Faltungsnetzen zur Segmentierung von Netzhautläsionen
Original language: English
Title of host publication: Medical Imaging 2022: Computer-Aided Diagnosis
Editors: Khan M. Iftekharuddin, Karen Drukker, Maciej A. Mazurowski, Hongbing Lu, Chisako Muramatsu, Ravi K. Samala
Publisher: SPIE
Publication date: 2022
Publication status: Published - 2022

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being
  2. SDG 4 - Quality Education
  3. SDG 9 - Industry, Innovation, and Infrastructure
  4. SDG 11 - Sustainable Cities and Communities
  5. SDG 12 - Responsible Consumption and Production
  6. SDG 14 - Life Below Water
  7. SDG 15 - Life on Land

Research Areas and Centers

  • Centers: Center for Open Innovation in Connected Health (COPICOH)
  • Research Area: Intelligent Systems
  • Centers: Center for Artificial Intelligence Luebeck (ZKIL)

DFG Research Classification Scheme

  • 4.43-05 Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
