Automatic surgical skill assessment in robotic surgery based on video data is essential for facilitating faster learning curves for trainees while relieving expert surgeons from the time- and cost-intensive feedback process. Recent years have shown several advancements in this area through the use of deep learning. While current research focuses on novel architectures, the influence of video preprocessing on their performance remains unknown. In this work, we present the first investigation of the influence of video preprocessing on deep learning-based surgical skill assessment. To this end, we integrated four preprocessing modules, i.e., Deblurring, Segment-based Sampling, Optical Flow, and the Combination of all of them, into skill assessment on the JIGSAWS dataset using a well-established network architecture. While no single preprocessing step showed a clear improvement, the Combination of all steps yielded higher median performance and lower variance. Furthermore, we performed frame-wise investigations of the influence of optical flow artifacts and their reduction in the combined setting. Our results highlight the potential of well-calibrated video preprocessing for automatic surgical skill assessment.
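The Segment-based Sampling module mentioned above can be illustrated with a short sketch: the video's frames are divided into equal-length contiguous segments, and one frame is drawn from each, giving a temporally spread subset for the network. Note that this is a hypothetical illustration of the general technique, not the paper's exact implementation; the function name and parameters are our own.

```python
import random


def segment_based_sampling(num_frames, num_segments, seed=None):
    """Pick one frame index per equal-length segment of a video.

    Illustrative sketch: split `num_frames` frames into `num_segments`
    contiguous segments and sample one index uniformly from each, so the
    sampled frames cover the whole video rather than clustering.
    """
    rng = random.Random(seed)
    # Segment i covers indices [i*num_frames//K, (i+1)*num_frames//K).
    bounds = [i * num_frames // num_segments for i in range(num_segments + 1)]
    # max(...) guards against degenerate (empty) segments when K > num_frames.
    return [
        rng.randrange(bounds[i], max(bounds[i] + 1, bounds[i + 1]))
        for i in range(num_segments)
    ]


# Example: sample 8 frames from a 300-frame video.
indices = segment_based_sampling(300, 8, seed=0)
print(indices)
```

Because each index is drawn from its own segment, the result is always in ascending order and contains exactly one frame per segment, regardless of the random seed.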
Research Areas and Centers
- Academic Focus: Biomedical Engineering
- Research Area: Intelligent Systems
- Centers: Center for Artificial Intelligence Luebeck (ZKIL)
DFG Research Classification Scheme
- 205-25 General and Visceral Surgery
- 205-32 Medical Physics, Biomedical Engineering