Rendering of ultrasound volumes on augmented reality glasses

Wimonsiri Khiosapjaroen

Abstract

Ultrasound (US) imaging is a non-invasive, non-ionizing technology commonly used in clinics to diagnose and evaluate medical conditions. Three-dimensional (3D) US, also known as volumetric US, provides more anatomical information than two-dimensional (2D) US. Volume rendering improves the interpretation of volumetric data, since this technique produces 2D images showing internal structures according to the intensity of each voxel. Augmented reality (AR) glasses provide a more intuitive visualization of 3D content compared with standard 2D screens. Previous research reported the use of the HoloLens (first generation) to display US volume renderings close to the US probe by means of an AR marker attached to the probe. The US volumes were acquired with the Vivid 7 US system (GE Healthcare), transferred to a computer that converted the data into Cartesian space, and then sent to the HoloLens. However, this solution did not include tools for resizing and rotating the rendered volume or for modifying the transfer functions. To our knowledge, no studies have rendered US data from the Philips Epiq7 US station on AR devices. The aim of this thesis was to develop a system that receives 3D US data from the Philips Epiq7 US station and processes the volume to generate volume renderings (specifically, stereoscopic images for 3D perception) that are visualized on AR glasses (specifically, the first-generation Microsoft HoloLens). The Visualization Toolkit (VTK), an open-source software library, was used together with Qt, an open-source framework for designing user interfaces, to generate the stereoscopic images and to adjust the opacity and color transfer functions of the volume renderings. The stereoscopic images were compressed to the DXT1 texture format. Volume renderings are computed based on the pose of the AR glasses, and HoloLens users can translate the rendered images using gestures.
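An opacity transfer function of the kind adjusted above maps each voxel intensity to an opacity value (a color transfer function works analogously with RGB triples). The following minimal pure-Python sketch illustrates the piecewise-linear interpolation between control points that such a function performs; the control-point values are illustrative, not the thesis's actual settings.

```python
def piecewise_linear(points, x):
    """Interpolate a value at intensity x from sorted (intensity, value) control points."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Example opacity transfer function for 8-bit intensities: voxels below 40
# are fully transparent, voxels above 200 fully opaque, linear in between.
opacity_points = [(0, 0.0), (40, 0.0), (200, 1.0), (255, 1.0)]
print(piecewise_linear(opacity_points, 120))  # 0.5
```

Lowering the opacity of low intensities in this way suppresses speckle-like background voxels so that stronger echoes dominate the rendering.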
A Google gRPC server-client architecture was used to exchange data between the computer and the AR glasses. The evaluation of the camera transformation, first in Unity and afterwards on the HoloLens, showed that the system generates correct volume renderings based on the movements of the HoloLens user. The total latency from when a request was sent until the images were displayed on the HoloLens was 90.58 ± 31.19 ms (mean ± standard deviation). This latency is below the 100 ms threshold for an instantaneous perception of the displayed volumes. However, it was too high to maintain high hologram stability (of the rendered volume in this case), since the operating system on the HoloLens should receive a new rendered image every 16 milliseconds. This may result in lag when the HoloLens is moved quickly. The latency for reading a US volume from the US station and updating the data in VTK was 1.73 ± 0.01 seconds; therefore, updating the incoming volume from the US station in VTK in real time is not feasible. Future research will focus on evaluating other approaches for overcoming these limitations (for example, testing whether the HoloLens can receive US volumes directly from the Philips Epiq7 US station using its proprietary network protocol and generate the volume renderings itself). Other interesting features would be resizing, rotating, and cropping the volume; filtering the US volumes to improve the volume renderings; and modifying the opacity/color transfer functions with the gestures and voice commands available on the recently released HoloLens (second generation).
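The two latency thresholds quoted above can be checked with a few lines of arithmetic; this sketch assumes the 16 ms frame budget corresponds to the HoloLens's 60 Hz display refresh.

```python
# Frame budget for hologram stability (assuming a 60 Hz display):
frame_budget_ms = 1000 / 60          # ~16.7 ms per frame
perception_threshold_ms = 100        # limit for "instantaneous" perception
measured_latency_ms = 90.58          # mean end-to-end latency reported above

print(measured_latency_ms < perception_threshold_ms)  # True: perceived as instantaneous
print(measured_latency_ms <= frame_budget_ms)         # False: too slow for per-frame updates

frames_behind = measured_latency_ms / frame_budget_ms
print(round(frames_behind, 1))  # ~5.4 frames behind at 60 Hz
```

In other words, a newly rendered image arrives roughly five display frames after the head pose it was computed for, which explains the lag observed during fast head movements.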
Original language: English
Qualification: Master of Science
Awarding Institution
  • Department of Computer Science and Engineering
Supervisors/Advisors
  • Garcia Vazquez, Veronica, Supervisor
  • Buzug, Thorsten, Supervisor
Publication status: Published - 31.07.2020
Externally published: Yes
