Date: April 2024
Source: Laryngo-Rhino-Otologie 103(S 02): S172. DOI: 10.1055/s-0044-1784545.
Objective: Using grading systems, the severity of facial palsy is typically classified through static 2D images. These approaches fail to capture crucial facial attributes, such as the depth of the nasolabial fold. We present a novel technique that uses 3D video recordings to overcome this limitation. Our method automatically characterizes the facial structure, calculates volumetric disparities between the affected and contralateral sides, and includes an intuitive visualization.
Materials and Methods: 35 patients (mean age 51 years, min. 25, max. 72; 7 male, 28 female) with unilateral chronic synkinetic facial palsy were enrolled. We utilized the 30 Hz 3dMDface.t System (3dMD LLC, Georgia, USA) to record their facial movements while they mimicked happy facial expressions four times. Each recording lasted 6.5 seconds, yielding 140 videos in total.
Results: The volumetric difference between the affected and contralateral sides was 11.7±9.1 mm³ during neutral expressions and 13.73±10.0 mm³ during happy expressions, suggesting a higher level of asymmetry during movement. Our process runs fully automatically without human intervention, highlights the impacted areas, and emphasizes the differences between the affected and contralateral sides.
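The abstract does not detail how the volumetric side-to-side disparity is computed. A minimal sketch of one plausible approach (not the authors' actual pipeline; the function name, grid representation, and midline alignment are all assumptions): treat each 3D frame as a depth map sampled on a regular grid aligned so the facial midline sits at the center column, mirror it across that midline, and integrate the absolute depth difference over one half-face.

```python
import numpy as np

def asymmetry_volume(depth, dx=1.0, dy=1.0):
    """Approximate the left-right volumetric disparity of a face scan.

    depth: 2D array z(x, y) sampled on a regular grid whose column axis
    is symmetric about the facial midline (leftmost column pairs with
    the rightmost, and so on).
    dx, dy: grid spacing in mm, so the result is in mm^3.

    Returns the integral of |z(x, y) - z(-x, y)| over one half-face.
    """
    mirrored = depth[:, ::-1]               # reflect across the midline
    diff = np.abs(depth - mirrored)         # per-sample depth disparity
    half = diff[:, : depth.shape[1] // 2]   # count each mirror pair once
    return float(half.sum() * dx * dy)
```

Applied per frame of a 3D video, such a measure would yield a time series of asymmetry volumes, from which per-expression means like those reported above could be derived. A real implementation on textured 3D meshes would additionally need midsagittal-plane estimation and non-rigid registration between frames.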
Discussion: Our data-driven method allows healthcare professionals to track and visualize patients' volumetric changes automatically, facilitating personalized treatments. It mitigates the risk of human bias in therapeutic evaluations and effectively transitions facial palsy assessment from static 2D images to dynamic 4D recordings.
Article: An automatic, objective method to measure and visualize volumetric changes in patients with facial palsy during 3D video recordings.
Authors: Tim Büchner, Sven Sickert, Gerd Fabian Volk, Joachim Denzler, Orlando Guntinas-Lichius. Friedrich-Schiller-Universität Jena; Universitätsklinikum Jena.