Date: August 2024.
Source: 2024 Machine Learning for Health Care (MLHC) Conference, Toronto, ON, Canada. [Paper ID: 118]
Submission Track: Clinical Abstract Track
Objective: Craniosynostosis is the premature fusion of cranial sutures, leading to craniofacial deformity. It is a rare condition, with an estimated prevalence of 5.9 per 10,000 live births. Diagnosis before 3-4 months of age allows for less invasive endoscopic interventions. Currently, patients are seen on a first-come, first-served basis, and most referrals are non-synostosis; the resulting delays can leave patients who do receive a synostosis diagnosis with only more invasive treatment options. Using a three-dimensional (3D) photogrammetric image capture system (3dMD), we trained a model to predict five head-shape classes: three synostosis diagnoses, plagiocephaly, and normal head shape, in the hope of accelerating access to care for patients with likely diagnoses.
Materials and Methods:
Data collection: 3D scans were taken of patients less than one year old visiting the Plastic and Reconstructive Surgery clinic at The Hospital for Sick Children in Toronto, Canada. Scans were collected from patients with a diagnosis of sagittal, metopic, or unicoronal synostosis, and from patients with plagiocephaly or normal head shapes (no craniofacial diagnosis). All patients were fitted with a stocking cap before the stereophotogrammetric capture. Each scan was manually cropped to below the mandible, and anomalous captured points outside the patient’s head were removed. Only scans taken before any cranial intervention (including helmeting) were used, yielding a dataset of 856 scans from 715 patients. For each scan, only the vertex information was kept; all surface information was discarded. The vertices were normalized so that the origin lies at the central point of the cloud and the largest vector has a length of 1. The normalized point clouds were then cropped from the brow to the nape to remove any facial features.
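The centering, scaling, and plane-based cropping described above can be sketched as follows. This is a minimal illustration, not the study's code; the plane through the origin and its orientation are assumptions for the example.

```python
import numpy as np

def normalize_point_cloud(vertices: np.ndarray) -> np.ndarray:
    """Center a point cloud at its central point and scale so the
    farthest vertex lies at unit distance, as described for the scans."""
    centered = vertices - vertices.mean(axis=0)     # origin at the central point
    scale = np.linalg.norm(centered, axis=1).max()  # length of the largest vector
    return centered / scale

def crop_above_plane(points: np.ndarray, plane_point: np.ndarray,
                     plane_normal: np.ndarray) -> np.ndarray:
    """Keep only points on the non-negative side of a plane, e.g. a
    hypothetical brow-to-nape plane used to remove facial features."""
    signed = (points - plane_point) @ plane_normal
    return points[signed >= 0.0]
```

The same `crop_above_plane` helper can also express the training-time plane shift by moving `plane_point` along `plane_normal`.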
Model training: A Dynamic Graph Convolutional Neural Network (DGCNN) was trained and evaluated using fivefold cross-validation, with each patient randomly assigned to one of the five folds. To convert a normalized scan into a graph that can be convolved over, a random sample of m vertices is selected, and a set of edges between the sampled points is created using the k-nearest-neighbours algorithm; this random sampling is repeated each time the scan is seen in the training set. In addition to the resampling, two augmentations were applied at training time. First, the plane used to crop away the facial features was randomly shifted along its normal vector by between 0 and 0.15 in normalized Euclidean distance, further removing the lower portions of the head. Second, with a probability of 50%, a random point was selected and all points within a Euclidean distance of 0.15 of it were prevented from being sampled. The Adam optimizer was used with the OneCycle learning rate scheduler, starting at a learning rate of 0.0001 and peaking at 0.01.
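The per-view preprocessing above (augment, sample m vertices, connect k nearest neighbours) can be sketched as below. This is an illustrative NumPy sketch, not the study's implementation: the crop-plane orientation (z-axis through the origin) and the brute-force neighbour search are assumptions for the example.

```python
import numpy as np

def sample_and_build_knn_graph(points: np.ndarray, m: int, k: int,
                               rng: np.random.Generator,
                               plane_normal=np.array([0.0, 0.0, 1.0]),
                               max_plane_shift=0.15, dropout_prob=0.5,
                               dropout_radius=0.15):
    """One training-time 'view' of a normalized scan: apply the two
    augmentations, randomly sample m vertices, and connect each sampled
    vertex to its k nearest neighbours (a DGCNN-style input graph)."""
    # Augmentation 1: shift the facial-crop plane along its (assumed)
    # normal by a uniform amount in [0, 0.15], removing more of the head.
    shift = rng.uniform(0.0, max_plane_shift)
    pts = points[(points @ plane_normal) >= shift]

    # Augmentation 2: with probability 0.5, pick a random point and
    # exclude everything within 0.15 Euclidean distance of it.
    if rng.random() < dropout_prob:
        center = pts[rng.integers(len(pts))]
        pts = pts[np.linalg.norm(pts - center, axis=1) > dropout_radius]

    # Random sample of m vertices, redrawn each time the scan is seen.
    idx = rng.choice(len(pts), size=m, replace=len(pts) < m)
    sampled = pts[idx]

    # k-nearest-neighbour edges via brute-force pairwise distances;
    # the diagonal is masked so a vertex is never its own neighbour.
    d = np.linalg.norm(sampled[:, None, :] - sampled[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    edges = np.stack([np.repeat(np.arange(m), k), neighbours.ravel()])
    return sampled, edges
```

In DGCNN proper, the graph is additionally recomputed from learned features after each edge-convolution layer; the sketch covers only the input-graph construction.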
Results: Our model achieved an accuracy of 75.4% with a mean AUROC of 92%. The five classes, sagittal, metopic, unicoronal, plagiocephaly, and normal, achieved precision of 87.3%, 62.9%, 72.0%, 84.4%, and 64.3%, and recall of 80.1%, 84.2%, 79.8%, 71.1%, and 67.3%, respectively. To evaluate the rate of synostosis patients whose care would not be accelerated, the five classes were binarized into synostosis and non-synostosis. With synostosis as the positive class, this yields an accuracy of 84.7%, precision of 80.7%, and recall of 88.9%.
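The binarization step can be illustrated as follows. The labels here are a toy example, not the study's data; classes 0-2 stand for the three synostosis diagnoses, 3 for plagiocephaly, and 4 for normal.

```python
import numpy as np

SYNOSTOSIS = [0, 1, 2]  # assumed class indices for the example

def binarize(labels: np.ndarray) -> np.ndarray:
    """Collapse the five classes to synostosis (1) vs non-synostosis (0)."""
    return np.isin(labels, SYNOSTOSIS).astype(int)

# Toy predictions for illustration only.
y_true = np.array([0, 1, 2, 3, 4, 0, 3])
y_pred = np.array([0, 2, 4, 3, 0, 0, 3])

t, p = binarize(y_true), binarize(y_pred)
tp = int(((t == 1) & (p == 1)).sum())
fp = int(((t == 0) & (p == 1)).sum())
fn = int(((t == 1) & (p == 0)).sum())
precision = tp / (tp + fp)  # 0.75 on this toy data
recall = tp / (tp + fn)     # 0.75 on this toy data
```

Note that a synostosis case misclassified as another synostosis subtype (here, metopic predicted as sagittal) still counts as a binary true positive, which is why the binarized metrics exceed the five-class ones.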
Conclusions: We trained a DGCNN model on a cohort of patients visiting the clinic at The Hospital for Sick Children. Our model achieved strong performance and may be useful in accelerating appointments for patients with a high-certainty diagnosis, increasing the frequency of less invasive endoscopic interventions. Collecting 3D images still requires an initial hospital visit, so we aim to supplement our data collection with images captured on mobile devices. Evaluating performance on a more portable capture method may reduce unnecessary visits to the Craniofacial clinic.

Article: Craniosynostosis Classification using Dynamic Graph Neural Network On 3D Photographs.
Authors: John Phillips, Jaryd Hunter, Noah Stancati, Rakshita Kathuria, Sam Osia, Melissa Roy, Barbara Sokolowski, Pouria Mashouri, Michael Brudno, Devin Singh. The Hospital for Sick Children, Toronto, Canada.