Abstract
Objective: This study aimed to develop deep learning (DL) models for differentiating between eosinophilic chronic rhinosinusitis (ECRS) and non-eosinophilic chronic rhinosinusitis (NECRS) on preoperative computed tomography (CT).
Methods: A total of 878 patients with chronic rhinosinusitis (CRS) who underwent nasal endoscopic surgery were included. Axial spiral CT images were pre-processed and used to build the dataset. Two semantic segmentation models, based on U-net and Deeplabv3, were trained to segment the sinus area in the CT images. All patient images were segmented with the better-performing segmentation model and then used to train and validate transfer-learned efficientnet_b0, resnet50, inception_resnet_v2, and Xception classification networks. We also compared the performance of models trained with each image versus each patient as the labeling unit. The discriminative performance of each model was assessed with the receiver operating characteristic (ROC) curve, and the confusion matrix, accuracy, and interpretability of each model were further analyzed.
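The methods describe a two-stage pipeline: slice-level segmentation followed by transfer-learned classification. The sketch below illustrates the general idea using PyTorch/torchvision; the specific backbones, masking step, and layer replacements are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of a segmentation -> classification pipeline (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: a segmentation network predicts a binary sinus mask per axial CT slice.
# torchvision's DeepLabv3 is used here as a stand-in; in practice, the trained
# weights of the better-performing segmentation model would be loaded.
seg_model = models.segmentation.deeplabv3_resnet50(weights=None, num_classes=2)

# Stage 2: a pretrained classifier (here efficientnet_b0) is transferred to the
# binary ECRS / NECRS task by replacing its final fully connected layer.
clf_model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
clf_model.classifier[1] = nn.Linear(clf_model.classifier[1].in_features, 2)

def segment_and_classify(ct_slice: torch.Tensor) -> torch.Tensor:
    """ct_slice: (1, 3, H, W) float tensor. Returns class logits for ECRS vs. NECRS."""
    seg_model.eval(); clf_model.eval()
    with torch.no_grad():
        mask = seg_model(ct_slice)["out"].argmax(dim=1, keepdim=True)  # (1, 1, H, W)
        masked = ct_slice * mask             # keep only the segmented sinus area
        logits = clf_model(masked)           # (1, 2) class logits
    return logits
```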
Results: The Dice coefficients of U-net and Deeplabv3 were 0.953 and 0.961, respectively. The mean area under the curve (AUC) and mean accuracy of the four networks were 0.848 and 0.762 for models trained with a single image as the unit, versus 0.853 and 0.893, respectively, for models trained with each patient as the unit. The generated Grad-CAMs showed good interpretability.
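For reference, the Dice coefficient reported for the segmentation models is the standard overlap measure 2|A∩B| / (|A| + |B|) between predicted and ground-truth masks; a minimal implementation is sketched below.

```python
# Standard Dice coefficient between two binary masks (shown only to make the metric concrete).
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """pred, target: binary masks of identical shape (values in {0, 1})."""
    intersection = np.sum(pred * target)
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```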
Conclusion: Combining semantic segmentation with classification networks could effectively distinguish between patients with ECRS and NECRS based on preoperative sinus CT images. Furthermore, labeling the classification dataset at the patient level may be more reliable than labeling individual medical images.
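As a purely hypothetical illustration of patient-level evaluation, per-slice probabilities can be aggregated into a single prediction per patient; the averaging rule below is an assumption for illustration, not the authors' stated aggregation method.

```python
# Hypothetical patient-level aggregation of per-slice ECRS probabilities.
from typing import List

def patient_level_prediction(slice_probs: List[float], threshold: float = 0.5) -> int:
    """slice_probs: predicted ECRS probabilities for all CT slices of one patient.
    Returns 1 (ECRS) if the mean slice probability exceeds the threshold, else 0 (NECRS)."""
    mean_prob = sum(slice_probs) / len(slice_probs)
    return int(mean_prob > threshold)
```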
Keywords: deep learning; eosinophil; computed tomography;
rhinosinusitis; differentiation