Table 3 Diagnostic performance of CNN models and different thoracic surgeons in the validation cohort

From: A deep learning model combining circulating tumor cells and radiological features in the multi-classification of mediastinal lesions in comparison with thoracic surgeons: a large-scale retrospective study

| Ratings | AUC (95% CI) | Sensitivity (95% CI) | Specificity (95% CI) | Accuracy (95% CI) | PPV (95% CI) | NPV (95% CI) |
| --- | --- | --- | --- | --- | --- | --- |
| **Benign classification** | | | | | | |
| *CNN model* | | | | | | |
| Monomodal CNN model | 0.710 (0.664–0.756) | 0.702 (0.676–0.728) | 0.738 (0.716–0.762) | 0.779 (0.743–0.815) | 0.692 (0.671–0.713) | 0.723 (0.695–0.751) |
| DMFN model | 0.941 (0.901–0.982) | 0.809 (0.785–0.833) | 0.929 (0.894–0.964) | 0.927 (0.910–0.944) | 0.885 (0.837–0.932) | 0.939 (0.884–0.993) |
| *Thoracic surgeons* | | | | | | |
| Resident physicians | 0.520 (0.475–0.566) | 0.513 (0.502–0.524) | 0.558 (0.515–0.606) | 0.690 (0.666–0.723) | 0.573 (0.527–0.619) | 0.538 (0.818–0.561) |
| Attending physicians | 0.688 (0.649–0.714) | 0.607 (0.546–0.677) | 0.720 (0.682–0.763) | 0.737 (0.696–0.784) | 0.693 (0.635–0.754) | 0.704 (0.672–0.734) |
| Chief physicians | 0.802 (0.742–0.863) | 0.761 (0.720–0.803) | 0.813 (0.760–0.866) | 0.830 (0.805–0.856) | 0.822 (0.784–0.861) | 0.811 (0.752–0.872) |
| **Management decision** | | | | | | |
| Resident physicians | 0.636 (0.568–0.711) | 0.627 (0.578–0.664) | 0.684 (0.657–0.707) | 0.711 (0.685–0.734) | 0.643 (0.626–0.630) | 0.661 (0.624–0.707) |
| Attending physicians | 0.739 (0.709–0.772) | 0.705 (0.685–0.733) | 0.768 (0.631–0.804) | 0.839 (0.801–0.870) | 0.723 (0.694–0.758) | 0.718 (0.689–0.742) |
| Chief physicians | 0.921 (0.882–0.946) | 0.916 (0.885–0.946) | 0.928 (0.895–0.953) | 0.907 (0.883–0.935) | 0.919 (0.881–0.942) | 0.882 (0.854–0.908) |

Abbreviations: AUC, area under the ROC curve; 95% CI, 95% confidence interval; CNN, convolutional neural network; DMFN, deep multimodal fusion network; PPV, positive predictive value; NPV, negative predictive value.
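For readers comparing the columns above, the following is a minimal sketch of how sensitivity, specificity, accuracy, PPV, and NPV are derived from a 2×2 confusion matrix. The counts used here are hypothetical illustrations, not data from the study.

```python
# Hypothetical example: deriving the binary diagnostic metrics reported
# in Table 3 from confusion-matrix counts. The counts are invented for
# illustration only and do not come from the study.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic metrics from 2x2 confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                  # true-positive rate
        "specificity": tn / (tn + fp),                  # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),    # overall agreement
        "ppv": tp / (tp + fp),                          # positive predictive value
        "npv": tn / (tn + fn),                          # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical validation-cohort counts: 80 true positives, 20 false
    # positives, 90 true negatives, 10 false negatives.
    metrics = binary_metrics(tp=80, fp=20, tn=90, fn=10)
    for name, value in metrics.items():
        print(f"{name}: {value:.3f}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of disease in the cohort, which is one reason all five metrics are reported side by side in the table.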