Neuroinformatics. 2022 May 27. doi: 10.1007/s12021-022-09587-2. Online ahead of print.
Automated classification of amyloid-PET images can support clinical assessment and increase diagnostic confidence. Three automated approaches were compared under varying conditions (number of training data, radiotracers, and cohorts): global thresholds derived from receiver operating characteristic (ROC) analysis, machine learning (ML) algorithms using regional SUVr values, and a deep learning (DL) network taking the 3D image as input. 276 [11C]PiB and 209 [18F]AV45 PET images from the ADNI database and our local cohort were used. Global mean and maximum SUVr thresholds were derived using ROC analysis. 68 ML models were constructed from the regional SUVr values, and a DL network was trained on the classifications from two visual assessments: one following the manufacturer’s recommendations (grayscale) and one using regional scaling with a visually guided reference (rainbow scale). ML-based classification achieved accuracy similar to ROC classification but converged better between training and unseen data, even with fewer training data. Naïve Bayes obtained the best results among the 68 ML algorithms. Classification with maximum SUVr thresholds yielded greater accuracy than with mean SUVr thresholds, especially for cohorts with more focal uptake. DL networks supported accurate classification of definite cases but performed poorly on equivocal cases. Scaling the normalized image intensity to the rainbow scale improved agreement between raters, whereas grayscale better revealed focal accumulation and thus classified more scans as amyloid-positive. All three approaches generally achieved greater accuracy when trained with the rainbow-scale classifications. ML yielded accuracy similar to ROC but with better convergence between training and unseen data, and further work could lead to even more accurate ML methods.
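To make the first two approaches concrete, here is a minimal illustrative sketch (not the authors' code): a global SUVr cutoff is derived from ROC analysis by maximizing Youden's J, and a small hand-rolled Gaussian Naïve Bayes classifies regional-SUVr feature vectors. All SUVr values and the two-region feature layout below are synthetic assumptions for illustration only.

```python
# Hypothetical sketch: ROC-derived global SUVr cutoff + Gaussian Naive Bayes
# on regional SUVr vectors. Synthetic data, not ADNI values.
import math
import random

random.seed(0)

# Synthetic global SUVr: amyloid-negative ~N(1.1, 0.10), positive ~N(1.6, 0.20)
neg = [random.gauss(1.1, 0.10) for _ in range(60)]
pos = [random.gauss(1.6, 0.20) for _ in range(60)]

def roc_threshold(neg, pos):
    """Return the cutoff maximizing sensitivity + specificity - 1 (Youden's J)."""
    best_t, best_j = None, -1.0
    for t in sorted(neg + pos):
        sens = sum(x >= t for x in pos) / len(pos)   # true-positive rate at t
        spec = sum(x < t for x in neg) / len(neg)    # true-negative rate at t
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t

cutoff = roc_threshold(neg, pos)  # scans with global SUVr >= cutoff are positive

class GaussianNB:
    """Naive Bayes with one Gaussian per region per class (equal priors)."""
    def fit(self, X, y):
        self.stats = {}
        for c in set(y):
            rows = [x for x, yi in zip(X, y) if yi == c]
            per_region = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col) or 1e-6
                per_region.append((mu, var))
            self.stats[c] = per_region
        return self

    def predict(self, x):
        def loglik(c):
            return sum(-0.5 * math.log(2 * math.pi * var) - (xi - mu) ** 2 / (2 * var)
                       for xi, (mu, var) in zip(x, self.stats[c]))
        return max(self.stats, key=loglik)

# Toy regional SUVr vectors (two regions per scan), hypothetical values
X_neg = [[random.gauss(1.0, 0.05), random.gauss(1.1, 0.05)] for _ in range(30)]
X_pos = [[random.gauss(1.7, 0.05), random.gauss(1.8, 0.05)] for _ in range(30)]
model = GaussianNB().fit(X_neg + X_pos, [0] * 30 + [1] * 30)
```

In this toy setup the Naïve Bayes step mirrors the paper's best-performing ML family: each regional SUVr contributes an independent Gaussian likelihood, so the model needs very little training data to separate the classes, consistent with the better convergence reported above.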
PMID:35622223 | DOI:10.1007/s12021-022-09587-2