Abstract
Classifying SPECT images requires a preprocessing step which normalizes the images using a normalization region. The choice of the normalization region is not standard, and using different normalization regions introduces normalization-region-dependent variability. This paper mathematically analyzes the effect of the normalization region and shows that normalized-classification is exactly equivalent to a subspace separation of the half-rays of the images under multiplicative equivalence. Using this geometry, a new self-normalized classification strategy is proposed which eliminates the normalization region altogether. The theory is used to classify DaTscan images of 365 Parkinson’s disease (PD) subjects and 208 healthy control (HC) subjects from the Parkinson’s Progression Markers Initiative (PPMI). The theory is also used to understand PD progression from baseline to year 4.
Index Terms—Image Classification, Machine Learning, PET/SPECT, DaTscan, Parkinson’s Disease
I. Introduction
Clinical SPECT (and PET) images are often normalized before classification [1], [2]. Typically, a normalization region with nonspecific tracer binding is chosen, and its mean μ is used to calculate the binding potential (BP), defined as BP(v) = (I(v) − μ)/μ = (I(v)/μ) − 1 for every voxel v in an image I [3]. All subsequent image classification is done using BP rather than the original image. Many different normalization regions are used in practice; however, they are not all equivalent [4], [5], [6], [7], [8], and changing the normalization region changes the downstream results.
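As a concrete illustration, a minimal numpy sketch of this computation follows; the synthetic image and the mask defining the normalization region are illustrative placeholders, not the pipeline used later in the paper:

```python
import numpy as np

def binding_potential(img, norm_mask):
    """BP(v) = (I(v) - mu)/mu, with mu the mean intensity in the
    normalization region N (given here as a boolean mask)."""
    mu = img[norm_mask].mean()   # mean over the normalization voxels
    return img / mu - 1.0        # BP at every voxel

# Illustrative use with a synthetic image and an arbitrary region N.
rng = np.random.default_rng(0)
img = rng.uniform(0.5, 2.0, size=(8, 8, 8))
norm_mask = np.zeros(img.shape, dtype=bool)
norm_mask[:2] = True             # pretend the first two slices are N
bp = binding_potential(img, norm_mask)
```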
Given this dependence on the normalization region, one may ask whether SPECT/PET images can be classified without choosing a normalization region. The goal of this paper is to show that this is possible, without sacrificing accuracy. We begin in Section III by mathematically analyzing how normalization affects classification. Such a theory has not yet appeared in the literature. Based on this analysis, in Section III-D we propose a new classification strategy which does not require a normalization region. Instead, the classification is self-normalizing. A potential pitfall of not using a normalization region is the possible loss of classification accuracy. In Section IV, we show that there is no loss of classification accuracy when self-normalizing classification is used with real-world data.
For real-world data, we use SPECT images of Parkinson’s disease (PD). Imaging PD with [123I]Ioflupane, commonly called DaTscan imaging, measures the concentration of the dopamine transporter (DaT) protein. Dopaminergic neuronal loss in PD is visible as loss of signal in the putamen and the caudate in DaTscan images (see Fig. 1). The occipital lobe usually serves as the normalization region [9], [10], [11], although the cerebellum [12] and the whole brain except the striatum [10] are also used. As mentioned above, different normalization regions influence the ability of BP (which is called the striatal binding ratio (SBR) in PD DaTscans) to classify PD vs. healthy controls (HC) [5].
We carried out two classification experiments on PD DaTscan images to validate our theory. First, we classify DaTscan images of PD from HC subjects. Second, we classify longitudinal DaTscan images of PD patients obtained 4 years apart. As explained in Section IV, the latter provides insight into how PD progresses during this interval. For each classification problem, we compare the classification accuracy between the classical normalization and our proposed self-normalization, using both linear and non-linear classifiers.
II. Background and Literature Review
A. PET/SPECT Normalization
PET/SPECT imaging can be categorized into dynamic and static imaging. Dynamic imaging acquires images at multiple time points after tracer injection and is mainly used in research. Static imaging acquires a single image over a fixed period and is hence suitable for clinical applications. Analyzing clinical (static) PET/SPECT images typically requires a preprocessing step which normalizes image intensity [1], [2]. Normalization is required because the amount of radioligand reaching the brain depends on multiple factors, including age, sex, and medication, which results in an unknown scaling factor for each image. Standard normalization selects a normalization region to calculate the BP, thereby eliminating this scaling factor, as mentioned before. Note that in PD DaTscans, the BP is called the striatal binding ratio (SBR) since dopamine transporters mainly exist in the striatum.
In PD DaTscans, the occipital lobe is usually selected as the normalization region since it contains few dopamine transporters [9], [10], [11] and most of the binding of the radiotracer in this region is non-specific. Other choices for the normalization region include the cerebellum [12] and the whole brain except the striatum [10]. Different normalization regions alter the predictive power of BP to classify PD, up to a difference of 0.147 in terms of area under the curve [5].
For PET/SPECT imaging with other tracers, e.g. [18F]fluorodeoxyglucose for visualizing glucose metabolism (FDG-PET), the impact of the normalization region is also significant [7], [8].
Besides the BP normalization mentioned above, other normalization techniques have also been proposed. For DaTscans, these include analyzing the distribution of intensity values in the whole brain except the striatum [13], and minimizing the squared error, within a selected region, between a template and the linearly transformed image [1].
B. DaTscan Classification
Classification of DaTscan images into PD and HC has been achieved using standard machine learning techniques [14], [15], [10], [11] as well as deep learning methods [16]. Early classification work tended to use small proprietary datasets and perform its own registration [17], [14], while later work has shifted to a public dataset: the Parkinson’s Progression Markers Initiative (PPMI) dataset, which provides already registered DaTscans [15], [11], [18].
Support vector machines (SVM) and logistic regression are probably the most popular PD vs. HC classifiers, appearing in most of the non-deep-learning studies [14], [15], [10], [11], [18], [19]. Graph-based transductive learning was introduced for classifying multi-modality neurodegenerative image data in [20]. Recently, convolutional neural networks (CNNs) have been applied to classifying DaTscan images for PD diagnosis [21], [16], [22]. Most of these methods calculate the SBR using a normalization region as a pre-processing step before classification [15], [11], [18].
Other studies use geometric image features including the length and volume of the segmented striatum [23], shape fitting coefficients [24], [25], isosurfaces [16], and intensity summary statistics [22].
III. Normalization and Classification
We now turn to explaining the effect of normalization on classification. The effect is most easily explained using linear classification, and we stick to linear classification for most of this section. However, non-linear classification is also addressed.
To begin, note that images differing by a multiplicative factor, such as I2 = αI1 for α > 0, give the same BP. The −1 term in the BP simply adds a fixed constant to every voxel, and has no effect on classification. We ignore this term.
A. Multiplicative Equivalence
Let Ω be the set of voxels in an image, with the total number of voxels being d. Any nonzero image I defined on Ω is an element of ℝ^d. SPECT/PET images have non-negative voxels, i.e. these images lie in the non-negative orthant of ℝ^d with the origin removed. Geometrically speaking, the set of all images related by positive scalar multiples is a half-ray passing through the origin of ℝ^d (see Fig. 2(a)). We denote the half-ray of the image I by [I]. We also denote the set of all half-rays passing through the non-negative orthant of ℝ^d (with the origin removed) as ℋ. Classifying images under multiplicative equivalence means partitioning ℋ into disjoint subsets.
B. Normalized-Classification
A normalization region is a set of voxels N ⊂ Ω. All images restricted to N form a subspace of ℝ^d, which we denote as ΠN. The projection operator πN projects an image onto this subspace by setting values outside N to zero.
Let 1 denote the image with all 1’s. Then πN(1) is the image containing all 1’s in the voxels N and 0’s everywhere else. For any image I, the mean value in the normalization region is μ = (1/|N|)πN(1)TI, where |N| is the number of voxels in N. Denoting πN(1)/|N| by 1N, the mean value in the normalization region is simply μ = 1NTI and the normalized image is Ī = I/μ = I/(1NTI). Normalization can be interpreted geometrically after noting that the mean value of Ī in the normalization region is always 1, since 1NTĪ = 1NTI/(1NTI) = 1. Thus the geometry of normalization is explained as follows (see Fig. 2(b)): from the image I construct the half-ray [I]. Then the normalized image Ī is the intersection of the half-ray [I] with 𝒜, the d − 1 dimensional affine subspace defined by 1NTx = 1.
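As a concrete illustration, the following numpy fragment (with an arbitrary synthetic image and region N) verifies that the normalized image always lies on 𝒜 and that every image on the half-ray [I] normalizes to the same point:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 100
N = np.arange(20)                        # illustrative region N: first 20 voxels
one_N = np.zeros(d)
one_N[N] = 1.0 / len(N)                  # 1N = piN(1)/|N|

I = rng.uniform(0.5, 2.0, size=d)        # a synthetic positive image
I_bar = I / (one_N @ I)                  # normalized image
assert np.isclose(one_N @ I_bar, 1.0)    # mean over N is always 1: I_bar lies on A

alpha = 3.7                              # any positive scale
I2 = alpha * I                           # same half-ray [I]
assert np.allclose(I2 / (one_N @ I2), I_bar)   # normalizes to the same point
```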
Once the image is normalized, it is classified using voxels in another region C ⊂ Ω. We assume that N ∩ C = ∅, i.e. the normalization and classification voxels are disjoint. Similar to N above, images restricted to C form the subspace ΠC of ℝ^d. Because N and C are disjoint, we have ΠC ⊥ ΠN.
A linear classifier chooses a unit norm vector w ∈ ΠC and a real number b to define a decision boundary {x̄ ∈ 𝒜 : wTx̄ = b}. We refer to linear classification using the normalized image as normalized-classification, and its boundary as the normalized-classification boundary. This boundary has an alternate description. First consider the equation wTx − b = 0 for x ∈ ℝ^d. Because w ≠ 0, this equation describes a d − 1 dimensional affine subspace of ℝ^d. Call it 𝒟 (see Fig. 2(b)). Then the normalized-classification boundary ℬ is the intersection of 𝒜 and 𝒟 (see Fig. 2(b)), which is the set of all x ∈ ℝ^d satisfying
wTx = b,  1NTx = 1.  (1)
Because w and 1N are linearly independent, ℬ is a d − 2 dimensional affine subspace of ℝ^d.
C. From Normalized-Classification to Subspace Classification
A slight shift in point-of-view shows that normalized-classification is really just a classification of half-rays by subspaces. The key idea here is to take 𝒲 = span(ℬ) (see Fig. 2(c)). Because 𝒲 is the span of a set of points, it is a subspace of ℝ^d. It has the properties described below. Proofs of all properties can be found in the Appendix.
Claim 1:
The subspace 𝒲 = span(ℬ) has dimension d − 1 and is the set of all x ∈ ℝ^d satisfying (w − b1N)Tx = 0.
Because 𝒲 is a subspace, any half-ray through the origin of ℝ^d stays exactly on one side of it, or is contained in it. Thus:
Claim 2:
Every normalized-classification of images is completely equivalent to a classification of the half-rays ℋ by the subspace 𝒲 defined by (w − b1N)Tx = 0.
In other words, the classification of normalized images is exactly the same as the classification of half-rays [I] by 𝒲 (see Fig. 2(c)). We call the subspace 𝒲 a classification subspace, and the classification of half-rays it achieves subspace classification. Recalling that a normalized-classification boundary is determined by the pair w, b, further analysis shows that
Claim 3:
Every distinct normalized-classification boundary pair w, b gives a unique classification subspace 𝒲.
In other words, normalized-classification of images is completely equivalent to subspace classification of half-rays.
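These claims are easy to verify numerically. The fragment below (with arbitrary, illustrative regions N and C, weight w, and offset b) checks that normalized-classification and subspace classification always put a random positive image on the same side of the boundary:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100
N, C = np.arange(20), np.arange(20, 60)  # disjoint illustrative regions N and C
one_N = np.zeros(d)
one_N[N] = 1.0 / len(N)                  # 1N
w = np.zeros(d)
w[C] = rng.normal(size=len(C))
w /= np.linalg.norm(w)                   # unit-norm w in PiC
b = 0.3                                  # an arbitrary offset

for _ in range(1000):
    I = rng.uniform(0.1, 2.0, size=d)    # a random positive image
    I_bar = I / (one_N @ I)              # normalized image on A
    side_plane = np.sign(w @ I_bar - b)          # normalized-classification
    side_subspace = np.sign((w - b * one_N) @ I) # subspace classification of [I]
    assert side_plane == side_subspace
```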
Next, we turn to ask: what happens to normalized-classification if the normalization region is changed? Specifically, suppose that the classification region C remains fixed, but we have two different normalization regions N ≠ N′, with corresponding normalized-classification boundaries given by w, b and w′, b′ respectively. Because normalized-classification is equivalent to subspace classification, we ask when the corresponding classification subspaces 𝒲 and 𝒲′ are the same. The result is:
Claim 4:
𝒲 = 𝒲′ if and only if w = w′ and b = b′ = 0.
Claim 4 shows that changing the normalization region can give the same classification subspace only in the very special case where b = b′ = 0, which rarely happens in real-world classification. Claim 4 thus explains why changing the normalization region changes the classification.
D. Self-normalized Classification
The above analysis strongly suggests how the dependence on the normalization region can be eliminated altogether: take the intersection of each half-ray with the unit sphere in ℝ^d (see Fig. 2(d)) and then classify the intersection points with a subspace. There is no normalization region involved in this; the data normalizes itself, i.e. it is self-normalizing.
This idea can be pushed further in two ways. First, we need not classify the intersection points on the sphere with only a subspace; we can use any d − 1 dimensional affine subspace. Second, we can apply the same idea to the image restricted to the classification voxels C. That is, we take only the intensities of voxels in C, project them onto the unit sphere, and classify them using an affine subspace. This too eliminates the normalization region, as the sketch below illustrates.
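A minimal sketch of this self-normalized classification; the classification region, the √d radius convention used later in Section IV-B, and the affine boundary parameters v, c are illustrative:

```python
import numpy as np

def self_normalize(I, C, radius):
    """Project the classification voxels of I onto a sphere of the given
    radius; no normalization region is involved."""
    x = I[C]
    return radius * x / np.linalg.norm(x)

# Illustrative use: classify sphere points with an affine boundary (v, c).
rng = np.random.default_rng(3)
C = np.arange(20, 60)                    # illustrative classification voxels
radius = np.sqrt(len(C))                 # sqrt(d) radius, as in Section IV-B
v = rng.normal(size=len(C))
v /= np.linalg.norm(v)                   # normal of the affine boundary
c = 0.1                                  # offset of the affine boundary
I = rng.uniform(0.1, 2.0, size=100)      # a synthetic image
label = np.sign(v @ self_normalize(I, C, radius) - c)
```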
E. Non-linear Classification
The above ideas extend easily to nonlinear classification. To see how, first note that the normalization step is the same as before. It corresponds to moving the image I to the intersection of the half-ray [I] and 𝒜. However, the classification boundary, which was the affine subspace ℬ in Fig. 2, is no longer linear. Suppose it is a d − 2 dimensional sub-manifold of 𝒜, which we will call ℳ (see Fig. 2(e)). Every x ∈ ℳ gives a half-ray [x], and this half-ray intersects the unit sphere at a single point. Because ℳ is a d − 2 dimensional submanifold, the set of all half-rays of points in ℳ is a d − 1 dimensional submanifold of ℝ^d, and this manifold intersects the unit sphere transversely to give a d − 2 dimensional submanifold of the unit sphere. Denote this submanifold as ℳ_S (see Fig. 2(e)). It is straightforward to see that the partition (i.e. classification) of 𝒜 by ℳ is identical to the partition of the unit sphere by ℳ_S. The converse is also straightforward: by simply reversing the argument it is easy to see that any classification boundary which is a d − 2 dimensional submanifold of the part of the unit sphere in the non-negative orthant of ℝ^d induces a d − 2 dimensional submanifold as a classification boundary in 𝒜. Thus, we have
Claim 5:
Every non-linear classification boundary in 𝒜 which is a d − 2 dimensional submanifold is equivalent to a classification boundary on the unit sphere which is also a d − 2 dimensional submanifold. The converse is also true for all decision boundaries that are d − 2 dimensional submanifolds of the non-negative part of the unit sphere.
The theory developed so far suggests that normalized classification and self-normalized classification are equivalent. In other words, there should be no loss of classification accuracy with self-normalized classification. Whether this holds in practice is addressed in the next section using the PPMI DaTscan dataset. Because this is a PD DaTscan dataset, we refer to BP as SBR from now on.
IV. Numerical Results
A. PPMI Data
The PPMI dataset contains 449 early-stage PD subjects and 210 HC subjects. The PD subjects have scans at baseline and at approximately 1, 2, 4, and 5 years from baseline, with missing scans. Most of the HC subjects have only a single scan. The images have a size of 109 × 91 × 91 voxels, with 2 × 2 × 2 mm voxels. The images are already registered by PPMI to the Montreal Neurological Institute (MNI) atlas. However, following the procedure in [26], we found and removed some misregistered images. This left us with 365 PD subjects (ages: 62.6 ± 9.8 years, male/female: 237/128) and 208 HC subjects (ages: 60.6 ± 11.2 years, male/female: 135/73) to analyze. All PD subjects had baseline images, but only 136 PD subjects had images at year 4.
Because PD affects the two brain hemispheres asymmetrically, the DaTscan images were flipped around the mid-plane so that the more affected side was on the right. For normalized-classification, we used the occipital lobe as the normalization region (N) and the striatum as the classification region (C). The striatum mask was derived by applying Otsu’s threshold [27] to the mean HC image to remove the background, and then applying it again to the remaining voxels to remove the nonspecific-binding voxels. The occipital lobe mask was taken from [26]. All masks were restricted to slices 29–55.
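For concreteness, the two-pass Otsu masking can be sketched as follows; threshold_otsu from scikit-image stands in for whatever implementation was actually used, and the function name is ours:

```python
from skimage.filters import threshold_otsu

def striatum_mask(mean_hc_img):
    """Two-pass Otsu masking: the first threshold removes the background,
    the second removes the remaining nonspecific-binding voxels."""
    t1 = threshold_otsu(mean_hc_img)
    foreground = mean_hc_img > t1                  # brain vs. background
    t2 = threshold_otsu(mean_hc_img[foreground])   # threshold inside the brain
    return mean_hc_img > t2                        # high-binding (striatal) voxels
```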
B. Experimental Setup
The ultimate goal of our experiments is to verify the classification accuracy of the self-normalized classification of Section III-D. An important subgoal is to determine whether self-normalized classification provides useful information about the voxels that are important to classification. This can be done by analyzing the weights of the linear classifiers and the saliency maps [28] of the non-linear classifiers.
We carried out two classifications: 1) HC vs. PD using only the baseline images, 2) Baseline PDs vs. PDs at year 4. The PD vs. HC classification has obvious clinical importance. The baseline vs year 4 classification is not clinically useful on its own, but because it identifies voxels where the disease progresses from baseline to year 4, it provides a simple disease progression footprint.
In each classification problem, we used three normalization strategies:
classical normalization using SBR with occipital lobe normalization (referred to as SBR from now on),
self-normalization via a projection of all voxels from the striatum and the occipital lobe onto the unit sphere (referred to as S + O),
self-normalization via a projection of voxels only from the striatum to the unit sphere (referred to as S).
To avoid numerical underflow problems, the radius of the sphere used in self-normalization was set to √d, where d is the dimension (number of voxels) of the region being projected. The dimensions d for the above three strategies are 9948, 43077, and 9948, respectively.
Along with each normalization, we used three classifiers, two of which were linear: logistic regression (LR) and SVM, and one of which was nonlinear: a convolutional neural network (CNN). The linear classifiers had a sparsity constraint, as implemented in the fitclinear function in MATLAB with the SpaRSA optimizer [29]. The nonlinear classifier was implemented in PyTorch with 2 convolutional/pooling layers (kernel size 5 × 5 × 5, with 6 and 16 feature maps respectively) followed by 2 linear layers (120 hidden nodes). We used the ReLU activation function after the convolutional/linear layers. For the CNN, the voxels were inserted into a 3D image cube padded by zeros. Combining each normalization method with each classifier gave 3 × 3 = 9 classification experiments, whose results are reported below.
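A PyTorch sketch of this architecture is given below; the input cube side length (and hence the flattened feature size) and the 2 × 2 × 2 max-pooling are illustrative assumptions, since only the kernel sizes, feature maps, and hidden nodes are specified above:

```python
import torch.nn as nn

class DaTscanCNN(nn.Module):
    """Two conv/pool stages (5x5x5 kernels; 6 then 16 feature maps) and two
    linear layers with 120 hidden nodes, with ReLU after each conv/linear
    layer. The cube side length and 2x2x2 pooling are assumptions."""
    def __init__(self, cube=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool3d(2),
        )
        n = (((cube - 4) // 2) - 4) // 2           # spatial size after both stages
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * n**3, 120), nn.ReLU(),
            nn.Linear(120, 2),                     # two logits: HC vs. PD
        )

    def forward(self, x):                          # x: (batch, 1, cube, cube, cube)
        return self.classifier(self.features(x))
```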
For every classification task, we randomly split the data set 100 times into a training set (80%) and a test set (20%). The sparsity parameter for the linear classifiers was chosen by 10-fold cross validation on the 100 training sets. Fig. 3 shows the cross validation results for PD vs. HC; similar curves (not shown) were obtained for the PD baseline vs. PD year 4 classification. Because CNN training is computationally expensive, we did not use cross validation to set the hyperparameters of the nonlinear classifier. Instead, the learning rate was set manually to 0.01 and the number of epochs to 500, with a scheduler which halved the learning rate after half of the epochs. Furthermore, 10% of the training set was kept as a validation set and the remainder was used to train the network. The network parameters with the highest accuracy (over the epochs) on the validation set were used for testing.
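In outline, the evaluation protocol looks like the following sketch; scikit-learn's L1-penalized logistic regression stands in for MATLAB's fitclinear with the SpaRSA optimizer, and C_sparse is a placeholder for the cross-validated sparsity parameter:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def repeated_split_accuracies(X, y, n_splits=100, C_sparse=1.0, seed=0):
    """100 random 80/20 splits with a sparse (L1) logistic classifier;
    returns the per-split test accuracies."""
    accs = []
    for i in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed + i, stratify=y)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=C_sparse)
        clf.fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return np.array(accs)
```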
C. Classification Accuracy
The classification accuracies of the 100 training/test sets for PD vs. HC and PD baseline vs. PD year 4 are shown in Fig. 4. For PD vs. HC, classical SBR (SBR), the self-normalized striatum plus occipital lobe voxels (S + O), and the self-normalized striatum voxels (S) lead to almost identical results on the test set (see Fig. 4(a) and Table I). The classification accuracies for the test set are quite high (those for the training set are similar). In fact, the accuracies in Table I are noticeably higher than those reported in studies that use the same PPMI data (typical reported accuracies are in the range 95.1–97.9%) [15], [11], [19], [23], [16].
TABLE I. PD vs. HC: mean (std) of test accuracies (%), and p-values of the t-test comparing each self-normalized method to SBR.

|     | SBR          | S + O        | S            | p (S + O) | p (S) |
|-----|--------------|--------------|--------------|-----------|-------|
| LR  | 98.77 (0.97) | 98.81 (0.91) | 98.94 (0.86) | 0.744     | 0.181 |
| SVM | 98.67 (1.02) | 98.77 (0.86) | 98.69 (0.98) | 0.475     | 0.902 |
| CNN | 98.68 (1.07) | 98.59 (0.94) | 98.41 (1.08) | 0.543     | 0.079 |
To compare the classification accuracies of standard SBR and the self-normalized methods, we calculated p-values from a t-test comparing the mean classification accuracy of each self-normalizing method to that of standard SBR. The p-values (see Table I) are all well above 0.05, showing that there is no significant performance difference between the methods.
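The comparison is a standard two-sample t-test; the following sketch uses synthetic stand-ins for the 100 per-split accuracies:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 100 per-split test accuracies of two methods.
accs_sbr = rng.normal(98.77, 0.97, size=100)
accs_s_o = rng.normal(98.81, 0.91, size=100)

t, p = ttest_ind(accs_sbr, accs_s_o)     # two-sample t-test, as in Table I
print(f"t = {t:.3f}, p = {p:.3f}")       # p > 0.05: no significant difference
```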
Classification accuracies for PD baseline vs. PD year 4 show a similar pattern (see Fig. 4(b) and Table II). While the classification accuracies are not as high as for PD vs. HC (this is discussed further in Section V), the differences in the performance of the classifiers are again not statistically significant, except in one case (LR on S + O).
TABLE II. PD baseline vs. PD year 4: mean (std) of test accuracies (%), and p-values of the t-test comparing each self-normalized method to SBR.

|     | SBR          | S + O        | S            | p (S + O) | p (S) |
|-----|--------------|--------------|--------------|-----------|-------|
| LR  | 77.74 (3.92) | 76.47 (4.17) | 77.50 (3.94) | 0.028     | 0.666 |
| SVM | 75.29 (4.42) | 74.63 (4.69) | 76.51 (4.50) | 0.307     | 0.055 |
| CNN | 73.02 (4.33) | 71.74 (5.38) | 71.90 (6.08) | 0.066     | 0.135 |
D. Salient Voxels
Recall that part of our goal is to identify salient voxels, i.e. voxels which contribute significantly to classification. As mentioned before, the classification weights of linear classifiers indicate these voxels. For nonlinear classifiers, the saliency map [28], which is the derivative of the logit with respect to the input, can be used for the same purpose. Because the saliency map is calculated per test example, we took one train-test split and averaged the saliency maps over all the test examples.
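A sketch of this saliency computation in PyTorch follows; model, test_images, and the class index are placeholders for the trained CNN, the images of one test split, and the logit of interest:

```python
import torch

def saliency_map(model, x, class_idx):
    """Saliency in the sense of [28]: gradient of the chosen class logit
    with respect to the input image x (shape: (1, 1, D, H, W))."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, class_idx]       # logit of the chosen class
    logit.backward()                     # populates x.grad
    return x.grad[0, 0]                  # same spatial shape as the image

# Averaged over the test examples, as in the text:
# maps = torch.stack([saliency_map(model, xi, 1) for xi in test_images]).mean(0)
```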
The linear weights and the averaged saliency maps are shown in Fig. 5. Over the 9 combinations of normalization strategy and classifier used for each classification task, the results are surprisingly consistent. For PD vs. HC, the negative classifier weights and negative saliency (both rendered in blue) are mostly in the right putamen (the more affected putamen), while the positive weights and positive saliency are mostly in the left caudate (the less affected side). Thus, reduced values in the right putamen relative to the values in the left caudate are significant in classifying PD vs. HC.
PD baseline vs. PD year 4 shows negative coefficients and negative saliency in the left putamen, showing that decreasing values in this putamen correspond to progression from baseline to year 4.
To make the above observations more quantitative, we evaluated the contribution of each region to the classification. Noting that all classifiers give almost identical salient voxels, we focused on the sparse logistic linear classifier. We restricted w to each of the four caudate and putamen regions of the MNI atlas, and evaluated wTx̄, where x̄ is the self-normalized image, for PD vs. HC and PD baseline vs. PD year 4. For PD vs. HC, the mean of wTx̄ has the largest between-class difference when w is restricted to the right putamen, indicating that this is the region where PD has the largest effect (see Fig. 6). Similarly, for PD baseline vs. PD year 4, we evaluated the mean of wTx̄ for PD baseline and year 4. The largest change in the mean is in the left putamen (see Fig. 6).
V. Discussion and Conclusion
The experiments clearly show that self-normalized classification does not lead to any loss of classification accuracy. In fact, as noted above, the classification accuracies for PD vs. HC are higher than those reported in the literature. One contributing factor to the increased accuracy is that the images were flipped around the mid-plane so that the more affected side appears on the right.
The lower accuracy of PD baseline vs. year 4 is also understandable. It is estimated that the onset of PD occurs almost 10 years before PD is diagnosed [30]. Thus PD vs. HC images, even at baseline, have about 10 years of accumulated disease evidence for classification. In contrast, baseline vs. year 4 has evidence accumulated over less than half of that time. Moreover, the PD images at baseline may correspond to different stages of the disease. Therefore, the two classes are not as easily separable as PD vs. HC.
Finally, the salient voxels identified by the classifiers are quite meaningful. PD is known to start asymmetrically, affecting the putamen in one hemisphere before affecting the caudate. Then, as the disease progresses, it affects the other hemisphere. Consistent with this, our results show that a decrease in DaTscan intensity in the right putamen (compared with the left caudate) is significant in classifying baseline PD vs. HC, and that a subsequent decrease in the left putamen is indicative of longitudinal progression from baseline to year 4.
In summary, the geometry of multiplicative equivalence shows that PET/SPECT images can be classified by self-normalization without any loss of accuracy. This method is effective with linear and non-linear classifiers, and it provides an understanding of the disease-affected voxels in the image.
Acknowledgment
This research was supported by the grant R01NS107328 from the NIH (NINDS). Data used in the preparation of this article were obtained from the Parkinson’s Progression Markers Initiative (PPMI) database (www.ppmi-info.org/data). For up-to-date information on the study, visit www.ppmi-info.org. PPMI – a public-private partnership – is funded by the Michael J. Fox Foundation for Parkinson’s Research and funding partners. The full list of funding partners can be found at https://www.ppmi-info.org/about-ppmi/who-we-are/study-sponsors/.
Appendix: Proofs of Claims
Proof of Claim 1:
𝒜 and 𝒟 are d − 1 dimensional affine subspaces of ℝ^d whose normal vectors, 1N and w, are linearly independent. Thus their intersection ℬ is non-empty, and is an affine subspace of ℝ^d. Moreover, ℬ does not contain the origin (because 𝒜 does not) and has dimension d − 2. Thus its span has dimension d − 2 + 1 = d − 1, i.e. dim 𝒲 = d − 1.
Denote the set of all x ∈ ℝ^d satisfying (w − b1N)Tx = 0 by 𝒱. Because the unit norm vector w ∈ ΠC, the nonzero vector 1N ∈ ΠN, and ΠC ⊥ ΠN, w is linearly independent of 1N. Hence the vector (w − b1N) cannot be 0 for any b. Thus 𝒱 is a d − 1 dimensional subspace of ℝ^d. We have to show that 𝒲 is equal to 𝒱. Any x ∈ ℬ satisfies
wTx = b,  1NTx = 1.  (2)
Subtracting b times the second row of the equation from the first shows that x also satisfies (w − b1N)Tx = 0. It follows that if x1, x2 ∈ ℬ, then (w − b1N)T(α1x1 + α2x2) = 0 for any α1, α2, showing that 𝒲, which is the span of ℬ, is contained in 𝒱. But dim 𝒲 = dim 𝒱 = d − 1, hence 𝒲 = 𝒱. □
Proof of Claim 2:
All images whose normalized versions lie on one side of ℬ have half-rays that lie on one side of 𝒲. □
Proof of Claim 3:
The proof is by contradiction. Suppose we have unit norm vectors w1, w2 ∈ ΠC and real numbers b1, b2, such that (w1, b1) ≠ (w2, b2). And suppose that the classification subspaces for the two are equal. The classification subspaces are given by (w1 − b11N)Tx = 0 and (w2 − b21N)Tx = 0. For these subspaces to be the same, there must exist a λ such that λ(w1 −b11N) = (w2 −b21N). Rearranging this equation gives (λw1−w2) = (λb1−b2)1N. The term on the left hand side is a linear combination of vectors in ΠC, hence is a vector in ΠC. The term on the right hand side is a vector in ΠN. Since ΠC ⊥ ΠN the equation can hold only if each term is 0. Setting the left hand side to 0 gives λw1 = w2. Since w1 and w2 are unit norm vectors, this implies λ = 1 and w1 = w2. Similarly, setting the right hand side equal to zero, and using λ = 1, gives b1 = b2. Thus the two classification subspaces are the same if and only if w1 = w2 and b1 = b2, which contradicts the assumption. Therefore, the subspace classification boundaries are different, showing that each pair w, b gives a unique subspace classification boundary. □
Proof of Claim 4:
Since the normalization regions N ≠ N′, the vectors 1N and 1N′ are linearly independent. Also since N and N′ are both normalization regions, ΠN, ΠN′ ⊥ ΠC. Thus their direct sum ΠN ⊕ ΠN′ is also orthogonal to ΠC.
Suppose 𝒲 = 𝒲′. Then, there must exist a λ such that λ(w − b1N) = (w′ − b′1N′), i.e. λw − w′ = λb1N − b′1N′. As before, the left hand side of this equation is a vector in ΠC. The right hand side is a vector in ΠN ⊕ ΠN′ and thus orthogonal to ΠC. The two vectors can be equal only if they are 0. Equating the left hand side to 0 gives w = w′ and λ = 1. Equating the right hand side to zero, and using λ = 1, gives b1N = b′1N′. Since 1N and 1N′ are linearly independent, this is possible only if b = b′ = 0, which proves the Claim in one direction.
For the opposite direction, assuming w = w′ and b = b′ = 0 gives wTx = 0 as the equation for both 𝒲 and 𝒲′, showing that they are identical. □
Contributor Information
Yuan Zhou, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA.
Hemant D. Tagare, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA.
References
[1] Brahim A, Ramírez J, Górriz JM, Khedher L, and Salas-Gonzalez D, “Comparison between Different Intensity Normalization Methods in 123I-Ioflupane Imaging for the Automatic Detection of Parkinsonism,” PLOS ONE, vol. 10, no. 6, p. e0130274, Jun. 2015.
[2] Borghammer P, Aanerud J, and Gjedde A, “Data-driven intensity normalization of PET group comparison studies is superior to global mean normalization,” NeuroImage, vol. 46, no. 4, pp. 981–988, Jul. 2009.
[3] Innis RB, Cunningham VJ, Delforge J, Fujita M, Gjedde A, Gunn RN et al., “Consensus nomenclature for in vivo imaging of reversibly binding radioligands,” Journal of Cerebral Blood Flow & Metabolism, vol. 27, no. 9, pp. 1533–1539, 2007.
[4] Dukart J, Mueller K, Horstmann A, Vogt B, Frisch S, Barthel H et al., “Differential effects of global and cerebellar normalization on detection and differentiation of dementia in FDG-PET studies,” NeuroImage, vol. 49, no. 2, pp. 1490–1495, 2010.
[5] Ortega Lozano S, Martinez del Valle Torres M, Ramos Moreno E, Sanz Viedma S, Amrani Raissouni T, and Jiménez-Hoyuela J, “Quantitative evaluation of SPECT with FP-CIT. Importance of the reference area,” Revista Española de Medicina Nuclear (English Edition), vol. 29, no. 5, pp. 246–250, Jan. 2010.
[6] Shokouhi S, Mckay JW, Baker SL, Kang H, Brill AB, Gwirtsman HE et al., “Reference tissue normalization in longitudinal 18F-florbetapir positron emission tomography of late mild cognitive impairment,” Alzheimer’s Research & Therapy, vol. 8, no. 1, pp. 1–12, 2016.
[7] Nugent S, Croteau E, Potvin O, Castellano C-A, Dieumegarde L, Cunnane SC et al., “Selection of the optimal intensity normalization region for FDG-PET studies of normal aging and Alzheimer’s disease,” Scientific Reports, vol. 10, no. 1, p. 9261, Jun. 2020.
[8] López-González FJ, Silva-Rodríguez J, Paredes-Pacheco J, Niñerola-Baizán A, Efthimiou N, Martín-Martín C et al., “Intensity normalization methods in brain FDG-PET quantification,” NeuroImage, vol. 222, p. 117229, Nov. 2020.
[9] Messa C, Volonté MA, Fazio F, Zito F, Carpinelli A, d’Amico A et al., “Differential distribution of striatal [123I]β-CIT in Parkinson’s disease and progressive supranuclear palsy, evaluated with single-photon emission tomography,” European Journal of Nuclear Medicine, vol. 25, no. 9, pp. 1270–1276, Sep. 1998.
[10] Oliveira FPM and Castelo-Branco M, “Computer-aided diagnosis of Parkinson’s disease based on [123I]FP-CIT SPECT binding potential images, using the voxels-as-features approach and support vector machines,” Journal of Neural Engineering, vol. 12, no. 2, p. 026008, Feb. 2015.
[11] Tagare HD, DeLorenzo C, Chelikani S, Saperstein L, and Fulbright RK, “Voxel-based logistic analysis of PPMI control and Parkinson’s disease DaTscans,” NeuroImage, vol. 152, pp. 299–311, May 2017.
[12] Happe S, Pirker W, Klösch G, Sauter C, and Zeitlhofer J, “Periodic leg movements in patients with Parkinson’s disease are associated with reduced striatal dopamine transporter binding,” Journal of Neurology, vol. 250, no. 1, pp. 83–86, Jan. 2003.
[13] Salas-Gonzalez D, Górriz JM, Ramírez J, Illán IA, and Lang EW, “Linear intensity normalization of FP-CIT SPECT brain images using the α-stable distribution,” NeuroImage, vol. 65, pp. 449–455, 2013.
[14] Illán IA, Górriz JM, Ramírez J, Segovia F, Jiménez-Hoyuela JM, and Lozano SJO, “Automatic assistance to Parkinson’s disease diagnosis in DaTSCAN SPECT imaging,” Medical Physics, vol. 39, no. 10, pp. 5971–5980, 2012.
[15] Prashanth R, Dutta Roy S, Mandal PK, and Ghosh S, “Automatic classification and prediction models for early Parkinson’s disease diagnosis from SPECT imaging,” Expert Systems with Applications, vol. 41, no. 7, pp. 3333–3342, Jun. 2014.
[16] Ortiz A, Munilla J, Martínez-Ibañez M, Górriz JM, Ramírez J, and Salas-Gonzalez D, “Parkinson’s Disease Detection Using Isosurfaces-Based Features and Convolutional Neural Networks,” Frontiers in Neuroinformatics, vol. 13, 2019.
[17] Koch W, Radau PE, Hamann C, and Tatsch K, “Clinical Testing of an Optimized Software Solution for an Automated, Observer-Independent Evaluation of Dopamine Transporter SPECT Studies,” Journal of Nuclear Medicine, vol. 46, no. 7, pp. 1109–1118, Jul. 2005.
[18] Taylor JC and Fenner JW, “Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: The beginning of the end for semi-quantification?” EJNMMI Physics, vol. 4, no. 1, p. 29, Nov. 2017.
[19] Adeli E, Wu G, Saghafi B, An L, Shi F, and Shen D, “Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease,” Scientific Reports, vol. 7, no. 1, p. 41069, Jan. 2017.
[20] Wang Z, Zhu X, Adeli E, Zhu Y, Nie F, Munsell B et al., “Multi-modal classification of neurodegenerative disease by progressive graph-based transductive learning,” Medical Image Analysis, vol. 39, pp. 218–230, Jul. 2017.
[21] Choi H, Ha S, Im HJ, Paek SH, and Lee DS, “Refining diagnosis of Parkinson’s disease with deep learning-based interpretation of dopamine transporter imaging,” NeuroImage: Clinical, vol. 16, pp. 586–594, Jan. 2017.
[22] Huang G-H, Lin C-H, Cai Y-R, Chen T-B, Hsu S-Y, Lu N-H et al., “Multiclass machine learning classification of functional brain images for Parkinson’s disease stage prediction,” Statistical Analysis and Data Mining: The ASA Data Science Journal, vol. 13, no. 5, pp. 508–523, 2020.
[23] Oliveira FPM, Faria DB, Costa DC, Castelo-Branco M, and Tavares JMRS, “Extraction, selection and comparison of features for an effective automated computer-aided diagnosis of Parkinson’s disease based on [123I]FP-CIT SPECT images,” European Journal of Nuclear Medicine and Molecular Imaging, vol. 45, no. 6, pp. 1052–1062, Jun. 2018.
[24] Prashanth R, Roy SD, Mandal PK, and Ghosh S, “High-Accuracy Classification of Parkinson’s Disease Through Shape Analysis and Surface Fitting in 123I-Ioflupane SPECT Imaging,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 3, pp. 794–802, May 2017.
[25] Huang S-F, Wen Y-H, Chu C-H, and Hsu C-C, “A Shape Approximation for Medical Imaging Data,” Sensors, vol. 20, no. 20, p. 5879, Jan. 2020.
[26] Zhou Y, Tinaz S, and Tagare HD, “Robust Bayesian analysis of early-stage Parkinson’s disease progression using DaTscan images,” IEEE Transactions on Medical Imaging, vol. 40, no. 2, pp. 549–561, 2021.
[27] Otsu N, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62–66, 1979.
[28] Simonyan K, Vedaldi A, and Zisserman A, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps,” arXiv:1312.6034 [cs], Apr. 2014.
[29] Wright SJ, Nowak RD, and Figueiredo MAT, “Sparse Reconstruction by Separable Approximation,” IEEE Transactions on Signal Processing, vol. 57, no. 7, pp. 2479–2493, Jul. 2009.
[30] Gaig C and Tolosa E, “When does Parkinson’s disease begin?” Movement Disorders, vol. 24, no. S2, pp. S656–S664, 2009.