Abstract
Whole brain segmentation on structural magnetic resonance imaging (MRI) is essential for non-invasive investigation of neuroanatomy. Historically, multi-atlas segmentation (MAS) has been regarded as the de facto standard method for whole brain segmentation. Recently, deep neural network approaches have been applied to whole brain segmentation by learning random patches or 2D slices. Yet, few previous efforts have addressed detailed whole brain segmentation using 3D networks due to the following challenges: (1) fitting an entire whole brain volume into a 3D network is restricted by current GPU memory, and (2) the large number of target labels (e.g., >100 labels) must be learned from a limited number of training 3D volumes (e.g., <50 scans). In this paper, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks to cover overlapped sub-spaces in a standard atlas space. This strategy simplifies the whole brain learning task into localized sub-tasks, enabled by combining canonical registration and label fusion techniques with deep learning. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by MAS for pre-training. In empirical validation, the state-of-the-art MAS method achieved mean Dice values of 0.76, 0.71, and 0.68, while the proposed method achieved 0.78, 0.73, and 0.71 on three validation cohorts. Moreover, the computational time was reduced from >30 h using MAS to \(\approx \)15 min using the proposed method. The source code is available online (https://github.com/MASILab/SLANTbrainSeg).
1 Introduction
Historically, multi-atlas segmentation (MAS) has been regarded as the de facto standard method for detailed whole brain segmentation (>100 anatomical regions) due to its high accuracy. Moreover, MAS demands only a small number of manually labeled examples (atlases) [1]. Recently, deep convolutional neural networks (DCNN) have been applied to whole brain segmentation. To address the challenge of training a network on a small number of manually traced brains, patch-based DCNN methods have been proposed. de Brébisson et al. [2] proposed to learn 2D and 3D patches as well as spatial information, which was extended to 2.5D patches by BrainSegNet [9]. Recently, DeepNAT [12] was proposed to perform hierarchical multi-task learning on 3D patches. Li et al. [7] introduced the 3D patch-based HC Net for high-resolution segmentation. From another perspective, Roy et al. [11] proposed to use a 2D fully convolutional network (FCN) to learn slice-wise image features using auxiliary labels on initially unlabeled data. Although detailed cortical parcellations were not performed, Roy et al. revealed a promising direction for using initially unlabeled data to leverage training. With a large number of auxiliary labels, it is appealing to apply a 3D FCN (e.g., 3D U-Net [3]) to whole brain segmentation, since it typically yields higher spatial consistency than 2D or patch-based methods. However, directly applying a 3D FCN to whole brain segmentation (e.g., at 1 mm isotropic resolution) is restricted by current graphics processing unit (GPU) memory. A common solution is to downsample the inputs, yet accuracy can be sacrificed.
In this paper, we propose the spatially localized atlas network tiles (SLANT) method for detailed whole brain segmentation (133 labels under BrainCOLOR protocol [5]) by combining canonical medical image processing techniques with deep learning. SLANT distributes a set of independent 3D networks (network tiles) to cover overlapped sub-spaces in a standard MNI atlas space.
Then, majority vote label fusion was used to obtain the final whole brain segmentation from the overlapped sub-spaces. To boost learning performance on 133 labels with only 45 manually traced training scans, auxiliary labels on 5111 initially unlabeled scans were created by non-local spatial STAPLE (NLSS) MAS [1] for pre-training, inspired by [11].
2 Methods
Registration and Intensity Harmonization: Affine registration [10] was employed to register all training and testing scans to the MNI 305 space (Fig. 1). Then, N4 bias field correction was deployed to reduce bias. To further harmonize the intensities on large-scale MRI, we introduced a regression-based intensity normalization method. First, we defined a gray-scale MRI volume (with N voxels) as a vector \(I \in \mathbb {R}^{N\times 1}\). I was demeaned and normalized by its standard deviation (std) to \(I^{'}\). The intensities were then harmonized by a pre-trained linear regression model on sorted intensities. The sorted intensity vector \(V_s\) was calculated as \(V_s=\text {sort}(I^{'}(mask>0))\), where "sort" rearranges intensities from largest to smallest, and "mask" is a prior brain mask obtained as the union of all atlases. To train the linear regression, the mean sorted intensity vector \(\overline{V_s}\) was obtained by averaging \(V_s\) over all atlases. The coefficients \(\beta _1\) and \(\beta _0\) were fitted between \(V_{s}^{'}\) (from \(I^{'}\)) and \(\overline{V_s}\) via \(\overline{V_s} = \beta _{1} \cdot {} {V_{s}^{'}} + \beta _{0}\), and the intensity normalized image is obtained as \(\widehat{I^{'}} = \beta _{1} \cdot {} {I^{'}} + \beta _{0}\).
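To make the normalization concrete, the following NumPy sketch implements the fitting and mapping described above. The resampling of the sorted vector to the reference length is our assumption; the text does not specify how scans with differing voxel counts are matched to \(\overline{V_s}\).

```python
import numpy as np

def harmonize_intensity(img, mask, mean_sorted_ref):
    """Regression-based intensity normalization (a minimal sketch of the
    method above; `mean_sorted_ref` is the pre-computed mean sorted
    intensity vector averaged over the atlases)."""
    # Demean and scale by standard deviation: I -> I'
    i_prime = (img - img.mean()) / img.std()
    # Sorted intensities inside the prior brain mask, largest to smallest
    v_s = np.sort(i_prime[mask > 0])[::-1]
    # Resample to the reference length (our assumption, see lead-in)
    v_s = np.interp(np.linspace(0, 1, len(mean_sorted_ref)),
                    np.linspace(0, 1, len(v_s)), v_s)
    # Fit mean_sorted_ref = beta1 * v_s + beta0 by least squares
    beta1, beta0 = np.polyfit(v_s, mean_sorted_ref, deg=1)
    # Apply the fitted linear map to the whole volume
    return beta1 * i_prime + beta0
```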
Network Tiles: After affine registration, all training brains were mapped to the same MNI atlas space (\(172\,\times \,220\,\times \,156\) voxels with 1 mm isotropic resolution). We employed k 3D U-Nets as network tiles to cover the entire MNI space with or without overlaps (Fig. 2). To be compatible with 133 labels, the number of channels of the deconvolutional layers in each 3D U-Net was set to 133 (Fig. 1). Each sub-space \(\psi _n\) was represented by one corner coordinate \((x_n,y_n,z_n)\) and the sub-space size \((d_x,d_y,d_z)\), \(n \in \{1,2,\ldots ,k\}\), as
\(\psi _n = \{(x,y,z) \mid x_n \le x < x_n+d_x,\; y_n \le y < y_n+d_y,\; z_n \le z < z_n+d_z\}.\)
As shown in Fig. 2, SLANT-8 covered the MNI space with eight U-Nets on \(k=2\times 2\times 2=8\) non-overlapped sub-spaces. To improve spatial consistency at sub-space boundaries, SLANT-27 covered \(k=3\times 3\times 3=27\) overlapped sub-spaces.
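For intuition, the corner coordinates of the 27 overlapped sub-spaces follow from the MNI extent (\(172\times 220\times 156\)) and the network input size (\(96\times 128\times 88\), Sect. 3). The sketch below assumes the corners are evenly spaced along each axis so the tiles jointly cover the full extent, giving strides of 38, 46, and 34 voxels; the paper does not state the spacing explicitly.

```python
import itertools
import numpy as np

# MNI atlas extent and network input size from the paper
EXTENT = (172, 220, 156)   # voxels at 1 mm isotropic resolution
TILE = (96, 128, 88)       # input size of each 3D U-Net

def tile_corners(tiles_per_axis=3):
    """Corner coordinates (x_n, y_n, z_n) of the SLANT-27 sub-spaces,
    assuming evenly spaced corners along each axis (see lead-in)."""
    corners_1d = [np.linspace(0, e - t, tiles_per_axis).astype(int)
                  for e, t in zip(EXTENT, TILE)]
    return list(itertools.product(*corners_1d))

corners = tile_corners()
print(len(corners))              # 27
print(corners[0], corners[-1])   # (0, 0, 0) ... (76, 92, 68)
```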
Label Fusion: For SLANT-27, whose sub-spaces overlap, label fusion was employed to obtain a single segmentation from the overlapped sub-spaces. Briefly, the k segmentations \(\{S_1,S_2,\ldots ,S_k\}\) from the network tiles were fused to the final segmentation \(S_{MNI}\) in MNI space by majority vote:
\(S_{MNI}(i) = \mathop {\text {argmax}}\limits _{l \in \{0,1,\ldots ,L-1\}} \sum _{m=1}^{k} p(l|S_m,i),\)
where \(\{0,1,\ldots ,L-1\}\) represents the L possible labels for a given voxel \(i \in \{1,2,\ldots ,N\}\); \(p(l|S_m,i)=1\) if \(S_m (i)=l\), and \(p(l|S_m,i)=0\) otherwise. Then, \(S_{MNI}\) was registered back to the original space by conducting another affine registration [10]. For SLANT-8, whose sub-spaces do not overlap, naive concatenation was applied rather than label fusion.
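As a concrete illustration, the following minimal NumPy sketch fuses hard per-tile label maps by accumulating one vote per tile and taking the argmax over labels, matching the equation above; variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def majority_vote(tile_segs, tile_corners,
                  extent=(172, 220, 156), n_labels=133):
    """Fuse k tile segmentations S_m into S_MNI by majority vote.
    `tile_segs` are integer label maps cropped to their sub-spaces,
    `tile_corners` their (x_n, y_n, z_n) corner coordinates."""
    votes = np.zeros(extent + (n_labels,), dtype=np.uint8)
    for seg, (x, y, z) in zip(tile_segs, tile_corners):
        # p(l | S_m, i) = 1 where tile m assigns label l to voxel i
        idx = np.indices(seg.shape)
        votes[x + idx[0], y + idx[1], z + idx[2], seg] += 1
    # argmax over labels realizes the majority vote
    return votes.argmax(axis=-1)
```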
Boost Learning on Unlabeled Data: Similar to [11], auxiliary labels on large-scale, initially unlabeled MRI scans were obtained using an existing segmentation tool. Briefly, MAS with hierarchical non-local spatial STAPLE (NLSS) label fusion [1] was performed on 5111 multi-site scans. Next, the large-scale auxiliary labels were used for pre-training. Then, the small-scale manually labeled training data were used to fine-tune the network.
3 Experiments
Training Cohort: 45 T1-weighted (T1w) MRI scans from the Open Access Series of Imaging Studies (OASIS) dataset [8] were manually labeled with 133 labels according to the BrainCOLOR protocol [5]. 5111 multi-site T1w MRI scans for auxiliary labels were acquired from nine different projects (described in [5]).

Testing Cohort 1: Five withheld T1w MRI scans from the OASIS dataset with manual segmentation (BrainCOLOR protocol) were used for validation. This cohort evaluates the performance of different methods on same-site testing data.

Testing Cohort 2: One T1w MRI scan from the colin27 cohort [4] with manual segmentation (BrainCOLOR protocol) was used for testing. This cohort evaluates the performance of different methods on a widely used standard template.

Testing Cohort 3: 13 T1w MRI scans from the Child and Adolescent NeuroDevelopment Initiative (CANDI) [6] were used for testing. This cohort evaluates the performance of different methods on an independent population, whose age range (5–15 yrs.) was not covered by the OASIS training cohort (18–96 yrs.).
Experimental Design: The experimental design is presented in Fig. 3. First, two state-of-the-art multi-atlas label fusion methods, joint label fusion (JLF) [13] and non-local spatial STAPLE (NLSS) [1], were used as baseline methods. Their parameters were set as in the original papers, which were optimized for whole brain segmentation. Next, the patch-based network [2] and the naive 3D U-Net [3] were evaluated using their open-source implementations. Using affine registration as preprocessing, Reg. + U-Net was trained on the 45 manually labeled scans and on the 5111 auxiliary labeled scans. Then, the proposed SLANT method was evaluated with eight non-overlapped sub-spaces (SLANT-8) and with 27 overlapped sub-spaces (SLANT-27), trained on the 5111 auxiliary labeled scans. Last, the 45 manually labeled scans were used to fine-tune the SLANT networks.
For all 3D U-Nets in the baseline methods and SLANT networks, we used the same parameters: 3D batch size = 1, input resolution = \(96\times 128\times 88\), input channels = 1, output channels = 133, optimizer = Adam, and learning rate = 0.0001. Each network fits within an NVIDIA Titan GPU with 12 GB memory. For all training using the 5111 scans, 6 epochs were trained (\(\approx \)24 h of training); for all training using the 45 scans, 1000 epochs were trained to ensure a similar number of training batches as with the 5111 scans. For fine-tuning using the 45 scans, 30 epochs were trained.
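The two-stage schedule (pre-training on auxiliary labels, then fine-tuning on manual labels) could be sketched as follows in PyTorch under the stated settings. The model class and the data loaders (`aux_loader` over the 5111 auxiliary-labeled scans, `manual_loader` over the 45 manual scans) are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Simple3DNet(nn.Module):
    """Placeholder standing in for the 3D U-Net with 133 output channels."""
    def __init__(self, n_labels=133):
        super().__init__()
        self.conv = nn.Conv3d(1, n_labels, kernel_size=3, padding=1)
    def forward(self, x):
        return self.conv(x)

model = Simple3DNet().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def run(loader, epochs):
    model.train()
    for _ in range(epochs):
        for image, label in loader:   # batch size 1, 96 x 128 x 88 volumes
            optimizer.zero_grad()
            loss = criterion(model(image.cuda()), label.cuda())
            loss.backward()
            optimizer.step()

run(aux_loader, epochs=6)        # pre-train on the 5111 auxiliary labels
run(manual_loader, epochs=30)    # fine-tune on the 45 manual labels
```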
Results reported in this paper were taken from the epoch with the best overall performance on the OASIS cohort for each method, so that colin27 and CANDI serve as independent testing cohorts for external validation.
4 Results
The qualitative and quantitative results are shown in Figs. 4 and 5. In Fig. 5, "45" indicates that the 45 OASIS manually traced images were used in training, while "5111" indicates that the 5111 auxiliary label images were used in training. The mean Dice similarity coefficient (DSC) values over 132 anatomical labels (excluding background) between automatic and manual segmentation in the original image space are shown as boxplots in Fig. 5. From the results, affine registration (Reg. + U-Net) significantly improved U-Net performance (compared with the naive U-Net). For the same network (Reg. + U-Net), training with the 5111 auxiliary labeled scans achieved better performance than training with the 45 manually labeled scans. From Table 1, the proposed SLANT-27 method with fine-tuning achieved superior mean DSC across the testing cohorts. All claims of statistical significance in this paper were assessed using the Wilcoxon signed rank test at p < 0.05.
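For reference, a minimal sketch of this evaluation: mean DSC over the 132 non-background labels per scan, followed by a paired Wilcoxon signed rank test between two methods. The per-scan arrays `dsc_method_a` and `dsc_method_b` are hypothetical names for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

def mean_dsc(auto_seg, manual_seg, n_labels=133):
    """Mean Dice similarity coefficient over non-background labels."""
    scores = []
    for l in range(1, n_labels):          # label 0 = background, excluded
        a, m = auto_seg == l, manual_seg == l
        if a.sum() + m.sum() > 0:
            scores.append(2.0 * np.logical_and(a, m).sum()
                          / (a.sum() + m.sum()))
    return np.mean(scores)

# Paired significance test on per-scan mean DSC values of two methods
stat, p = wilcoxon(dsc_method_a, dsc_method_b)
print("significant at p < 0.05:", p < 0.05)
```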
5 Conclusion and Discussion
In this study, we developed the SLANT method to combine canonical medical image processing approaches with localized 3D FCNs in MNI space. For the same network (Reg. + U-Net), results from the 5111 auxiliary labeled scans achieved better performance than results from the 45 manually labeled scans. From Figs. 4 and 5 and Table 1, we demonstrate that the proposed strategy successfully takes advantage of historical efforts (registration, harmonization, and label fusion) and consistently yields superior performance. Moreover, the proposed method requires \(\approx \)15 min, compared with >30 h for MAS. Note that the 3D U-Net in the proposed SLANT can be replaced by other 3D segmentation networks, which might yield better performance.
References
Asman, A.J., Landman, B.A.: Hierarchical performance estimation in the statistical label fusion framework. Med. Image Anal. 18(7), 1070–1081 (2014)
de Brébisson, A., Montana, G.: Deep neural networks for anatomical brain segmentation. arXiv preprint arXiv:1502.02445 (2015)
Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T., Ronneberger, O.: 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9901, pp. 424–432. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46723-8_49
Collins, D.L., et al.: Design and construction of a realistic digital brain phantom. IEEE Trans. Med. Imaging 17(3), 463–468 (1998)
Huo, Y., Aboud, K., Kang, H., Cutting, L.E., Landman, B.A.: Mapping lifetime brain volumetry with covariate-adjusted restricted cubic spline regression from cross-sectional multi-site MRI. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9900, pp. 81–88. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46720-7_10
Kennedy, D.N., Haselgrove, C., Hodge, S.M., Rane, P.S., Makris, N., Frazier, J.A.: CANDIShare: a resource for pediatric neuroimaging data. Neuroinformatics 10, 319–322 (2012)
Li, W., Wang, G., Fidon, L., Ourselin, S., Cardoso, M.J., Vercauteren, T.: On the compactness, efficiency, and representation of 3D convolutional networks: brain parcellation as a pretext task. In: Niethammer, M., Styner, M., Aylward, S., Zhu, H., Oguz, I., Yap, P.-T., Shen, D. (eds.) IPMI 2017. LNCS, vol. 10265, pp. 348–360. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59050-9_28
Marcus, D.S., Wang, T.H., Parker, J., Csernansky, J.G., Morris, J.C., Buckner, R.L.: Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. J. Cogn. Neurosci. 19(9), 1498–1507 (2007)
Mehta, R., Majumdar, A., Sivaswamy, J.: BrainSegNet: a convolutional neural network architecture for automated segmentation of human brain structures. J. Med. Imaging 4(2), 024003 (2017)
Ourselin, S., Roche, A., Subsol, G., Pennec, X., Ayache, N.: Reconstructing a 3D structure from serial histological sections. Image Vis. Comput. 19(1–2), 25–31 (2001)
Roy, A.G., Conjeti, S., Sheet, D., Katouzian, A., Navab, N., Wachinger, C.: Error corrective boosting for learning fully convolutional networks with limited data. In: Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D.L., Duchesne, S. (eds.) MICCAI 2017. LNCS, vol. 10435, pp. 231–239. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66179-7_27
Wachinger, C., Reuter, M., Klein, T.: DeepNAT: deep convolutional neural network for segmenting neuroanatomy. NeuroImage 170, 434–445 (2017)
Wang, H., Yushkevich, P.: Multi-atlas segmentation with joint label fusion and corrective learning: an open source implementation. Front. Neuroinformatics 7, 27 (2013)
Acknowledgments
This research was supported by NSF CAREER 1452485; NIH grants R01EB017230, R21EY024036, R21NS064534, R01EB006136, R03EB012461, and R01NS095291; and the Intramural Research Program of the National Institute on Aging, NIH.