1 Introduction

Historically, multi-atlas segmentation (MAS) has been regarded as the de facto standard for detailed whole brain segmentation (>100 anatomical regions) due to its high accuracy. Moreover, MAS demands only a small number of manually labeled examples (atlases) [1]. Recently, deep convolutional neural networks (DCNN) have been applied to whole brain segmentation. To address the challenge of training a network on a small number of manually traced brains, patch-based DCNN methods have been proposed. de Brébisson et al. [2] proposed to learn 2D and 3D patches as well as spatial information, which was extended to 2.5D patches by BrainSegNet [9]. Recently, DeepNAT [12] was proposed to perform hierarchical multi-task learning on 3D patches. Li et al. [7] introduced the 3D patch-based HC Net for high-resolution segmentation. From another perspective, Roy et al. [11] proposed to use a 2D fully convolutional network (FCN) to learn slice-wise image features, using auxiliary labels on initially unlabeled data. Although detailed cortical parcellation was not performed, Roy et al. revealed a promising direction for leveraging initially unlabeled data in training. With a large number of auxiliary labels, it is appealing to apply a 3D FCN (e.g., 3D U-Net [3]) to whole brain segmentation, since it typically yields higher spatial consistency than 2D or patch-based methods. However, directly applying a 3D FCN to whole brain segmentation (e.g., at 1 mm isotropic resolution) is restricted by current graphics processing unit (GPU) memory. A common solution is to downsample the inputs, yet this sacrifices accuracy.

In this paper, we propose the spatially localized atlas network tiles (SLANT) method for detailed whole brain segmentation (133 labels under BrainCOLOR protocol [5]) by combining canonical medical image processing techniques with deep learning. SLANT distributes a set of independent 3D networks (network tiles) to cover overlapped sub-spaces in a standard MNI atlas space.

Fig. 1. The proposed SLANT-27 (27 network tiles) method, which combines canonical medical image processing methods (registration, harmonization, label fusion) with 3D network tiles. A 3D U-Net is used as each tile, with its deconvolutional channel number modified to 133. The tiles are spatially overlapped in MNI space.

Then, majority vote label fusion was used to obtain the final whole brain segmentation from the overlapped sub-spaces. To boost learning performance on 133 labels with only 45 manually traced training scans, auxiliary labels on 5111 initially unlabeled scans were created by non-local spatial STAPLE (NLSS) MAS [1] for pre-training, inspired by [11].

2 Methods

Registration and Intensity Harmonization: Affine registration [10] was employed to register all training and testing scans to MNI 305 space (Fig. 1). Then, N4 bias field correction was deployed to reduce the bias field. To further harmonize the intensities on large-scale MRI, we introduced a regression-based intensity normalization method. First, we defined a gray-scale MRI volume (with N voxels) as a vector \(I \in \mathbb {R}^{N\times 1}\). I was demeaned and normalized by its standard deviation (std) to obtain \(I^{'}\). The intensities were then harmonized by a pre-trained linear regression model on sorted intensities. The sorted intensity vector \(V_s\) was calculated as \(V_s=\text {sort}(I^{'}(mask>0))\), where “sort” rearranges intensities from largest to smallest, and “mask” is a prior mask obtained as the union of all atlas masks. To train the linear regression, the mean sorted intensity vector \(\overline{V_s}\) was obtained by averaging \(V_s\) over all atlases. The coefficients \(\beta _1\) and \(\beta _0\) were fitted between \(V_{s}^{'}\) (from \(I^{'}\)) and \(\overline{V_s}\) via \(\overline{V_s} = \beta _{1} \cdot {} {V_{s}^{'}} + \beta _{0}\), and the intensity-normalized image was obtained as \(\widehat{I^{'}} = \beta _{1} \cdot {} {I^{'}} + \beta _{0}\).
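To make the harmonization step concrete, the following is a minimal NumPy sketch, assuming the registered volume and the prior brain mask are available as arrays and that `mean_sorted` holds the pre-trained \(\overline{V_s}\) averaged over the atlases; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def harmonize(volume, mask, mean_sorted):
    # Demean and scale to unit standard deviation (I -> I').
    v = (volume - volume.mean()) / volume.std()

    # Sorted intensity vector V_s' inside the brain mask, largest to
    # smallest. Because all volumes share the MNI grid and the same
    # prior mask, V_s' and mean_sorted have equal length.
    vs = np.sort(v[mask > 0])[::-1]

    # Fit mean_sorted = beta1 * V_s' + beta0 by least squares.
    beta1, beta0 = np.polyfit(vs, mean_sorted, deg=1)

    # Apply the fitted linear map to the whole normalized volume.
    return beta1 * v + beta0
```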

Network Tiles: After affine registration, all training brains were mapped to the same MNI atlas space (\(172\,\times \,220\,\times \,156\) voxels at 1 mm isotropic resolution). We employed k 3D U-Nets as network tiles to cover the entire MNI space, with or without overlap (Fig. 2). To be compatible with 133 labels, the number of channels in the deconvolutional layers of each 3D U-Net was set to 133 (Fig. 1). Each sub-space \(\psi _n\), \(n \in \{1,2,\ldots ,k\}\), was represented by one corner coordinate \((x_n,y_n,z_n)\) and the sub-space size \((d_x,d_y,d_z)\) as

$$\begin{aligned} \psi _n=[x_n:(x_n+d_x),y_n:(y_n+d_y),z_n:(z_n+d_z)] \end{aligned}$$
(1)

As shown in Fig. 2, SLANT-8 covered the MNI space with eight U-Nets over \(k=2\times 2\times 2=8\) non-overlapped sub-spaces. To improve spatial consistency at sub-space boundaries, SLANT-27 covered \(k=3\times 3\times 3=27\) overlapped sub-spaces.
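As an illustration, the 27 overlapped corner coordinates \((x_n,y_n,z_n)\) for SLANT-27 can be generated as in the sketch below. The MNI volume size and the \(96\times 128\times 88\) sub-space size follow the paper; the even spacing of the corners along each axis is an assumption made for illustration.

```python
import itertools
import numpy as np

MNI_SHAPE = (172, 220, 156)  # MNI 305 space, 1 mm isotropic
SUB_SHAPE = (96, 128, 88)    # sub-space size (d_x, d_y, d_z)
TILES_PER_AXIS = 3           # 3 x 3 x 3 = 27 network tiles

def tile_corners():
    # Spread corner coordinates evenly along each axis so the last
    # tile ends exactly at the volume boundary (adjacent tiles overlap).
    axes = [np.linspace(0, m - d, TILES_PER_AXIS).astype(int)
            for m, d in zip(MNI_SHAPE, SUB_SHAPE)]
    # The Cartesian product yields the 27 corners (x_n, y_n, z_n).
    return list(itertools.product(*axes))

for n, (x, y, z) in enumerate(tile_corners(), start=1):
    print(f"psi_{n} = [{x}:{x + 96}, {y}:{y + 128}, {z}:{z + 88}]")
```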

Fig. 2. SLANT-8 covered eight non-overlapped sub-spaces in MNI space, while SLANT-27 covered 27 overlapped sub-spaces. Middle coronal slices from all 27 sub-spaces are visualized (lower left panel). The number of overlays, as well as the sub-space overlays, are shown (upper right panel). Incorrect labels (red arrow) in one sub-space were corrected in the final segmentation by majority vote label fusion.

Label Fusion: For SLANT-27, whose sub-spaces overlap, label fusion was employed to obtain a single segmentation from the overlapped sub-spaces. Briefly, the k segmentations \(\{S_1,S_2,\ldots ,S_k\}\) from the network tiles were fused into the final segmentation \(S_{MNI}\) in MNI space by majority vote:

$$\begin{aligned} S_{MNI}(i)=\mathop {\arg \max }_{l \in \{0,1,\ldots ,L-1\}}\frac{1}{k} \sum _{m=1}^{k} p(l|S_m,i) \end{aligned}$$
(2)

where \(\{0,1,\ldots ,L-1\}\) represents the L possible labels for a given voxel \(i \in \{1,2,\ldots ,N\}\); \(p(l|S_m,i)=1\) if \(S_m (i)=l\), and \(p(l|S_m,i)=0\) otherwise. Then, \(S_{MNI}\) was registered back to the original image space by inverting the affine registration [10]. For SLANT-8, whose sub-spaces were not overlapped, naive concatenation was applied rather than label fusion.
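A minimal NumPy sketch of this majority vote is shown below, assuming each tile's prediction has been placed back onto the full MNI grid with \(-1\) marking voxels outside that tile's sub-space (this padding convention is an assumption for illustration).

```python
import numpy as np

def majority_vote(segs, num_labels=133):
    """Fuse k label volumes of identical shape by majority vote (Eq. 2)."""
    counts = np.zeros((num_labels,) + segs[0].shape, dtype=np.uint8)
    for s in segs:
        idx = np.nonzero(s >= 0)  # voxels covered by this tile
        # Each covered voxel casts one vote for its predicted label,
        # i.e., p(l | S_m, i) = 1 when S_m(i) = l.
        counts[(s[idx],) + idx] += 1
    # arg max over labels l; ties resolve to the smallest label index.
    return counts.argmax(axis=0)
```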

Fig. 3. Major components of the different segmentation methods. “(45)” indicates that the 45 manually traced OASIS images were used in training, while “(5111)” indicates that the 5111 auxiliary labeled images were used in training.

Boost Learning on Unlabeled Data: Similar to [11], auxiliary labels on large-scale, initially unlabeled MRI scans were obtained by applying existing segmentation tools. Briefly, MAS using hierarchical non-local spatial STAPLE (NLSS) label fusion [1] was performed on 5111 multi-site scans. The resulting large-scale auxiliary labels were used for pre-training; the small-scale manually labeled training data were then used to fine-tune the network.

3 Experiments

Training Cohort: 45 T1-weighted (T1w) MRI scans from the Open Access Series of Imaging Studies (OASIS) dataset [8] were manually labeled with 133 labels according to the BrainCOLOR protocol [5]. 5111 multi-site T1w MRI scans for auxiliary labels were acquired from nine different projects (described in [5]).

Testing Cohort 1: Five withheld T1w MRI scans from the OASIS dataset with manual segmentation (BrainCOLOR protocol) were used for validation, evaluating the performance of the different methods on same-site testing data.

Testing Cohort 2: One T1w MRI scan from the Colin27 cohort [4] with manual segmentation (BrainCOLOR protocol) was used for testing. This cohort evaluates the performance of the different methods on a widely used standard template.

Testing Cohort 3: 13 T1w MRI scans from the Child and Adolescent NeuroDevelopment Initiative (CANDI) [6] were used for testing. This cohort evaluates the performance of the different methods on an independent population, whose age range (5–15 yrs.) was not covered by the OASIS training cohort (18–96 yrs.).

Experimental Design: The experimental design is presented in Fig. 3. First, two state-of-the-art multi-atlas label fusion methods, joint label fusion (JLF) [13] and non-local spatial STAPLE (NLSS) [1], were used as baseline methods. Their parameters were set as in the original publications, in which they were optimized for whole brain segmentation. Next, the patch-based network [2] and the naive 3D U-Net [3] were evaluated using their open-source implementations. Using affine registration as preprocessing, Reg. + U-Net was trained on the 45 manually labeled scans and on the 5111 auxiliary labeled scans. Then, the proposed SLANT method was evaluated covering eight non-overlapped sub-spaces (SLANT-8) and 27 overlapped sub-spaces (SLANT-27), trained on the 5111 auxiliary labeled scans. Last, the 45 manually labeled scans were used to fine-tune the SLANT networks.

For all 3D U-Nets in the baseline methods and SLANT networks, we used the same hyperparameters: 3D batch size = 1, input resolution = \(96\times 128\times 88\), input channels = 1, output channels = 133, optimizer = Adam, learning rate = 0.0001. The networks fit into an NVIDIA Titan GPU with 12 GB memory. For all training using the 5111 scans, 6 epochs were trained (\(\approx \)24 training hours); for all training using the 45 scans, 1000 epochs were trained to match the number of training batches of the 5111-scan runs. For fine-tuning with the 45 scans, 30 epochs were trained.
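The following is a minimal PyTorch sketch of this training configuration and the pre-train/fine-tune schedule; `UNet3D`, `aux_loader` (5111 auxiliary labeled scans), and `manual_loader` (45 manually labeled scans) are assumed to exist, and all names are illustrative rather than the authors' released code.

```python
import torch

def train(model, loader, epochs, lr=1e-4, device="cuda"):
    # Adam with learning rate 0.0001, as in the paper.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for img, lbl in loader:  # img: (1, 1, 96, 128, 88), batch size 1
            opt.zero_grad()
            loss = loss_fn(model(img.to(device)), lbl.to(device))
            loss.backward()
            opt.step()

# One network tile: 1 input channel, 133 output channels.
model = UNet3D(in_channels=1, out_channels=133)
train(model, aux_loader, epochs=6)      # pre-training on auxiliary labels
train(model, manual_loader, epochs=30)  # fine-tuning on manual labels
```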

Results reported in this paper were obtained from the epoch with the best overall performance on the OASIS cohort for each method, so that Colin27 and CANDI served as independent external validation cohorts.

Fig. 4. Qualitative results of manual segmentation, MAS methods, the patch-based DCNN method, U-Net approaches, and the proposed SLANT methods.

Fig. 5. Quantitative results. The proposed SLANT-27, pre-trained on 5111 auxiliary labels and fine-tuned (FT) with 45 manual labels, achieved the highest median Dice similarity coefficient (DSC) values. SLANT-27 was used as the reference method (REF) in the statistical analysis; significant differences from REF are marked with *.

4 Results

The qualitative and quantitative results are shown in Figs. 4 and 5. In Fig. 5, “45” indicates that the 45 manually traced OASIS images were used in training, while “5111” indicates that the 5111 auxiliary labeled images were used in training. The mean Dice similarity coefficient (DSC) values over 132 anatomical labels (excluding background), comparing each automatic segmentation method with manual segmentation in the original image space, are shown as boxplots in Fig. 5. From the results, affine registration (Reg. + U-Net) significantly improved U-Net performance compared with the naive U-Net. For the same network (Reg. + U-Net), training on the 5111 auxiliary labeled scans achieved better performance than training on the 45 manually labeled scans. From Table 1, the proposed SLANT-27 method with fine-tuning achieved superior mean DSC across the testing cohorts. All claims of statistical significance in this paper were assessed using the Wilcoxon signed rank test (p < 0.05).
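For reference, the evaluation metric can be summarized by a minimal sketch of the mean DSC over the 132 anatomical labels (excluding background), assuming integer label volumes in the original image space; the function is illustrative, not the evaluation code used in the paper.

```python
import numpy as np

def mean_dsc(auto, manual, num_labels=133):
    """Mean Dice over anatomical labels 1..132 (label 0 = background)."""
    dscs = []
    for l in range(1, num_labels):
        a, m = (auto == l), (manual == l)
        denom = a.sum() + m.sum()
        if denom > 0:  # skip labels absent from both segmentations
            dscs.append(2.0 * np.logical_and(a, m).sum() / denom)
    return float(np.mean(dscs))
```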

Table 1. Mean, std and median DSC values on three validation cohorts

5 Conclusion and Discussion

In this study, we developed the SLANT method to combine canonical medical image processing approaches with spatially localized 3D FCNs in MNI space. For the same network (Reg. + U-Net), results from the 5111 auxiliary labeled scans achieved better performance than results from the 45 manually labeled scans. From Figs. 4 and 5 and Table 1, we demonstrate that the proposed strategy successfully takes advantage of historical efforts (registration, harmonization, and label fusion) and consistently yields superior performance. Moreover, the proposed method requires \(\approx \)15 min, compared with >30 h for MAS. Note that the 3D U-Net in the proposed SLANT could be replaced by other 3D segmentation networks, which might yield better performance.