Brain tumor image segmentation based on improved FPN

Abstract

Purpose

Automatic segmentation of brain tumors with deep learning algorithms is one of the research hotspots in medical image segmentation. An improved FPN network for brain tumor segmentation is proposed to improve the quality of brain tumor segmentation.

Materials and methods

To address the weak detail-processing ability of the traditional fully convolutional network (FCN), which causes loss of detail in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN) convolutional neural network. To improve the segmentation of brain tumors, we modified the model by introducing the FPN structure into the U-Net architecture: the different-scale information in the U-Net model and the multi-receptive-field high-level features of the FPN are used together to capture multi-scale context, improving the model's adaptability to features at different scales.

Results

Performance evaluation shows that the proposed improved FPN model achieves 99.1% accuracy, a 92% Dice coefficient, and an 86% Jaccard index, outperforming the other segmentation models on every metric. In addition, the qualitative segmentation results show that the output of our algorithm is closer to the ground truth and preserves more brain tumor detail, whereas the results of the other algorithms are smoother.

Conclusions

The experimental results show that this method can effectively segment brain tumor regions, generalizes reasonably well, and segments better than the other networks. It has positive significance for the clinical diagnosis of brain tumors.


Introduction

Glioma is the most common primary brain tumor. In adults, gliomas account for 30% to 40% of all brain tumors and 80% of malignant brain tumors [1]. Surgical resection is currently the most effective treatment, followed by radiotherapy to treat residual tumor and cells around the tumor focus so that the glioma can be well controlled [2]. Automatic segmentation of brain tumors helps measure tumor characteristics and supports doctors in diagnosis, treatment planning, and survival prediction in clinical applications [3].

Magnetic resonance imaging (MRI) is a widely used technology in radiology. Because it is non-invasive, involves no ionizing radiation, and offers high soft-tissue contrast, it has become the preferred imaging method for brain tumor diagnosis and treatment [4]. At present, the gold standard for brain tumor segmentation is still manual segmentation, but it is expensive, time-consuming, and subjective [5]. A fast and accurate automatic segmentation method for brain tumor MRI is therefore of great significance for clinical application.

At present, there are two main approaches to automatic segmentation of brain tumor images. (1) Machine learning based on hand-crafted features. This approach applies different classifiers to various hand-crafted features, such as support vector machines with spatial and intensity features and Gaussian mixture models with intensity features. However, these algorithms require manual feature extraction, which is costly and error-prone, and models based on hand-crafted features do not generalize well. (2) End-to-end deep learning. This approach achieves more accurate segmentation without designing complex hand-crafted features. For example, the fully convolutional network (FCN) extends image-level classification to pixel-level classification by replacing the fully connected layers with convolutional layers, achieving better segmentation and pioneering deep learning for semantic segmentation [6,7,8,9]. Its disadvantages are that it is insensitive to detail features, does not fully consider the relationships between pixels, and produces segmentations that are not fine enough.

The U-Net network model is an improvement and extension of the FCN. It uses concatenation to fuse deep and shallow features, replacing the summation used in the FCN; this alleviates the loss of deep features during upsampling and effectively improves segmentation accuracy [10]. V-Net is a segmentation method for 3D medical images; its encoder uses residual connections so that the network can be deepened while reducing the risk of vanishing or exploding gradients. Daimary et al. proposed hybrid deep neural network models, SegUNet and U-SegNet, for brain tumor MRI segmentation by combining SegNet and U-Net [11]. Zhang et al. discussed the application of attention gates to MRI brain tumor segmentation and proposed a new model, AGResU-Net, to enhance feature learning and extract brain tumor information [12]. Chen et al. proposed a new network, VoxResNet, which seamlessly integrates multimodal and multi-level context information to exploit the complementary information of different modalities and features at different scales [13]. Weng et al. [14] reviewed generative adversarial network based fast MRI segmentation methods and showed that such models have good generalizability and robustness. The U-Net of Ronneberger et al. [15] has also been combined with gated recurrent units (GRU) at a higher feature level to propagate features from one slice to the next. The U-Net++ network uses a series of grid-like dense skip paths to reduce the semantic gap between the encoder and decoder sub-paths, so that the scale difference between feature maps during fusion is smaller, the gradient flow is improved, and segmentation accuracy increases. However, neither U-Net nor U-Net++ can extract feature information from different receptive fields, and their multi-scale processing capacity is limited.

In view of the above shortcomings, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN) convolutional neural network, which can be applied to tumors of different sizes to achieve fine segmentation of brain tumor edge regions. The main contribution of this paper is to introduce the feature pyramid (FPN) in the downsampling phase of the U-Net network to obtain multi-scale context information with multiple receptive fields and improve the network's multi-scale processing capability; this enriches the information available to each pixel during classification and alleviates the poor edge segmentation of the above models [16].

Materials and methods

U-Net model

U-Net was proposed by Ronneberger et al. in 2015 as an improved model based on the FCN. Initially it was mainly used for biomedical cell segmentation. It adopts a U-shaped symmetric structure of an encoder (contraction path) and a decoder (expansion path). As shown in Fig. 1, four downsampling and four upsampling stages are used. Each upsampling output is fused with the encoder feature map of the same size: skip connections transmit the feature information lost during the encoder's pooling to the corresponding decoder stage, enhancing the network's feature-extraction capability. The network is suitable for multi-scale image segmentation and has the advantages of accurate segmentation results and fast segmentation even when the model is trained on a small amount of data.

Fig. 1 Network structure of U-Net model
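As an illustration of the layout just described, the following minimal Keras sketch builds the contraction path, bottleneck, and expansion path with concatenating skip connections. The filter counts and the 256 × 256 × 3 input shape are our own assumptions for this dataset, not necessarily the authors' exact configuration.

```python
# Minimal U-Net sketch: four downsampling stages, four upsampling stages,
# and skip connections that concatenate encoder features with decoder
# features of the same spatial size.
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, the standard U-Net stage.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for f in (64, 128, 256, 512):                    # contraction path
        x = conv_block(x, f)
        skips.append(x)                              # saved for skip connections
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 1024)                          # bottleneck
    for f, skip in zip((512, 256, 128, 64), reversed(skips)):   # expansion path
        x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])          # fuse encoder and decoder features
        x = conv_block(x, f)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)      # binary tumor mask
    return Model(inputs, outputs)
```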

Due to the large variation in tumor location and size across images, the network needs a sufficient receptive field and strong spatial multi-scale processing capability. However, unlike Atrous Spatial Pyramid Pooling (ASPP), U-Net cannot capture semantic information at many different scales, nor can it accurately segment the regions near target edges in the image [17]. Therefore, this paper adds the FPN structure to U-Net to improve U-Net's ability to integrate multi-scale semantic information and to enrich the information contained in the features used for pixel-label classification.

FPN model

FPN stands for feature pyramid network, a structure widely used in object detection. The FPN structure can effectively integrate multi-scale semantic information from the encoder. FPN consists of three main parts: a bottom-up pathway, a top-down pathway, and lateral connections, as shown in Fig. 2.

Fig. 2 Network structure of FPN model

In the bottom-up pathway, the spatial resolution decreases while more high-level structural features are extracted as one moves up. At the top of the bottom-up pathway, a 1 × 1 convolution reduces the channel depth; two 3 × 3 convolutions are then applied, giving the first feature map for segmentation. In the top-down pathway, the previous layer is upsampled using nearest-neighbor upsampling; a 1 × 1 convolution is applied to the corresponding feature map in the bottom-up pathway, which is then added element-wise. Two 3 × 3 convolutions are applied to output the final feature map of each level for image segmentation. In the end, all feature maps, each with 128 channels, are concatenated into 512 channels. A 3 × 3 convolution with 512 filters is applied with batch normalization and ReLU activation, followed by a 1 × 1 convolution to obtain the final feature map.
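To make the pathways above concrete, here is a hedged Keras sketch of such an FPN head. It assumes four bottom-up feature maps c2–c5, each at half the resolution of the previous one (our naming); the 128- and 512-channel counts follow the text, while everything else is illustrative.

```python
# FPN head sketch: lateral 1x1 convolutions, nearest-neighbor top-down
# upsampling with element-wise addition, per-level 3x3 convolutions, and a
# final concatenation of all 128-channel maps into 512 channels.
from tensorflow.keras import layers

def fpn_head(c2, c3, c4, c5, num_classes=1):
    laterals = [layers.Conv2D(128, 1, padding="same")(c) for c in (c2, c3, c4, c5)]
    p5 = laterals[3]                                  # top of the pyramid
    p4 = layers.Add()([layers.UpSampling2D(2, interpolation="nearest")(p5), laterals[2]])
    p3 = layers.Add()([layers.UpSampling2D(2, interpolation="nearest")(p4), laterals[1]])
    p2 = layers.Add()([layers.UpSampling2D(2, interpolation="nearest")(p3), laterals[0]])
    outs = []
    for i, p in enumerate((p2, p3, p4, p5)):
        p = layers.Conv2D(128, 3, padding="same", activation="relu")(p)
        p = layers.Conv2D(128, 3, padding="same", activation="relu")(p)
        if i > 0:                                     # bring every level to p2 resolution
            p = layers.UpSampling2D(2 ** i, interpolation="nearest")(p)
        outs.append(p)
    x = layers.Concatenate()(outs)                    # 4 x 128 = 512 channels
    x = layers.Conv2D(512, 3, padding="same")(x)      # 3x3 convolution, 512 filters
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    return layers.Conv2D(num_classes, 1, activation="sigmoid")(x)  # final 1x1 convolution
```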

Improved FPN model

To improve the segmentation of brain tumors, this paper improves the U-Net model by introducing the FPN structure into the U-Net architecture, making full use of the multi-scale information of the U-Net encoder [18]. The deconvolution-based upsampling used in U-Net can obtain relatively smooth structural features, but the original features are not well preserved after convolution and nonlinear activation [19]. The FPN upsampling method uses bilinear interpolation, which preserves the original features more completely than the deconvolution operation. Fusing these additional features ensures that each level passed to the decoder contains as much multi-scale information as possible. Comparing the FPN and U-Net structures shows that they are similar: the lateral connections of the FPN can be realized through the skip connections of U-Net, which makes it straightforward to extend the U-Net model with the FPN structure. While making full use of the U-Net structure, the information at different scales contained in the U-Net model is exploited extensively. The network structure of the improved FPN model is shown in Fig. 3.

Fig. 3 Network structure of improved FPN model
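The following sketch shows one way this combination can be realized in Keras: the U-Net contraction path supplies the bottom-up pyramid, and an FPN-style top-down path with bilinear upsampling fuses the levels. Layer sizes and the output head are our assumptions, not the published architecture.

```python
# Improved-FPN sketch: U-Net encoder as the bottom-up pathway, FPN-style
# lateral connections, bilinear top-down fusion, and a full-resolution merge.
from tensorflow.keras import layers, Model

def improved_fpn(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    x, feats = inputs, []
    for f in (64, 128, 256, 512):                     # U-Net style encoder
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
        feats.append(x)                               # lateral outputs at 1x, 1/2, 1/4, 1/8
        x = layers.MaxPooling2D(2)(x)
    p = layers.Conv2D(128, 1, padding="same")(feats[-1])          # top of the pyramid
    pyramid = [p]
    for c in reversed(feats[:-1]):                    # top-down pathway
        up = layers.UpSampling2D(2, interpolation="bilinear")(p)  # bilinear preserves features
        p = layers.Add()([up, layers.Conv2D(128, 1, padding="same")(c)])
        pyramid.append(p)
    # Upsample every level to full resolution and merge.
    merged = [layers.UpSampling2D(2 ** (len(pyramid) - 1 - i),
                                  interpolation="bilinear")(q) if i < len(pyramid) - 1 else q
              for i, q in enumerate(pyramid)]
    x = layers.Concatenate()(merged)
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)        # binary tumor mask
    return Model(inputs, outputs)
```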

Dataset

The MRI dataset used in this paper is derived from publicly available medical imaging data in The Cancer Genome Atlas (TCGA) [20]. A total of 110 patients with an average age of 47 were included from the TCGA database: 56 males, 53 females, and 1 patient of unrecorded sex, all with lower-grade glioma (LGG). The data consist of 3,929 MRI images: 1,373 images with brain tumors and 2,556 normal brain images without tumors. The histogram distribution of the MRI diagnostic dataset is shown in Fig. 4, where a positive diagnosis indicates the presence of a tumor in the MRI image and a negative diagnosis indicates its absence; the per-patient distribution of positive and negative diagnoses was also examined. Figure 5 visualizes the brain tumor dataset with its tumor masks, shown from top to bottom as: the original brain MRI, the corresponding segmentation mask, and the fusion of the original image and segmentation mask. Each MRI image is a 3-channel RGB image, the corresponding segmentation mask is a single-channel grayscale image, and the image resolution is 256 × 256.

Fig. 4 Histogram distribution of diagnostic datasets

Fig. 5 Dataset visualization (MRI image above, segmentation mask in the middle, and fusion image of original image and segmentation mask below)

Fig. 6 Brain tumor images of different augmentation types

Data preprocessing

In computer vision, data preprocessing plays a very important role, especially in medical image analysis, where redundant information and inaccurate data may reduce model performance. In this paper, the data samples are preprocessed in four ways: data filtering, global pixel normalization, data splitting, and data augmentation. First, to alleviate the imbalance between positive and negative sample classes in the training data, images without brain tumors are removed, leaving a dataset of 1,373 images and 1,373 corresponding tumor masks. Then, global pixel normalization is performed on the images, scaling pixel values from [0, 255] to [0, 1] so that gradient descent converges faster and training proceeds better. After normalization, each segmentation mask is thresholded into a binary mask: background pixels (tumor-free areas) are 0 and foreground pixels (tumor areas) are 1. The data are split into training, validation, and test sets at roughly 70%, 20%, and 10%: the training set has 988 images, the validation set 247 images, and the test set 138 images. Deep learning requires large labeled datasets, but medical datasets are limited; to avoid overfitting and improve the generalization ability of the model, we use data augmentation [21, 22]. Data augmentation scales, flips, and rotates existing data while retaining the same labels. In this study, we used the following augmentation methods: horizontal offset, transposition, vertical offset, blurring, random cropping, random rotation, random scaling, random flipping, and random brightness. Table 1 lists the types and parameters of the augmentations (a sketch of this pipeline follows Table 1), and Fig. 6 shows brain tumor images under the different augmentation types.

Table 1 Types and parameters of data augmentation
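Below is a hedged sketch of this preprocessing pipeline; the albumentations library and all probability and limit values are our assumptions, since Table 1's exact settings are not reproduced here.

```python
# Preprocessing sketch: global pixel normalization, mask binarization, and
# an augmentation pipeline covering offsets, scaling, rotation, transposition,
# blurring, flips, and brightness changes.
import numpy as np
import albumentations as A   # assumed augmentation library

def preprocess(image, mask):
    image = image.astype(np.float32) / 255.0                           # scale [0, 255] -> [0, 1]
    mask = (mask.astype(np.float32) / 255.0 > 0.5).astype(np.float32)  # binary tumor mask
    return image, mask

augment = A.Compose([
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=15, p=0.5),
    A.Transpose(p=0.5),
    A.Blur(blur_limit=3, p=0.3),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, p=0.3),
])

# Usage: the same spatial transform is applied to the image and its mask.
# out = augment(image=image_uint8, mask=mask_uint8)
```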

Performance assessment metrics

In medical image segmentation, the region of interest (ROI) is small, so positive-sample pixels occupy a small proportion of the whole image. Training with a cross-entropy loss would make the model focus on learning the negative samples, hurting segmentation of the positive samples. Therefore, this paper adopts the Dice loss to address the class imbalance. Dice is a set-similarity measure commonly used as a metric in medical image competitions. The Dice coefficient ranges over [0, 1]: a Dice value of 1 is the extreme case in which the network prediction exactly matches the label map, and a Dice value of 0 means the prediction is completely different from the label map. The loss is defined as follows, where A is the set of predicted segmentation pixels, B is the set of ground-truth pixels in the label, and ξ is a smoothing parameter that prevents the denominator from being 0 (ξ = 100 in this experiment):

$${L}_{dice}=1-\frac{2\times \left|A\bigcap B\right|+\xi }{\left|A\right|+\left|B\right|+\xi }$$
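Translated directly into Keras, the loss can be written as below, with the smoothing term ξ = 100 from the text; using the probability-weighted (soft) intersection is a standard implementation choice.

```python
# Dice loss: 1 - (2|A ∩ B| + ξ) / (|A| + |B| + ξ), with ξ = 100.
import tensorflow as tf

def dice_loss(y_true, y_pred, xi=100.0):
    intersection = tf.reduce_sum(y_true * y_pred)    # soft |A ∩ B|
    dice = (2.0 * intersection + xi) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + xi)
    return 1.0 - dice
```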

Evaluation indicators

This paper uses three evaluation indices to assess the model's performance: the Dice similarity coefficient (DSC), the Jaccard index (also known as the IoU), and accuracy (ACC). Each index is defined below, where A is the set of predicted segmentation pixels and B the set of ground-truth pixels in the label; TP, TN, FP, and FN denote true positives (labeled 1, predicted 1), true negatives (labeled 0, predicted 0), false positives (labeled 0, predicted 1), and false negatives (labeled 1, predicted 0), respectively.

$$DSC=\frac{2\times \left|A\bigcap B\right|}{\left|A\right|+\left|B\right|}$$
$$Jac/IoU=\frac{\left|A\bigcap B\right|}{\left|A\right|+\left|B\right|-\left|A\bigcap B\right|}$$
$$Acc=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FN}+\mathrm{FP}}$$
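For reference, the three indices can be computed from binary masks as in the following NumPy sketch (our own helper, not the authors' code):

```python
# DSC, Jaccard/IoU, and pixel accuracy for binary prediction/label masks.
import numpy as np

def evaluate(pred, label):
    pred, label = pred.astype(bool), label.astype(bool)
    inter = np.logical_and(pred, label).sum()
    dsc = 2.0 * inter / (pred.sum() + label.sum())       # Dice similarity coefficient
    jac = inter / (pred.sum() + label.sum() - inter)     # Jaccard index (IoU)
    acc = (pred == label).mean()                         # (TP + TN) / all pixels
    return dsc, jac, acc
```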

Training protocol

The experimental hardware environment was a single NVIDIA Quadro K620 GPU, and the software environment was the Keras 2.2.4 library with a TensorFlow 2.1 backend. The network's input image size was 256 × 256 × 3, the batch size was set to 8, the number of training epochs was set to 150, and early stopping was used to end training early and avoid overfitting. Hyperparameters such as filter size, learning rate, and optimizer were selected by exhaustive search followed by designed experiments. The initial learning rate was varied from 10⁻³ to 10⁻⁴, and based on the learning curves a learning rate of 10⁻⁴ was selected. The Adam optimizer was used to update the parameters. To ensure the fairness of the experimental results, all experiments were conducted under the same settings described above.
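Assembling the stated settings into Keras gives a configuration like the sketch below; `model`, the training/validation arrays, and the early-stopping patience are assumptions the paper does not specify.

```python
# Training configuration: Adam at 1e-4, Dice loss, batch size 8, 150 epochs,
# early stopping on the validation loss.
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=dice_loss,                      # Dice loss defined above
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                              patience=10,   # hypothetical patience value
                                              restore_best_weights=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=8, epochs=150,
          callbacks=[early_stop])
```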

Results

The algorithm proposed in this paper builds on the U-Net network with the improved FPN strategy and data augmentation, and can effectively segment the brain tumor region. To evaluate the segmentation performance intuitively and fairly, comparison experiments were conducted with the classical U-Net and FPN algorithms. The performance comparison of these segmentation models is listed in Table 2, and Fig. 7 shows the trend of the Dice score during training. As Table 2 shows, the Dice coefficient, Jaccard index, and accuracy of the improved FPN model proposed in this paper reach 92%, 86%, and 99.1%, respectively, all superior to the U-Net model. The experiments show that the proposed segmentation model improves the accuracy of brain tumor segmentation. As Fig. 7 shows, the improved FPN model converges faster than the U-Net model, and U-Net's training curve fluctuates widely, indicating that the accuracy of its segmentation results varies greatly during training.

Table 2 Performance evaluation of the two models
Fig. 7 Change trend of the brain tumor segmentation task during training

Figure 8 visualizes the brain tumor segmentation results of the two methods. From left to right are the original brain MRI image, the corresponding segmentation mask, the segmentation result of the U-Net model, and the segmentation result of the improved FPN model proposed in this paper. As Fig. 8 shows, although the brain tumor region segmented by U-Net is relatively accurate, boundary details are missing and the segmentation accuracy is not high, producing both under-segmentation and over-segmentation. The proposed algorithm segments most of the tumor edge regions well and is closer to the segmentation template: it effectively refines the brain tumor boundary, enhances the detailed features of the tumor outline, and improves segmentation accuracy to some extent, indicating that the algorithm has strong segmentation ability.

Fig. 8 Segmentation results of the two models (MRI image, mask, U-Net, and improved FPN, from left to right)

Table 3 compares the improved FPN model with methods from other papers and their brain tumor segmentation performance on different datasets. The results show that the proposed method achieves the best overall performance, with a 92% Dice coefficient, 85% Jaccard index, and 99.1% accuracy, which is comparable to or better than existing algorithms.

Table 3 Performance comparison of the proposed model with state-of-the-art methods

Discussion

This paper presents an improved FPN convolutional neural network for brain tumor image segmentation. On the one hand, the FPN module is used in the encoding blocks to obtain a sufficient receptive field and improve the model's adaptability to features at different scales. On the other hand, residual connections and data augmentation are used to reduce the risk of overfitting and network degradation. The experiments used 1,373 brain MRI images, and the performance of all models was evaluated by Dice coefficient, Jaccard index, and accuracy. The U-Net model was compared with the proposed method, and the experiments show that the proposed network structure can refine tumor boundaries and achieve high segmentation accuracy: the Dice coefficient was 92%, the Jaccard index 86%, and the accuracy 99.1%. At the same time, different modules were added to U-Net for ablation experiments; the results show that both the improved FPN structure and the data augmentation improve segmentation performance to a certain extent. Finally, the proposed method was compared with state-of-the-art methods, illustrating its competitiveness and superiority. In related work, Tripathi et al. [28] trained the FPN algorithm for 100 epochs each with the Adam, SGD, and Adamax optimizers and then selected Adam based on the Dice scores on the validation set. Since the Dice cross-entropy loss kept decreasing on the validation set, this training scheme helped achieve the highest segmentation accuracy, and the evaluation indices obtained for the network indicate that the algorithm works properly and provides good segmentation results. In conclusion, this study provides new ideas for multi-scale research in MRI segmentation of brain tumors.

Availability of data and materials

The data in this article are publicly available medical imaging data from The Cancer Genome Atlas (TCGA). Download address: https://www.kaggle.com/datasets/mateuszbuda/lgg-mri-segmentation.

Abbreviations

FCN: Fully convolutional network

FPN: Feature pyramid network

MRI: Magnetic resonance imaging

ASPP: Atrous spatial pyramid pooling

TCGA: The Cancer Genome Atlas

LGG: Lower-grade glioma

ROI: Region of interest

ACC: Accuracy

DSC: Dice similarity coefficient

TP: True positive

TN: True negative

FP: False positive

FN: False negative

References

  1. Cheung A, Li W, Ho L, et al. Impact of brain tumor and its treatment on the physical and psychological well-being, and quality of life amongst pediatric brain tumor survivors[J]. Eur J Oncol Nurs. 2019;41:104–9.

  2. Stebner A, Ensser A, Geißdörfer W, et al. Molecular diagnosis of polymicrobial brain abscesses with 16S rDNA-based next generation sequencing[J]. Clin Microbiol Infect. 2021;27(1):76–82.

  3. Gordillo N, Montseny E, Sobrevilla P. State of the art survey on MRI brain tumor segmentation[J]. Magn Reson Imaging. 2013;31(8):1426–38.

  4. Nema S, Dudhane A, Murala S, et al. RescueNet: An unpaired GAN for brain tumor segmentation[J]. Biomed Signal Process Control. 2020;55:101641–52.

  5. Menze B, Reyes M, Van Leemput K, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)[J]. IEEE Trans Med Imaging. 2015;34(10):1993–2024.

  6. Moeskops P, Viergever MA, Mendrik AM, et al. Automatic Segmentation of MR Brain Images With a Convolutional Neural Network[J]. IEEE Trans Med Imaging. 2016;35(5):1252–61.

  7. Di WU, Dai F, Guo WY, et al. Method for Image Segmentation Based on Optimized Multi-Kernel SVM and K-means Clustering[J]. Comput Syst Appl. 2016;025(004):191–6.

  8. Jie W, Zw B, Li WB, et al. A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling[J]. 2022;124:108420–32.

  9. Zhou Z, Sanders JW, Johnson JM, et al. Computer-aided detection of brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors[J]. Radiology. 2020;295(2):407–15.

  10. Wang X, Li XH, Cho JW, et al. U-Net Model for brain extraction: trained on humans for transfer to non-human primates[J]. Neuroimage. 2021;235:118001–7.

  11. Daimary D, Bora MB, Amitab K, et al. Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks[J]. Procedia Computer Science. 2020;167:2419–28.

  12. Zhang J, Jiang Z, Dong J, et al. Attention Gate ResU-Net for automatic MRI brain tumor segmentation[J]. IEEE Access. 2020;8:58533–45.

  13. Chen H, Dou Q, Yu L, et al. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images[J]. NeuroImage. 2018;170:446–55.

  14. Weng W, Zhu X. INet: Convolutional networks for biomedical image segmentation[J]. IEEE Access. 2021;PP(99):1–1.

  15. Yang G, Lv J, Chen Y, et al. Generative adversarial networks (GAN) powered fast magnetic resonance imaging: mini review, comparison and perspectives[J]. 2021.

  16. Chen Y, Liu Y. Automatic segmentation of hippocampal subfields MRI based on FPN-DenseVoxNet[C]//Asia-Pacific Conference on Communications Technology and Computer Science (ACCTCS). 2021:58–62.

  17. Yang Y, Yang P, Zhang B. Automatic segmentation in fetal ultrasound images based on improved U-Net[J]. J Phys Conf Ser. 2020;1693(1):012183–9.

  18. Tripathi S, Sharan TS, Sharma S, et al. An Augmented Deep Learning Network with Noise Suppression Feature for Efficient Segmentation of Magnetic Resonance Images[J]. IETE Tech Rev. 2021;6:1–14.

  19. Bhatti H, Li J, Siddeeq S, et al. Multi-detection and segmentation of breast lesions based on Mask RCNN-FPN[C]//2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2020:2698–704.

  20. Tomczak K, Czerwinska P, Wiznerowicz M. The Cancer Genome Atlas (TCGA): An immeasurable source of knowledge[J]. Contemp Oncol. 2015;19(1A):A68–77.

  21. Mikolajczyk A, Grochowski M. Data augmentation for improving deep learning in image classification problem[C]//International Interdisciplinary PhD Workshop. IEEE. 2018:117–22.

  22. Kalaiselvi T, Padmapriya ST, Padmanaban S, et al. A deep learning approach for brain tumour detection system using convolutional neural networks. Int J Dynam Syst Different Eq. 2020;8(1):1–10.

  23. Dong N, Li W, Gao Y, et al. Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Proc IEEE Int Symp Biomed Imaging. 2015;108:214–24.

  24. Pereira S, Pinto A, Alves V, et al. Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images. IEEE Trans Med Imaging. 2016;35(5):1240–51.

  25. Zhao X, Wu Y, Song G, et al. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal. 2017;43:98–111.

  26. Akkus Z, Ali I, Sedlář J, et al. Predicting deletion of chromosomal arms 1p/19q in low-grade gliomas from MR images using machine intelligence. J Digit Imaging. 2017;30(4):469–76.

  27. Hashemzehi R, Mahdavi S, Kheirabadi M, et al. Detection of brain tumors from MRI images based on deep learning using hybrid model CNN and NADE. Biocybern Biomed Eng. 2020;40(3):1225–32.

  28. Sharan TS, Tripathi S, Sharma S, et al. Encoder modified U-Net and feature pyramid network for multi-class segmentation of cardiac magnetic resonance images. IETE Tech Rev. https://doi.org/10.1080/02564602.2021.1955760.


Acknowledgements

Not applicable.

Funding

This work was supported in part by the Science and Technology Bureau of Zhongshan City (2022B1113) and in part by the Medical Research Foundation of Guangdong Province (A2023177/B2021420). The funding bodies played no role in the design of the study, the collection, analysis, and interpretation of data, or the writing of the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

SHT designed the algorithm and is the main contributor to the manuscript.

LXP, WN participated in the preparation of the manuscript and the preliminary material preparation. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ning Wang.

Ethics declarations

Ethics approval and consent to participate

The study was carried out in accordance with the Declaration of Helsinki. This study has been approved by the Ethics Review Committee of Zhongshan Hospital of Traditional Chinese Medicine. All methods and data were analysed in accordance with relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Sun, H., Yang, S., Chen, L. et al. Brain tumor image segmentation based on improved FPN. BMC Med Imaging 23, 172 (2023). https://doi.org/10.1186/s12880-023-01131-1
