MBNet: Multi-Branch Network for Extraction of Rural Homesteads Based on Aerial Images
"> Figure 1
<p>Part of the study area. (<b>a</b>) The location of Deqing County in China; (<b>b</b>) hilly area of Deqing County; (<b>c</b>) plain area of Deqing County; (<b>d</b>) mountain tombs in Deqing County; (<b>e</b>) dense housing area in Deqing County.</p> "> Figure 2
<p>Overall network framework.</p> "> Figure 3
<p>Mixed-scale spatial attention module.</p> "> Figure 4
<p>Point-to-point module (PTPM).</p> "> Figure 5
<p>Examples of data augmentation by several methods.</p> "> Figure 6
<p>Segmentation results of the different SOTA methods on the mountain landform scene 1 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 7
<p>Segmentation results of the different SOTA methods on the mountain landform scene 2 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 8
<p>Segmentation results of the different SOTA methods on the mountain landform scene 3 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 9
<p>Segmentation results of the different SOTA methods on the plain landform scene 1 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 10
<p>Segmentation results of the different SOTA methods on the plain landform scene 2 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 11
<p>Segmentation results of the different SOTA methods on the plain landform scene 3 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 12
<p>Segmentation results of the different SOTA methods on the hilly landform scene 1 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 13
<p>Segmentation results of the different SOTA methods on the hilly landform scene 2 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 14
<p>Segmentation results of the different SOTA methods on the hilly landform scene 3 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 15
<p>Segmentation results of the different SOTA methods on the suburban dense area scene 1 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 16
<p>Segmentation results of the different SOTA methods on the suburban dense area scene 2 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 17
<p>Segmentation results of the different SOTA methods on the suburban dense area scene 3 test dataset. The bottom row is a close-up view of the red box.</p> "> Figure 18
<p>Visualization of mixed-scale spatial attention features.</p> "> Figure 19
<p>Boundary branch ablation experiment. The left side is the image, the upper part of the right side represents the results of semantic segmentation in the ablation experiment, and the lower part of the right side represents the results of boundary in the ablation experiment.</p> "> Figure 20
<p>PTPM ablation experiment. The left side is the image, the upper part of the right side represents the results of semantic segmentation in the ablation experiment, and the lower part of the right side represents the results of boundary in the ablation experiment.</p> ">
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. Methodology
2.2.1. Overview of Network Architecture
2.2.2. Detail Branch
2.2.3. Semantic Branch
2.2.4. Boundary Branch
- (1) Take the feature map output by the stem module as the main feature, up-sample it to the original image size, and pass it through a 1 × 1 convolutional layer. Then feed it into a BasicBlock and use a 1 × 1 convolution to reduce the dimension to half the original number of channels;
- (2) Reduce the fused feature map from the first module of the detail branch to a single channel, and then up-sample the result to the original image size;
- (3) Concatenate the feature maps obtained in steps (1) and (2), reduce the dimension to a single channel after a BN layer and ReLU activation, and apply the sigmoid function to generate the boundary attention mechanism (BAM):
- (4) Weight the current edge feature map with the semantic information carried by the BAM and add the result back through a residual function, following ResNet's residual paradigm;
- (5) Take the feature map generated in step (4) as the main feature and repeat the steps above until reaching the feature map concatenated from the last stages of the detail branch and the semantic branch;
- (6) The final boundary generated in step (5) is fused with the feature concatenated from the last layers of the detail branch and the semantic branch. The resulting boundary map is then fed into the point-to-point refinement module (PTPM) for correction, yielding the final, accurate semantic boundary. The PTPM is described in the next subsection. One refinement step is sketched in code after this list.
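To make the control flow concrete, the following is a minimal PyTorch sketch of one refinement step (steps (1)–(4) above). It is a reconstruction for illustration only, not the authors' released code: the channel sizes, the use of torchvision's BasicBlock, and the exact form of the residual function in step (4) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.resnet import BasicBlock  # assumed two-conv residual block


class BoundaryAttentionStep(nn.Module):
    """One refinement step of the boundary branch (steps (1)-(4) above); a sketch."""

    def __init__(self, main_ch, detail_ch, out_size):
        super().__init__()
        self.out_size = out_size
        self.proj = nn.Conv2d(main_ch, main_ch, 1)             # step (1): 1x1 conv
        self.block = BasicBlock(main_ch, main_ch)              # step (1): BasicBlock
        self.halve = nn.Conv2d(main_ch, main_ch // 2, 1)       # step (1): halve channels
        self.detail_to_1 = nn.Conv2d(detail_ch, 1, 1)          # step (2): single channel
        self.bn = nn.BatchNorm2d(main_ch // 2 + 1)
        self.to_bam = nn.Conv2d(main_ch // 2 + 1, 1, 1)        # step (3): BAM logits
        self.residual = nn.Sequential(                         # step (4): residual function
            nn.Conv2d(main_ch // 2, main_ch // 2, 3, padding=1),
            nn.BatchNorm2d(main_ch // 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, main_feat, detail_feat):
        # (1) up-sample the main feature to the original size, project, refine, halve channels
        x = F.interpolate(main_feat, size=self.out_size, mode="bilinear",
                          align_corners=False)
        x = self.halve(self.block(self.proj(x)))
        # (2) squeeze the fused detail feature to one channel at the original size
        d = F.interpolate(self.detail_to_1(detail_feat), size=self.out_size,
                          mode="bilinear", align_corners=False)
        # (3) concatenate, BN + ReLU, reduce to one channel, sigmoid -> BAM
        bam = torch.sigmoid(self.to_bam(F.relu(self.bn(torch.cat([x, d], dim=1)))))
        # (4) weight the edge feature by the BAM and add it back, ResNet-style
        return x + self.residual(x * bam)
```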
2.2.5. Loss Layer Design
3. Results
3.1. Dataset
- Sliding-window cropping: starting from the upper-left corner of the image, the sliding stride is 256 pixels. When the cropped window's ratio of background pixels to total pixels is ≤ 0.92, the stride is halved; otherwise, the stride is multiplied by 0.9 and rounded up. (A sketch of this rule follows the list.)
- Random cropping: after randomly cropping the image, a patch is kept if its ratio of background pixels to total pixels is ≤ 0.9; otherwise, it is discarded.
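Below is a minimal sketch of the adaptive-stride rule in the first bullet. The 0.92 threshold and the stride updates follow the text; the square patch size, the background label value, and the decision to skip mostly-background windows are assumptions made for illustration.

```python
import math
import numpy as np


def adaptive_stride_crops(image, mask, patch=512, stride=256,
                          bg_label=0, bg_thresh=0.92):
    """Sliding-window cropping with the adaptive stride described above.

    `patch` and `bg_label` are assumptions; the text only specifies the
    initial 256 px stride, the 0.92 threshold, and the stride updates.
    """
    crops = []
    y = 0
    while y + patch <= image.shape[0]:
        x, sx = 0, stride
        while x + patch <= image.shape[1]:
            window = mask[y:y + patch, x:x + patch]
            bg_ratio = np.mean(window == bg_label)
            if bg_ratio <= bg_thresh:
                # enough foreground: keep the crop and halve the stride
                crops.append(image[y:y + patch, x:x + patch])
                sx = max(1, sx // 2)
            else:
                # mostly background: skip, multiply the stride by 0.9, round up
                sx = math.ceil(sx * 0.9)
            x += sx
        y += stride  # row step; the text does not specify the vertical behavior
    return crops
```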
3.2. Experimental Setting
3.3. Evaluation Metric
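The results tables report overall accuracy (OA), precision, recall, F1-score, and IoU for segmentation, plus an F1-score for boundaries. For reference, here is a minimal sketch of the standard pixel-level definitions from binary confusion counts; these are textbook formulas, not the paper's implementation, and the boundary F1 (which typically matches predicted and reference boundary pixels within a small tolerance) is not reproduced here.

```python
def pixel_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard pixel-level segmentation metrics from binary confusion counts."""
    oa = (tp + tn) / (tp + fp + fn + tn)                # overall accuracy
    precision = tp / (tp + fp)                          # correctness of positives
    recall = tp / (tp + fn)                             # completeness of positives
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    iou = tp / (tp + fp + fn)                           # intersection over union
    return oa, precision, recall, f1, iou
```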
3.4. Comparisons and Analysis
3.4.1. Comparison with SOTA Methods on Mountain Landforms
3.4.2. Comparison with SOTA Methods on Plain Landforms
3.4.3. Comparison with SOTA Methods on Hilly Landforms
3.4.4. Comparison with SOTA Methods on Suburban Landforms
4. Discussion
4.1. Visualization Experiment of Mixed-Scale Spatial Attention Module
4.2. Boundary Branch Ablation Experiment
4.3. Point-to-Point Module (PTPM) Ablation Experiment
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Liu, Y.-Q.; Wang, A.-L.; Hou, J.; Chen, X.-Y.; Xia, J.-S. Comprehensive evaluation of rural courtyard utilization efficiency: A case study in Shandong Province, Eastern China. J. Mt. Sci. 2020, 17, 2280–2295.
- Li, G.-Q. Research on the surveying and mapping techniques for the integration of house sites and lands in rural areas. China High Tech. 2021, 18, 93–94.
- Ghanea, M.; Moallem, P.; Momeni, M. Building extraction from high-resolution satellite images in urban areas: Recent methods and strategies against significant challenges. Int. J. Remote Sens. 2016, 37, 5234–5248.
- Shaker, I.F.; Abd-Elrahman, A.; Abdel-Gawad, A.K.; Sherief, M.A. Building Extraction from High Resolution Space Images in High Density Residential Areas in the Great Cairo Region. Remote Sens. 2011, 3, 781–791.
- Zhao, K.; Kang, J.; Jung, J.; Sohn, G. Building extraction from satellite images using mask R-CNN with building boundary regularization. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Xu, L.; Liu, Y.; Yang, P.; Chen, H.; Zhang, H.; Wang, D.; Zhang, X. HA U-Net: Improved Model for Building Extraction from High Resolution Remote Sensing Imagery. IEEE Access 2021, 9, 101972–101984.
- Zhang, Z.; Wang, Y. JointNet: A Common Neural Network for Road and Building Extraction. Remote Sens. 2019, 11, 696.
- Ye, Z.; Fu, Y.; Gan, M.; Deng, J.; Comber, A.; Wang, K. Building Extraction from Very High Resolution Aerial Imagery Using Joint Attention Deep Neural Network. Remote Sens. 2019, 11, 2970.
- Xia, L.; Zhang, J.; Zhang, X.; Yang, H.; Xu, M. Precise Extraction of Buildings from High-Resolution Remote-Sensing Images Based on Semantic Edges and Segmentation. Remote Sens. 2021, 13, 3083.
- Pan, Z.; Xu, J.; Guo, Y.; Hu, Y.; Wang, G. Deep Learning Segmentation and Classification for Urban Village Using a Worldview Satellite Image Based on U-Net. Remote Sens. 2020, 12, 1574.
- Ye, Z.; Si, B.; Lin, Y.; Zheng, Q.; Zhou, R.; Huang, L.; Wang, K. Mapping and Discriminating Rural Settlements Using Gaofen-2 Images and a Fully Convolutional Network. Sensors 2020, 20, 6062.
- Sun, L.; Tang, Y.; Zhang, L. Rural Building Detection in High-Resolution Imagery Based on a Two-Stage CNN Model. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1998–2002.
- Li, Y.; Xu, W.; Chen, H.; Jiang, J.; Li, X. A Novel Framework Based on Mask R-CNN and Histogram Thresholding for Scalable Segmentation of New and Old Rural Buildings. Remote Sens. 2021, 13, 1070.
- Zhang, X. Village-Level Homestead and Building Floor Area Estimates Based on UAV Imagery and U-Net Algorithm. ISPRS Int. J. Geo-Inf. 2020, 9, 403.
- Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1395–1403.
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- Li, X.; Li, X.; Zhang, L.; Cheng, G.; Shi, J.; Lin, Z.; Tan, S.; Tong, Y. Improving Semantic Segmentation via Decoupled Body and Edge Supervision. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 435–452.
- Wang, Y.; Xin, Z.; Huang, K. Deep Crisp Boundaries. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Takikawa, T.; Acuna, D.; Jampani, V.; Fidler, S. Gated-SCNN: Gated shape CNNs for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019.
- Misra, I.; Shrivastava, A.; Gupta, A.; Hebert, M. Cross-Stitch Networks for Multi-task Learning. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3994–4003.
- Cipolla, R.; Gal, Y.; Kendall, A. Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7482–7491.
- Kirillov, A.; Wu, Y.; He, K.; Girshick, R. PointRend: Image Segmentation as Rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9796–9805.
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
- Wang, P.; Chen, P.; Yuan, Y.; Liu, D.; Huang, Z.; Hou, X.; Cottrell, G. Understanding Convolution for Semantic Segmentation. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018.
- Zhou, L.; Zhang, C.; Wu, M. D-LinkNet: LinkNet with Pretrained Encoder and Dilated Convolution for High Resolution Satellite Imagery Road Extraction. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 182–186.
- Treisman, A.M.; Gelade, G. A feature-integration theory of attention. Cogn. Psychol. 1980, 12, 97–136.
- Chen, H.; Shi, Z. A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection. Remote Sens. 2020, 12, 1662.
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018.
- Huang, Z.; Wang, X.; Wei, Y.; Huang, L.; Huang, T.S. CCNet: Criss-Cross Attention for Semantic Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 603–612.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017.
- Shen, T.; Zhou, T.; Long, G.; Jiang, J.; Pan, S.; Zhang, C. DiSAN: Directional self-attention network for RNN/CNN-free language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018.
- Lin, Z.; Feng, M.; Santos, C.N.D.; Yu, M.; Xiang, B.; Zhou, B.; Bengio, Y. A structured self-attentive sentence embedding. arXiv 2017, arXiv:1703.03130.
- Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-attention generative adversarial networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017.
- Qin, X.; Zhang, Z.; Huang, C.; Gao, C.; Jagersand, M. BASNet: Boundary-Aware Salient Object Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Wu, J. UNet 3+: A Full-Scale Connected UNet for Medical Image Segmentation. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059.
- Lee, C.-Y.; Xie, S.; Gallagher, P.; Zhang, Z.; Tu, Z. Deeply-supervised nets. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), San Diego, CA, USA, 9–12 May 2015.
- Wei, X.; Li, X.; Liu, W.; Zhang, L.; Cheng, D.; Ji, H.; Zhang, W.; Yuan, K. Building Outline Extraction Directly Using the U2-Net Semantic Segmentation Model from High-Resolution Aerial Images and a Comparison Study. Remote Sens. 2021, 13, 3187.
- Poma, X.S.; Riba, E.; Sappa, A. Dense extreme inception network: Towards a robust CNN model for edge detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020.
- De Boer, P.-T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67.
- Deng, R.; Shen, C.; Liu, S.; Wang, H.; Liu, X. Learning to predict crisp boundaries. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11210, pp. 570–586.
- Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
- Borgefors, G. Distance transformations in digital images. Comput. Vis. Graph. Image Process. 1986, 34, 344–371.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
- Sun, K.; Zhao, Y.; Jiang, B.; Cheng, T.; Xiao, B.; Liu, D.; Wang, J. High-resolution representations for labeling pixels and regions. arXiv 2019, arXiv:1904.04514.
- Zhu, Q.; Liao, C.; Hu, H.; Mei, X.; Li, H. MAP-Net: Multiple Attending Path Neural Network for Building Footprint Extraction from Remote Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2020, 59, 6169–6181.
- Shi, F.; Zhang, T. A Multi-Task Network with Distance–Mask–Boundary Consistency Constraints for Building Extraction from Aerial Images. Remote Sens. 2021, 13, 2656.
Table 1. Comparison with SOTA methods on mountain landforms. All values are percentages; OA, precision, recall, F1-score, and IoU are segmentation metrics, and the last column is the boundary evaluation metric.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| FCN-ResNet | 97.47 | 89.13 | 89.62 | 89.37 | 78.65 | 52.05 |
| U-Net-ResNet | 98.14 | 89.42 | 90.11 | 89.76 | 79.47 | 52.97 |
| DeepLabV3+ | 98.22 | 90.06 | 90.73 | 90.39 | 80.69 | 53.21 |
| HRNetV2 | 98.46 | 91.53 | 91.75 | 91.64 | 82.24 | 56.12 |
| MAP-Net | 98.51 | 91.89 | 92.31 | 92.10 | 82.59 | 57.60 |
| DMBC-Net | 98.53 | 92.01 | 92.81 | 92.41 | 83.01 | 59.36 |
| MBNet | 98.63 | 92.07 | 93.96 | 93.01 | 83.90 | 61.74 |
Table 2. Comparison with SOTA methods on plain landforms. All values are percentages.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| FCN-ResNet | 96.32 | 87.68 | 85.01 | 86.32 | 73.56 | 46.01 |
| U-Net-ResNet | 96.88 | 88.10 | 86.66 | 87.37 | 74.08 | 48.55 |
| DeepLabV3+ | 97.09 | 88.39 | 87.49 | 87.94 | 74.98 | 49.06 |
| HRNetV2 | 97.65 | 89.06 | 90.08 | 89.57 | 77.80 | 53.72 |
| MAP-Net | 97.79 | 89.33 | 88.80 | 89.06 | 76.85 | 52.70 |
| DMBC-Net | 97.83 | 90.04 | 88.91 | 89.47 | 77.62 | 54.89 |
| MBNet | 97.95 | 90.79 | 89.93 | 90.36 | 78.33 | 55.63 |
Table 3. Comparison with SOTA methods on hilly landforms. All values are percentages.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| FCN-ResNet | 95.21 | 87.11 | 84.35 | 85.71 | 71.31 | 42.16 |
| U-Net-ResNet | 95.79 | 87.42 | 86.04 | 86.72 | 73.65 | 46.51 |
| DeepLabV3+ | 96.31 | 88.27 | 86.55 | 87.40 | 74.28 | 47.28 |
| HRNetV2 | 97.02 | 88.93 | 87.29 | 88.10 | 76.57 | 51.86 |
| MAP-Net | 97.36 | 89.15 | 88.62 | 88.88 | 77.22 | 52.66 |
| DMBC-Net | 97.61 | 89.76 | 87.71 | 88.72 | 76.82 | 53.10 |
| MBNet | 97.92 | 90.32 | 87.95 | 89.12 | 77.94 | 53.41 |
Table 4. Comparison with SOTA methods on suburban dense areas. All values are percentages.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| FCN-ResNet | 95.73 | 87.34 | 84.61 | 85.95 | 73.48 | 44.79 |
| U-Net-ResNet | 96.02 | 87.92 | 86.59 | 87.25 | 74.19 | 48.91 |
| DeepLabV3+ | 96.48 | 88.42 | 87.14 | 87.78 | 74.90 | 48.96 |
| HRNetV2 | 97.21 | 89.26 | 88.26 | 88.76 | 76.88 | 52.34 |
| MAP-Net | 97.52 | 89.59 | 88.30 | 88.94 | 77.31 | 52.27 |
| DMBC-Net | 97.86 | 90.03 | 88.86 | 89.44 | 78.36 | 54.37 |
| MBNet | 98.11 | 90.80 | 89.32 | 90.05 | 79.25 | 55.53 |
Table 5. Boundary branch ablation experiment. All values are percentages.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| Baseline | 97.20 | 90.47 | 89.69 | 90.08 | 78.81 | 55.27 |
| + Boundary Branch | 98.15 | 90.99 | 90.29 | 90.64 | 79.86 | 56.58 |
Table 6. PTPM ablation experiment. All values are percentages.

| Method | OA | Precision | Recall | F1-Score | IoU | Boundary F1-Score |
|---|---|---|---|---|---|---|
| w/o PTPM | 97.53 | 90.61 | 89.79 | 90.20 | 79.04 | 55.74 |
| K highest-confidence points | 97.84 | 90.72 | 89.96 | 90.34 | 79.37 | 56.13 |
| 2K points in total | 98.15 | 90.99 | 90.29 | 90.64 | 79.86 | 56.58 |
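The row labels in Table 6 suggest a PointRend-style schedule (Kirillov et al., CVPR 2020): oversample 2K candidate points in total, then keep K of them by a confidence criterion for per-point re-prediction. Below is a hedged sketch of such a selection step, assuming single-channel (binary) logits and that candidates are scored by uncertainty as in PointRend; the paper's exact criterion and sampling may differ.

```python
import torch
import torch.nn.functional as F


def select_uncertain_points(logits, k, oversample=2):
    """PointRend-style point selection sketched for the PTPM ablation above.

    Assumptions: `logits` is a (B, 1, H, W) coarse binary prediction, and
    candidates are scored by uncertainty (distance of the foreground
    probability from 0.5), as in Kirillov et al. (2020).
    """
    b = logits.shape[0]
    n = oversample * k
    # random candidate locations in normalized [0, 1] coordinates
    coords = torch.rand(b, n, 2, device=logits.device)
    # sample the coarse prediction at those locations (grid_sample uses [-1, 1])
    grid = coords.unsqueeze(2) * 2 - 1                          # (B, n, 1, 2)
    probs = torch.sigmoid(F.grid_sample(logits, grid,
                                        align_corners=False))   # (B, 1, n, 1)
    probs = probs.squeeze(3).squeeze(1)                         # (B, n)
    uncertainty = -(probs - 0.5).abs()                          # highest near the boundary
    idx = uncertainty.topk(k, dim=1).indices                    # k most uncertain points
    return torch.gather(coords, 1, idx.unsqueeze(-1).expand(-1, -1, 2))
```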
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).