Comparative Study of Marine Ranching Recognition in Multi-Temporal High-Resolution Remote Sensing Images Based on DeepLab-v3+ and U-Net
Figure 1. Research roadmap.
Figure 2. Experimental area (false color).
Figure 3. Ground images of marine ranching areas (Xiapu County, Ningde, 1 November 2022). (a) Floating raft aquaculture areas. (b) Cage aquaculture areas.
Figure 4. Floating raft aquaculture areas in GF-1 imagery (imaged in November 2013). (a) The first type (area in red box). (b) The second type (area in red box). (c) The third type (area in red box).
Figure 5. Cage aquaculture areas in GF-1 imagery. (a) The first type of cage aquaculture (imaged in November 2013). (b) The second type of cage aquaculture (imaged in November 2017). (c) The third type of cage aquaculture (imaged in April 2020).
Figure 6. Spectral features of samples from different times. (a) Sample from November 2013. (b) Sample from November 2017. (c) Sample from April 2020. (d) Sample from August 2020.
Figure 7. Training data (false color: band combination 421). (a) Training data A and B from November 2013. (b) Training data C from November 2013. (c) Training data D and E from November 2017.
Figure 8. Test data (false color: band combination 421). (a) Test data of November 2013 (test A). (b) Test data of November 2017 (test B). (c) Test data of April 2020 (test C). (d) Test data of August 2020 (test D).
Figure 9. The composition of the dataset, training data and test data.
Figure 10. The framework of DeepLab-v3+.
Figure 11. The framework of U-Net.
Figure 12. Precision, recall, F-score and IOU of the DeepLab-v3+ and U-Net extraction results. (a) DeepLab-v3+ extraction results of floating raft aquaculture. (b) U-Net extraction results of floating raft aquaculture. (c) DeepLab-v3+ extraction results of cage aquaculture. (d) U-Net extraction results of cage aquaculture.
Figure 13. The images (false color: band combination 421), labels and extraction results of the four test images. (a) Test A. (b) Label A. (c) DeepLab-v3+ experiment 2 of test A. (d) U-Net experiment 1 of test A. (e) Test B. (f) Label B. (g) DeepLab-v3+ experiment 2 of test B. (h) U-Net experiment 1 of test B. (i) Test C. (j) Label C. (k) DeepLab-v3+ experiment 2 of test C. (l) U-Net experiment 1 of test C. (m) Test D. (n) Label D. (o) DeepLab-v3+ experiment 2 of test D. (p) U-Net experiment 1 of test D.
Figure 14. Evaluation index values for floating raft aquaculture in the four test images. (a) Precision. (b) Recall. (c) F-score. (d) IOU.
Figure 15. Evaluation index values for cage aquaculture in the four test images. (a) Precision. (b) Recall. (c) F-score. (d) IOU.
Figure 16. Local images, labels and extraction results of the test data. (a,e,i,m,q,u) Local images. (b,f,j,n,r,v) Labels. (c,g,k,o,s,w) Extraction results of DeepLab-v3+ experiment 2. (d,h,l,p,t,x) Extraction results of U-Net experiment 1.
Figure 17. Loss curves of U-Net and DeepLab-v3+.
Figure 18. Spectral values of training data and test data after normalization. (a) Training data. (b) Test A. (c) Test B. (d) Test C. (e) Test D.
Figure 19. Training data of cage aquaculture. (a) The first type of cage aquaculture. (b,c) The second type of cage aquaculture.
Figure 20. Local images, labels and extraction results of test C. (a,d,g,j,m) Local images. (b,e,h,k,n) Labels. (c,f,i,l,o) Extraction results of U-Net experiment 5.
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Area and Data Source
2.2. Dataset
2.2.1. Feature Analysis of Marine Ranching
2.2.2. Data Processing
2.3. Semantic Segmentation
2.3.1. DeepLab-v3+
2.3.2. U-Net
2.3.3. Training Strategy
3. Results
3.1. Performance Comparison of DeepLab-v3+ and U-Net
3.2. Performance Comparison with Different Test Images
- U-Net (experiments 8–18) exhibits relatively good performance in recognizing both floating raft aquaculture and cage aquaculture across the four test images from different times (the evaluation indices are computed as in the sketch below);
- For floating raft aquaculture, the extraction result of test D is relatively poor, as is the extraction result of test C for cage aquaculture.
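The comparisons above use four per-class evaluation indices: precision, recall, F-score and IOU. The following is a minimal sketch of how these indices can be computed from a predicted mask and a reference label; the NumPy helper and the random example data are illustrative, not the authors' evaluation code.

```python
# Per-class precision, recall, F-score and IOU from a predicted class map
# and a reference label map (illustrative sketch, not the authors' code).
import numpy as np

def class_metrics(pred, label, cls):
    """pred, label: integer class maps of equal shape; cls: the class id."""
    tp = np.sum((pred == cls) & (label == cls))
    fp = np.sum((pred == cls) & (label != cls))
    fn = np.sum((pred != cls) & (label == cls))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, recall, f_score, iou

# Example with random 3-class maps (0: background, 1: raft, 2: cage).
rng = np.random.default_rng(0)
pred = rng.integers(0, 3, size=(512, 512))
label = rng.integers(0, 3, size=(512, 512))
print(class_metrics(pred, label, cls=1))   # indices for the raft class
```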
4. Discussion
4.1. Model Recognition Ability Analysis
4.2. Model Generalization Ability Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Indicator | Band | PMS | WFV
---|---|---|---
Spectral range | Pan | 0.45–0.90 μm | ——
Spectral range | Multispectral | 0.45–0.52 μm | 0.45–0.52 μm
Spectral range | Multispectral | 0.52–0.59 μm | 0.52–0.59 μm
Spectral range | Multispectral | 0.63–0.69 μm | 0.63–0.69 μm
Spectral range | Multispectral | 0.77–0.89 μm | 0.77–0.89 μm
Spatial resolution | Pan | 2 m | ——
Spatial resolution | Multispectral | 8 m | 16 m
Date | Camera | Central Coordinates |
---|---|---
30 November 2013 | PMS | E119.7_N26.3 |
30 November 2013 | PMS | E119.8_N26.6 |
30 November 2013 | PMS | E119.8_N26.8 |
3 November 2017 | PMS | E119.9_N26.6
27 April 2020 | PMS | E119.9_N26.8 |
24 August 2020 | PMS | E119.7_N26.6 |
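The spectral values of the training and test data drawn from these scenes are compared after normalization in the discussion (see the figure captions above). The following is a minimal per-band min-max normalization of a multispectral chip; the scaling scheme, band count and array shapes are illustrative assumptions rather than the paper's exact preprocessing.

```python
# Per-band min-max normalization of a (bands, H, W) multispectral chip
# (illustrative assumption; the paper's exact preprocessing is not given).
import numpy as np

def normalize_bands(chip):
    """Scale each band of a (bands, H, W) array to the range [0, 1]."""
    chip = chip.astype(np.float32)
    out = np.empty_like(chip)
    for b in range(chip.shape[0]):
        band = chip[b]
        lo, hi = band.min(), band.max()
        out[b] = (band - lo) / (hi - lo) if hi > lo else 0.0
    return out

# Example: a random 4-band, 256 x 256 chip standing in for a GF-1 PMS tile.
chip = np.random.randint(0, 1024, size=(4, 256, 256))
norm = normalize_bands(chip)
print(norm.min(), norm.max())   # 0.0 1.0
```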
Serial | Learning Rate | Momentum | Batch Size | ASPP | Weight Decay | Class Weight |
---|---|---|---|---|---|---|
1 | 0.001 | 0.9 | 4 | (1, 6, 12, 18) | 0.00004 | (1, 1, 1) |
2 | 0.001 | 0.9 | 4 | (1, 2, 4, 8) | 0.00004 | (1, 1, 1) |
3 | 0.005 | 0.9 | 4 | (1, 2, 4, 8) | 0.00004 | (1, 1, 1) |
4 | 0.0001 | 0.9 | 4 | (1, 2, 4, 8) | 0.00004 | (1, 1, 1) |
5 | 0.001 | 0.9 | 8 | (1, 2, 4, 8) | 0.00004 | (1, 1, 1) |
6 | 0.001 | 0.9 | 16 | (1, 2, 4, 8) | 0.00004 | (1, 1, 1) |
7 | 0.001 | 0.9 | 4 | (1, 2, 4, 8) | 0.0004 | (1, 1, 1) |
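The DeepLab-v3+ experiments above differ mainly in the ASPP atrous rates, comparing the default (1, 6, 12, 18) with the smaller set (1, 2, 4, 8). Below is a simplified ASPP block in PyTorch with configurable rates; it omits the 1 x 1 convolution and image-level pooling branches of the full DeepLab-v3+ head and is a hypothetical sketch, not the authors' implementation.

```python
# Simplified ASPP block with configurable atrous rates (hypothetical sketch).
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 atrous convolution branch per rate; padding = rate keeps
        # the spatial size unchanged.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Fuse the concatenated branch outputs back to out_ch channels.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

# Example: encoder feature map with 256 channels.
feat = torch.randn(2, 256, 64, 64)
print(ASPP(256, 256, rates=(1, 6, 12, 18))(feat).shape)   # torch.Size([2, 256, 64, 64])
```

Smaller rates narrow the receptive field of the parallel branches, which may better match the small raft and cage units in the imagery.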
Serial | Band | Learning Rate | Momentum | Batch Size | Weight Decay | Class Weight |
---|---|---|---|---|---|---|
1 | 421 | 0.001 | 0.9 | 4 | 0.00004 | 1, 1, 1 |
2 | 421 | 0.001 | 0.9 | 4 | 0.00004 | 1, 2.95, 5.55 |
3 | 431 | 0.001 | 0.9 | 4 | 0.00004 | 1, 1, 1 |
4 | 431 | 0.001 | 0.9 | 4 | 0.00004 | 1, 2.95, 5.55 |
5 | 432 | 0.001 | 0.9 | 4 | 0.00004 | 1, 1, 1 |
6 | 432 | 0.001 | 0.9 | 4 | 0.00004 | 1, 2.95, 5.55 |
7 | 321 | 0.001 | 0.9 | 4 | 0.00004 | 1, 1, 1 |
8 | 321 | 0.001 | 0.9 | 4 | 0.00004 | 1, 2.95, 5.55 |
9 | 432 | 0.001 | 0.9 | 16 | 0.00004 | 1, 1, 1 |
10 | 432 | 0.001 | 0.9 | 8 | 0.00004 | 1, 1, 1 |
11 | 432 | 0.001 | 0.9 | 8 | 0.00004 | 1, 2.95, 5.55 |
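The U-Net experiments above vary the input band combination and the class weights used in the loss, alongside the optimizer settings shared with the DeepLab-v3+ runs. The following sketch shows how one such configuration (experiment 2: bands 421, learning rate 0.001, momentum 0.9, batch size 4, weight decay 0.00004, class weights 1, 2.95, 5.55) could be expressed in PyTorch; the stand-in model, the band ordering, the mapping of the three weights to the background, raft and cage classes, and the random data are illustrative assumptions, not the authors' training code.

```python
# One U-Net training configuration from the table above (experiment 2),
# expressed as an illustrative PyTorch sketch with a stand-in model.
import torch
import torch.nn as nn

def band_combination(chips, bands=(4, 2, 1)):
    """Select a 3-band false-color composite from (B, 4, H, W) GF-1 chips.
    Bands are 1-indexed in the order of the sensor table (blue, green, red, NIR)."""
    return chips[:, [b - 1 for b in bands], :, :]

NUM_CLASSES = 3                                     # background, raft, cage (assumed order)
model = nn.Conv2d(3, NUM_CLASSES, kernel_size=1)    # stand-in for a U-Net
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 2.95, 5.55]))
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.00004)

# One illustrative optimization step on random 4-band chips (batch size 4).
chips = torch.rand(4, 4, 256, 256)
labels = torch.randint(0, NUM_CLASSES, (4, 256, 256))
optimizer.zero_grad()
loss = criterion(model(band_combination(chips)), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

Weighted cross-entropy of this form up-weights the two aquaculture classes relative to the background, counteracting class imbalance in the training chips.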
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).