Study on the Evolutionary Characteristics of Post-Fire Forest Recovery Using Unmanned Aerial Vehicle Imagery and Deep Learning: A Case Study of Jinyun Mountain in Chongqing, China
Figure 1. Location of the study area. Sources: https://www.google.com.hk/maps/ and http://www.bigemap.com/, both accessed on 13 October 2024.
Figure 2. Image data of the same forest area.
Figure 3. Overall architecture of Mask2former. The pixel decoder takes the outputs of all stages of the feature-extraction network and converts them into pixel-level predictions, yielding output features at multiple scales. The largest output feature is used to compute the mask, while the smaller output features serve as inputs to the Transformer decoder.
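The routing described in the Figure 3 caption can be illustrated with a short PyTorch sketch. It shows only the data flow, not the authors' implementation; the class name `PixelDecoderRouting`, the Swin-tiny stage widths (96/192/384/768), and the 256-channel embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PixelDecoderRouting(nn.Module):
    """Hypothetical sketch: project every backbone stage to a common width,
    send the largest map to mask prediction, and pass the smaller maps to
    the Transformer decoder (channel sizes are assumptions)."""
    def __init__(self, dim=256, stage_channels=(96, 192, 384, 768)):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, dim, 1) for c in stage_channels)
        self.mask_head = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, stage_feats):  # ordered largest spatial size first
        feats = [p(f) for p, f in zip(self.proj, stage_feats)]
        mask_features = self.mask_head(feats[0])  # largest scale -> pixel-level masks
        decoder_inputs = feats[1:]                # smaller scales -> Transformer decoder
        return mask_features, decoder_inputs

# toy usage: four stages at 1/4, 1/8, 1/16, and 1/32 of a 512-px image
feats = [torch.randn(1, c, s, s) for c, s in zip((96, 192, 384, 768), (128, 64, 32, 16))]
mask_feats, dec_inputs = PixelDecoderRouting()(feats)
```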
Figure 4. (a) Swin Transformer network architecture; (b) Swin Transformer block structure. Panel (b) shows two Swin Transformer blocks connected in series; in the network architecture these blocks always appear in pairs, with at least two grouped together. W-MSA denotes window-based multi-head self-attention, and SW-MSA denotes shifted-window multi-head self-attention.
Figure 5. (a) Window multi-head self-attention (W-MSA) and (b) shifted-window multi-head self-attention (SW-MSA).
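The W-MSA/SW-MSA pairing in Figures 4 and 5 reduces to partitioning the feature map into fixed windows and, in the second block of each pair, cyclically shifting the map by half a window beforehand so that border tokens can attend across the previous window boundaries. A minimal PyTorch sketch of that mechanism; the window size, shapes, and channel width are illustrative:

```python
import torch

def window_partition(x, ws):
    """Split a (B, H, W, C) map into non-overlapping ws x ws windows,
    returning (num_windows * B, ws * ws, C); H and W must divide by ws."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

x = torch.randn(1, 8, 8, 96)            # toy feature map (Swin-tiny width)

# W-MSA: self-attention runs independently inside each fixed window
wins = window_partition(x, ws=4)         # (4, 16, 96): 4 windows of 16 tokens

# SW-MSA: cyclically shift by ws // 2 before partitioning
x_shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))
wins_shifted = window_partition(x_shifted, ws=4)
# after attention, torch.roll(..., shifts=(2, 2)) undoes the shift
```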
Figure 6. Approximate calculation process of the adaptive EA Block. In the figure, a "+" inside a circle indicates that the inputs to that node are added together, and a "*" inside a circle indicates that they are multiplied together.
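Since Figure 6 specifies the EA Block only schematically, the following is a loose, hypothetical sketch of how such "+" (add) and "*" (multiply) nodes commonly wire a contextual branch together with an attention gate; the specific layers chosen here are assumptions, not the authors' exact wiring.

```python
import torch
import torch.nn as nn

class EABlockSketch(nn.Module):
    """Hypothetical stand-in for the EA Block: a local-context branch is
    gated ("*" node) by a channel-attention signal and added ("+" node)
    back to the input. The real wiring follows the paper's Figure 6."""
    def __init__(self, c):
        super().__init__()
        self.context = nn.Conv2d(c, c, 3, padding=1, groups=c)  # CoT-style local context
        self.gate = nn.Sequential(                              # EMA-style channel gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c, 1), nn.Sigmoid())

    def forward(self, x):
        return x + self.context(x) * self.gate(x)  # "*" gates, "+" adds the residual

y = EABlockSketch(96)(torch.randn(1, 96, 32, 32))  # shape preserved: (1, 96, 32, 32)
```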
Figure 7. Structure of EAswin-Mask2former.
Figure 8. Example segmentation results of EAswin-Mask2former and other models.
Figure 9. Comparison of mIoU between EAswin-Mask2former and other models. DLV3+, SEG, R-M, and S-M in the figure denote DeepLabV3+, Segformer, Resnet50-Mask2former, and Swin-Mask2former, respectively.
Figure 10. Satellite remote sensing images of the forest area from 2022 to 2024.
Figure 11. Unmanned aerial vehicle images of Region A at different times and their segmentation results. The corresponding shooting times, from top to bottom, are October 2022, March 2023, August 2023, and February 2024.
Figure 12. Trend over time of the burned and damaged area and the proportion of forest area in Region A.
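The quantities plotted in Figure 12 can be derived by counting class pixels in each segmented UAV image. A small sketch, assuming hypothetical class ids (0 = background, 1 = forest, 2 = burned/damaged) and random stand-in masks in place of real EAswin-Mask2former outputs:

```python
import numpy as np

def area_proportions(mask):
    """Fraction of pixels belonging to each class id present in the mask."""
    total = mask.size
    return {int(c): float((mask == c).sum()) / total for c in np.unique(mask)}

# stand-in masks; in practice these are the model's per-date segmentations
masks = {"2022-10": np.random.randint(0, 3, (512, 512)),
         "2024-02": np.random.randint(0, 3, (512, 512))}
for date, m in masks.items():
    p = area_proportions(m)
    print(date, "forest:", round(p.get(1, 0.0), 3), "burned:", round(p.get(2, 0.0), 3))
```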
Abstract
1. Introduction
2. Materials and Methods
2.1. The Study Area
2.2. Data Acquisition
2.3. Methodology
2.4. Model
2.4.1. Semantic Segmentation Model Mask2Former
2.4.2. Backbone Swin-Transformer
2.4.3. Efficiently Adaptive Block
3. Experiment
3.1. Experimental Conditions
3.2. Performance Index
3.3. Comparative Experiments and Ablation Experiment
4. Results and Discussion
4.1. Experimental Results
4.2. Segmentation of Burned Areas by EAswin-Mask2former
4.3. Evolutionary Characterization of Region A
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| No. | Shooting Time | Weather Conditions | UAV Flight Altitude | UAV Flight Duration |
|---|---|---|---|---|
| 1 | 2 October 2022 | Cloudy to sunny | 600–800 m | 4 × 30 min |
| 2 | 6 March 2023 | Cloudy to sunny | 600–850 m | 4 × 30 min |
| 3 | 1 August 2023 | Sunny | 600–950 m | 5 × 30 min |
| 4 | 17 February 2024 | Sunny | 600–950 m | 6 × 30 min |
| Segmentation Model | Backbone | mAcc | mIoU | mDice | mPrecision |
|---|---|---|---|---|---|
| DeepLabV3+ | Resnet101 | 0.7209 | 0.6361 | 0.7260 | 0.7817 |
| Segformer | MixViT | 0.7689 | 0.6806 | 0.7968 | 0.8433 |
| Mask2former | Resnet50 | 0.7753 | 0.6732 | 0.7910 | 0.8184 |
| Mask2former | Swin-Transformer (tiny) | 0.8191 | 0.7068 | 0.8213 | 0.8303 |
| Mask2former | EAswin-Transformer (tiny) | 0.8365 | 0.7281 | 0.8352 | 0.8396 |
| Segmentation Model | Backbone | mAcc | mIoU | mDice | mPrecision |
|---|---|---|---|---|---|
| DeepLabV3+ | Resnet101 | 0.8299 | 0.7231 | 0.8373 | 0.8455 |
| Segformer | MixViT | 0.8481 | 0.7460 | 0.8528 | 0.8580 |
| Mask2former | Resnet50 | 0.8545 | 0.7309 | 0.8421 | 0.8310 |
| Mask2former | Swin-Transformer (tiny) | 0.8576 | 0.7577 | 0.8602 | 0.8633 |
| Mask2former | EAswin-Transformer (tiny) | 0.8623 | 0.7635 | 0.8642 | 0.8668 |
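For reference, the four metrics reported in the tables can be computed per class from a pixel confusion matrix and then averaged over classes. This sketch assumes macro-averaging with mAcc as mean per-class recall, a common convention that may differ in detail from the authors' exact computation:

```python
import numpy as np

def mean_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp                       # predicted as c but not c
    fn = conf.sum(axis=1) - tp                       # truly c but missed
    acc = tp / np.maximum(tp + fn, 1)                # per-class accuracy (recall)
    iou = tp / np.maximum(tp + fp + fn, 1)
    dice = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    prec = tp / np.maximum(tp + fp, 1)
    return acc.mean(), iou.mean(), dice.mean(), prec.mean()  # mAcc, mIoU, mDice, mPrecision

conf = np.array([[90, 10],     # toy 2-class example
                 [20, 80]])
print(mean_metrics(conf))      # approx. (0.85, 0.739, 0.850, 0.854)
```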
| Model | CoT | EMA | mAcc | mIoU | mDice | mPrecision |
|---|---|---|---|---|---|---|
| Swin-Mask2former | × | × | 0.8576 | 0.7577 | 0.8602 | 0.8633 |
| Swin-Mask2former | ✓ | × | 0.8528 | 0.7599 | 0.8618 | 0.8715 |
| Swin-Mask2former | × | ✓ | 0.8607 | 0.7609 | 0.8625 | 0.8659 |
| EAswin-Mask2former | ✓ | ✓ | 0.8623 | 0.7635 | 0.8642 | 0.8668 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhu, D.; Yang, P. Study on the Evolutionary Characteristics of Post-Fire Forest Recovery Using Unmanned Aerial Vehicle Imagery and Deep Learning: A Case Study of Jinyun Mountain in Chongqing, China. Sustainability 2024, 16, 9717. https://doi.org/10.3390/su16229717