On Transfer Learning for Building Damage Assessment from Satellite Imagery in Emergency Contexts
Figure 1. Building damage assessment incident workflow. The post-incident execution phase is triggered by a natural disaster but is only initiated upon retrieval of post-disaster satellite images from an imagery archive. Those images are then used to produce maps and are analyzed to produce a damage assessment report. The duration of each task is approximate and depends on many external factors.

Figure 2. Per-disaster empirical distribution of building damage. The numbers are the ratios of damaged buildings per disaster.

Figure 3. Image pair before and after Hurricane Florence. The bounding box focuses on a single building. The area surrounding the building is flooded.

Figure 4. Damage types: structural (left) and peripheral (right).

Figure 5. Two-step modelling approach composed of (1) a building detection model (BuildingNet) and (2) a damage classification model (DamageNet). The input of BuildingNet is a pre-disaster image, and its output is a binary segmentation heatmap, i.e., each pixel has a sigmoid output. The input of DamageNet is the pair of pre- and post-disaster image patches centred on a single building, along with the building mask. The two models are applied sequentially.

Figure 6. BuildingNet follows an Attention U-Net architecture. The pre-disaster image is downsampled and then upsampled (i.e., a bottleneck architecture) at different spatial scales. The skip connections allow an encoding at a given scale to bypass further downscaling and merge with the upsampling stream after being filtered through an attention gate. The attention gate learns to focus on different structures.
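The attention gate mentioned in the Figure 6 caption is, in Attention U-Net, an additive attention mechanism: the skip feature is re-weighted by a sigmoid mask computed from itself and a coarser gating signal. A minimal NumPy sketch, not the paper's implementation; the weight names (`Wx`, `Wg`, `psi`) and the use of plain weight matrices in place of learned 1 × 1 convolutions are illustrative assumptions:

```python
import numpy as np

def attention_gate(x, g, Wx, Wg, psi):
    """Additive attention gate: the skip feature x is re-weighted by a
    mask computed from x and the gating signal g (illustrative sketch).
    x, g: (C, H, W); Wx, Wg: (C_int, C) 1x1-conv-style weights; psi: (C_int,)."""
    # A 1x1 convolution is a per-pixel linear map over channels.
    fx = np.tensordot(Wx, x, axes=([1], [0]))       # (C_int, H, W)
    fg = np.tensordot(Wg, g, axes=([1], [0]))       # (C_int, H, W)
    a = np.maximum(fx + fg, 0.0)                    # ReLU
    logits = np.tensordot(psi, a, axes=([0], [0]))  # (H, W)
    alpha = 1.0 / (1.0 + np.exp(-logits))           # sigmoid attention mask
    return x * alpha                                # broadcast over channels
```

Because the mask lies in (0, 1), the gate can only attenuate the skip feature, which is what lets it "focus" the decoder on relevant structures.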
Figure 7. DamageNet follows a Siamese-ResNet architecture. The pre- and post-disaster feature streams are eventually concatenated into one damage classification stream. The building mask is applied as an attention mechanism. The figure shows the feature map shapes for ResNet34.
Figure 8. Ablation study configurations for the fusion of the pre- and post-disaster streams after the first (1), second (2), third (3), and fourth (4) blocks.
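The fusion configurations of Figure 8 amount to choosing the block after which the two weight-shared streams are concatenated channel-wise. A hedged sketch of that control flow (the `blocks` callables stand in for ResNet stages; the names and the identity-like blocks are hypothetical):

```python
import numpy as np

def siamese_forward(pre, post, blocks, fuse_after):
    """Run pre- and post-disaster inputs through the same sequence of
    blocks and concatenate them channel-wise after block `fuse_after`
    (1-4), mirroring the ablation configurations. Blocks after the
    fusion point operate on the single fused stream."""
    x = None
    for i, block in enumerate(blocks, start=1):
        if i <= fuse_after:
            pre, post = block(pre), block(post)  # weight-shared streams
            if i == fuse_after:
                x = np.concatenate([pre, post], axis=0)  # fuse channels
        else:
            x = block(x)  # single fused damage classification stream
    return x
```

In a real network the post-fusion blocks would need twice the input channels; the sketch only illustrates where the two streams merge.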
Figure 9. BuildingNet F1 score per disaster event.

Figure 10. Ablation study results for the fusion of the pre- and post-disaster streams after the first (1), second (2), third (3), and fourth (4) blocks. Each line represents a ResNet with a different capacity.

Figure 11. F1 score of DamageNet per disaster event.
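The per-disaster metric reported in Figures 9 and 11 is the F1 score. For reference, computed over per-building binary labels (1 = Damage):

```python
def f1_score(y_true, y_pred):
    """Binary F1 score: harmonic mean of precision and recall over
    per-building damage labels (1 = Damage, 0 = No damage)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

F1 is the natural choice here because the class distribution is heavily imbalanced (Figure 2), so plain accuracy would be dominated by undamaged buildings.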
Figure 12. Results of DamageNet fine-tuned with supervision on annotated samples of the current disaster event. Each line represents the F1 score for a given disaster event as a function of the number of samples from the current disaster.

Figure 13. Fine-tuning steps.
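The supervised fine-tuning of Figure 13 can be illustrated as updating only the final classification weights while the backbone stays frozen. A toy NumPy sketch with a logistic loss; the flat feature matrix and single-weight-vector head are simplifying assumptions, not the paper's exact procedure:

```python
import numpy as np

def fine_tune_head(feats, labels, w, lr=0.1, steps=200):
    """Fine-tuning sketch: `feats` are the frozen backbone's outputs for
    the few annotated current-disaster samples (N, D); only the head
    weights w (D,) are updated by gradient descent on a logistic loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-feats @ w))          # sigmoid predictions
        grad = feats.T @ (p - labels) / len(labels)    # logistic-loss gradient
        w = w - lr * grad                              # update head only
    return w
```

Freezing the backbone keeps the number of trainable parameters small, which is what makes fine-tuning on only a handful of annotated samples feasible.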
Figure 14. Comparison of manual and automatic damage classification incident workflows. Manual annotation takes up to days after the reception of post-disaster satellite images. Supervised fine-tuning still involves manual annotation, but for more than 10 times fewer samples. All durations are approximate. Data annotation durations are relative to each other.

Figure 15. Complete building damage assessment incident workflow supported by machine learning. Building detection inference depends on the pre-disaster satellite images only. Damage classification depends on both the pre- and post-disaster images, as well as on the building detection model inference. Data analytics depend on the damage classification model inference. All durations are approximate.

Figure A1. Pre-disaster samples from different disaster events along with the ground truth and the BuildingNet prediction. Samples are from the five disaster events on which BuildingNet performs the worst.

Figure A2. Pre-disaster samples from different disaster events along with the ground truth and the BuildingNet prediction. Samples are from the five disaster events on which BuildingNet performs the worst.
Abstract
1. Introduction
1.1. The Humanitarian Context
On the Use of Satellite Images
1.2. Damage Assessment
1.3. Related Works
2. Materials and Methods
2.1. Annotation
2.2. Images
2.2.1. Location
2.2.2. Disaster and Damage Type
2.2.3. Time and Seasons
2.2.4. Other Factors
2.3. Problem Complexity
2.4. Requirements
2.4.1. Model Readiness and Post-Incident Execution Time
2.4.2. Model Performance
2.4.3. Interpretability
2.5. Approach
2.6. Model Architectures
2.6.1. Training Strategy
2.6.2. Evaluation
2.7. Experimental Setting
2.8. Transfer Learning
Training Hyperparameters
3. Results
3.1. Comparison to the State-of-the-Art Model
3.2. BuildingNet
3.3. Damage Classification
Transfer Learning
4. Discussion
4.1. Building Detection
4.2. Damage Classification
Transfer Learning
4.3. Proposed Incident Workflow
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. BuildingNet Results
References
- Voigt, S.; Giulio-Tonolo, F.; Lyons, J.; Kučera, J.; Jones, B.; Schneiderhan, T.; Platzeck, G.; Kaku, K.; Hazarika, M.K.; Czaran, L.; et al. Global trends in satellite-based emergency mapping. Science 2016, 353, 247–252.
- Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F.; Vaughan, J.W. A theory of learning from different domains. Mach. Learn. 2010, 79, 151–175.
- Rolnick, D.; Donti, P.L.; Kaack, L.H.; Kochanski, K.; Lacoste, A.; Sankaran, K.; Ross, A.S.; Milojevic-Dupont, N.; Jaques, N.; Waldman-Brown, A.; et al. Tackling climate change with machine learning. arXiv 2019, arXiv:1906.05433.
- Rausch, L.; Friesen, J.; Altherr, L.C.; Meck, M.; Pelz, P.F. A holistic concept to design optimal water supply infrastructures for informal settlements using remote sensing data. Remote Sens. 2018, 10, 216.
- Kogan, F. Remote Sensing for Food Security; Springer: Berlin/Heidelberg, Germany, 2019.
- Nielsen, M.M. Remote sensing for urban planning and management: The use of window-independent context segmentation to extract urban features in Stockholm. Comput. Environ. Urban Syst. 2015, 52, 1–9.
- Filipponi, F. Exploitation of Sentinel-2 time series to map burned areas at the national level: A case study on the 2017 Italy wildfires. Remote Sens. 2019, 11, 622.
- Foody, G.M. Remote sensing of tropical forest environments: Towards the monitoring of environmental resources for sustainable development. Int. J. Remote Sens. 2003, 24, 4035–4046.
- Schumann, G.J.; Brakenridge, G.R.; Kettner, A.J.; Kashif, R.; Niebuhr, E. Assisting flood disaster response with earth observation data and products: A critical assessment. Remote Sens. 2018, 10, 1230.
- Alshehhi, R.; Marpu, P.R.; Woon, W.L.; Dalla Mura, M. Simultaneous extraction of roads and buildings in remote sensing imagery with convolutional neural networks. ISPRS J. Photogramm. Remote Sens. 2017, 130, 139–149.
- Shrestha, S.; Vanneschi, L. Improved fully convolutional network with conditional random fields for building extraction. Remote Sens. 2018, 10, 1135.
- Huang, J.; Zhang, X.; Xin, Q.; Sun, Y.; Zhang, P. Automatic building extraction from high-resolution aerial images and LiDAR data using gated residual refinement network. ISPRS J. Photogramm. Remote Sens. 2019, 151, 91–105.
- Yuan, J. Learning building extraction in aerial scenes with convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2793–2798.
- Liu, P.; Liu, X.; Liu, M.; Shi, Q.; Yang, J.; Xu, X.; Zhang, Y. Building footprint extraction from high-resolution images via spatial residual inception convolutional neural network. Remote Sens. 2019, 11, 830.
- Liu, Y.; Gross, L.; Li, Z.; Li, X.; Fan, X.; Qi, W. Automatic building extraction on high-resolution remote sensing imagery using deep convolutional encoder-decoder with spatial pyramid pooling. IEEE Access 2019, 7, 128774–128786.
- Ma, J.; Wu, L.; Tang, X.; Liu, F.; Zhang, X.; Jiao, L. Building extraction of aerial images by a global and multi-scale encoder-decoder network. Remote Sens. 2020, 12, 2350.
- Xie, Y.; Zhu, J.; Cao, Y.; Feng, D.; Hu, M.; Li, W.; Zhang, Y.; Fu, L. Refined extraction of building outlines from high-resolution remote sensing imagery based on a multifeature convolutional neural network and morphological filtering. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1842–1855.
- Guo, H.; Shi, Q.; Du, B.; Zhang, L.; Wang, D.; Ding, H. Scene-driven multitask parallel attention network for building extraction in high-resolution remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 59, 4287–4306.
- Guo, H.; Shi, Q.; Marinoni, A.; Du, B.; Zhang, L. Deep building footprint update network: A semi-supervised method for updating existing building footprint from bi-temporal remote sensing images. Remote Sens. Environ. 2021, 264, 112589.
- Cooner, A.J.; Shao, Y.; Campbell, J.B. Detection of urban damage using remote sensing and machine learning algorithms: Revisiting the 2010 Haiti earthquake. Remote Sens. 2016, 8, 868.
- Fujita, A.; Sakurada, K.; Imaizumi, T.; Ito, R.; Hikosaka, S.; Nakamura, R. Damage detection from aerial images via convolutional neural networks. In Proceedings of the 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan, 8–12 May 2017; pp. 5–8.
- Sublime, J.; Kalinicheva, E. Automatic post-disaster damage mapping using deep-learning techniques for change detection: Case study of the Tohoku tsunami. Remote Sens. 2019, 11, 1123.
- Doshi, J.; Basu, S.; Pang, G. From satellite imagery to disaster insights. arXiv 2018, arXiv:1812.07033.
- Van Etten, A.; Lindenbaum, D.; Bacastow, T.M. SpaceNet: A remote sensing dataset and challenge series. arXiv 2018, arXiv:1807.01232.
- Demir, I.; Koperski, K.; Lindenbaum, D.; Pang, G.; Huang, J.; Basu, S.; Hughes, F.; Tuia, D.; Raskar, R. DeepGlobe 2018: A challenge to parse the earth through satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 172–181.
- Gupta, R.; Hosfelt, R.; Sajeev, S.; Patel, N.; Goodman, B.; Doshi, J.; Heim, E.; Choset, H.; Gaston, M. xBD: A dataset for assessing building damage from satellite imagery. arXiv 2019, arXiv:1911.09296.
- Durnov, V. GitHub—DIUx-xView/xView2_first_place: 1st Place Solution for 'xView2: Assess Building Damage' Challenge. Available online: https://github.com/DIUx-xView/xView2_first_place (accessed on 1 March 2020).
- Shao, J.; Tang, L.; Liu, M.; Shao, G.; Sun, L.; Qiu, Q. BDD-Net: A general protocol for mapping buildings damaged by a wide range of disasters based on satellite imagery. Remote Sens. 2020, 12, 1670.
- Gupta, R.; Shah, M. RescueNet: Joint building segmentation and damage assessment from satellite imagery. arXiv 2020, arXiv:2004.07312.
- Weber, E.; Kané, H. Building disaster damage assessment in satellite imagery with multi-temporal fusion. arXiv 2020, arXiv:2004.05525.
- Hao, H.; Baireddy, S.; Bartusiak, E.R.; Konz, L.; LaTourette, K.; Gribbons, M.; Chan, M.; Comer, M.L.; Delp, E.J. An attention-based system for damage assessment using satellite imagery. arXiv 2020, arXiv:2004.06643.
- Shen, Y.; Zhu, S.; Yang, T.; Chen, C. Cross-directional feature fusion network for building damage assessment from satellite imagery. arXiv 2020, arXiv:2010.14014.
- Boin, J.B.; Roth, N.; Doshi, J.; Llueca, P.; Borensztein, N. Multi-class segmentation under severe class imbalance: A case study in roof damage assessment. arXiv 2020, arXiv:2010.07151.
- Khvedchenya, E.; Gabruseva, T. Fully convolutional Siamese neural networks for buildings damage assessment from satellite images. arXiv 2021, arXiv:2111.00508.
- Xiao, H.; Peng, Y.; Tan, H.; Li, P. Dynamic cross fusion network for building-based damage assessment. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo (ICME), Shenzhen, China, 5–9 July 2021; pp. 1–6.
- Shen, Y.; Zhu, S.; Yang, T.; Chen, C.; Pan, D.; Chen, J.; Xiao, L.; Du, Q. BDANet: Multiscale convolutional neural network with cross-directional attention for building damage assessment from satellite images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–14.
- Calton, L.; Wei, Z. Using artificial neural network models to assess hurricane damage through transfer learning. Appl. Sci. 2022, 12, 1466.
- Xu, J.Z.; Lu, W.; Li, Z.; Khaitan, P.; Zaytseva, V. Building damage detection in satellite imagery using convolutional neural networks. arXiv 2019, arXiv:1910.06444.
- Valentijn, T.; Margutti, J.; van den Homberg, M.; Laaksonen, J. Multi-hazard and spatial transferability of a CNN for automated building damage assessment. Remote Sens. 2020, 12, 2839.
- Benson, V.; Ecker, A. Assessing out-of-domain generalization for robust building damage detection. arXiv 2020, arXiv:2011.10328.
- Li, Y.; Wang, N.; Shi, J.; Liu, J.; Hou, X. Revisiting batch normalization for practical domain adaptation. arXiv 2016, arXiv:1603.04779.
- Athiwaratkun, B.; Finzi, M.; Izmailov, P.; Wilson, A.G. There are many consistent explanations of unlabeled data: Why you should average. arXiv 2018, arXiv:1806.05594.
- Nex, F.; Duarte, D.; Tonolo, F.G.; Kerle, N. Structural building damage detection with deep learning: Assessment of a state-of-the-art CNN in operational conditions. Remote Sens. 2019, 11, 2765.
- Lee, J.; Xu, J.Z.; Sohn, K.; Lu, W.; Berthelot, D.; Gur, I.; Khaitan, P.; Huang, K.-W.; Koupparis, K.; Kowatsch, B. Assessing post-disaster damage from satellite imagery using semi-supervised learning techniques. arXiv 2020, arXiv:2011.14004.
- Berthelot, D.; Carlini, N.; Goodfellow, I.; Papernot, N.; Oliver, A.; Raffel, C. MixMatch: A holistic approach to semi-supervised learning. arXiv 2019, arXiv:1905.02249.
- Sohn, K.; Berthelot, D.; Li, C.L.; Zhang, Z.; Carlini, N.; Cubuk, E.D.; Kurakin, A.; Zhang, H.; Raffel, C. FixMatch: Simplifying semi-supervised learning with consistency and confidence. arXiv 2020, arXiv:2001.07685.
- Xia, J.; Yokoya, N.; Adriano, B. Building damage mapping with self-positive unlabeled learning. arXiv 2021, arXiv:2111.02586.
- Ismail, A.; Awad, M. Towards cross-disaster building damage assessment with graph convolutional networks. arXiv 2022, arXiv:2201.10395.
- Kuzin, D.; Isupova, O.; Simmons, B.D.; Reece, S. Disaster mapping from satellites: Damage detection with crowdsourced point labels. arXiv 2021, arXiv:2111.03693.
- Anand, V.; Miura, Y. PreDisM: Pre-disaster modelling with CNN ensembles for at-risk communities. arXiv 2021, arXiv:2112.13465.
- Presa-Reyes, M.; Chen, S.C. Weakly-supervised damaged building localization and assessment with noise regularization. In Proceedings of the 2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR), Virtual, 8–10 September 2021; pp. 8–14.
- Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 2020, 43, 101009.
- Xiong, C.; Li, Q.; Lu, X. Automated regional seismic damage assessment of buildings using an unmanned aerial vehicle and a convolutional neural network. Autom. Constr. 2020, 109, 102994.
- Rudner, T.G.J.; Rußwurm, M.; Fil, J.; Pelich, R.; Bischke, B.; Kopacková, V.; Bilinski, P. Rapid computer vision-aided disaster response via fusion of multiresolution, multisensor, and multitemporal satellite imagery. In Proceedings of the First Workshop on AI for Social Good, Neural Information Processing Systems (NIPS-2018), Montreal, QC, Canada, 3–8 December 2018.
- Li, X.; Caragea, D.; Zhang, H.; Imran, M. Localizing and quantifying infrastructure damage using class activation mapping approaches. Soc. Netw. Anal. Min. 2019, 9, 44.
- Duarte, D.; Nex, F.; Kerle, N.; Vosselman, G. Satellite image classification of building damages using airborne and satellite image samples in a deep learning approach. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-2, 89–96.
- Weber, E.; Papadopoulos, D.P.; Lapedriza, A.; Ofli, F.; Imran, M.; Torralba, A. Incidents1M: A large-scale dataset of images with natural disasters, damage, and incidents. arXiv 2022, arXiv:2201.04236.
- Glasmachers, T. Limits of end-to-end learning. In Proceedings of the Asian Conference on Machine Learning, Seoul, Korea, 15–17 November 2017; pp. 17–32.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.C.H.; Heinrich, M.P.; Misawa, K.; Mori, K.; McDonagh, S.G.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597.
- Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese neural networks for one-shot image recognition. In Proceedings of the ICML Deep Learning Workshop, Lille, France, 6–11 July 2015; Volume 2.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385.
- Chen, H.; Nemni, E.; Vallecorsa, S.; Li, X.; Wu, C.; Bromley, L. Dual-tasks Siamese transformer framework for building damage assessment. arXiv 2022, arXiv:2201.10953.
- Li, Y.; Lin, C.; Li, H.; Hu, W.; Dong, H.; Liu, Y. Unsupervised domain adaptation with self-attention for post-disaster building damage detection. Neurocomputing 2020, 415, 27–39.
- Benjdira, B.; Bazi, Y.; Koubaa, A.; Ouni, K. Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images. Remote Sens. 2019, 11, 1369.
- Xu, Q.; Yuan, X.; Ouyang, C. Class-aware domain adaptation for semantic segmentation of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2020, 60, 1–17.
| xBD Original Class | Simplified Class | Description |
|---|---|---|
| 0 (No damage) | 0 (No damage) | Undisturbed. No signs of water, structural or shingle damage, or burn marks. |
| 1 (Minor damage) | 0 (No damage) | Building partially burnt, water surrounding the structure, volcanic flow nearby, roof element missing, or visible crack. |
| 2 (Major damage) | 1 (Damage) | Partial wall or roof collapse, encroaching volcanic flow, or surrounded by water/mud. |
| 3 (Destroyed) | 1 (Damage) | Scorched, completely collapsed, partially/completely covered with water/mud, or otherwise no longer present. |
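The class simplification in the table above is a fixed mapping from the four xBD ordinal classes to the binary target used in this work, with minor damage folded into the "No damage" class:

```python
# Mapping from the 4-class xBD ordinal scale to the simplified binary target.
XBD_TO_BINARY = {
    0: 0,  # No damage    -> No damage
    1: 0,  # Minor damage -> No damage
    2: 1,  # Major damage -> Damage
    3: 1,  # Destroyed    -> Damage
}

def simplify_labels(labels):
    """Map a sequence of xBD class ids to the binary damage scheme."""
    return [XBD_TO_BINARY[c] for c in labels]
```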
| Disaster Event | Abbreviation | Country |
|---|---|---|
| Hurricane Florence | hurr-florence | USA |
| Hurricane Harvey | hurr-harvey | USA |
| Hurricane Matthew | hurr-matthew | Haiti |
| Hurricane Michael | hurr-michael | USA |
| Joplin Tornado | joplin-tornado | USA |
| Lower Puna Volcano | lower-puna-volcano | USA (Hawaii) |
| Mexico Earthquake | mexico-earthquake | Mexico |
| Moore Tornado | moore-tornado | USA |
| Midwest Flood | mw-flood | USA |
| Nepal Flooding | nepal-flooding | Nepal |
| Palu Tsunami | palu-tsunami | Indonesia |
| Pinery Bushfire | pinery-bushfire | Australia |
| Portugal Wildfire | portugal-wildfire | Portugal |
| Socal Fire | socal-fire | USA |
| Santa Rosa Fire | sr-fire | USA |
| Sunda Tsunami | sunda-tsunami | Indonesia |
| Tuscaloosa Tornado | tuscaloosa-tornado | USA |
| Woolsey Fire | woolsey-fire | USA |
| Model | Localization | Classification |
|---|---|---|
| Weber [30] | 0.835 | 0.697 |
| RescueNet [29] | 0.840 | 0.740 |
| BDANet [36] | 0.864 | 0.782 |
| DCFNet [35] | 0.864 | 0.795 |
| DamFormer [63] | 0.869 | 0.728 |
| Our model | 0.846 (0.002) | 0.709 (0.003) |
| Disaster Event | No Fine-Tuning | Fine-Tuning |
|---|---|---|
| Hurricane Florence | 0.792 | 0.931 |
| Hurricane Harvey | 0.372 | 0.402 |
| Hurricane Matthew | 0.697 | 0.702 |
| Hurricane Michael | 0.094 | 0.850 |
| Joplin Tornado | 0.889 | 0.853 |
| Lower Puna Volcano | 0.745 | 0.941 |
| Mexico Earthquake | 0.010 | 0.027 |
| Moore Tornado | 0.859 | 0.879 |
| Midwest Flood | 0.570 | 0.737 |
| Nepal Flooding | 0.472 | 0.646 |
| Palu Tsunami | 0.777 | 0.833 |
| Pinery Bushfire | 0.405 | 0.498 |
| Portugal Wildfire | 0.493 | 0.540 |
| Socal Fire | 0.803 | 0.801 |
| Santa Rosa Fire | 0.924 | 0.920 |
| Sunda Tsunami | 0.245 | 0.523 |
| Tuscaloosa Tornado | 0.778 | 0.770 |
| Woolsey Fire | 0.765 | 0.766 |
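As a quick summary of the table above, the macro-averaged F1 score across the 18 disaster events rises from roughly 0.59 without fine-tuning to roughly 0.70 with it:

```python
# Per-disaster F1 scores from the table, in row order.
NO_FT = [0.792, 0.372, 0.697, 0.094, 0.889, 0.745, 0.010, 0.859, 0.570,
         0.472, 0.777, 0.405, 0.493, 0.803, 0.924, 0.245, 0.778, 0.765]
FT = [0.931, 0.402, 0.702, 0.850, 0.853, 0.941, 0.027, 0.879, 0.737,
      0.646, 0.833, 0.498, 0.540, 0.801, 0.920, 0.523, 0.770, 0.766]

# Macro average: each disaster event weighted equally.
mean_no_ft = sum(NO_FT) / len(NO_FT)  # ~0.594
mean_ft = sum(FT) / len(FT)           # ~0.701
```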
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bouchard, I.; Rancourt, M.-È.; Aloise, D.; Kalaitzis, F. On Transfer Learning for Building Damage Assessment from Satellite Imagery in Emergency Contexts. Remote Sens. 2022, 14, 2532. https://doi.org/10.3390/rs14112532