Using Artificial Neural Network Models to Assess Hurricane Damage through Transfer Learning
Figure 2. The transfer learning workflow for binary classification into floods and non-floods using pre-trained ResNet and MobileNet.
Figure 3. The transfer learning workflow for object detection of four object categories using pre-trained ResNet, MobileNet, and EfficientNet.
Figure 4. The training and validation cross-entropy loss and accuracy for the flood damage classification using ResNet and MobileNet.
Figure 5. The training metrics for damage detection: classification loss, localization loss, regularization loss, and total loss up to the twenty-thousandth (20 k) epoch.
Figure 6. The confusion matrix for flood damage classification using (a) ResNet and (b) MobileNet. The four possible predictions are located as follows: TN (top left), FP (top right), FN (lower left), and TP (lower right).
Figure 7. Inference carried out on the image of a damaged roof for each of the three models. The green bounding boxes correspond to the correct predictions of a damaged roof, and the grey bounding box corresponds to the incorrect prediction of structural damage from the first row of Table 4.
Figure 8. Inference carried out on the image of a damaged wall for each of the three models. The blue bounding boxes correspond to the correct prediction of a damaged wall from the first row of Table 5.
Figure 9. Inference carried out on the image of flood damage for each of the three models. The white bounding boxes correspond to the correct predictions of flood damage from the first row of Table 6.
Figure 10. Inference carried out on the image of structural damage for each of the three models. The tan colored bounding boxes correspond to the correct predictions of structural damage from the first row of Table 7.
Abstract
1. Introduction
2. Building Damage Dataset
2.1. Data Collection and Preparation
2.2. Data Statistics
- Damaged roof. The bounding box label highlights a roof with damage to the whole roof, some shingles, or parts of the roof. The label typically encompasses the entire roof in the image; however, if the entire roof is not visible, the damaged area and any additional visible parts of the roof are included.
- Damaged wall. The bounding box label highlights a damaged building wall or windows within a wall. Damage to walls/windows ranges from minor disintegration of brick or glass to complete loss of the wall or window structure.
- Flood damage. The bounding box label highlights flood waters in an image. Flood water can occur in various places, as explained for the binary classification dataset. Because of this sporadic nature, some images required multiple bounding box labels to encapsulate all of the flood water.
- Structural damage. The bounding box label highlights a building suffering from structural damage, e.g., the disintegration of the roof and/or any floor(s) within the building, complete loss of multiple walls/structures, or the collapse of the whole building.
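Bounding box labels of this kind are commonly stored in the Pascal VOC XML format produced by the LabelImg tool cited in the references. The sketch below illustrates the structure of one such annotation; the file name, image size, and box coordinates are purely illustrative, not values from the actual dataset.

```xml
<annotation>
  <!-- illustrative file name, not an actual dataset image -->
  <filename>hurricane_damage_0001.jpg</filename>
  <size>
    <width>800</width>
    <height>600</height>
    <depth>3</depth>
  </size>
  <object>
    <!-- one of the four object categories defined above -->
    <name>damaged roof</name>
    <bndbox>
      <xmin>120</xmin>
      <ymin>45</ymin>
      <xmax>610</xmax>
      <ymax>280</ymax>
    </bndbox>
  </object>
</annotation>
```

An image with sporadic flood water would simply contain multiple `<object>` entries, one per bounding box.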
3. Transfer Learning
3.1. The Fundamentals of Transfer Learning
3.2. Artificial Neural Network Models
3.2.1. ResNet
3.2.2. MobileNet
3.2.3. EfficientNet
3.3. Transfer Learning Workflow for Flood Damage Classification
- Obtain the pre-trained neural network model and its weights;
- Remove the top layer which is used to predict the original 1000 classes;
- Freeze other layers in the pre-trained model to avoid destroying any of the extracted feature information;
- Add new and trainable layers on top of the frozen layers. These layers learn to turn the old features into the target predictions (i.e., floods and non-floods images) using a new dataset;
- Train the new layers on our in-house hurricane damage dataset related to flood damage.
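The five steps above can be sketched with the Keras API. This is a minimal illustration, not the paper's exact configuration: the input size, pooling layer, optimizer, and learning schedule are assumptions, and `train_ds` is a placeholder for the in-house flood damage dataset.

```python
import tensorflow as tf

# Steps 1-2: load MobileNetV2 pre-trained on ImageNet, dropping the top
# layer that predicts the original 1000 classes (include_top=False).
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

# Step 3: freeze the pre-trained layers so the extracted feature
# information is not destroyed during training.
base.trainable = False

# Step 4: add new, trainable layers that turn the old features into the
# binary floods / non-floods prediction.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Step 5: train only the new head on the hurricane flood damage dataset.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # placeholder datasets
```

Swapping `MobileNetV2` for a ResNet variant (e.g., `tf.keras.applications.ResNet50`) leaves the rest of the workflow unchanged.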
3.4. Transfer Learning Workflow for Hurricane Damage Detection
- Initialize training with pre-trained neural network model and extracted feature weights;
- Configure a new pipeline with specified training parameters for our model;
- Use the pre-trained model checkpoint as the starting point for adding new, trainable layers that contain predictions of the four distinct object categories in our dataset;
- Train the new layers on our in-house hurricane damage dataset related to building damage.
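In the TensorFlow Object Detection API, a common tool for this checkpoint-based workflow (assumed here for illustration), the steps above are driven by a pipeline configuration file. In the fragment below, only `num_classes: 4` and the 20 k training iterations reflect this study; the backbone, batch size, and checkpoint path are placeholders.

```
model {
  ssd {
    num_classes: 4  # damaged roof, damaged wall, flood damage, structural damage
  }
}
train_config {
  batch_size: 8                                          # placeholder value
  num_steps: 20000                                       # the 20 k iterations shown in Figure 5
  fine_tune_checkpoint: "pre-trained/checkpoint/ckpt-0"  # placeholder path
  fine_tune_checkpoint_type: "detection"                 # reuse extracted feature weights
}
```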
3.5. Computing Environment
- The CPU model name is Intel(R) Xeon(R) CPU @ 2.00 GHz;
- The clock speed of the CPU is 2000 MHz (2.0 GHz), and the CPU cache size is 39,424 KB;
- The Graphics Processing Unit (GPU) card is an NVIDIA Tesla P100. It is based on the NVIDIA Pascal GPU architecture and has 3584 NVIDIA CUDA cores and 16 GB of GPU memory. A single GPU card was used in this study.
4. Results and Discussion
4.1. Metrics and Prediction Skills
4.1.1. Classification Metrics
4.1.2. Object Detection Metrics
4.2. Model Training
4.3. Damage Classification
- TN/True negative: a non-floods image predicted as non-floods;
- TP/True positive: a floods image predicted as floods;
- FN/False negative: a floods image predicted as non-floods;
- FP/False positive: a non-floods image predicted as floods.
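The four outcomes above determine all of the classification metrics used in this study. The snippet below shows the standard formulas; the confusion-matrix counts are made-up placeholders, not the study's actual values.

```python
# Hypothetical confusion-matrix counts (NOT the paper's actual values).
tp, tn, fp, fn = 89, 85, 13, 11

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)  # of images predicted as floods, fraction truly floods
recall    = tp / (tp + fn)  # of true floods images, fraction detected
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(accuracy, 2), round(precision, 2), round(recall, 2), round(f1, 2))
```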
4.4. Damage Detection
4.4.1. Damaged Roof Comparison
4.4.2. Damaged Wall Comparison
4.4.3. Flood Damage Comparison
4.4.4. Structural Damage Comparison
4.4.5. Overall Performance of the Damage Detection Models
5. Conclusions
- The transfer-learning-based flood damage classification models were developed using ResNet and MobileNet. A binary classification was carried out to distinguish floods from non-floods images, and several methods were used to evaluate the performance of the transfer learning models. The confusion matrix comparison showed that both ResNet and MobileNet correctly classify floods and non-floods images with relatively high accuracy: the overall accuracy is about 76% using ResNet and 87% using MobileNet. Three metrics (precision, recall, and F1 score) were further calculated and compared between the two models. The results obtained using MobileNet as the base model are consistently better than those using ResNet. For example, the F1 score, the harmonic mean of precision and recall, is about 0.88 using MobileNet, roughly 0.09 higher than the F1 score using ResNet (0.79). Overall, this study showed that hurricane flood damage to buildings can be correctly classified using artificial intelligence models developed with transfer learning techniques built on advanced machine learning models in computer vision.
- The transfer-learning-based damage detection models were developed using ResNet, MobileNet, and EfficientNet. Four damage types were captured in four object classes: damaged roof, damaged wall, flood damage, and structural damage. Two methods were primarily used to evaluate the performance of the transfer learning models for damage detection. First, the top three confidence scores and the associated object classes were tabulated for each model, showing that every model was capable of predicting the correct object class in the image; the MobileNet model consistently achieved the highest confidence score and proved to be the most accurate model in detecting hurricane damage. Second, the image of each type of damage was displayed with the top bounding box prediction for each model; likewise, MobileNet consistently achieved the most accurate localization of the detected damage in each image. Therefore, this study showed that various types of hurricane damage can be accurately detected using artificial intelligence models developed through transfer learning, further advancing machine learning applications in computer vision.
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Cooper, R. Hurricane Florence Recovery Recommendations. 2018. Available online: https://www.osbm.nc.gov/media/824/open (accessed on 15 March 2021).
- Guikema, S. Artificial Intelligence for Natural Hazards Risk Analysis: Potential, Challenges, and Research Needs. Risk Anal. 2020, 40, 1117–1123.
- Massarra, C.C. Hurricane Damage Assessment Process for Residential Buildings. Master’s Thesis, Louisiana State University, Baton Rouge, LA, USA, 2012.
- FEMA. FEMA Preliminary Damage Assessment Guide; FEMA: Washington, DC, USA, 2020.
- Lam, D.; Kuzma, R.; McGee, K.; Dooley, S.; Laielli, M.; Klaric, M.; Bulatov, Y.; McCord, B. xView: Objects in context in overhead imagery. arXiv 2018, arXiv:1802.07856.
- Gupta, R.; Hosfelt, R.; Sajeev, S.; Patel, N.; Goodman, B.; Doshi, J.; Heim, E.; Choset, H.; Gaston, M. xBD: A dataset for assessing building damage from satellite imagery. arXiv 2019, arXiv:1911.09296.
- Roueche, D.B.; Lombardo, F.T.; Krupar, R.; Smith, D.J. Collection of Perishable Data on Wind- and Surge-Induced Residential Building Damage During Hurricane Harvey (TX); DesignSafe-CI: Austin, TX, USA, 2018.
- Weber, E.; Kané, H. Building disaster damage assessment in satellite imagery with multi-temporal fusion. arXiv 2020, arXiv:2004.05525.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 27–29 October 2017; pp. 2961–2969.
- Hao, H.; Baireddy, S.; Bartusiak, E.R.; Konz, L.; LaTourette, K.; Gribbons, M.; Chan, M.; Comer, M.L.; Delp, E.J. An attention-based system for damage assessment using satellite imagery. arXiv 2020, arXiv:2004.06643.
- Gupta, R.; Goodman, B.; Patel, N.; Hosfelt, R.; Sajeev, S.; Heim, E.; Doshi, J.; Lucas, K.; Choset, H.; Gaston, M. Creating xBD: A dataset for assessing building damage from satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 10–17.
- Cheng, C.S.; Behzadan, A.H.; Noshadravan, A. Deep learning for post-hurricane aerial damage assessment of buildings. Comput.-Aided Civ. Infrastruct. Eng. 2021, 36, 695–710.
- Hao, H.; Wang, Y. Leveraging multimodal social media data for rapid disaster damage assessment. Int. J. Disaster Risk Reduct. 2020, 51, 101760.
- Imran, M.; Alam, F.; Qazi, U.; Peterson, S.; Ofli, F. Rapid Damage Assessment Using Social Media Images by Combining Human and Machine Intelligence. arXiv 2020, arXiv:2004.06675.
- Zhang, Y.; Zong, R.; Wang, D. A Hybrid Transfer Learning Approach to Migratable Disaster Assessment in Social Media Sensing. In Proceedings of the 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), The Hague, The Netherlands, 7–10 December 2020; pp. 131–138.
- Hao, H.; Wang, Y. Hurricane damage assessment with multi-, crowd-sourced image data: A case study of Hurricane Irma in the city of Miami. In Proceedings of the 17th International Conference on Information System for Crisis Response and Management (ISCRAM), Valencia, Spain, 19–22 May 2019.
- Li, Y.; Hu, W.; Dong, H.; Zhang, X. Building Damage Detection from Post-Event Aerial Imagery Using Single Shot Multibox Detector. Appl. Sci. 2019, 9, 1128.
- Presa-Reyes, M.; Chen, S.C. Assessing Building Damage by Learning the Deep Feature Correspondence of before and after Aerial Images. In Proceedings of the 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Shenzhen, China, 6–8 August 2020; pp. 43–48.
- Pi, Y.; Nath, N.D.; Behzadan, A.H. Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 2020, 43, 101009.
- Pi, Y.; Nath, N.D.; Behzadan, A.H. Disaster impact information retrieval using deep learning object detection in crowdsourced drone footage. In Proceedings of the International Workshop on Intelligent Computing in Engineering, Berlin, Germany, 1–4 July 2020; pp. 134–143.
- Liao, Y.; Mohammadi, M.E.; Wood, R.L. Deep Learning Classification of 2D Orthomosaic Images and 3D Point Clouds for Post-Event Structural Damage Assessment. Drones 2020, 4, 24.
- Wang, X.; Li, Y.; Lin, C.; Liu, Y.; Geng, S. Building damage detection based on multi-source adversarial domain adaptation. J. Appl. Remote Sens. 2021, 15, 036503.
- Li, Y.; Hu, W.; Li, H.; Dong, H.; Zhang, B.; Tian, Q. Aligning Discriminative and Representative Features: An Unsupervised Domain Adaptation Method for Building Damage Assessment. IEEE Trans. Image Process. 2020, 29, 6110–6122.
- Valentijn, T.; Margutti, J.; van den Homberg, M.; Laaksonen, J. Multi-Hazard and Spatial Transferability of a CNN for Automated Building Damage Assessment. Remote Sens. 2020, 12, 2839.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and Efficient Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
- Tzutalin. LabelImg. 2015. Available online: https://github.com/tzutalin/labelImg (accessed on 15 March 2021).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
- Lin, T.; Maire, M.; Belongie, S.J.; Bourdev, L.D.; Girshick, R.B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. arXiv 2014, arXiv:1405.0312.
- Chollet, F. Deep Learning with Python; Simon and Schuster: New York, NY, USA, 2017.
- Tan, M.; Le, Q.V. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv 2019, arXiv:1905.11946.
- Parmar, R. Common Loss Functions in Machine Learning. Available online: https://towardsdatascience.com/common-loss-functions-in-machine-learning-46af0ffc4d23 (accessed on 22 January 2022).
- Jiang, S.; Qin, H.; Zhang, B.; Zheng, J. Optimized Loss Functions for Object Detection: A Case Study on Nighttime Vehicle Detection. arXiv 2020, arXiv:2011.05523.
- Wu, Y.; Chen, Y.; Yuan, L.; Liu, Z.; Wang, L.; Li, H.; Fu, Y. Rethinking Classification and Localization in R-CNN. arXiv 2019, arXiv:1904.06493.
- Zhang, W.; Deng, L.; Wu, D. Overcoming Negative Transfer: A Survey. arXiv 2020, arXiv:2009.00909.
- Wang, Z.; Dai, Z.; Póczos, B.; Carbonell, J.G. Characterizing and Avoiding Negative Transfer. arXiv 2018, arXiv:1811.09751.
Table 1. Statistics of the flood damage classification dataset.

| Damage Classification Type | # of Samples | Percentage |
| --- | --- | --- |
| floods | 463 | 46.3% |
| non-floods | 537 | 53.7% |
Table 2. Statistics of the damage detection dataset.

| Damage Detection Type | # of Samples | Percentage |
| --- | --- | --- |
| damaged roof | 365 | 45.625% |
| damaged wall | 281 | 35.125% |
| flood damage | 167 | 20.875% |
| structural damage | 145 | 18.125% |
Table 3. Precision, recall, and F1 score for the flood damage classification models.

| Model | Precision | Recall | F1-score |
| --- | --- | --- | --- |
| ResNet | 0.75 | 0.85 | 0.79 |
| MobileNet | 0.87 | 0.89 | 0.88 |
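The F1 scores in the table above follow directly from the harmonic-mean formula, which can be checked against the tabulated precision and recall:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Precision and recall values from the table above.
resnet_f1 = f1_score(0.75, 0.85)     # ~0.797, reported as 0.79
mobilenet_f1 = f1_score(0.87, 0.89)  # ~0.880, reported as 0.88
```

The ResNet value lands near 0.797 rather than exactly 0.79 because the tabulated precision and recall are themselves rounded to two decimals.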
Table 4. Top three confidence scores and predicted damage types for the damaged roof image.

| Rank | ResNet Score | ResNet Type | MobileNet Score | MobileNet Type | EfficientNet Score | EfficientNet Type |
| --- | --- | --- | --- | --- | --- | --- |
| #1 | 28.12% | structural damage | 90.93% | damaged roof | 62.85% | damaged roof |
| #2 | 21.99% | structural damage | 32.73% | damaged roof | 47.72% | damaged roof |
| #3 | 12.47% | flood damage | 21.75% | damaged roof | 15.97% | structural damage |
Table 5. Top three confidence scores and predicted damage types for the damaged wall image.

| Rank | ResNet Score | ResNet Type | MobileNet Score | MobileNet Type | EfficientNet Score | EfficientNet Type |
| --- | --- | --- | --- | --- | --- | --- |
| #1 | 75.00% | damaged wall | 97.58% | damaged wall | 55.22% | damaged wall |
| #2 | 23.44% | structural damage | 15.07% | structural damage | 18.41% | damaged wall |
| #3 | 20.54% | damaged roof | 11.56% | structural damage | 13.46% | damaged wall |
Table 6. Top three confidence scores and predicted damage types for the flood damage image.

| Rank | ResNet Score | ResNet Type | MobileNet Score | MobileNet Type | EfficientNet Score | EfficientNet Type |
| --- | --- | --- | --- | --- | --- | --- |
| #1 | 52.46% | flood damage | 97.45% | flood damage | 48.79% | flood damage |
| #2 | 18.73% | damaged wall | 24.39% | damaged wall | 10.45% | structural damage |
| #3 | 12.63% | damaged roof | 6.14% | damaged roof | 10.30% | flood damage |
Table 7. Top three confidence scores and predicted damage types for the structural damage image.

| Rank | ResNet Score | ResNet Type | MobileNet Score | MobileNet Type | EfficientNet Score | EfficientNet Type |
| --- | --- | --- | --- | --- | --- | --- |
| #1 | 68.11% | structural damage | 42.85% | structural damage | 67.96% | structural damage |
| #2 | 13.43% | damaged wall | 11.74% | damaged roof | 25.56% | structural damage |
| #3 | 12.26% | structural damage | 6.30% | damaged roof | 18.86% | flood damage |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Calton, L.; Wei, Z. Using Artificial Neural Network Models to Assess Hurricane Damage through Transfer Learning. Appl. Sci. 2022, 12, 1466. https://doi.org/10.3390/app12031466