Deep Learning for Pavement Condition Evaluation Using Satellite Imagery
List of Figures
- Figure 1. Proposed workflow.
- Figure 2. Satellite images coverage.
- Figure 3. Pavement network data used in this case study.
- Figure 4. Sample cropped satellite images.
- Figure 5. Close-up view of pavement segments across condition categories.
- Figure 6. Satellite image-based pavement evaluation.
- Figure 7. Learning curve for different models.
- Figure 8. Confusion matrix of the ensemble model.
Abstract
1. Introduction
2. Literature Review
3. Methodology
- Data preprocessing: We divided the dataset into training and test sets in an 8:2 ratio. For the training set, we applied an oversampling strategy so that the five condition categories were equally represented;
- Individual transfer learning models: To identify suitable base networks, we evaluated a range of pre-trained models, including VGG19, ResNet50, InceptionV3, DenseNet121, InceptionResNetV2, MobileNet, MobileNetV2, and EfficientNetB0. Based on their performance, we selected the four top-performing models as base classifiers for the subsequent steps;
- Ensemble learning: We combined the selected base classifiers using a weighted voting method to further enhance classification performance, with each model's weight determined by its accuracy (a minimal sketch of this scheme is given after this list);
- Performance evaluation: To measure the overall classification ability of the algorithm, we used evaluation indicators such as accuracy, precision, recall, and F1-score.
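A minimal sketch of the accuracy-weighted voting step follows. Whether the combination is applied to predicted labels or to softmax probabilities is not restated here; this example assumes soft voting over the softmax outputs of four base classifiers, and the function and variable names are illustrative rather than the authors' implementation.

```python
# Accuracy-weighted soft voting over several base classifiers (illustrative sketch).
import numpy as np

def weighted_vote(prob_list, accuracies):
    """Combine per-model class probabilities using accuracy-based weights.

    prob_list  : list of arrays of shape (n_samples, n_classes), one per base classifier.
    accuracies : list of validation accuracies, one per base classifier.
    Returns the predicted class index for each sample.
    """
    weights = np.asarray(accuracies, dtype=float)
    weights = weights / weights.sum()              # normalize weights to sum to 1
    combined = sum(w * p for w, p in zip(weights, prob_list))
    return combined.argmax(axis=1)                 # predicted class index per sample

# Example with dummy softmax outputs from four hypothetical base classifiers
# (10 samples, 5 condition categories):
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(5), size=10) for _ in range(4)]
preds = weighted_vote(probs, accuracies=[0.91, 0.89, 0.87, 0.87])
print(preds)
```

Normalizing the weights keeps the combined output a valid probability distribution, so a stronger base model pulls the ensemble toward its prediction without completely overriding the weaker ones.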
3.1. Pre-Trained Models
3.2. Ensemble Learning Model
3.3. Evaluation Indicators
4. Case Study
4.1. Data Collection
4.1.1. Pavement Image Data Collection
4.1.2. Pavement Condition Data Collection
Condition Score | Description |
---|---|
90–100 | Very good |
70–89 | Good |
50–69 | Fair |
35–49 | Poor |
1–34 | Very poor |
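For illustration, the condition-score bands in the table above can be expressed as a simple lookup; the function name and the handling of out-of-range scores below are our own assumptions.

```python
# Illustrative helper mapping a numeric condition score (1-100) to the five
# categories defined in the table above.
def score_to_category(score: int) -> str:
    if 90 <= score <= 100:
        return "Very good"
    if 70 <= score <= 89:
        return "Good"
    if 50 <= score <= 69:
        return "Fair"
    if 35 <= score <= 49:
        return "Poor"
    if 1 <= score <= 34:
        return "Very poor"
    raise ValueError(f"Condition score out of range: {score}")

print(score_to_category(72))   # -> "Good"
```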
4.2. Results and Analysis
- For the “Fair” category, the model correctly identified 160 instances without any misclassifications;
- In the “Good” category, the model correctly predicted 142 instances, with minimal misclassifications (3 instances classified as fair and 6 as very good);
- For the “Poor” category, the model achieved perfect classification with all 147 instances correctly identified;
- The “Very Good” category had the most variance, with 121 correct predictions. There were some misclassifications: 12 instances were labeled as fair, 24 as good, 5 as poor, and 2 as very poor;
- Lastly, for the “Very Poor” category, the model demonstrated perfect classification, correctly identifying all 153 instances. A sketch of how such per-class results can be tallied from a confusion matrix is shown below.
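The short sketch below shows how per-class results of this kind can be computed with scikit-learn; the y_true and y_pred values are tiny placeholders rather than the study's actual test-set labels.

```python
# Tallying per-class results and scores with scikit-learn (placeholder data).
from sklearn.metrics import confusion_matrix, classification_report

labels = ["Fair", "Good", "Poor", "Very Good", "Very Poor"]
y_true = ["Fair", "Good", "Very Good", "Very Good", "Poor", "Very Poor"]
y_pred = ["Fair", "Good", "Good",      "Very Good", "Poor", "Very Poor"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)   # rows = true classes, columns = predicted classes

# Accuracy, precision, recall, and F1-score per category and averaged:
print(classification_report(y_true, y_pred, labels=labels, digits=3, zero_division=0))
```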
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep Learning-Based Crack Damage Detection Using Convolutional Neural Networks. Comput. Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [Google Scholar] [CrossRef]
- Pan, Y.; Zhang, X.; Cervone, G.; Yang, L. Detection of Asphalt Pavement Potholes and Cracks Based on the Unmanned Aerial Vehicle Multispectral Imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3701–3712. [Google Scholar] [CrossRef]
- Fan, Z.; Wu, Y.; Lu, J.; Li, W. Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network. arXiv 2018, arXiv:1802.02208. [Google Scholar]
- Chitale, P.A.; Kekre, K.Y.; Shenai, H.R.; Karani, R.; Gala, J.P. Pothole Detection and Dimension Estimation System Using Deep Learning (YOLO) and Image Processing. In Proceedings of the 2020 35th International Conference on Image and Vision Computing New Zealand (IVCNZ), Wellington, New Zealand, 25–27 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Fan, Z.; Li, C.; Chen, Y.; Mascio, P.D.; Chen, X.; Zhu, G.; Loprencipe, G. Ensemble of Deep Convolutional Neural Networks for Automatic Pavement Crack Detection and Measurement. Coatings 2020, 10, 152. [Google Scholar] [CrossRef]
- Ji, A.; Xue, X.; Wang, Y.; Luo, X.; Wang, L. Image-based Road Crack Risk-informed Assessment Using a Convolutional Neural Network and an Unmanned Aerial Vehicle. Struct. Control Health Monit. 2021, 28, e2749. [Google Scholar] [CrossRef]
- Maniat, M.; Camp, C.V.; Kashani, A.R. Deep Learning-Based Visual Crack Detection Using Google Street View Images. Neural Comput. Appl. 2021, 33, 14565–14582. [Google Scholar] [CrossRef]
- Ahmadi, A.; Khalesi, S.; Golroo, A. An Integrated Machine Learning Model for Automatic Road Crack Detection and Classification in Urban Areas. Int. J. Pavement Eng. 2022, 23, 3536–3552. [Google Scholar] [CrossRef]
- Sholevar, N.; Golroo, A.; Esfahani, S.R. Machine Learning Techniques for Pavement Condition Evaluation. Autom. Constr. 2022, 136, 104190. [Google Scholar] [CrossRef]
- Jiang, S.; Gu, S.; Yan, Z. Pavement Crack Measurement Based on Aerial 3D Reconstruction and Learning-Based Segmentation Method. Meas. Sci. Technol. 2023, 34, 015801. [Google Scholar] [CrossRef]
- Gagliardi, V.; Giammorcaro, B.; Francesco, B.; Sansonetti, G. Deep Neural Networks for Asphalt Pavement Distress Detection and Condition Assessment. In Proceedings of the Earth Resources and Environmental Remote Sensing/GIS Applications XIV; Schulz, K., Nikolakopoulos, K.G., Michel, U., Eds.; SPIE: Amsterdam, The Netherlands, 2023; p. 35. [Google Scholar] [CrossRef]
- Haider, S.W.; Baladi, G.Y.; Chatti, K.; Dean, C.M. Effect of Frequency of Pavement Condition Data Collection on Performance Prediction. Transp. Res. Rec. J. Transp. Res. Board 2010, 2153, 67–80. [Google Scholar] [CrossRef]
- Faghri, A.; Li, M.; Ozden, A. Satellite Assessment and Monitoring for Pavement Management; Technical Report CAIT-UTC-NC4; Delaware Center for Transportation: Newark, DE, USA, 2015. [Google Scholar]
- Li, M.; Faghri, A.; Ozden, A.; Yue, Y. Economic Feasibility Study for Pavement Monitoring Using Synthetic Aperture Radar-Based Satellite Remote Sensing: Cost–Benefit Analysis. Transp. Res. Rec. J. Transp. Res. Board 2017, 2645, 1–11. [Google Scholar] [CrossRef]
- Brewer, E.; Lin, J.; Kemper, P.; Hennin, J.; Runfola, D. Predicting Road Quality Using High Resolution Satellite Imagery: A Transfer Learning Approach. PLoS ONE 2021, 16, e0253370. [Google Scholar] [CrossRef] [PubMed]
- Bashar, M.Z.; Torres-Machi, C. Exploring the Capabilities of Optical Satellite Imagery in Evaluating Pavement Condition. In Proceedings of the Construction Research Congress 2022, Arlington, Virginia, 9–12 March 2022; pp. 108–115. [Google Scholar] [CrossRef]
- Jiang, Y.; Han, S.; Bai, Y. Development of a Pavement Evaluation Tool Using Aerial Imagery and Deep Learning. J. Transp. Eng. Part B Pavements 2021, 147, 04021027. [Google Scholar] [CrossRef]
- Karimzadeh, S.; Ghasemi, M.; Matsuoka, M.; Yagi, K.; Zulfikar, A.C. A Deep Learning Model for Road Damage Detection After an Earthquake Based on Synthetic Aperture Radar (SAR) and Field Datasets. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5753–5765. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17); Google Inc.: Mountain View, CA, USA, 2017; Volume 31. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
- Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proc. Mach. Learn. Res. 2019, 97, 6105–6114. [Google Scholar]
- Howard, A.; Sandler, M.; Chen, B.; Wang, W.; Chen, L.C.; Tan, M.; Chu, G.; Vasudevan, V.; Zhu, Y.; Pang, R.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar] [CrossRef]
- Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollar, P. Designing Network Design Spaces. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10425–10433. [Google Scholar] [CrossRef]
- Tan, M.; Le, Q. EfficientNetV2: Smaller Models and Faster Training. Proc. Mach. Learn. Res. 2021, 139, 10096–10106. [Google Scholar]
- Mehta, S.; Rastegari, M. MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer. arXiv 2022, arXiv:2110.02178. [Google Scholar]
- Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11966–11976. [Google Scholar] [CrossRef]
- Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; Liu, C. A survey on deep transfer learning. In Proceedings of the Artificial Neural Networks and Machine Learning–ICANN 2018: 27th International Conference on Artificial Neural Networks, Rhodes, Greece, 4–7 October 2018, Proceedings, Part III 27; Springer: Berlin/Heidelberg, Germany, 2018; pp. 270–279. [Google Scholar]
- NOAA. Hurricane Harvey Imagery. 2023. Available online: https://shorturl.at/13dtB (accessed on 5 September 2024).
Model | Parameters (Millions) | Year of Development | Depth (Layers) |
---|---|---|---|
VGG16 [19] | 138 | 2014 | 16 |
VGG19 [19] | 143.7 | 2014 | 19 |
InceptionV3 [20] | 23.9 | 2016 | 159 |
DenseNet121 [21] | 8.1 | 2017 | 121 |
InceptionResNetV2 [22] | 55.9 | 2017 | 572 |
ResNet50 [23] | 25.6 | 2016 | 50 |
ResNet50V2 [23] | 25.6 | 2016 | 50 |
MobileNet [24] | 4.2 | 2017 | 88 |
MobileNetV2 [25] | 3.5 | 2018 | 88 |
EfficientNetB0 [26] | 5.3 | 2019 | 69 |
MobileNetV3Large [27] | 5.4 | 2019 | 88 |
MobileNetV3Small [27] | 2.9 | 2019 | 88 |
RegNetX [28] | Varies | 2020 | Varies |
EfficientNetV2B0 [29] | 7.1 | 2021 | 69 |
Mobile_ViT [30] | 5.6 | 2021 | 20 |
ConvNeXtBase [31] | 88 | 2022 | 53 |
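As an illustration of how backbones like those listed above can be adapted for five-class pavement condition classification via transfer learning in Keras, a minimal setup is sketched below; the input size, classification head, and optimizer settings are assumptions, not necessarily the authors' exact configuration.

```python
# Transfer-learning sketch: freeze an ImageNet-pre-trained backbone (MobileNet,
# one of the models in the table above) and train a small five-class head.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained convolutional features fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(5, activation="softmax"),  # five condition categories
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```

Freezing the backbone lets the small labeled satellite-image dataset train only the final classification layer, which is the usual starting point before any optional fine-tuning of deeper layers.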
Model | Accuracy | F1-Score |
---|---|---|
VGG16 [19] | 0.75 | 0.74 |
VGG19 [19] | 0.72 | 0.73 |
InceptionV3 [20] | 0.87 | 0.88 |
DenseNet121 [21] | 0.86 | 0.88 |
InceptionResNetV2 [22] | 0.86 | 0.87 |
ResNet50 [23] | 0.39 | 0.32 |
ResNet50V2 [23] | 0.89 | 0.90 |
MobileNet [24] | 0.91 | 0.91 |
MobileNetV2 [25] | 0.87 | 0.88 |
EfficientNetB0 [26] | 0.20 | 0.19 |
MobileNetV3Large [27] | 0.58 | 0.53 |
MobileNetV3Small [27] | 0.31 | 0.32 |
RegNetX [28] | 0.18 | 0.12 |
EfficientNetV2B0 [29] | 0.20 | 0.19 |
Mobile_ViT [30] | 0.55 | 0.54 |
ConvNeXtBase [31] | 0.78 | 0.76 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Lebaku, P.K.R.; Gao, L.; Lu, P.; Sun, J. Deep Learning for Pavement Condition Evaluation Using Satellite Imagery. Infrastructures 2024, 9, 155. https://doi.org/10.3390/infrastructures9090155