Image-Enhanced U-Net: Optimizing Defect Detection in Window Frames for Construction Quality Inspection
Figure 1. The framework of the window frame defect detection system (WFDD). The input comprises RGB images captured by the Spot Robot. The data augmentation module employs geometric operations and applies different image enhancement techniques. The preprocessing module then prepares the data to improve the performance of the defect detection model. Within the detection module, defects are identified within detected window frames, and the output shows U-Net-generated segmentation blobs.
Figure 2. Example from the Cellphone Dataset.
Figure 3. Samples from the Construction Site Dataset.
Figure 4. Example from the Lab-1 Dataset.
Figure 5. Example from the Lab-2 Dataset.
Figure 6. Samples from the Demo Site Dataset.
Figure 7. Example of labeling.
Figure 8. Comparative sample using the shadow removal technique.
Figure 9. Comparative sample using the color neutralization technique.
Figure 10. Comparative sample using the contrast enhancement technique.
Figure 11. Comparative sample using the intensity neutralization technique.
Figure 12. Comparative sample using the CLAHE technique.
Abstract
1. Introduction
2. Related Work
2.1. Traditional Computer Vision Approaches
- Threshold Techniques: Automatic thresholding has been crucial in industries such as glass manufacturing [13,14] and textiles [15], and dynamic thresholding has found applications in road crack segmentation [16].
- Edge Detection and Morphological Processing: The Retinex Algorithm has been prominent in edge detection for defect identification [17,18], and the fusion of morphological processing with genetic algorithms has introduced innovative dimensions to defect detection strategies [19].
- Innovative Approaches: Recent innovations, such as impulse/response testing and statistical pattern recognition, have effectively detected defects in concrete plates. However, traditional machine vision systems have limitations in complex, dynamic settings like construction sites, where unpredictable variables like complex lighting challenge their efficacy.
2.2. Challenges in Conventional Machine Vision Methods
- Dynamic Environments: Rapid changes in construction sites challenge machine vision adaptability, reducing accuracy.
- Noise and Interference: Visual noise and electromagnetic interference disrupt defect detection [1].
- Scale and Perspective Variations: Varying object sizes and perspectives require extensive system adjustments.
- Real-Time Demands: Meeting real-time requirements can be challenging for traditional methods.
- Data Annotation: Creating and maintaining labeled datasets is labor-intensive and complex in dynamic environments.
2.3. Machine Learning and Deep Learning-Based Methods
- Supervised object detection: Supervised defect detection relies on labeled datasets containing both defect-free and defective samples, which yields high detection rates. During training, each object is annotated with its class label and bounding-box coordinates. A variety of datasets support supervised learning, including fabric defect datasets [25] and rail defect datasets [26,27].
- Unsupervised object detection: Unsupervised methods aim to overcome the limitations of supervised learning by leveraging inherent data characteristics for classification. They detect defects and objects without labeled training data, relying instead on patterns, structures, or anomalies in the data. Clustering, anomaly detection, and feature extraction are commonly used for unsupervised defect detection.
- Object detection models: Current deep learning object detectors fall into two main categories. The first comprises two-stage models such as R-CNN [28], Fast R-CNN [29], and Faster R-CNN [30]; the second comprises one-stage models such as YOLO [32] and SSD [33]. In the segmentation setting, Boikov et al. [31] used U-Net with synthetic data for defect segmentation.
2.4. Challenges in DL-Based Defect Detection
2.5. Hybrid Models for Defect Detection
3. Proposed Method
3.1. Data Collection
- Cellphone Dataset (441 Images): This dataset comprises color images capturing non-installed window frames both indoors and outdoors (see Figure 2).
- Construction Site Dataset (235 Images): Captured at a real-world construction site using inspectors' cellphone cameras, this dataset includes various installed window frame types and diverse conditions (see Figure 3).
- Lab-1 Dataset (100 Images): Collected in a controlled lab environment using the Spot Robot’s PTZ camera, this dataset features a range of window frame samples with variations in colors, lighting conditions, and angles (see Figure 4).
- Lab-2 Dataset (80 Images): Focused on a single window frame within a cluttered lab setting, this dataset offers images captured at different zoom levels (see Figure 5).
- Demo Site Dataset (500 Images): Captured at a construction site using the Spot Robot, this dataset encompasses multiple window frame types and a variety of lighting conditions (see Figure 6).
3.2. Data Labeling
3.3. Geometric Data Augmentation
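This excerpt does not enumerate the specific geometric operations used. As an illustrative sketch only, the Python/OpenCV snippet below applies the kind of mask-consistent flips and small rotations commonly used for segmentation data; the function name, flip probability, and angle range are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def augment_pair(image, mask, max_angle_deg=10.0, rng=None):
    """Apply the same random geometric transforms to an image and its defect mask.

    Sketch under assumed settings; the paper's actual augmentation set may differ.
    """
    if rng is None:
        rng = np.random.default_rng()

    # Random horizontal flip, applied identically to image and mask.
    if rng.random() < 0.5:
        image, mask = cv2.flip(image, 1), cv2.flip(mask, 1)

    # Random small rotation around the image center.
    angle = rng.uniform(-max_angle_deg, max_angle_deg)
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    image = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
    # Nearest-neighbor interpolation keeps mask labels discrete.
    mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
    return image, mask
```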
3.4. Image Enhancement Techniques (IETs)
- Shadow Removal (SR): To address shadow removal, we adopted the pre-trained dual hierarchical aggregation network (DHAN) developed by Cun et al. [38], which is based on VGG16 with the context aggregation network (CAN). This approach effectively mitigates shadows, a common challenge in image quality.
- Color Neutralization (CN): CN plays a crucial role in ensuring a consistent foundation for subsequent processing by harmonizing color variations across images, promoting uniformity.
- Contrast Enhancement (CE): CE significantly enhances image clarity, making even subtle defects more discernible. This enhancement aids in the accurate identification of defects.
- Intensity Level Neutralization (IN): IN standardizes intensity levels across the dataset, reducing disparities that could otherwise affect the analysis. This step contributes to data consistency.
- CLAHE (Contrast-Limited Adaptive Histogram Equalization): CLAHE, a localized contrast enhancement technique, enhances small-scale details while preserving overall image contrast. It improves the visibility of fine details without oversaturating the image.
3.4.1. Shadow Removal (SR) Process
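The paper adopts the pre-trained DHAN network [38] for this step. The sketch below is a generic PyTorch wrapper around a pretrained shadow-removal model; the checkpoint path and the single-tensor input/output interface are hypothetical, since the actual DHAN release uses its own loading and inference code.

```python
import torch
import numpy as np

# Hypothetical path to an exported (e.g., TorchScript) shadow-removal checkpoint.
MODEL_PATH = "dhan_shadow_removal.pt"

def remove_shadows(image_rgb: np.ndarray, device: str = "cuda") -> np.ndarray:
    """Run a pretrained shadow-removal network on an HxWx3 uint8 RGB image (sketch only)."""
    model = torch.jit.load(MODEL_PATH, map_location=device).eval()

    # HWC uint8 [0, 255] -> NCHW float [0, 1]
    x = torch.from_numpy(image_rgb).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        y = model(x.to(device)).clamp(0.0, 1.0)

    # Back to HWC uint8 for the rest of the preprocessing pipeline.
    return (y.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255.0).astype(np.uint8)
```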
3.4.2. Color Neutralization (CN)
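One plausible minimal implementation of color neutralization is a gray-world correction, which applies von Kries-style per-channel gains (the paper cites von Kries illuminant correction [39]); the exact CN procedure used in the paper may differ from this sketch.

```python
import numpy as np

def neutralize_color(image_rgb: np.ndarray) -> np.ndarray:
    """Gray-world color neutralization: scale each channel so its mean matches the global mean."""
    img = image_rgb.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)        # per-channel means
    gray_mean = channel_means.mean()                        # neutral target level
    gains = gray_mean / np.maximum(channel_means, 1e-6)     # von Kries-style diagonal gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```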
3.4.3. Contrast Enhancement (CE)
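As a sketch of contrast enhancement, the snippet below equalizes the histogram of the luminance channel in YCrCb space, which stretches contrast while leaving chroma untouched; the paper's exact CE operator is not specified in this excerpt.

```python
import cv2

def enhance_contrast(image_bgr):
    """Global contrast enhancement via histogram equalization of the luminance (Y) channel."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # equalize luminance only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```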
3.4.4. Intensity Level Neutralization (IN)
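A minimal sketch of intensity level neutralization, assuming a simple additive shift that aligns each image's mean brightness to a common target level; the paper's exact IN procedure may differ.

```python
import numpy as np

def neutralize_intensity(image, target_mean=128.0):
    """Shift all channels so the image's mean brightness matches a common target level."""
    img = image.astype(np.float32)
    shift = target_mean - img.mean()
    return np.clip(img + shift, 0, 255).astype(np.uint8)
```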
3.4.5. Contrast-Limited Adaptive Histogram Equalization (CLAHE)
- Divide the input image into non-overlapping tiles of size m × n, resulting in M × N tiles.
- Perform histogram equalization on each tile, using the probability density function (PDF) and cumulative distribution function (CDF) to distribute pixel intensities effectively.
- Apply contrast limiting by clipping the histogram at a predefined limit, CL, to prevent excessive amplification.
- Conduct bilinear interpolation to eliminate artificial boundaries between tiles, producing a smoothly enhanced output image (a minimal code sketch of this procedure follows).
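The four steps above map directly onto OpenCV's CLAHE implementation, which performs the per-tile equalization, clip limiting at CL, and bilinear blending internally. The clip limit and tile grid below are illustrative defaults, not the paper's tuned values; applying CLAHE to the L channel in LAB space is one common choice for color images.

```python
import cv2

def apply_clahe(image_bgr, clip_limit=2.0, tiles=(8, 8)):
    """CLAHE on the lightness channel: tile the image, equalize each tile with a clipped
    histogram, and bilinearly blend tile boundaries (all handled by cv2.createCLAHE)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tiles)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
```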
3.5. IE-Enhanced Data Augmentation
3.6. Defect Detection Model
- Data Preprocessing: Following data augmentation and application of the IETs, we preprocess the images for defect detection, resizing them to a standardized 500 × 500 pixel format.
- Ground Truth-Guided Learning: During the training phase, our U-Net neural network relied on ground truth masks. These masks serve as invaluable references, guiding the network to detect defects with exceptional precision.
- Intersection of Classes: Our defect detection is specifically tailored to identify defects within window frames. We utilize the concept of the intersection of classes, ensuring that our results exclusively represent defects inside the window frames.
- Neural Network Architecture: Our network fuses two components: ResNet152 and U-Net. We employ transfer learning to harness ResNet152's feature extraction capabilities, using it as the encoder with its final classification layer discarded, and integrate it with a U-Net decoder. This fusion yields an expansive feature map well suited to defect localization (see the sketch after this list).
- Semantic Segmentation Model: The resulting deep learning-based semantic segmentation model combines the robustness of ResNet152 with the precision of U-Net, balancing feature extraction against localization.
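Below is a minimal sketch of the described encoder-decoder fusion and the class-intersection step. It uses segmentation_models_pytorch purely as a convenient stand-in (the paper does not state which implementation it used), and the class list, per-class sigmoid-style outputs, and threshold are assumptions for illustration.

```python
import numpy as np
import segmentation_models_pytorch as smp  # convenient stand-in, not necessarily the paper's code

# Assumed label set and ordering; the paper's exact classes are not listed in this excerpt.
CLASSES = ["window_frame", "bend", "dent", "scratch"]

# ImageNet-pretrained ResNet152 encoder (classification head dropped by the library)
# paired with a U-Net decoder, mirroring the transfer-learning fusion described above.
model = smp.Unet(
    encoder_name="resnet152",
    encoder_weights="imagenet",
    in_channels=3,
    classes=len(CLASSES),
)

def defects_inside_frames(probs: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """Intersection of classes: keep defect pixels only where a window frame is also detected.

    probs: C x H x W per-class probability maps (e.g., on a 500 x 500 input),
    with channels ordered as in CLASSES.
    """
    frame = probs[CLASSES.index("window_frame")] > thr
    defect = np.zeros(frame.shape, dtype=bool)
    for name in ("bend", "dent", "scratch"):
        defect |= probs[CLASSES.index(name)] > thr
    return defect & frame
```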
4. Experiments and Results
4.1. Data Collection
4.2. Experimental Setup
4.3. Experiment 1: Integration of Image Enhancement Techniques
4.3.1. Experiment 1 Results
Bend Detection Results
Dent Detection Results
Scratch Detection Results
4.3.2. Experiment 1 Insights
- IE Strategy Effectiveness: The most notable finding is the consistent improvement in F1 and IoU scores across all defect categories (bend, dent, and scratch) under the "Best IE Strategy". Dent detection improved the most, with a 9.92% increase in F1 score and a 10.70% increase in IoU score, underscoring the value of tailoring image enhancement to defect characteristics that are highly sensitive to lighting, as dents are.
- Overall Improvement: On average, the model with the "Best IE Strategy" outperformed the baseline U-Net model by 7.67% in F1 score and 8.60% in IoU score, supporting the integration of IE techniques into the defect detection pipeline. Both metrics are computed pixel-wise from the predicted and ground-truth segmentation masks (a short sketch follows this list).
- Performance Variability: It is crucial to acknowledge that the magnitude of improvement varied across defect categories. This variability emphasizes the need for adaptable defect detection systems capable of accommodating the diverse characteristics and challenges associated with different defect types.
- Enhancement Strategies: A consistent trend emerges in our findings, revealing that the combination of image enhancement techniques consistently enhances F1 scores and precision. This underscores the intrinsic value of systematic experimentation in optimizing image enhancement for defect detection.
- CLAHE Success: The “CLAHE” technique consistently played a pivotal role in enhancing F1 scores and precision across various defect categories. This reaffirms its significance in improving detection accuracy and highlights its effectiveness.
- Trade-offs and Context: It is essential to strike a balance between accuracy and localization precision, as certain enhancements may influence IoU values differently. This trade-off consideration underscores the need for a nuanced approach in selecting and fine-tuning enhancement techniques based on specific detection requirements.
- Fine-tuning Opportunities: The results underscore the potential for further customization by exploring enhancement combinations and adjustments to model architectures. This fine-tuning process holds the key to optimizing defect detection systems for specific application contexts.
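For reference, the F1 and IoU values reported above follow the standard pixel-wise definitions for a single defect class; the sketch below computes both from boolean masks. Whether the paper averages per image or over the full test set is not stated in this excerpt.

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """Pixel-wise F1 and IoU for one defect class, given boolean prediction and ground-truth masks."""
    tp = np.logical_and(pred, gt).sum()      # true positive pixels
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    f1 = 2.0 * tp / (2.0 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    return float(f1), float(iou)
```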
4.4. Experiment 2: IE-Data Augmentation and Results
4.4.1. Experiment 2 Results
4.4.2. Experiment 2 Insights
- Impact of Image Enhancement Techniques: We assessed the influence of individual image enhancement techniques (e.g., CE, IN, CN, SR, and CLAHE) on object detection accuracy. These techniques exhibited varying effects on IoU scores, highlighting trade-offs between localization accuracy and detection precision.
- Combination Strategies: Combinations of enhancement techniques, such as “CN + CE” and “SR + IN”, were explored to evaluate their impact on detection performance. Different combinations produced diverse outcomes, emphasizing the complexity of selecting the right strategy.
- CLAHE Effectiveness: CLAHE consistently improved IoU values across multiple detection categories, underscoring its importance in enhancing accuracy and precision.
- Comprehensive Combinations: Comprehensive combinations like “SR + CN + IN + CE + CLAHE” were investigated to identify strategies with strong overall performance regarding IoU scores, but their complexity warrants careful evaluation.
- Balancing Trade-offs: IE-based data augmentation involves balancing improved IoU scores and detection accuracies. Some techniques may favor one aspect, requiring thoughtful adaptations to specific detection needs.
- Comparison of F1 and IoU Scores: We compared F1 and IoU scores for bend, dent, and scratch detection under “Normal” and “Best IE” strategies. Our strategy consistently improved both scores, enhancing detection accuracy.
4.5. Experiment 1 vs. Experiment 2: A Comparison
4.5.1. Experiment 1 Insights
- Category-Specific Improvement: Experiment 1 showed significant F1 score improvements for bend (5.70%), dent (9.92%), and scratch (8.11%) detection, highlighting the importance of tailored enhancement strategies.
- IoU Improvement: IoU scores were improved, with bend IoU and dent IoU increasing by 6.49% and 10.70%, respectively, and scratch IoU improving by 9.60%.
- Model Architecture Impact: Our model consistently outperformed U-Net, emphasizing the role of the model’s architecture.
4.5.2. Experiment 2 Insights
- IE-Based Data Augmentation: Experiment 2 introduced IE-based data augmentation, resulting in substantial F1 and IoU score improvements across all defect categories. Notably, the scratch detection F1 score improved by 9.82%.
- Category-Specific Improvement: Gains were category specific, with bend and dent F1 scores increasing by 11.65% and 0.83%, respectively.
- IoU Improvement: IoU scores were improved, with bend IoU and dent IoU increasing by 12.33% and 1.13%, respectively, and scratch IoU showing a remarkable 31.84% improvement.
- Overall Model Performance: Our model enhanced with IE-based data augmentation outperformed the baseline U-Net model, with a 7.43% improvement in the F1 score and a substantial 15.10% improvement in IoU scores.
4.5.3. Comparative Insights
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Silva, W.R.; Lucena, D.S. Concrete Cracks Detection Based on Deep Learning Image Classification. Proceedings 2018, 2, 489. [Google Scholar] [CrossRef]
- Dung, C.V.; Anh, L.D. Autonomous concrete crack detection using deep fully convolutional neural network. Autom. Constr. 2019, 99, 52–58. [Google Scholar] [CrossRef]
- Garcia, J.; Villavicencio, G.; Altamirano, F.; Crawford, B.; Soto, R.; Minatogawa, V.; Franco, M.; Martínez-Muñoz, D.; Yepes, V. Machine learning techniques applied to construction: A hybrid bibliometric analysis of advances and future directions. Autom. Constr. 2022, 142, 104532. [Google Scholar] [CrossRef]
- Zhao, Z.Q.; Zheng, P.; Xu, S.T.; Wu, X. Object Detection with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232. [Google Scholar] [CrossRef] [PubMed]
- Bang, H.; Min, J.; Jeon, H. Deep learning-based concrete surface damage monitoring method using structured lights and depth camera. Sensors 2021, 21, 2759. [Google Scholar] [CrossRef]
- Yang, J.; Li, S.; Wang, Z.; Dong, H.; Wang, J.; Tang, S. Using deep learning to detect defects in manufacturing: A comprehensive survey and current challenges. Materials 2020, 13, 5755. [Google Scholar] [CrossRef]
- Wang, J.; Perez, L. The Effectiveness of Data Augmentation in Image Classification using Deep Learning. arXiv 2017, arXiv:1712.04621. Available online: https://arxiv.org/abs/1712.04621 (accessed on 1 November 2023).
- Saberironaghi, A.; Ren, J.; El-Gindy, M. Defect Detection Methods for Industrial Products Using Deep Learning Techniques: A Review. Algorithms 2023, 16, 95. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Wu, Z.; Tang, Y.; Hong, B.; Liang, B.; Liu, Y. Enhanced Precision in Dam Crack Width Measurement: Leveraging Advanced Lightweight Network Identification for Pixel-Level Accuracy. Int. J. Intell. Syst. 2023, 2023, 9940881. [Google Scholar] [CrossRef]
- Tang, Y.; Huang, Z.; Chen, Z.; Chen, M.; Zhou, H.; Zhang, H.; Sun, J. Novel visual crack width measurement based on backbone double-scale features for improved detection automation. Eng. Struct. 2023, 274, 115158. [Google Scholar] [CrossRef]
- Einizinab, S.; Khoshelham, K.; Winter, S.; Christopher, P.; Fang, Y.; Windholz, E.; Radanovic, M.; Hu, S. Enabling technologies for remote and virtual inspection of building work. Autom. Constr. 2023, 156, 105096. [Google Scholar] [CrossRef]
- Ng, H.F. Automatic thresholding for defect detection. Pattern Recognit. Lett. 2006, 27, 1644–1649. [Google Scholar] [CrossRef]
- Bandyopadhyay, Y. Glass Defect Detection and Sorting Using Computational Image Processing. Int. J. Emerg. Technol. Innov. Res. 2015, 2, 73–75. [Google Scholar]
- Wakaf, Z.; Jalab, H.A. Defect detection based on extreme edge of defective region histogram. J. King Saud. Univ. Comput. Inf. Sci. 2018, 30, 33–40. [Google Scholar] [CrossRef]
- Oliveira, H.; Correia, P.L. Automatic road crack segmentation using entropy and image dynamic thresholding. In Proceedings of the European Signal Processing Conference (EUSIPCO), Glasgow, UK, 24–28 August 2009. [Google Scholar]
- Pan, Y.; Lu, Y.; Dong, S.; Zhao, Z.; Zhao, Z. Defect detection based on edge detection and connected region filtering algorithm. In Proceedings of the 2019 International Conference on Communications, Information System, and Computer Engineering, (CISCE), Haikou, China, 5–7 July 2019. [Google Scholar]
- Zhao, H.; Qin, G.; Wang, X. Improvement of canny algorithm based on pavement edge detection. In Proceedings of the 3rd International Congress on Image and Signal Processing (CISP), Yantai, China, 16–18 October 2010. [Google Scholar]
- Zheng, H.; Kong, L.X.; Nahavandi, S. Automated visual inspection of metallic surface defects using genetic algorithms. J. Mater. Process Technol. 2002, 125–126, 427–433. [Google Scholar] [CrossRef]
- Altantsetseg, E.; Muraki, Y.; Matsuyama, K.; Konno, K. Feature line extraction from unorganized noisy point clouds using truncated Fourier series. Vis. Comput. 2013, 29, 617–626. [Google Scholar] [CrossRef]
- Hocenski, Z.; Vasilic, S.; Hocenski, V. Improved Canny Edge Detector in Ceramic Tiles Defect Detection. In Proceedings of the 32nd Annual Conference on IEEE Industrial Electronics (IECON), Paris, France, 6–10 November 2006. [Google Scholar]
- Shi, T.; Kong, J.Y.; Wang, X.D.; Liu, Z.; Zheng, G. Improved Sobel algorithm for defect detection of rail surfaces with enhanced efficiency and accuracy. J. Cent. South. Univ. 2016, 23, 2867–2875. [Google Scholar] [CrossRef]
- Fleyeh, H.; Roch, J. Benchmark Evaluation of HOG Descriptors as Features for Classification of Traffic Signs; Högskolan Dalarna: Borlänge, Sweden, 2013. [Google Scholar]
- Sajid, S.; Taras, A.; Chouinard, L. Defect detection in concrete plates with impulse-response test and statistical pattern recognition. Mech. Syst. Signal Process. 2021, 161, 107948. [Google Scholar] [CrossRef]
- Silvestre-Blanes, J.; Albero-Albero, T.; Miralles, I.; Pérez-Llorens, R.; Moreno, J. A Public Fabric Database for Defect Detection Methods and Results. Autex Res. J. 2019, 19, 363–374. [Google Scholar] [CrossRef]
- Gan, J.; Li, Q.; Wang, J.; Yu, H. A Hierarchical Extractor-Based Visual Rail Surface Inspection System. IEEE Sens. J. 2017, 17, 7935–7944. [Google Scholar] [CrossRef]
- Shim, S.; Kim, J.; Lee, S.W.; Cho, G.C. Road surface damage detection based on hierarchical architecture using lightweight auto-encoder network. Autom. Constr. 2021, 130, 103833. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Proceedings of the Advances in Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 7–12 December 2015. [Google Scholar]
- Boikov, A.; Payor, V.; Savelev, R.; Kolesnikov, A. Synthetic data generation for steel defect detection and classification using deep learning. Symmetry 2021, 13, 1176. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
- Liu, J.; Luo, H.; Liu, H. Deep learning-based data analytics for safety in construction. Autom. Constr. 2022, 140, 104302. [Google Scholar] [CrossRef]
- Panella, F.; Lipani, A.; Boehm, J. Semantic segmentation of cracks: Data challenges and architecture. Autom. Constr. 2022, 135, 104110. [Google Scholar] [CrossRef]
- Panella, F.; Boehm, J.; Loo, Y.; Kaushik, A.; Gonzalez, D. Deep learning and image processing for automated crack detection and defect measurement in underground structures. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2018, 42, 829–835. [Google Scholar] [CrossRef]
- Lin, Q.; Ye, G.; Wang, J.; Liu, H. RoboFlow: A Data-centric Workflow Management System for Developing AI-enhanced Robots. In Proceedings of the 5th Conference on Robot Learning, London, UK, 8–11 November 2022. [Google Scholar]
- Cun, X.; Pun, C.M.; Shi, C. Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020. [Google Scholar]
- Lecca, M.; Messelodi, S. Computing von Kries Illuminant Changes by Piecewise Inversion of Cumulative Color Histograms. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2009, 8, 1–17. [Google Scholar] [CrossRef]
- Mustafa, W.A.; Kader, M.M. A Review of Histogram Equalization Techniques in Image Enhancement Application. J. Phys. Conf. Ser. 2018, 1019, 012026. [Google Scholar] [CrossRef]
- Land, E.H.; Mccann, J.J. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11. [Google Scholar] [CrossRef]
Process/Test Metrics | Bend F1-Score | Dent F1-Score | Scratch F1-Score |
---|---|---|---|
Normal | 0.857 | 0.648 | 0.529 |
Best IET Strategy | 0.906 | 0.713 | 0.572 |
Improvement | 5.70% | 9.92% | 8.11% |
Process/Test Metrics | Bend IoU | Dent IoU | Scratch IoU |
---|---|---|---|
Normal | 0.848 | 0.627 | 0.473 |
Best IET Strategy | 0.903 | 0.6945 | 0.518 |
Improvement | 6.49% | 10.70% | 9.60% |
IE Combination | Bend F1 Score |
---|---|
CLAHE + SR + CN + IN | 0.9065 |
CLAHE + SR + IN + CE | 0.8780 |
CLAHE + CE | 0.8779 |
CLAHE + CE + IN | 0.8734 |
CLAHE + SR + CE + IN + CN | 0.8716 |
CLAHE + SR + IN | 0.8702 |
CLAHE + SR | 0.8663 |
CLAHE + SR + CE + CN + IN | 0.8661 |
CE + IN + CN + SR | 0.8661 |
CLAHE + SR + IN + CN | 0.8575 |
IE Combination | Dent F1 Score |
---|---|
CLAHE + SR + IN + CE | 0.7130 |
CLAHE + CN | 0.7022 |
CLAHE + IN | 0.6773 |
CE + CN + IN | 0.6749 |
SR + IN + CE | 0.6699 |
CLAHE + IN + CE | 0.6719 |
CLAHE + CN + IN + CE | 0.6706 |
CE + CN | 0.6683 |
CLAHE + SR + CN + IN | 0.6633 |
SR + IN + CE + CN | 0.6696 |
IE Combination | Scratch F1 Score |
---|---|
CE + CN + IN | 0.5721 |
CLAHE + IN | 0.5721 |
IN + CE | 0.5703 |
SR + IN | 0.5696 |
CLAHE + SR + IN + CE | 0.5689 |
CE + CN | 0.5672 |
IN + CE + CN | 0.5672 |
CN | 0.5672 |
CLAHE + IN + CE | 0.5664 |
SR + IN + CE + CN | 0.5662 |
Process/Test Metrics | Bend F1 | Dent F1 | Scratch F1 |
---|---|---|---|
Normal | 0.877 | 0.706 | 0.570 |
Best IE | 0.980 | 0.7122 | 0.626 |
Improvement | 11.65% | 0.83% | 9.82% |
Process/Test Metrics | Bend IoU | Dent IoU | Scratch IoU |
---|---|---|---|
Normal | 0.872 | 0.683 | 0.507 |
Best IE | 0.980 | 0.691 | 0.669 |
Improvement | 12.33% | 1.13% | 31.84% |
IE Combination | Bend F1 Score |
---|---|
CE | 0.9800 |
CN + IN + CE | 0.9586 |
SR | 0.9360 |
SR + CE | 0.9269 |
SR + CN + CE | 0.9140 |
SR + IN + CE | 0.8827 |
SR + CN + IN + CE | 0.8894 |
CN + CLAHE | 0.8831 |
CN + CE + CLAHE | 0.8557 |
IN + CLAHE | 0.8600 |
IE Combination | Scratch F1 Score |
---|---|
CN | 0.5479 |
SR + CE | 0.4949 |
CN + CE | 0.4808 |
CE + CLAHE | 0.4537 |
SR + IN + CE + CLAHE | 0.5475 |
SR + CN + IN + CE + CLAHE | 0.5647 |
SR + CN + CE + CLAHE | 0.5381 |
SR + CN + IN + CLAHE | 0.5774 |
IN + CLAHE | 0.5859 |
CN + IN + CE + CLAHE | 0.5901 |