Temporal-Quality Ensemble Technique for Handling Image Blur in Packaging Defect Inspection
Figure 1. Challenges in conveyor systems and packaging defect inspection. (a) Vibration with a roller conveyor. (b) Out-of-focus images in machine vision.
Figure 2. The detailed architecture of our method.
Figure 3. The image acquisition system, consisting of an image acquisition unit, light sources, a backlight, a transparent conveyor belt, and a chamber.
Figure 4. Samples of collected images. The red circles indicate defects. Left: non-defect. Middle: edge defect. Right: surface defect. (a) High-quality images. (b) Low-quality images.
Figure 5. Samples of the datasets. First row: non-defect. Second row: edge defect. Third row: surface defect. (a) High-quality images. (b) Low-quality images.
Figure 6. Performance analysis of CNN models on high- and low-quality images. The first row displays the loss curves for high-quality images (training and validation); the second row shows accuracy curves for both high- and low-quality images (training, validation, and test). (a) ShuffleNetV2 loss. (b) GoogleNet loss. (c) ResNet-34 loss. (d) EfficientNet loss. (e) ECAEfficientNet loss. (f) ShuffleNetV2 accuracy. (g) GoogleNet accuracy. (h) ResNet-34 accuracy. (i) EfficientNet accuracy. (j) ECAEfficientNet accuracy.
Figure 7. Comparison of the performance and processing speed of CNN models with the ensemble applied. (a) Precision heatmaps. (b) Recall heatmaps. (c) F1-Score heatmaps. (d) FPS bar charts.
Abstract
1. Introduction
- (1) The proposed inference technique, termed TQE, combines temporal and quality weights to integrate information from multiple images, including blurred ones. By leveraging temporal continuity and prioritizing sharper frames, it mitigates the effects of image blur and improves overall accuracy in identifying defects. To the best of our knowledge, the proposed approach is the first ensemble technique to overcome image blur in packaging inspection.
- (2) Our new private dataset yields more realistic results for training and evaluating deep learning models, since it reflects the motion blur that arises when images are acquired with a real machine vision camera, conveyor belt, and related equipment.
- (3) Through comparative experiments with the average ensemble (AVE), TQE proved effective in terms of accuracy, precision, recall, and F1-Score for identifying defects.
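The combination of temporal and quality weights described above can be sketched as a weighted average of per-frame class probabilities. The exact weighting formulas belong to Sections 3.2–3.4 of the paper and are not reproduced in this excerpt; the linear temporal weights and the sharpness-based quality weights below are illustrative assumptions only.

```python
# Hedged sketch of a temporal-quality weighted ensemble (TQE-style) inference.
# Both the temporal weighting and the quality scores here are illustrative
# assumptions, not the authors' equations.

def tqe_predict(probs_per_frame, quality_scores):
    """Combine K per-frame softmax vectors into one prediction.

    probs_per_frame: list of K class-probability vectors, oldest frame first.
    quality_scores:  list of K sharpness scores (e.g., variance of Laplacian);
                     higher means sharper, so blurred frames get less weight.
    """
    k = len(probs_per_frame)
    # Temporal weight: more recent frames count more (an assumption).
    temporal = [(i + 1) / k for i in range(k)]
    # Quality weight: normalize sharpness scores to sum to 1.
    q_total = sum(quality_scores)
    quality = [q / q_total for q in quality_scores]
    # Combined per-frame weight, renormalized to sum to 1.
    combined = [t * q for t, q in zip(temporal, quality)]
    c_total = sum(combined)
    combined = [c / c_total for c in combined]
    # Weighted average of the per-frame probability vectors.
    n_classes = len(probs_per_frame[0])
    fused = [sum(w * p[c] for w, p in zip(combined, probs_per_frame))
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)

# Three frames of the same package: two blurred, one sharp.
frames = [[0.4, 0.5, 0.1],    # blurred, leans "edge defect"
          [0.45, 0.45, 0.1],  # blurred, ambiguous
          [0.8, 0.1, 0.1]]    # sharp, clearly "non-defect"
sharpness = [20.0, 25.0, 400.0]
print(tqe_predict(frames, sharpness))  # -> 0: the sharp frame dominates
```

Down-weighting the blurred frames lets the single sharp view override two ambiguous ones, which is the intuition behind prioritizing clarity in the ensemble.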
2. Related Work
3. Our Method
3.1. Overall Architecture
3.2. Temporal Ensemble
Algorithm 1. Binary Search for …
3.3. Quality Ensemble
3.4. Temporal-Quality Ensemble
Algorithm 2. Temporal-Quality Ensemble Inference
4. Experiments and Results
4.1. Datasets
4.2. Evaluation Metrics
4.3. Implementation Details
4.4. Experimental Results
4.4.1. High- vs. Low-Quality Image Performance Using Single CNN Models
4.4.2. Ablation Study on TQE: Fixed and Image Quality Scenarios
4.4.3. Performance of TQE: Integrating Temporal and Quality Ensembles
4.4.4. Performance Comparison of TQE and AVE across CNN Models
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Quality | Class | Training Set | Validation Set | Test Set | Total |
|---|---|---|---|---|---|
| High-quality images | Non-defect | 1000 | 1000 | - | 2000 |
| | Edge defect | 1000 | 1000 | - | 2000 |
| | Surface defect | 1000 | 1000 | - | 2000 |
| Low-quality images | Non-defect | - | - | 1000 | 1000 |
| | Edge defect | - | - | 1000 | 1000 |
| | Surface defect | - | - | 1000 | 1000 |
| Total | | 3000 | 3000 | 3000 | 9000 |
| Hyperparameter | Value |
|---|---|
| Optimizer | AdamW |
| Loss function | Cross-entropy |
| Epochs | 50 |
| Batch size | 100 |
| Learning rate | 1.25 × 10⁻⁴ |
| Rate decay | 0.05 |
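The hyperparameters above, together with the dataset sizes, fix the training schedule. A minimal sketch, assuming the 3000-image high-quality training set from the dataset table; the variable names and the steps-per-epoch derivation are ours, and "rate decay" is kept under the table's own label rather than asserted to be AdamW's weight-decay coefficient.

```python
# Training configuration gathered from the hyperparameter table.
config = {
    "optimizer": "AdamW",
    "loss": "cross-entropy",
    "epochs": 50,
    "batch_size": 100,
    "learning_rate": 1.25e-4,
    "rate_decay": 0.05,
}

TRAIN_IMAGES = 3000  # high-quality training set size (dataset table)

# With batch size 100, each epoch is 30 optimization steps,
# so 50 epochs amount to 1500 steps in total.
steps_per_epoch = TRAIN_IMAGES // config["batch_size"]
total_steps = steps_per_epoch * config["epochs"]
print(steps_per_epoch, total_steps)  # -> 30 1500
```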
| Models | Class | Precision (HQ) | Recall (HQ) | F1-Score (HQ) | Accuracy (HQ) | Precision (LQ) | Recall (LQ) | F1-Score (LQ) | Accuracy (LQ) | Estimated Total Size (MB) |
|---|---|---|---|---|---|---|---|---|---|---|
| ShuffleNetV2 | Non-defect | 0.8560 | 0.9930 | 0.9194 | 0.9407 | 0.9090 | 0.0700 | 0.1299 | 0.4537 | 68.15 |
| | Edge defect | 0.9942 | 0.8520 | 0.9176 | | 0.9040 | 0.2920 | 0.4414 | | |
| | Surface defect | 0.9940 | 0.9770 | 0.9854 | | 0.3842 | 0.9990 | 0.4536 | | |
| | Aggregate | 0.9480 | 0.9407 | 0.9408 | | 0.7324 | 0.4537 | 0.3755 | | |
| GoogleNet | Non-defect | 0.9802 | 0.9890 | 0.9846 | 0.9893 | 0.9685 | 0.2770 | 0.4308 | 0.6487 | 145.02 |
| | Edge defect | 0.9919 | 0.9800 | 0.9859 | | 0.9295 | 0.6720 | 0.7800 | | |
| | Surface defect | 0.9960 | 0.9990 | 0.9975 | | 0.5008 | 0.9970 | 0.6667 | | |
| | Aggregate | 0.9894 | 0.9893 | 0.9893 | | 0.7996 | 0.6487 | 0.6258 | | |
| ResNet-34 | Non-defect | 0.9919 | 0.9830 | 0.9874 | 0.9890 | 0.7525 | 0.9030 | 0.8209 | 0.7670 | 207.70 |
| | Edge defect | 0.9880 | 0.9850 | 0.9865 | | 0.9599 | 0.4550 | 0.6174 | | |
| | Surface defect | 0.9872 | 0.9990 | 0.9930 | | 0.7112 | 0.9430 | 0.8108 | | |
| | Aggregate | 0.9890 | 0.9890 | 0.9890 | | 0.8079 | 0.7670 | 0.7497 | | |
| EfficientNet | Non-defect | 0.9889 | 0.9810 | 0.9849 | 0.9767 | 0.8774 | 0.7370 | 0.8011 | 0.7913 | 242.78 |
| | Edge defect | 0.9460 | 0.9990 | 0.9718 | | 0.6882 | 0.9270 | 0.7899 | | |
| | Surface defect | 0.9979 | 0.9500 | 0.9734 | | 0.8733 | 0.7100 | 0.7832 | | |
| | Aggregate | 0.9776 | 0.9767 | 0.9767 | | 0.8130 | 0.7913 | 0.7914 | | |
| ECAEfficientNet | Non-defect | 0.9889 | 0.9780 | 0.9834 | 0.9753 | 0.9209 | 0.8270 | 0.8714 | 0.8447 | 242.78 |
| | Edge defect | 0.9407 | 1.0000 | 0.9695 | | 0.7663 | 0.9180 | 0.8353 | | |
| | Surface defect | 1.0000 | 0.9480 | 0.9733 | | 0.8728 | 0.7890 | 0.8288 | | |
| | Aggregate | 0.9765 | 0.9753 | 0.9754 | | 0.8533 | 0.8447 | 0.8452 | | |
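The per-class F1-Scores in the table are the harmonic mean of precision and recall, F1 = 2PR/(P + R); a quick check reproduces two entries from the table above.

```python
# Sanity check: per-class F1 in the results table is the harmonic mean
# of precision and recall, F1 = 2PR / (P + R).

def f1(precision, recall):
    return 2 * precision * recall / (precision + recall)

# ShuffleNetV2, non-defect, high-quality: P = 0.8560, R = 0.9930
print(round(f1(0.8560, 0.9930), 4))  # -> 0.9194, matching the table
# ECAEfficientNet, edge defect, low-quality: P = 0.7663, R = 0.9180
print(round(f1(0.7663, 0.9180), 4))  # -> 0.8353, matching the table
```

The same harmonic-mean relation underlies the aggregate F1 columns in the later ensemble tables.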
| K | Scenario | Precision (AVE) | Recall (AVE) | F1-Score (AVE) | Precision (TE) | Recall (TE) | F1-Score (TE) | Precision (QE) | Recall (QE) | F1-Score (QE) |
|---|---|---|---|---|---|---|---|---|---|---|
| K = 2 | HH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 |
| | HL | 0.9820 | 0.9820 | 0.9820 | 0.9900 | 0.9900 | 0.9900 | 0.9854 | 0.9853 | 0.9853 |
| | LH | 0.9820 | 0.9817 | 0.9817 | 0.9438 | 0.9393 | 0.9389 | 0.9854 | 0.9853 | 0.9853 |
| | LL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9535 | 0.9493 | 0.9479 | 0.9460 | 0.9407 | 0.9392 | 0.9552 | 0.9510 | 0.9497 |
| K = 3 | HHH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 |
| | HHL | 0.9913 | 0.9913 | 0.9913 | 0.9920 | 0.9920 | 0.9920 | 0.9917 | 0.9917 | 0.9917 |
| | HLH | 0.9913 | 0.9913 | 0.9913 | 0.9917 | 0.9917 | 0.9917 | 0.9917 | 0.9917 | 0.9917 |
| | HLL | 0.9195 | 0.9120 | 0.9108 | 0.9541 | 0.9510 | 0.9508 | 0.9652 | 0.9640 | 0.9638 |
| | LHH | 0.9913 | 0.9913 | 0.9913 | 0.9878 | 0.9877 | 0.9877 | 0.9917 | 0.9917 | 0.9917 |
| | LHL | 0.9195 | 0.9120 | 0.9108 | 0.9170 | 0.9090 | 0.9076 | 0.9652 | 0.9640 | 0.9638 |
| | LLH | 0.9195 | 0.9120 | 0.9108 | 0.8984 | 0.8883 | 0.8861 | 0.9652 | 0.9640 | 0.9638 |
| | LLL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9478 | 0.9429 | 0.9418 | 0.9489 | 0.9441 | 0.9430 | 0.9651 | 0.9626 | 0.9618 |
| K = 4 | HHHH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 |
| | HHHL | 0.9917 | 0.9917 | 0.9917 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | 0.9907 |
| | HHLH | 0.9917 | 0.9917 | 0.9917 | 0.9910 | 0.9910 | 0.9910 | 0.9907 | 0.9907 | 0.9907 |
| | HHLL | 0.9820 | 0.9817 | 0.9817 | 0.9913 | 0.9913 | 0.9913 | 0.9854 | 0.9853 | 0.9853 |
| | HLHH | 0.9917 | 0.9917 | 0.9917 | 0.9917 | 0.9917 | 0.9917 | 0.9907 | 0.9907 | 0.9907 |
| | HLHL | 0.9820 | 0.9817 | 0.9817 | 0.9900 | 0.9900 | 0.9900 | 0.9854 | 0.9853 | 0.9853 |
| | HLLH | 0.9820 | 0.9817 | 0.9817 | 0.9865 | 0.9863 | 0.9863 | 0.9854 | 0.9853 | 0.9853 |
| | HLLL | 0.9015 | 0.8917 | 0.8897 | 0.9339 | 0.9280 | 0.9274 | 0.9452 | 0.9420 | 0.9415 |
| | LHHH | 0.9917 | 0.9917 | 0.9917 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | 0.9907 | 0.9907 |
| | LHHL | 0.9820 | 0.9817 | 0.9817 | 0.9647 | 0.9627 | 0.9626 | 0.9854 | 0.9853 | 0.9853 |
| | LHLH | 0.9820 | 0.9817 | 0.9817 | 0.9438 | 0.9393 | 0.9389 | 0.9854 | 0.9853 | 0.9853 |
| | LHLL | 0.9015 | 0.8917 | 0.8897 | 0.9071 | 0.8977 | 0.8957 | 0.9452 | 0.9420 | 0.9415 |
| | LLHH | 0.9820 | 0.9817 | 0.9817 | 0.9222 | 0.9150 | 0.9139 | 0.9854 | 0.9853 | 0.9853 |
| | LLHL | 0.9015 | 0.8917 | 0.8897 | 0.8942 | 0.8840 | 0.8815 | 0.9452 | 0.9420 | 0.9415 |
| | LLLH | 0.9015 | 0.8917 | 0.8897 | 0.8816 | 0.8690 | 0.8656 | 0.9452 | 0.9420 | 0.9415 |
| | LLLL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9572 | 0.9536 | 0.9527 | 0.9518 | 0.9475 | 0.9466 | 0.9691 | 0.9672 | 0.9668 |
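The quality ensemble (QE) requires a per-frame sharpness score to weight each image. A common measure for this purpose is the variance of the Laplacian response: blurred frames have weaker edges, so their Laplacian variance is lower. The excerpt does not reproduce the paper's exact quality score, so the 4-neighbour kernel and toy images below are illustrative assumptions.

```python
# Illustrative blur measure: variance of the Laplacian response.
# Pure Python (no OpenCV); the 4-neighbour kernel and toy images are ours.

def laplacian_variance(img):
    """img: 2-D list of grayscale values. Returns the variance of the
    4-neighbour Laplacian over interior pixels; lower means blurrier."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp vertical edge vs. the same edge smoothed (blurred).
sharp = [[0, 0, 255, 255]] * 4
blurred = [[0, 85, 170, 255]] * 4
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # -> True
```

Scores of this kind can feed directly into the quality weights of the ensemble, giving blurred frames proportionally less influence.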
| K | Scenario | Precision (= 1) | Recall (= 1) | F1-Score (= 1) | Precision (= 3) | Recall (= 3) | F1-Score (= 3) | Precision (= 5) | Recall (= 5) | F1-Score (= 5) |
|---|---|---|---|---|---|---|---|---|---|---|
| K = 2 | HH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 |
| | HL | 0.9923 | 0.9923 | 0.9923 | 0.9910 | 0.9910 | 0.9910 | 0.9897 | 0.9897 | 0.9897 |
| | LH | 0.9608 | 0.9590 | 0.9588 | 0.9813 | 0.9810 | 0.9810 | 0.9829 | 0.9827 | 0.9826 |
| | LL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9508 | 0.9462 | 0.9448 | 0.9556 | 0.9513 | 0.9500 | 0.9557 | 0.9514 | 0.9501 |
| K = 3 | HHH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9604 | 0.9597 | 0.9604 |
| | HHL | 0.9904 | 0.9903 | 0.9903 | 0.9910 | 0.9910 | 0.9910 | 0.9914 | 0.9913 | 0.9913 |
| | HLH | 0.9917 | 0.9917 | 0.9917 | 0.9920 | 0.9920 | 0.9920 | 0.9920 | 0.9920 | 0.9920 |
| | HLL | 0.9871 | 0.9870 | 0.9870 | 0.9633 | 0.9617 | 0.9615 | 0.9572 | 0.9550 | 0.9548 |
| | LHH | 0.9861 | 0.9860 | 0.9860 | 0.9923 | 0.9923 | 0.9923 | 0.9920 | 0.9920 | 0.9920 |
| | LHL | 0.9331 | 0.9273 | 0.9265 | 0.9442 | 0.9403 | 0.9399 | 0.9454 | 0.9417 | 0.9412 |
| | LLH | 0.9116 | 0.9030 | 0.9012 | 0.9313 | 0.9253 | 0.9245 | 0.9381 | 0.9333 | 0.9327 |
| | LLL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9563 | 0.9523 | 0.9513 | 0.9580 | 0.9545 | 0.9537 | 0.9583 | 0.9548 | 0.9540 |
| K = 4 | HHHH | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 | 0.9890 |
| | HHHL | 0.9900 | 0.9900 | 0.9900 | 0.9904 | 0.9903 | 0.9903 | 0.9904 | 0.9903 | 0.9903 |
| | HHLH | 0.9903 | 0.9903 | 0.9903 | 0.9910 | 0.9910 | 0.9910 | 0.9910 | 0.9910 | 0.9910 |
| | HHLL | 0.9910 | 0.9910 | 0.9909 | 0.9917 | 0.9917 | 0.9917 | 0.9914 | 0.9913 | 0.9913 |
| | HLHH | 0.9910 | 0.9910 | 0.9909 | 0.9913 | 0.9913 | 0.9913 | 0.9913 | 0.9913 | 0.9913 |
| | HLHL | 0.9923 | 0.9920 | 0.9923 | 0.9910 | 0.9910 | 0.9910 | 0.9897 | 0.9897 | 0.9897 |
| | HLLH | 0.9920 | 0.9920 | 0.9920 | 0.9877 | 0.9877 | 0.9876 | 0.9868 | 0.9867 | 0.9866 |
| | HLLL | 0.9781 | 0.9777 | 0.9776 | 0.9403 | 0.9360 | 0.9354 | 0.9316 | 0.9257 | 0.9248 |
| | LHHH | 0.9904 | 0.9903 | 0.9903 | 0.9920 | 0.9920 | 0.9920 | 0.9917 | 0.9917 | 0.9917 |
| | LHHL | 0.9690 | 0.9680 | 0.9679 | 0.9850 | 0.9850 | 0.9850 | 0.9858 | 0.9857 | 0.9856 |
| | LHLH | 0.9608 | 0.9590 | 0.9588 | 0.9810 | 0.9810 | 0.9810 | 0.9829 | 0.9827 | 0.9826 |
| | LHLL | 0.9182 | 0.9107 | 0.9093 | 0.9233 | 0.9160 | 0.9152 | 0.9228 | 0.9157 | 0.9145 |
| | LLHH | 0.9382 | 0.9333 | 0.9327 | 0.9715 | 0.9707 | 0.9706 | 0.9797 | 0.9793 | 0.9793 |
| | LLHL | 0.9043 | 0.8947 | 0.8926 | 0.9135 | 0.9050 | 0.9033 | 0.9166 | 0.9087 | 0.9072 |
| | LLLH | 0.8979 | 0.8873 | 0.8849 | 0.9076 | 0.8983 | 0.8965 | 0.9119 | 0.9033 | 0.9016 |
| | LLLL | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 | 0.8610 | 0.8443 | 0.8390 |
| | Aggregate | 0.9596 | 0.9563 | 0.9555 | 0.9630 | 0.9600 | 0.9594 | 0.9634 | 0.9604 | 0.9597 |
| K | Models | Precision (AVE) | Recall (AVE) | F1-Score (AVE) | FPS (AVE) | Precision (TQE) | Recall (TQE) | F1-Score (TQE) | FPS (TQE) |
|---|---|---|---|---|---|---|---|---|---|
| K = 2 | ShuffleNetV2 | 0.8240 | 0.6783 | 0.6566 | 592 | 0.8392 | 0.7261 | 0.7075 | 336 |
| | GoogleNet | 0.9328 | 0.8942 | 0.8885 | 494 | 0.9348 | 0.8965 | 0.8908 | 301 |
| | ResNet-34 | 0.9535 | 0.9493 | 0.9479 | 586 | 0.9556 | 0.9513 | 0.9500 | 298 |
| | EfficientNet | 0.9338 | 0.9275 | 0.9275 | 602 | 0.9355 | 0.9293 | 0.9293 | 303 |
| | ECAEfficientNet | 0.9444 | 0.9412 | 0.9413 | 603 | 0.9453 | 0.9422 | 0.9424 | 303 |
| K = 3 | ShuffleNetV2 | 0.8451 | 0.7169 | 0.6950 | 397 | 0.8564 | 0.7493 | 0.7346 | 230 |
| | GoogleNet | 0.9163 | 0.8643 | 0.8605 | 356 | 0.9334 | 0.9005 | 0.8980 | 214 |
| | ResNet-34 | 0.9478 | 0.9429 | 0.9418 | 433 | 0.9580 | 0.9545 | 0.9537 | 216 |
| | EfficientNet | 0.9418 | 0.9376 | 0.9376 | 422 | 0.9499 | 0.9460 | 0.9460 | 217 |
| | ECAEfficientNet | 0.9614 | 0.9593 | 0.9594 | 433 | 0.9545 | 0.9518 | 0.9520 | 221 |
| K = 4 | ShuffleNetV2 | 0.7231 | 0.8236 | 0.6840 | 302 | 0.8594 | 0.7617 | 0.7501 | 177 |
| | GoogleNet | 0.9330 | 0.8952 | 0.8924 | 281 | 0.9406 | 0.9120 | 0.9104 | 166 |
| | ResNet-34 | 0.9572 | 0.9536 | 0.9527 | 307 | 0.9630 | 0.9600 | 0.9594 | 166 |
| | EfficientNet | 0.9516 | 0.9486 | 0.9487 | 303 | 0.9571 | 0.9543 | 0.9543 | 170 |
| | ECAEfficientNet | 0.9679 | 0.9660 | 0.9661 | 306 | 0.9592 | 0.9567 | 0.9568 | 168 |
| K | Models | Precision (TQE − Single CNN) | Recall (TQE − Single CNN) | F1-Score (TQE − Single CNN) | Precision (TQE − AVE) | Recall (TQE − AVE) | F1-Score (TQE − AVE) |
|---|---|---|---|---|---|---|---|
| K = 2 | ShuffleNetV2 | 0.0705 | 0.2009 | 0.2542 | 0.0203 | 0.0637 | 0.0679 |
| | GoogleNet | 0.1170 | 0.2169 | 0.2322 | 0.0027 | 0.0031 | 0.0031 |
| | ResNet-34 | 0.1366 | 0.1717 | 0.1873 | 0.0028 | 0.0027 | 0.0028 |
| | EfficientNet | 0.1085 | 0.1222 | 0.1221 | 0.0023 | 0.0024 | 0.0024 |
| | ECAEfficientNet | 0.0816 | 0.0865 | 0.0862 | 0.0012 | 0.0013 | 0.0015 |
| | Aggregate | 0.1028 | 0.1596 | 0.1764 | 0.0058 | 0.0146 | 0.0155 |
| K = 3 | ShuffleNetV2 | 0.1109 | 0.2683 | 0.3296 | 0.0129 | 0.0370 | 0.0453 |
| | GoogleNet | 0.1258 | 0.2391 | 0.2592 | 0.0195 | 0.0414 | 0.0429 |
| | ResNet-34 | 0.1457 | 0.1826 | 0.1990 | 0.0117 | 0.0133 | 0.0136 |
| | EfficientNet | 0.1329 | 0.1503 | 0.1502 | 0.0093 | 0.0096 | 0.0096 |
| | ECAEfficientNet | 0.0981 | 0.1037 | 0.1035 | −0.0079 | −0.0086 | −0.0085 |
| | Aggregate | 0.1227 | 0.1888 | 0.2083 | 0.0091 | 0.0185 | 0.0206 |
| K = 4 | ShuffleNetV2 | 0.1211 | 0.2961 | 0.3619 | 0.1454 | −0.0660 | 0.0705 |
| | GoogleNet | 0.1377 | 0.2581 | 0.2793 | 0.0081 | 0.0179 | 0.0192 |
| | ResNet-34 | 0.1534 | 0.1911 | 0.2077 | 0.0062 | 0.0068 | 0.0071 |
| | EfficientNet | 0.1427 | 0.1615 | 0.1614 | 0.0059 | 0.0061 | 0.0060 |
| | ECAEfficientNet | 0.1047 | 0.1108 | 0.1104 | −0.0093 | −0.0099 | −0.0099 |
| | Aggregate | 0.1319 | 0.2035 | 0.2241 | 0.0313 | −0.0090 | 0.0186 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Son, G.-J.; Jung, H.-C.; Kim, Y.-D. Temporal-Quality Ensemble Technique for Handling Image Blur in Packaging Defect Inspection. Sensors 2024, 24, 4438. https://doi.org/10.3390/s24144438