Integrated Circuit Bonding Distance Inspection via Hierarchical Measurement Structure
Figure 1. Example of an integrated circuit. (a) Zoomed-in view of wire bonding. The bonding distance is the projected distance between the centers of two bonding spots. (b) Image of an integrated circuit, with a resolution of 2448 × 2048. (c) Zoomed-in views of a bonding spot and a gold wire.
Figure 2. Overview of our hierarchical measurement structure for the integrated circuit bonding distance inspection task. First, the bonding regions are coarsely located by several convolution layers. Second, the bonding regions are fed into two parallel modules for bonding spot detection and gold wire segmentation. Third, the detected bonding spots are matched using the gold wire segmentation information in the BDMM. Last, the bonding distance is measured by extracting the center of each bonding spot and computing the distance between the center coordinates. The output comprises the bonding distances of all wire bondings in the image.
Figure 3. Structure of MWBINet, containing the fine location branch and the gold wire segmentation branch, which extract the bonding spot and gold wire information, respectively. The fine location branch consists of a feature extraction module, a spatial correlation sensing module, and a bidirectional feature fusion module. The gold wire segmentation branch contains an edge module and an ASPP module.
Figure 4. Structure of the bottleneck. The orange circles indicate the inputs and outputs of the bottlenecks. (a) The original bottleneck, consisting of 1 × 1 and 3 × 3 convolutions. (b) The dilated bottleneck, consisting of 1 × 1 and 3 × 3 dilated convolutions. (c) The dilated bottleneck with a 1 × 1 convolution projection.
Figure 5. Pipeline of the bonding distance measurement module (BDMM), which consists of three steps: (1) bonding spot matching; (2) bonding spot center extraction; (3) bonding distance calculation. Different colors indicate different detection boxes for bonding spots. (A minimal code sketch of these three steps follows this caption list.)
Figure 6. Four gold wire bonding distribution models: (a) line shape, (b) X shape, (c) Y shape, (d) V shape.
Figure 7. Detection results for tiny bonding spots. Red boxes indicate the bonding regions from the coarse location; green boxes indicate the bonding spots detected by the fine location branch.
Figure 8. Segmentation results for gold wires. Red boxes indicate the bonding regions from the coarse location; red masks indicate the gold wires.
Figure 9. An integrated circuit image used to evaluate the bonding distance measurement method. (a) Integrated circuit image. (b) Partial bonding region. "#1–#10" denotes ten gold wires numbered from 1 to 10. (c) Measurement of the bonding distance.
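Figure 5's three BDMM steps reduce to simple geometry once the detection boxes and wire masks are available. Below is a minimal, hypothetical sketch of that geometry, not the authors' implementation: spots are matched by testing which detected boxes contain a wire's two endpoints, centers are taken as box midpoints, and the bonding distance is the Euclidean distance between matched centers. All names (`measure_bonding_distances`, `pixel_size_um`, etc.) are assumptions for illustration.

```python
import numpy as np

def box_center(box):
    """Center (x, y) of an axis-aligned box given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def contains(box, point):
    """True if the point lies inside the box."""
    x1, y1, x2, y2 = box
    px, py = point
    return x1 <= px <= x2 and y1 <= py <= y2

def measure_bonding_distances(spot_boxes, wire_endpoints, pixel_size_um=1.0):
    """Step 1: match each wire's endpoints to detected spot boxes.
    Step 2: extract the matched box centers.
    Step 3: return the Euclidean distance between the two centers."""
    distances = []
    for p_start, p_end in wire_endpoints:
        start_box = next((b for b in spot_boxes if contains(b, p_start)), None)
        end_box = next((b for b in spot_boxes if contains(b, p_end)), None)
        if start_box is None or end_box is None:
            distances.append(None)  # unmatched wire: no measurement possible
            continue
        c1 = np.array(box_center(start_box))
        c2 = np.array(box_center(end_box))
        distances.append(float(np.linalg.norm(c1 - c2)) * pixel_size_um)
    return distances

# Toy usage: two spots joined by one wire, assuming 1 um per pixel.
spots = [(10, 10, 30, 30), (100, 10, 120, 30)]
wires = [((20, 20), (110, 20))]
print(measure_bonding_distances(spots, wires))  # [90.0]
```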
Abstract
1. Introduction
- We design a hierarchical measurement structure based on deep learning that measures bonding distances in integrated circuits with high precision from 2D images.
- We propose a novel multi-branch wire bonding inspection network (MWBINet) for locating wire bonds. Each branch provides the other with auxiliary spatial correlation information, which strongly enhances the feature representation and thereby addresses the limited target information available when detecting very small targets (a structural sketch follows this list).
- We propose the bonding distance measurement module (BDMM) to perform bonding spot matching, thus achieving accurate bonding distance measurement.
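To make the multi-branch idea concrete, the sketch below shows a shared backbone feeding a detection head and a segmentation head in parallel. It is a structural illustration only, with every layer choice assumed; it is not MWBINet itself, whose modules are described in Section 3.2.

```python
import torch
import torch.nn as nn

class TwoBranchSketch(nn.Module):
    """Hypothetical two-branch layout: a shared backbone feeds a detection
    head (bonding spots) and a segmentation head (gold wires) in parallel."""
    def __init__(self, num_classes=1):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for the feature extraction module
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # stand-in for the fine location branch: 4 box offsets + 1 objectness score
        self.detect_head = nn.Conv2d(64, 5, 1)
        # stand-in for the gold wire segmentation branch: per-pixel mask logits
        self.segment_head = nn.Sequential(
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feats = self.backbone(x)  # shared features consumed by both branches
        return self.detect_head(feats), self.segment_head(feats)

model = TwoBranchSketch()
boxes, mask = model(torch.randn(1, 3, 256, 256))
print(boxes.shape, mask.shape)  # torch.Size([1, 5, 64, 64]) torch.Size([1, 1, 256, 256])
```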
2. Related Work
2.1. Wire Bonding Inspection
2.2. Learning-Based Object Detection
2.3. Learning-Based Semantic Segmentation
3. Method
3.1. Overview
3.2. Multi-Branch Wire Bonding Inspection Framework
3.2.1. Fine Location Branch
3.2.2. Gold Wire Segmentation Branch
3.3. Bonding Distance Measurement Module
4. Experiments and Results
4.1. Experimental Settings
4.2. Comparisons
4.3. Evaluation of Bonding Distance Measurement
4.4. Ablation of Key Component Modules
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Feng, W.; Chen, X.; Wang, C.; Shi, Y. Application research on the time–frequency analysis method in the quality detection of ultrasonic wire bonding. Int. J. Distrib. Sens. Netw. 2021, 17, 15501477211018346.
2. Perng, D.B.; Chou, C.C.; Lee, S.M. Design and development of a new machine vision wire bonding inspection system. Int. J. Adv. Manuf. Technol. 2007, 34, 323–334.
3. Xie, Q.; Long, K.; Lu, D.; Li, D.; Zhang, Y.; Wang, J. Integrated Circuit Gold Wire Bonding Measurement via 3D Point Cloud Deep Learning. IEEE Trans. Ind. Electron. 2021, 69, 11807–11815.
4. Xiang, R.; He, W.; Zhang, X.; Wang, D.; Shan, Y. Size measurement based on a two-camera machine vision system for the bayonets of automobile brake pads. Measurement 2018, 122, 106–116.
5. Min, J. Measurement method of screw thread geometric error based on machine vision. Meas. Control 2018, 51, 304–310.
6. Zhang, M.; Xing, X.; Wang, W. Smart Sensor-Based Monitoring Technology for Machinery Fault Detection. Sensors 2024, 24, 2470.
7. Kim, G.; Kim, S. A road defect detection system using smartphones. Sensors 2024, 24, 2099.
8. Egodawela, S.; Khodadadian Gostar, A.; Buddika, H.S.; Dammika, A.; Harischandra, N.; Navaratnam, S.; Mahmoodian, M. A Deep Learning Approach for Surface Crack Classification and Segmentation in Unmanned Aerial Vehicle Assisted Infrastructure Inspections. Sensors 2024, 24, 1936.
9. Jo, C.M.; Jang, W.K.; Seo, Y.H.; Kim, B.H. In Situ Surface Defect Detection in Polymer Tube Extrusion: AI-Based Real-Time Monitoring Approach. Sensors 2024, 24, 1791.
10. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
11. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
12. Li, D.; Li, Y.; Xie, Q.; Wu, Y.; Yu, Z.; Wang, J. Tiny Defect Detection in High-Resolution Aero-Engine Blade Images via a Coarse-to-Fine Framework. IEEE Trans. Instrum. Meas. 2021, 70, 1–12.
13. Zhou, X.; Fang, H.; Liu, Z.; Zheng, B.; Sun, Y.; Zhang, J.; Yan, C. Dense Attention-guided Cascaded Network for Salient Object Detection of Strip Steel Surface Defects. IEEE Trans. Instrum. Meas. 2021, 71, 5004914.
14. Li, X.; Yang, Y.; Ye, Y.; Ma, S.; Hu, T. An online visual measurement method for workpiece dimension based on deep learning. Measurement 2021, 185, 110032.
15. Long, Z.; Zhou, X.; Zhang, X.; Wang, R.; Wu, X. Recognition and classification of wire bonding joint via image feature and SVM model. IEEE Trans. Compon. Packag. Manuf. Technol. 2019, 9, 998–1006.
16. Chen, J.; Zhang, Z.; Wu, F. A data-driven method for enhancing the image-based automatic inspection of IC wire bonding defects. Int. J. Prod. Res. 2021, 59, 4779–4793.
17. Chan, K.Y.; Yiu, K.F.C.; Lam, H.K.; Wong, B.W. Ball bonding inspections using a conjoint framework with machine learning and human judgement. Appl. Soft Comput. 2021, 102, 107115.
18. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
19. Girshick, R. Fast R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448.
20. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28, 91–99.
21. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
22. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
23. Ko, K.W.; Kim, D.H.; Lee, J.; Lee, S. 3D Measurement System of Wire for Automatic Pull Test of Wire Bonding. J. Inst. Control Robot. Syst. 2015, 21, 1130–1135.
24. Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934.
25. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475.
26. Terven, J.; Córdova-Esparza, D.M.; Romero-González, J.A. A Comprehensive Review of YOLO Architectures in Computer Vision: From YOLOv1 to YOLOv8 and YOLO-NAS. Mach. Learn. Knowl. Extr. 2023, 5, 1680–1716.
27. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder–decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
28. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder–decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
29. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
30. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv 2014, arXiv:1412.7062.
31. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587.
32. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
33. He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916.
34. Sun, P.; Zhang, R.; Jiang, Y.; Kong, T.; Xu, C.; Zhan, W.; Tomizuka, M.; Li, L.; Yuan, Z.; Wang, C.; et al. Sparse R-CNN: End-to-end object detection with learnable proposals. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14454–14463.
35. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
36. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
37. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
38. Duan, K.; Bai, S.; Xie, L.; Qi, H.; Huang, Q.; Tian, Q. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6569–6578.
39. Lv, W.; Xu, S.; Zhao, Y.; Wang, G.; Wei, J.; Cui, C.; Du, Y.; Dang, Q.; Liu, Y. DETRs beat YOLOs on real-time object detection. arXiv 2023, arXiv:2304.08069.
40. Hou, X.; Liu, M.; Zhang, S.; Wei, P.; Chen, B. Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement. arXiv 2024, arXiv:2403.16131.
41. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
42. Wan, Q.; Huang, Z.; Lu, J.; Yu, G.; Zhang, L. SeaFormer: Squeeze-enhanced axial transformer for mobile semantic segmentation. arXiv 2023, arXiv:2301.13156.
43. Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 4015–4026.
44. Ma, X.; Ni, Z.; Chen, X. Semantic and Spatial Adaptive Pixel-level Classifier for Semantic Segmentation. arXiv 2024, arXiv:2405.06525.
45. Wei, Z.; Chen, L.; Jin, Y.; Ma, X.; Liu, T.; Lin, P.; Wang, B.; Chen, H.; Zheng, J. Stronger, Fewer, & Superior: Harnessing vision foundation models for domain generalized semantic segmentation. arXiv 2023, arXiv:2312.04265.
| Method | Year | Accuracy (%) | Precision (%) | Recall (%) | F-Measure (%) |
|---|---|---|---|---|---|
| RetinaNet [37] | 2017 | 88.7 | 86.3 | 85.2 | 87.5 |
| CenterNet [38] | 2019 | 90.6 | 89.5 | 89.2 | 89.3 |
| EfficientDet [35] | 2020 | 90.4 | 91.2 | 89.9 | 90.5 |
| Sparse R-CNN [34] | 2021 | 84.5 | 86.2 | 87.3 | 86.7 |
| RT-DETR [39] | 2023 | 78.5 | 73.7 | 76.2 | 77.3 |
| YOLOv8 [26] | 2023 | 83.7 | 82.5 | 86.5 | 84.4 |
| Salience DETR [40] | 2024 | 89.6 | 87.7 | 91.3 | 86.8 |
| Our method | - | 94.5 | 93.0 | 94.8 | 93.9 |
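For reference, the precision, recall, and F-measure reported above follow their standard definitions from true positive (TP), false positive (FP), and false negative (FN) counts. A minimal sketch, with made-up counts for illustration only:

```python
def detection_metrics(tp, fp, fn, tn=0):
    """Standard detection metrics from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f_measure, accuracy

# Hypothetical counts, not from the paper.
p, r, f, a = detection_metrics(tp=930, fp=70, fn=51)
print(f"precision={p:.3f} recall={r:.3f} F={f:.3f}")
```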
| Method | Year | MPA (%) | MIoU (%) |
|---|---|---|---|
| FCN [10] | 2015 | 73.6 | 70.2 |
| U-Net [11] | 2015 | 75.6 | 79.3 |
| SegNet [28] | 2017 | 71.5 | 79.2 |
| PSPNet [41] | 2017 | 79.1 | 80.9 |
| DeepLabv3+ [27] | 2018 | 84.3 | 83.1 |
| SeaFormer [42] | 2023 | 85.2 | 81.7 |
| SAM [43] | 2023 | 85.8 | 84.8 |
| SSA [44] | 2024 | 84.5 | 86.3 |
| Rein [45] | 2024 | 85.2 | 85.9 |
| Our method | - | 86.2 | 87.3 |
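Likewise, MPA (mean pixel accuracy) and MIoU (mean intersection over union) can be computed from a per-class confusion matrix. A minimal sketch with a made-up two-class (background / gold wire) matrix; the function name and counts are assumptions:

```python
import numpy as np

def mpa_miou(conf):
    """Mean pixel accuracy and mean IoU from a KxK confusion matrix,
    where conf[i, j] counts pixels of class i predicted as class j."""
    diag = np.diag(conf).astype(float)
    per_class_acc = diag / conf.sum(axis=1)                        # per-class recall
    iou = diag / (conf.sum(axis=1) + conf.sum(axis=0) - diag)      # per-class IoU
    return per_class_acc.mean(), iou.mean()

# Toy confusion matrix, for illustration only.
conf = np.array([[9000, 200],
                 [ 150, 650]])
mpa, miou = mpa_miou(conf)
print(f"MPA={mpa:.3f} MIoU={miou:.3f}")
```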
| Gold Wire | GT (μm) | Mean Value (μm) | Mean Error (μm) | Maximum Error (μm) | Relative Error (%) |
|---|---|---|---|---|---|
| #1 | 2147.9 | 2157.3 | 9.4 | 13.5 | 0.44 |
| #2 | 1003.6 | 1014.9 | 11.3 | 14.5 | 1.13 |
| #3 | 2110.6 | 2106.2 | −4.4 | −9.0 | 0.21 |
| #4 | 1077.9 | 1085.7 | 7.8 | 11.6 | 0.72 |
| #5 | 2140.1 | 2146.4 | 6.3 | 10.1 | 0.29 |
| #6 | 1258.1 | 1267.9 | 9.8 | 15.1 | 0.78 |
| #7 | 2175.1 | 2164.7 | −10.4 | −14.3 | 0.48 |
| #8 | 2294.8 | 2303.1 | 8.3 | 13.6 | 0.36 |
| #9 | 1158.2 | 1165.1 | 6.9 | 10.6 | 0.60 |
| #10 | 1454.6 | 1446.1 | −8.5 | −13.1 | 0.59 |
| Average | - | - | 8.3 | 12.5 | 0.56 |
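The relative error column is consistent with |mean error| / ground truth × 100; for wire #1, 9.4 / 2147.9 × 100 ≈ 0.44%. A quick check over the first two table rows:

```python
# Relative error as |mean error| / ground truth * 100, using table rows #1 and #2.
for gt, err in [(2147.9, 9.4), (1003.6, 11.3)]:
    print(f"{abs(err) / gt * 100:.2f}%")  # 0.44%, 1.13%
```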
| Method | Precision (%) | Recall (%) | F-Measure (%) | MPA (%) | MIoU (%) | Mean Error (μm) | Relative Error (%) |
|---|---|---|---|---|---|---|---|
| MWBINet w/o CL | 81.2 | 80.6 | 80.3 | 74.3 | 71.2 | 42.3 | 6.7 |
| MWBINet w/o SE | 90.1 | 89.7 | 90.6 | 86.1 | 87.1 | 9.1 | 0.59 |
| MWBINet w/o BFFM | 89.3 | 89.6 | 90.3 | 86.0 | 87.0 | 9.3 | 0.62 |
| MWBINet w/o EB | 90.3 | 90.4 | 88.7 | 83.2 | 82.9 | 12.5 | 0.89 |
| MWBINet w/o GSWL | 90.1 | 90.2 | 89.3 | 81.1 | 81.8 | 13.6 | 0.94 |
| MWBINet | 93.0 | 94.8 | 93.9 | 86.2 | 87.3 | 8.3 | 0.56 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, Y.; Pu, C.; Zhang, Y.; Niu, M.; Hao, L.; Wang, J. Integrated Circuit Bonding Distance Inspection via Hierarchical Measurement Structure. Sensors 2024, 24, 3933. https://doi.org/10.3390/s24123933