The Effect of Annotation Quality on Wear Semantic Segmentation by CNN
Figure 1. (a) Important geometric parameters of an end mill, such as the relief face, the end angle on the axial rake, the rake face, the axial relief face, and the helix angle. (b) Light reflection on a TiN-coated end mill under standard direct diffuse lighting: the most intense reflection appears along the cutter's edges, while shadowing is evident within the inner rake space. (c) Image captured by the IAS.
Figure 2. Schematic representation of the measurement setup for capturing high-quality images of end mills. The acquisition system consists of (1) a hemisphere with a barium sulfate coating, (2) 12 LEDs located at the edge of the hemisphere, (3) the tool under examination, held by (4) a three-jaw chuck, (5) a rotating plate for 360° recording, and (6) a camera connected to the computer via an interface (7).
Figure 3. Positive annotation examples. The wear classification comprises two primary categories, "yellow" for normal wear and "green" for abnormal wear, and two additional categories: "red" for the background and "black" for the tool.
Figure 4. Negative annotation examples; each sub-figure highlights a distinct type of incorrect annotation. The classes are "yellow" for normal wear, "green" for abnormal wear, "red" for the background, and "black" for the tool. (a) Abnormal wear mislabeled as normal wear; (b) background mistakenly marked as part of the tool surface; (c) impurities labeled as abnormal wear; (d) material removal due to chip residue in the chipping space marked as normal wear.
Figure 5. CNN architecture for normal and abnormal wear segmentation. Encoding blocks are colored blue, the bottleneck layer brown, and decoding blocks green.
Figure 6. Illustrative images captured with the acquisition system: (a) TiCN-coated end mill and (b) TiN-coated end mill.
Figure 7. Masks of end mill wear annotations for comparison on the TiCN-coated end mill dataset, produced by Annotators 1, 2, and 3. The dataset includes four classes: normal wear in green, abnormal wear in yellow, background in red, and tool in black. Critical annotations are marked in red.
Figure 8. Masks of end mill wear annotations for comparison on the TiN-coated end mill dataset, produced by Annotators 1, 2, and 3. The dataset includes four classes: normal wear in green, abnormal wear in yellow, background in red, and tool in black. Critical annotations are marked in red.
Figure 9. mIoU results of various models for the classes of interest: (a) normal wear and (b) abnormal wear. The models were trained on the same dataset but labeled by different annotators. The LR was set to 0.001 or 0.0001, and the hyperparameters BS and DO were varied, as detailed in Table A1. The dataset originates from a TiN-coated end mill. The standard deviation illustrates the performance variation among Annotators 1, 2, and 3.
Figure 10. mIoU results of various models for the classes of interest: (a) normal wear and (b) abnormal wear. The models were trained on the same dataset but labeled by different annotators. The LR was set to 0.001 or 0.0001, and the hyperparameters BS and DO were varied, as detailed in Table A2. The dataset originates from a TiCN-coated end mill. The standard deviation illustrates the performance variation among Annotators 1, 2, and 3.
Figure 11. Prediction results and corresponding masks on test images of a TiN-coated milling tool, predicted by the best-performing models of Annotators 1, 2, and 3 (shown in bold in Table A1). The predictions include the four classes: normal wear in green, abnormal wear in yellow, background in red, and tool in black. Wrong annotations in the ground-truth (GT) mask are marked in red.
Figure 12. Prediction results and corresponding masks on test images of a TiCN-coated milling tool, predicted by the best-performing models of Annotators 1, 2, and 3 (shown in bold in Table A2). The predictions include the four classes: normal wear in green, abnormal wear in yellow, background in red, and tool in black. Critical regions, such as wrong predictions or missed wear, are marked in red.
Abstract
1. Introduction
2. State of the Art
3. Materials and Methods
3.1. Structure Parameter-Related Annotation Challenges
3.2. Acquisition System
3.3. Annotation Guideline
- Definition: Normal wear is characterized by wear without fractures, whereas abnormal wear is wear with fractures. Both types of wear are treated as contiguous surfaces.
- Positive Examples (refer to Figure 3):
- Negative Examples (see Figure 4):
- Additional Guidelines (a mask-conversion sketch follows this list):
  - (a) Only label damage present on the cutting edges or the chamfer, excluding the chipping space.
  - (b) Wear that is ambiguous and cannot be distinctly labeled should be excluded from the dataset.
  - (c) Instances can appear to overlap, but in fact they have fine boundaries that can merge into one another, especially at the cutting edges; careful annotation is required here.
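For training, color-coded annotation masks like those in Figures 3 and 4 must be converted into integer label maps. The following is a minimal sketch of that step; the exact RGB palette values and function names are illustrative assumptions, not the authors' tooling.

```python
import numpy as np

# Hypothetical RGB palette; the paper names the colors but not the exact values.
CLASS_COLORS = {
    (255, 0, 0): 0,    # background ("red")
    (0, 0, 0): 1,      # tool ("black")
    (255, 255, 0): 2,  # normal wear ("yellow")
    (0, 255, 0): 3,    # abnormal wear ("green")
}

def rgb_mask_to_labels(mask: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB annotation mask to an (H, W) integer label map."""
    labels = np.zeros(mask.shape[:2], dtype=np.uint8)
    for color, class_id in CLASS_COLORS.items():
        labels[np.all(mask == np.array(color), axis=-1)] = class_id
    return labels
```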
3.4. CNN Model
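Figure 5 shows the encoder–bottleneck–decoder layout of the model. As a minimal sketch only: a U-Net-style Keras model matching the input size, loss, and optimizer listed in Table 1. The depth and filter widths here are assumptions, so this sketch will not reproduce the 2,140,740 trainable parameters reported in the table.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, dropout):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Dropout(dropout)(x) if dropout > 0 else x

def build_model(input_shape=(512, 512, 3), num_classes=4, dropout=0.3):
    inp = layers.Input(input_shape)
    # Encoder (blue blocks in Figure 5)
    c1 = conv_block(inp, 16, dropout); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32, dropout); p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 64, dropout); p3 = layers.MaxPooling2D()(c3)
    # Bottleneck (brown block)
    b = conv_block(p3, 128, dropout)
    # Decoder (green blocks) with skip connections to the encoder
    u3 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), 64, dropout)
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), 32, dropout)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), 16, dropout)
    out = layers.Conv2D(num_classes, 1, activation="softmax")(c6)
    return models.Model(inp, out)

model = build_model()
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-3),  # one of the two LR settings
              loss="sparse_categorical_crossentropy")
# Per-class IoU is evaluated after training (see Section 3.7).
```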
3.5. Dataset Characteristics
- Tool Diversity and Wear Patterns: our experimental framework uses two distinct datasets to ensure a comprehensive evaluation of various wear patterns.
- Dataset 1: tools coated with Titanium Nitride (TiN).
- Dataset 2: tools coated with Titanium Carbonitride (TiCN).
- Optimizing CNN Models: images from both datasets were resized to 512 × 512 pixels for compatibility with our CNN model and to optimize computational performance.
- Data Partitioning: the assembled images were divided into training, validation, and testing splits following an 8:1:1 distribution (a preprocessing sketch follows this list). A detailed enumeration of the instances in the dataset is presented in Table 2.
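A minimal sketch of the preprocessing described above — resizing to 512 × 512 and an 8:1:1 train/validation/test split. The file layout, seed, and helper names are assumptions, not the authors' code.

```python
import glob
import random
import tensorflow as tf

paths = sorted(glob.glob("dataset/images/*.jpeg"))  # hypothetical dataset folder
random.Random(42).shuffle(paths)                    # fixed seed for reproducibility

n = len(paths)
train = paths[: int(0.8 * n)]                       # 8 parts training
valid = paths[int(0.8 * n) : int(0.9 * n)]          # 1 part validation
test = paths[int(0.9 * n) :]                        # 1 part testing

def load_and_resize(path, size=(512, 512)):
    """Decode a JPEG and resize it to the CNN input resolution."""
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(img, size) / 255.0       # scale to [0, 1]
```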
3.6. Annotators
- Annotator 1: with more than two decades of experience in the field, this person represents the highest level of expertise and offers seasoned insight into the topic.
- Annotator 2: with 2 years of hands-on experience, this participant represents the middle tier, bridging the gap between novices and veterans.
- Annotator 3: as a newcomer to the field of machining technology, this participant offered a fresh perspective without deep-rooted biases or ingrained expertise.
3.7. Evaluation Indicators
- Determine the class frequencies by counting the occurrences of each class $c$ in the dataset to obtain $f_c$.
- Calculate the inverse frequency for each class: $\tilde{w}_c = 1 / f_c$.
- Normalize the weights by summing all inverse frequencies and dividing each inverse frequency by this sum, yielding weights $w_c = \tilde{w}_c / \sum_{k} \tilde{w}_k$ that add up to 1.
- Apply the weights to calculate the weighted mean IoU: $\mathrm{wmIoU} = \sum_{c} w_c \, \mathrm{IoU}_c$ (a worked example follows this list).
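A minimal numeric sketch of this weighting, assuming the class frequencies are per-class pixel counts (the counts below are illustrative, not the paper's). Because background and tool pixels vastly outnumber wear pixels, their normalized weights become very small, and wmIoU is driven mainly by the two wear classes — consistent with the wmIoU values in Tables A1 and A2 lying between the two wear-class mIoUs.

```python
import numpy as np

def weighted_miou(iou_per_class, class_counts):
    """Inverse-frequency-weighted mean IoU, following the steps above."""
    inv = 1.0 / np.asarray(class_counts, dtype=float)  # inverse class frequencies
    w = inv / inv.sum()                                # normalize so weights sum to 1
    return float(np.sum(w * np.asarray(iou_per_class, dtype=float)))

# Illustrative per-class IoUs (background, tool, abnormal wear, normal wear)
# and assumed pixel counts.
iou = [0.9987, 0.9895, 0.7537, 0.6134]
counts = [2.0e8, 1.5e8, 6.0e5, 2.0e5]
print(round(weighted_miou(iou, counts), 4))  # ~0.65, dominated by the wear classes
```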
4. Results and Discussion
4.1. Comparison of Annotation by Different Annotators
- Ambiguity in wear assessment: minute wear features on the cutting edges, such as on the edge of the TiCN cutter, were challenging to categorize definitively, yet were still annotated consistently (marked green and yellow in Figure 7b).
4.2. Performance Comparison of Various CNN Models on Diverse Datasets from Multiple Annotators
4.3. Impact of Hyperparameters on Model Sensitivity to Annotation Quality
4.4. Visual Analysis
4.5. Coefficient of Variation Analysis of the Segmentation Results across Annotators, Classes, and Hyperparameter Variations
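This analysis rests on the coefficient of variation, CV = σ/μ × 100%, computed across the three annotators' results for each class and hyperparameter setting. A minimal sketch follows; using the population standard deviation reproduces the tabulated value for the normal-wear entry of MTiN 0.

```python
import numpy as np

def cv_percent(values) -> float:
    """Coefficient of variation in percent: population std divided by mean."""
    v = np.asarray(values, dtype=float)
    return float(v.std() / v.mean() * 100.0)

# Normal-wear mIoU of models A1MTiN 0, A2MTiN 0, and A3MTiN 0 (Table A1):
print(round(cv_percent([0.6134, 0.4853, 0.5400]), 2))  # -> 9.61, matching MTiN 0
```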
5. Conclusions
6. Patents
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
CNN | Convolutional Neural Networks
TiN | Titanium Nitride
TiCN | Titanium Carbonitride
CV | Coefficient of Variation
DCNN | Deep Convolutional Neural Networks
IAS | Image Acquisition System
LR | Learning Rate
BS | Batch Size
DO | Dropout Rate
mIoU | mean Intersection over Union
wmIoU | weighted mean Intersection over Union
Appendix A
Table A1. Per-class mIoU and wmIoU on the TiN-coated end mill dataset for models trained on the annotations of Annotators 1, 2, and 3 under varying LR, BS, and DO.

Annotator 1 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---|
A1MTiN 0 | 0.9987 | 0.9895 | 0.7537 | 0.6134 | 0.6472 | 0.001 | 8 | 0 |
A1MTiN 1 | 0.9986 | 0.9979 | 0.8153 | 0.7120 | 0.7369 | 0.001 | 8 | 0.3 |
A1MTiN 2 | 0.9980 | 0.9890 | 0.6866 | 0.7120 | 0.7067 | 0.001 | 8 | 0.5 |
A1MTiN 3 | 0.9983 | 0.9459 | 0.6361 | 0.7120 | 0.6947 | 0.001 | 16 | 0 |
A1MTiN 4 | 0.9988 | 0.9982 | 0.8105 | 0.5935 | 0.6453 | 0.001 | 16 | 0.3 |
A1MTiN 5 | 0.9978 | 0.9962 | 0.5122 | 0.7039 | 0.6596 | 0.001 | 16 | 0.5 |
A1MTiN 6 | 0.9986 | 0.9410 | 0.6686 | 0.3597 | 0.4335 | 0.0001 | 8 | 0 |
A1MTiN 7 | 0.9978 | 0.9724 | 0.6243 | 0.5875 | 0.5970 | 0.0001 | 8 | 0.3 |
A1MTiN 8 | 0.9979 | 0.9888 | 0.6623 | 0.6619 | 0.6627 | 0.0001 | 8 | 0.5 |
A1MTiN 9 | 0.9986 | 0.9816 | 0.7119 | 0.4792 | 0.5350 | 0.0001 | 16 | 0 |
A1MTiN 10 | 0.9981 | 0.9800 | 0.6458 | 0.5161 | 0.5476 | 0.0001 | 16 | 0.3 |
A1MTiN 11 | 0.9981 | 0.9640 | 0.5671 | 0.4312 | 0.4643 | 0.0001 | 16 | 0.5 |

Annotator 2 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---
A2MTiN 0 | 0.9958 | 0.9845 | 0.6288 | 0.4853 | 0.5201 | 0.001 | 8 | 0 |
A2MTiN 1 | 0.9964 | 0.9518 | 0.6870 | 0.5933 | 0.6148 | 0.001 | 8 | 0.3 |
A2MTiN 2 | 0.9970 | 0.9969 | 0.6794 | 0.5933 | 0.6144 | 0.001 | 8 | 0.5 |
A2MTiN 3 | 0.9945 | 0.6453 | 0.5893 | 0.5933 | 0.5926 | 0.001 | 16 | 0 |
A2MTiN 4 | 0.9970 | 0.9626 | 0.6780 | 0.5933 | 0.6141 | 0.001 | 16 | 0.3 |
A2MTiN 5 | 0.9971 | 0.9969 | 0.6780 | 0.5933 | 0.6141 | 0.001 | 16 | 0.5 |
A2MTiN 6 | 0.9977 | 0.9980 | 0.8082 | 0.4618 | 0.5443 | 0.0001 | 8 | 0 |
A2MTiN 7 | 0.9980 | 0.9980 | 0.7778 | 0.5881 | 0.6335 | 0.0001 | 8 | 0.3 |
A2MTiN 8 | 0.9972 | 0.9386 | 0.6073 | 0.5775 | 0.5853 | 0.0001 | 8 | 0.5 |
A2MTiN 9 | 0.9979 | 0.9818 | 0.7234 | 0.4504 | 0.5157 | 0.0001 | 16 | 0 |
A2MTiN 10 | 0.9973 | 0.9647 | 0.5908 | 0.4646 | 0.4954 | 0.0001 | 16 | 0.3 |
A2MTiN 11 | 0.9972 | 0.9717 | 0.5334 | 0.3949 | 0.4287 | 0.0001 | 16 | 0.5 |

Annotator 3 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---
A3MTiN 0 | 0.9982 | 0.9895 | 0.7047 | 0.5400 | 0.5797 | 0.001 | 8 | 0 |
A3MTiN 1 | 0.9985 | 0.9981 | 0.7538 | 0.5679 | 0.6125 | 0.001 | 8 | 0.3 |
A3MTiN 2 | 0.9974 | 0.9968 | 0.6435 | 0.5679 | 0.5866 | 0.001 | 8 | 0.5 |
A3MTiN 3 | 0.9983 | 0.9470 | 0.7347 | 0.4478 | 0.5163 | 0.001 | 16 | 0 |
A3MTiN 4 | 0.9983 | 0.9976 | 0.6954 | 0.5681 | 0.5990 | 0.001 | 16 | 0.3 |
A3MTiN 5 | 0.9976 | 0.9883 | 0.5852 | 0.5599 | 0.5668 | 0.001 | 16 | 0.5 |
A3MTiN 6 | 0.9983 | 0.9899 | 0.6508 | 0.3757 | 0.4417 | 0.0001 | 8 | 0 |
A3MTiN 7 | 0.9981 | 0.9803 | 0.6432 | 0.4918 | 0.5285 | 0.0001 | 8 | 0.3 |
A3MTiN 8 | 0.9978 | 0.9299 | 0.5534 | 0.5599 | 0.5592 | 0.0001 | 8 | 0.5 |
A3MTiN 9 | 0.9983 | 0.9649 | 0.4452 | 0.3112 | 0.3442 | 0.0001 | 16 | 0 |
A3MTiN 10 | 0.9980 | 0.9467 | 0.6381 | 0.5037 | 0.5363 | 0.0001 | 16 | 0.3 |
A3MTiN 11 | 0.9977 | 0.9302 | 0.4966 | 0.4207 | 0.4397 | 0.0001 | 16 | 0.5 |

Table A2. Per-class mIoU and wmIoU on the TiCN-coated end mill dataset for models trained on the annotations of Annotators 1, 2, and 3 under varying LR, BS, and DO.

Annotator 1 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---|
A1MTiCN 0 | 0.9961 | 0.8439 | 0.5229 | 0.4586 | 0.5305 | 0.001 | 8 | 0 |
A1MTiCN 1 | 0.9964 | 0.9548 | 0.6568 | 0.5847 | 0.6536 | 0.001 | 8 | 0.3 |
A1MTiCN 2 | 0.9958 | 0.9262 | 0.4596 | 0.5072 | 0.5842 | 0.001 | 8 | 0.5 |
A1MTiCN 3 | 0.9969 | 0.9551 | 0.6136 | 0.5445 | 0.6209 | 0.001 | 16 | 0 |
A1MTiCN 4 | 0.9959 | 0.8173 | 0.2934 | 0.2174 | 0.3290 | 0.001 | 16 | 0.3 |
A1MTiCN 5 | 0.9955 | 0.8322 | 0.5638 | 0.5934 | 0.6375 | 0.001 | 16 | 0.5 |
A1MTiCN 6 | 0.9975 | 0.8219 | 0.4585 | 0.4561 | 0.5239 | 0.0001 | 8 | 0 |
A1MTiCN 7 | 0.9975 | 0.8984 | 0.5759 | 0.4794 | 0.5576 | 0.0001 | 8 | 0.3 |
A1MTiCN 8 | 0.9966 | 0.8050 | 0.4100 | 0.4402 | 0.5076 | 0.0001 | 8 | 0.5 |
A1MTiCN 9 | 0.9971 | 0.7031 | 0.3886 | 0.3675 | 0.4301 | 0.0001 | 16 | 0 |
A1MTiCN 10 | 0.9964 | 0.5983 | 0.3486 | 0.3672 | 0.4104 | 0.0001 | 16 | 0.3 |
A1MTiCN 11 | 0.9963 | 0.7507 | 0.3119 | 0.2970 | 0.3812 | 0.0001 | 16 | 0.5 |

Annotator 2 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---
A2MTiCN 0 | 0.8983 | 0.3533 | 0.1599 | 0.2571 | 0.2750 | 0.001 | 8 | 0 |
A2MTiCN 1 | 0.9961 | 0.7911 | 0.4870 | 0.4249 | 0.4933 | 0.001 | 8 | 0.3 |
A2MTiCN 2 | 0.9951 | 0.9407 | 0.5947 | 0.5617 | 0.6320 | 0.001 | 8 | 0.5 |
A2MTiCN 3 | 0.9962 | 0.7667 | 0.4076 | 0.1881 | 0.2970 | 0.001 | 16 | 0 |
A2MTiCN 4 | 0.9970 | 0.9378 | 0.5878 | 0.5555 | 0.6264 | 0.001 | 16 | 0.3 |
A2MTiCN 5 | 0.9958 | 0.9005 | 0.3705 | 0.5934 | 0.6485 | 0.001 | 16 | 0.5 |
A2MTiCN 6 | 0.9955 | 0.7428 | 0.3519 | 0.1957 | 0.2984 | 0.0001 | 8 | 0 |
A2MTiCN 7 | 0.9974 | 0.9396 | 0.5853 | 0.3547 | 0.4646 | 0.0001 | 8 | 0.3 |
A2MTiCN 8 | 0.9970 | 0.8960 | 0.5699 | 0.4499 | 0.5333 | 0.0001 | 8 | 0.5 |
A2MTiCN 9 | 0.9968 | 0.7905 | 0.4986 | 0.2600 | 0.3601 | 0.0001 | 16 | 0 |
A2MTiCN 10 | 0.9959 | 0.9018 | 0.5048 | 0.3617 | 0.4627 | 0.0001 | 16 | 0.3 |
A2MTiCN 11 | 0.9965 | 0.8019 | 0.3385 | 0.3434 | 0.4283 | 0.0001 | 16 | 0.5 |

Annotator 3 | Background [mIoU] | Tool [mIoU] | Abnormal Wear [mIoU] | Normal Wear [mIoU] | wmIoU [mIoU] | LR | BS | DO
---|---|---|---|---|---|---|---|---
A3MTiCN 0 | 0.9915 | 0.8627 | 0.4124 | 0.5363 | 0.5957 | 0.001 | 8 | 0 |
A3MTiCN 1 | 0.9896 | 0.9716 | 0.5922 | 0.5314 | 0.6132 | 0.001 | 8 | 0.3 |
A3MTiCN 2 | 0.9859 | 0.8933 | 0.6410 | 0.3969 | 0.4906 | 0.001 | 8 | 0.5 |
A3MTiCN 3 | 0.9893 | 0.7773 | 0.4876 | 0.3245 | 0.4097 | 0.001 | 16 | 0 |
A3MTiCN 4 | 0.9866 | 0.8350 | 0.5773 | 0.3129 | 0.4116 | 0.001 | 16 | 0.3 |
A3MTiCN 5 | 0.9843 | 0.7744 | 0.4834 | 0.3475 | 0.4277 | 0.001 | 16 | 0.5 |
A3MTiCN 6 | 0.9904 | 0.6380 | 0.2249 | 0.3151 | 0.3746 | 0.0001 | 8 | 0 |
A3MTiCN 7 | 0.9909 | 0.8976 | 0.6325 | 0.4854 | 0.5628 | 0.0001 | 8 | 0.3 |
A3MTiCN 8 | 0.9880 | 0.7684 | 0.4272 | 0.2561 | 0.3523 | 0.0001 | 8 | 0.5 |
A3MTiCN 9 | 0.9914 | 0.7691 | 0.2361 | 0.3842 | 0.4545 | 0.0001 | 16 | 0 |
A3MTiCN 10 | 0.9886 | 0.7510 | 0.5341 | 0.4391 | 0.4978 | 0.0001 | 16 | 0.3 |
A3MTiCN 11 | 0.9870 | 0.7921 | 0.2646 | 0.2860 | 0.3796 | 0.0001 | 16 | 0.5 |
Table 1. Training configuration of the CNN model.

Parameter | Value
---|---
Image Size | 512 × 512 × 3
Image Format | JPEG
BS | 8, 16
DO | 0.0, 0.3, 0.5
Epochs | 70
GPUs | 1
Trainable Parameters | 2,140,740
Loss | Sparse Categorical Cross-Entropy
Optimizer | RMSprop
Metric | IoU
Train/Valid/Test | 0.8/0.1/0.1
Table 2. Number of annotated instances per class in the two datasets.

Tool Coating | Background | Normal Wear | Abnormal Wear | Tool
---|---|---|---|---
TiCN | 432 | 404 | 806 | 768
TiN | 432 | 770 | 532 | 768

Coefficient of variation (CV) of the per-class mIoU across Annotators 1, 2, and 3 for the TiN-coated end mill dataset (MTiN models).

Model | Background CV [%] | Tool CV [%] | Abnormal Wear CV [%] | Normal Wear CV [%] | wmIoU CV [%] | LR | BS | DO
---|---|---|---|---|---|---|---|---|
MTiN 0 | 0.13 | 0.24 | 7.39 | 9.61 | 8.91 | 0.001 | 8 | 0 |
MTiN 1 | 0.10 | 2.22 | 7.49 | 10.05 | 8.90 | 0.001 | 8 | 0.3 |
MTiN 2 | 0.04 | 0.37 | 2.82 | 10.05 | 8.07 | 0.001 | 8 | 0.5 |
MTiN 3 | 0.18 | 16.78 | 9.27 | 18.49 | 12.16 | 0.001 | 16 | 0 |
MTiN 4 | 0.08 | 1.69 | 8.07 | 2.04 | 3.12 | 0.001 | 16 | 0.3 |
MTiN 5 | 0.03 | 0.39 | 11.47 | 9.94 | 6.17 | 0.001 | 16 | 0.5 |
MTiN 6 | 0.04 | 2.58 | 9.92 | 11.24 | 10.66 | 0.0001 | 8 | 0 |
MTiN 7 | 0.01 | 1.09 | 10.02 | 8.14 | 7.43 | 0.0001 | 8 | 0.3 |
MTiN 8 | 0.03 | 2.73 | 7.32 | 7.42 | 7.30 | 0.0001 | 8 | 0.5 |
MTiN 9 | 0.03 | 0.81 | 20.50 | 17.74 | 18.45 | 0.0001 | 16 | 0 |
MTiN 10 | 0.03 | 1.41 | 3.89 | 4.43 | 4.26 | 0.0001 | 16 | 0.3 |
MTiN 11 | 0.04 | 1.89 | 5.41 | 3.67 | 3.35 | 0.0001 | 16 | 0.5 |
Mean CV | 0.07 | 2.75 | 8.25 | 9.48 | 8.61 | - | - | - |

Coefficient of variation (CV) of the per-class mIoU across Annotators 1, 2, and 3 for the TiCN-coated end mill dataset (MTiCN models).

Model | Background CV [%] | Tool CV [%] | Abnormal Wear CV [%] | Normal Wear CV [%] | wmIoU CV [%] | LR | BS | DO
---|---|---|---|---|---|---|---|---|
MTiCN 0 | 4.69 | 34.34 | 41.62 | 28.19 | 29.63 | 0.001 | 8 | 0 |
MTiCN 1 | 0.31 | 8.99 | 12.09 | 12.93 | 11.60 | 0.001 | 8 | 0.3 |
MTiCN 2 | 0.45 | 2.16 | 13.62 | 14.03 | 10.32 | 0.001 | 8 | 0.5 |
MTiCN 3 | 0.35 | 10.38 | 16.86 | 41.67 | 30.34 | 0.001 | 16 | 0 |
MTiCN 4 | 0.47 | 6.15 | 28.05 | 39.32 | 27.51 | 0.001 | 16 | 0.3 |
MTiCN 5 | 0.54 | 6.17 | 16.78 | 22.67 | 17.78 | 0.001 | 16 | 0.5 |
MTiCN 6 | 0.30 | 10.26 | 27.67 | 33.01 | 23.48 | 0.0001 | 8 | 0 |
MTiCN 7 | 0.31 | 2.15 | 4.14 | 13.69 | 8.54 | 0.0001 | 8 | 0.3 |
MTiCN 8 | 0.42 | 6.52 | 15.28 | 23.34 | 17.21 | 0.0001 | 8 | 0.5 |
MTiCN 9 | 0.26 | 4.93 | 28.74 | 16.33 | 9.64 | 0.0001 | 16 | 0 |
MTiCN 10 | 0.36 | 16.51 | 17.60 | 9.06 | 7.86 | 0.0001 | 16 | 0.3 |
MTiCN 11 | 0.44 | 2.84 | 10.02 | 8.05 | 5.69 | 0.0001 | 16 | 0.5 |
Mean CV | 0.76 | 9.97 | 18.53 | 20.24 | 16.32 | - | - | - |