Research on the Vision-Based Dairy Cow Ear Tag Recognition Method
Figure 1. Some samples of data from CEID-D. Capture angles: frontal, lateral, and overhead views of cows. Weather conditions during shooting: overcast and sunny days. Captured cow poses: standing, feeding, and lying down.
Figure 2. Ear tag image quality assessment.
Figure 3. Preprocessing of ear tag images. From left to right: the original ear tag, the ear tag after bilateral filtering, the ear tag after edge sharpening, and the ear tag after grayscaling.
Figure 4. Ear tag images annotated with Paddlelabel.
Figure 5. Technology roadmap.
Figure 6. The structure of YOLOV5s.
Figure 7. The structure of Small-YOLOV5s.
Figure 8. The structure of CA.
Figure 9. The structure of DBNet.
Figure 10. The structure of the CRNN.
Figure 11. Comparison of cow ear tag detection results. (a) The results of ear tag detection using the color threshold method, with the original image on the left and the detection results on the right. (b,c) The detection results of cow ear tags in different scenarios using Small-YOLOV5s.
Figure 12. Loss decay and recognition accuracy in CRNN training.
Abstract
1. Introduction
- A lightweight Small-YOLOv5s, specifically designed for small object detection, is proposed and applied directly to dairy cow ear tag detection. It improves detection accuracy and speed while significantly reducing the number of parameters and the computational overhead, making it easy to deploy on hardware that must meet the real-time requirements of practical application scenarios. The model is robust to lighting changes, tilt, occlusion, and other interference, overcoming the limitation of traditional image processing techniques that can only detect ear tags of a single color, and it can simultaneously detect ear tags of various colors across multiple cow poses without being affected by background color (see the detection sketch after this list).
- The ear tag recognition method, which combines DBNet [4] with a CRNN [5], can quickly and accurately locate and recognize multiple lines of text on the same ear tag. Preprocessing operations such as character-level segmentation and image orientation correction are avoided, and images of both handwritten and printed numbers on ear tags are mixed into the training data, so the model achieves high recognition accuracy and generalization for both types of numbering as well as for distorted, deformed, and otherwise irregular characters (see the recognition sketch after this list).
- Two annotated experimental datasets in a standardized format are released, one for ear tag detection and one for ear tag recognition. They are not limited to these two uses and can also support individual cow detection, behavior recognition, and the training of other text recognition models. Because no unified, standardized experimental dataset currently exists for cow ear tag identification, the datasets in this study are made freely and openly available to promote the modernization and development of dairy farming.
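To make the detection stage concrete, the following is a minimal inference sketch. It assumes a trained Small-YOLOV5s checkpoint saved in the standard YOLOv5 format and loads it through the public Ultralytics YOLOv5 hub interface [21]; the weight path `small_yolov5s.pt` and the confidence threshold are placeholders, not values from the paper, and the authors' actual training and deployment code may differ.

```python
import cv2
import torch

# Load a YOLOv5-format checkpoint through the public Ultralytics hub interface [21].
# "small_yolov5s.pt" is a hypothetical path to a trained Small-YOLOV5s checkpoint.
model = torch.hub.load("ultralytics/yolov5", "custom", path="small_yolov5s.pt")
model.conf = 0.25  # confidence threshold (assumed value, not taken from the paper)

img_bgr = cv2.imread("cow.jpg")           # any barn-scene frame
img_rgb = img_bgr[:, :, ::-1].copy()      # BGR -> RGB, as the hub model expects
results = model(img_rgb)

# results.xyxy[0] holds one row per detection: [x1, y1, x2, y2, confidence, class]
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    tag_crop = img_bgr[int(y1):int(y2), int(x1):int(x2)]  # candidate ear tag region
    cv2.imwrite("tag_crop.jpg", tag_crop)                  # fed to the OCR stage below
    print(f"ear tag at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf={conf:.2f}")
```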
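The recognition stage (text localization with DBNet [4] followed by sequence recognition with a CRNN [5]) can be approximated with PaddleOCR, whose earlier default pipeline uses the same DB + CRNN pairing. This is a hedged illustration of the two-stage idea rather than the authors' implementation; the exact models, thresholds, and output structure depend on the PaddleOCR version installed.

```python
from paddleocr import PaddleOCR

# Text detection + recognition pipeline; in earlier PaddleOCR releases the
# defaults are DB for detection and a CRNN for recognition.
ocr = PaddleOCR(lang="en", use_angle_cls=False)

# "tag_crop.jpg" is the ear tag region saved by the detection sketch above.
result = ocr.ocr("tag_crop.jpg")

# In PaddleOCR 2.x, result[0] is a list of text lines, each [box_points, (text, score)];
# other releases may nest the output differently.
for box, (text, score) in result[0]:
    print(f"line: {text!r}  confidence: {score:.2f}")
```

Because DB returns whole text-line boxes and the CRNN reads each line as a character sequence, no character-level segmentation or explicit multi-line splitting is required.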
2. Related Work
3. Dataset
3.1. Cow Ear Tag Detection Dataset (CEID-D)
3.2. Cow Ear Tag Recognition Dataset (CEGD-R)
4. Method
4.1. Ear Tag Detection
- Reduction in the number of convolutional layers
- Addition of coordinate attention (CA) [24] (see the sketch after this list)
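This outline does not spell out the CA block, so the following is a minimal PyTorch sketch of the coordinate attention design from Hou et al. [24]: global pooling along each spatial axis, a shared 1×1 bottleneck, then per-axis attention maps that re-weight the input. The reduction ratio and activation here are assumptions; the exact variant inserted into Small-YOLOV5s may differ.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Sketch of a coordinate attention block (Hou et al., CVPR 2021)."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (N, C, H, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = torch.cat([x_h, x_w], dim=2)          # (N, C, H+W, 1)
        y = self.act(self.bn1(self.conv1(y)))     # shared 1x1 bottleneck
        x_h, x_w = torch.split(y, [h, w], dim=2)
        x_w = x_w.permute(0, 1, 3, 2)             # (N, mid, 1, W)
        a_h = torch.sigmoid(self.conv_h(x_h))     # attention along height
        a_w = torch.sigmoid(self.conv_w(x_w))     # attention along width
        return x * a_h * a_w

# Example: re-weight a 64-channel feature map.
# att = CoordAtt(64); out = att(torch.randn(2, 64, 40, 40))
```

Unlike SE-style channel attention, the per-axis pooling keeps positional information along height and width, which is why CA is attractive for localizing small targets such as ear tags.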
4.2. Ear Tag Number Detection
4.3. Ear Tag Recognition
5. Experiment
5.1. Evaluation Metrics
5.2. Experimental Setup
5.3. Experimental Results and Analysis
5.4. Discussion
6. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, C.Q. Review of the economic situation of China’s dairy industry in 2021 and outlook for 2022. Chin. J. Anim. Husb. 2022, 58, 232–238. [Google Scholar]
- Zhang, N. Comprehensive technical analysis of precision dairy farming for cows. Chin. Sci. Technol. Period. Database Agric. Sci. 2022, 2022, 0081–0083. [Google Scholar]
- Liu, H.; Peng, H.; Wang, C.; Zhu, W.; Dong, X. Comparative analysis of dairy farming efficiency in different dairy production areas in china—Based on survey data from 266 farms. Inst. Agric. Inf. Chin. Acad. Agric. Sci. 2020, 41, 110–119. [Google Scholar]
- Liao, M.; Zhu, Z.; Shi, B.; Xia, G.; Bai, X.; Yuille, A.L. Real-time scene text detection with differentiable binarization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 19 June 2020; pp. 12026–12035. [Google Scholar]
- Shi, B.; Bai, X.; Yao, C. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 2298–2304. [Google Scholar] [CrossRef] [PubMed]
- Sun, Y.K.; Wang, Y.J.; Huo, P.J.; Cui, Z.Q.; Zhang, Y.G. Research progress on methods and application of dairy cow identification. J. China Agric. Univ. 2019, 24, 62–70. (In Chinese) [Google Scholar]
- Awad, S.; Ismail, A. From classical methods to animal biometrics: A review on cattle identification and tracking. Comput. Electron. Agric. 2016, 123, 423–435. [Google Scholar] [CrossRef]
- Ebert, B.; Whittenburg, B. Identification of Beef Animals (Tech. Rep. YANR-0170); Alabama A & M University: Huntsville, AL, USA; Auburn University: Auburn, AL, USA, 2006. [Google Scholar]
- Barron, U.G.; Butler, F.; McDonnell, K.; Ward, S. The end of the identity crisis? Advances in biometric markers for animal identification. Ir. Vet. J. 2009, 62, 204–208. [Google Scholar]
- Ruiz-Garcia, L.; Lunadei, L. The role of RFID in agriculture: Applications, limitations, and challenges. Comput. Electron. Agric. 2011, 79, 42–50. [Google Scholar] [CrossRef]
- Arcidiacono, C.; Porto, S.M.C.; Mancino, M.; Cascone, G. Development of a threshold-based classifier for real-time recognition of cow feeding and standing behavioral activities from accelerometer data. Comput. Electron. Agric. 2017, 134, 124–134. [Google Scholar] [CrossRef]
- Hossain, M.E.; Kabir, A.; Zheng, L.; Swain, D.; McGrath, S.; Medway, J. A systematic review of machine learning techniques for cattle identification: Datasets, methods and future directions. Artif. Intell. Agric. 2022, 6, 138–155. [Google Scholar] [CrossRef]
- Qiao, Y.; Su, D.; Kong, H.; Sukkarieh, S.; Lomax, S.; Clark, C.E. Individual Cattle Identification Using a Deep Learning-Based Framework. IFAC-PapersOnLine. 2019. Available online: https://api.semanticscholar.org/CorpusID:213360366 (accessed on 26 March 2024).
- Tassinari, P.; Bovo, M.; Benni, S.; Franzoni, S.; Poggi, M.; Mammi, L.M.; Mattoccia, S.; Stefano, L.D.; Bonora, F.; Barbaresi, A.; et al. A computer vision approach based on deep learning for the detection of dairy cows in free stall barn. Comput. Electron. Agric. 2021, 182, 106030. [Google Scholar] [CrossRef]
- Chen, S.; Wang, S.; Zuo, X.; Yang, R. Angus Cattle Recognition Using Deep Learning. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 4169–4175. [Google Scholar]
- Hao, W.; Ren, C.; Han, M.; Zhang, L.; Li, F.; Liu, Z. Cattle Body Detection Based on YOLOv5-EMA for Precision Livestock Farming. Animals 2023, 13, 3535. [Google Scholar] [CrossRef] [PubMed]
- Zhang, R.H.; Zhao, K.X.; Ji, J.T.; Zhu, X.F. Automatic location and recognition of cow’s collar ID based on machine learning. J. Nanjing Agric. Univ. 2021, 44, 586–595. [Google Scholar] [CrossRef]
- Ilestrand, M. Automatic Ear Tag Recognition on Dairy Cows in Real Barn Environment. Agricultural and Food Sciences, Engineering. 2017. Available online: https://api.semanticscholar.org/CorpusID:102490549 (accessed on 26 March 2024).
- Zin, T.T.; Pwint, M.Z.; Seint, P.T.; Thant, S.; Misawa, S.; Sumi, K.; Yoshida, K. Automatic cow location tracking system using ear tag visual analysis. Sensors 2020, 20, 3564. [Google Scholar] [CrossRef] [PubMed]
- Bastiaansen, J.W.M.; Hulsegge, B.; Schokker, D.; Ellen, E.D.; Klermans, G.G.J.; Taghavirazavizadeh, M.; Kamphuis, C. Continuous Real-Time Cow Identification by Reading Ear Tags from Live-Stream Video. Front. Anim. Sci. 2022, 3. [Google Scholar] [CrossRef]
- Jocher, G. YOLOv5. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 6 March 2024).
- Chen, C.; Liu, M.; Tuzel, O.; Xiao, J. R-CNN for Small Object Detection. In Proceedings of the Asian Conference on Computer Vision, Taipei, Taiwan, 20–24 November 2016. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar] [CrossRef]
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
| Term | Configuration |
|---|---|
| Operating System | CentOS Linux 7.7.1908 |
| GPU | NVIDIA Quadro P5000 (NVIDIA Corporation, Santa Clara, CA, USA) |
| Memory | 64 GB |
| Python | 3.8.17 |
| PyTorch | 1.9.1 |
| CUDA | 11.2 |
| cuDNN | 10.0.130 |
| Models | P (%) | R (%) | mAP@0.5 (%) | Time (ms) | Parameters |
|---|---|---|---|---|---|
| YOLOV5s | 90.1 | 88.5 | 91.6 | 2.4 | 7,053,910 |
| YOLOV5m | 89.1 | 88.8 | 91.6 | 3.1 | 21,037,638 |
| YOLOV5l | 90.6 | 87.7 | 90.6 | 3.2 | 46,600,566 |
| YOLOV5x | 89.6 | 88.0 | 91.9 | 4.6 | 87,198,694 |
| Small-YOLOV5s | 88.8 | 90.0 | 92.5 | 1.9 | 1,606,108 |
| Models | P (%) | R (%) | mAP@0.5 (%) | Time (ms) | Parameters |
|---|---|---|---|---|---|
| YOLOV5s | 90.1 | 88.5 | 91.6 | 2.4 | 7,053,910 |
| YOLOV5s-Conv | 90.8 | 87.8 | 92.1 | 2.1 | 1,599,428 |
| YOLOV5s-Conv+CA | 88.8 | 90.0 | 92.5 | 1.9 | 1,606,108 |
| Models | P (%) | R (%) | mAP@0.5 (%) | Time (ms) | Parameters |
|---|---|---|---|---|---|
| YOLOV5s-Conv+SE | 90.2 | 87.8 | 92.0 | 1.7 | 1,608,004 |
| YOLOV5s-Conv+CBAM | 89.5 | 87.8 | 91.0 | 2.0 | 1,608,102 |
| YOLOV5s-Conv+CA | 88.8 | 90.0 | 92.5 | 1.9 | 1,606,108 |
| Models | Precision (%) | Recall (%) | F1 Score (%) |
|---|---|---|---|
| DBNet (RGB) | 93.5 | 94.8 | 94.1 |
| DBNet (Gray) | 94.0 | 96.1 | 95.0 |
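The tables above report P, R, mAP@0.5, and F1 without restating their definitions. Assuming the standard formulations (the paper's Section 5.1 presumably defines them this way, though that text is not reproduced here):

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \, P \, R}{P + R}, \qquad
\mathrm{mAP@0.5} = \frac{1}{N} \sum_{i=1}^{N} AP_i \quad \text{(AP computed at IoU threshold 0.5)}
```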