

Search Results (4)

Search Parameters:
Keywords = ear tag recognition

17 pages, 8563 KiB  
Article
Research on the Vision-Based Dairy Cow Ear Tag Recognition Method
by Tianhong Gao, Daoerji Fan, Huijuan Wu, Xiangzhong Chen, Shihao Song, Yuxin Sun and Jia Tian
Sensors 2024, 24(7), 2194; https://doi.org/10.3390/s24072194 - 29 Mar 2024
Cited by 1 | Viewed by 1413
Abstract
With the increase in the scale of breeding at modern pastures, the management of dairy cows has become much more challenging, and individual recognition is key to the implementation of precision farming. Motivated by the need for low-cost, accurate herd management and for non-stressful, non-invasive individual recognition, we propose a vision-based automatic recognition method for dairy cow ear tags. First, the lightweight Small-YOLOV5s is proposed for detecting cow ear tags; a differentiable binarization network (DBNet) combined with a convolutional recurrent neural network (CRNN) is then used to recognize the numbers on the ear tags. The experimental results demonstrated notable improvements: compared with YOLOV5s, Small-YOLOV5s improved recall by 1.5%, increased mean average precision by 0.9%, reduced the number of model parameters by 5,447,802, and shortened the average prediction time for a single image by 0.5 ms. The final accuracy of ear tag number recognition was 92.1%. Moreover, this study introduces two standardized experimental datasets specifically designed for dairy cow ear tag detection and recognition. These datasets will be made freely available to researchers in the global dairy cattle community with the intention of fostering intelligent advancements in the breeding industry.
(This article belongs to the Section Smart Agriculture)
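The paper's two-stage design (a lightweight detector first, then scene-text recognition on the tag crop) can be prototyped from off-the-shelf parts. Below is a minimal sketch, assuming stock YOLOv5 loaded through torch.hub with hypothetical custom weights ("ear_tag_weights.pt" is a placeholder, not the paper's model) and PaddleOCR, whose default pipeline pairs a DBNet detector with a CRNN recognizer; this is a stand-in for the authors' Small-YOLOV5s and custom-trained recognition models, not their released code.

```python
# Minimal two-stage sketch: detect ear tags, then read the digits.
# "ear_tag_weights.pt" and "cow.jpg" are hypothetical placeholders.
import cv2
import torch
from paddleocr import PaddleOCR

detector = torch.hub.load("ultralytics/yolov5", "custom", path="ear_tag_weights.pt")
reader = PaddleOCR(lang="en")  # default pipeline: DBNet detector + CRNN recognizer

image = cv2.imread("cow.jpg")
detections = detector(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # YOLOv5 expects RGB
for x1, y1, x2, y2, conf, cls in detections.xyxy[0].tolist():
    tag_crop = image[int(y1):int(y2), int(x1):int(x2)]   # crop each detected tag
    result = reader.ocr(tag_crop)
    for line in (result[0] or []):   # result[0] is None when nothing is found
        text, score = line[1]
        print(f"ear tag: {text} (confidence {score:.2f})")
```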
Figures:
Figure 1: Some samples of data from CEID-D. Capture angles: frontal, lateral, and overhead views of cows. Weather conditions during shooting: overcast and sunny days. Captured cow poses: standing, feeding, and lying down.
Figure 2: Ear tag image quality assessment.
Figure 3: Preprocessing of ear tag images. From left to right: the original ear tag, the ear tag after bilateral filtering, after edge sharpening, and after grayscaling.
Figure 4: Ear tag images annotated with Paddlelabel.
Figure 5: Technology roadmap.
Figure 6: The structure of YOLOV5s.
Figure 7: The structure of Small-YOLOV5s.
Figure 8: The structure of CA.
Figure 9: The structure of DBNet.
Figure 10: The structure of the CRNN.
Figure 11: Comparison of cow ear tag detection results. (a) Results of ear tag detection using the color threshold method, with the original image on the left and the detection results on the right. (b,c) Detection results of cow ear tags in different scenarios using Small-YOLOV5s.
Figure 12: Loss decay and recognition accuracy in CRNN training.
18 pages, 6634 KiB  
Article
A Large Benchmark Dataset for Individual Sheep Face Recognition
by Yue Pang, Wenbo Yu, Chuanzhong Xuan, Yongan Zhang and Pei Wu
Agriculture 2023, 13(9), 1718; https://doi.org/10.3390/agriculture13091718 - 30 Aug 2023
Viewed by 1413
Abstract
The mutton sheep breeding industry has transformed significantly in recent years, from traditional grassland free-range farming to a more intelligent approach. As a result, automated sheep face recognition systems have become vital to modern breeding practices and have gradually replaced ear tagging and other manual tracking techniques. Although sheep face datasets have been introduced in previous studies, they have often involved pose or background restrictions (e.g., fixing of the subject's head, cleaning of the face), which complicate data collection and have limited the size of available sample sets. Consequently, a comprehensive benchmark designed exclusively for evaluating individual sheep recognition algorithms has been lacking. To address this issue, this study developed a large-scale benchmark dataset, Sheepface-107, comprising 5350 images acquired from 107 different subjects. Images were collected from each sheep at multiple angles, including front and back views, in a diverse collection that provides a more comprehensive representation of facial features. In addition to the dataset, an assessment protocol was developed by applying multiple evaluation metrics to the results produced by three different deep learning models: VGG16, GoogLeNet, and ResNet50, which achieved F1-scores of 83.79%, 89.11%, and 93.44%, respectively. A statistical analysis of each algorithm suggested that accuracy and the number of parameters were the most informative metrics for evaluating recognition performance.
(This article belongs to the Section Digital Agriculture)
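Since the benchmark's assessment protocol reduces to standard multi-class classification metrics over the 107 identities, a minimal sketch of that computation with scikit-learn may be useful; the labels below are placeholders, not the paper's actual predictions:

```python
# Accuracy and macro-averaged F1 over sheep identities (placeholder labels).
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 1]   # ground-truth identity labels (placeholder)
y_pred = [0, 1, 2, 1, 1]   # model predictions (placeholder)

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```

The other metric the study weighs, parameter count, can be read off a PyTorch model with `sum(p.numel() for p in model.parameters())`.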
Figures:
Figure 1: An illustration of posture determination using images from an overhead camera. An ellipse was fitted to the body of each sheep using histogram segmentation and used to determine when the sheep was oriented in a favorable position, at which point either the left or right camera would activate. This approach avoids the need to restrict subject poses, as has often been done in previous studies. A and B denote the two areas into which the ellipse fitted to the sheep's back was divided, with the middle horizontal line as the boundary.
Figure 2: A still frame collected by the mounted cameras. This perspective (<30°) was empirically determined to provide favorable viewing angles.
Figure 3: The structure of the fixed fence channel. Labels include (1) the cameras, (2) the fixed fence aisle, (3) the fence channel entrance, and (4) the fence channel exit.
Figure 4: Debugging the scene of the non-contact sheep face detection channel.
Figure 5: An example of images collected from the 23rd Dupo sheep.
Figure 6: A comparison of images before and after data enhancement, including (a) the original sheep face image and (b) sample results after augmentation.
Figure 7: The VGG16 network architecture.
Figure 8: The Inception module.
Figure 9: The residual structure.
Figure 10: An example of a characteristic feature map for a sheep face model.
Figure 11: The output of individual network layers applied to specific facial features. The highlighted regions demonstrate a focus on the eyes, ears, nose, and mouth.
Figure 12: A comparison of training and test accuracy for the three networks.
24 pages, 13569 KiB  
Article
YOLOv5-KCB: A New Method for Individual Pig Detection Using Optimized K-Means, CA Attention Mechanism and a Bi-Directional Feature Pyramid Network
by Guangbo Li, Guolong Shi and Jun Jiao
Sensors 2023, 23(11), 5242; https://doi.org/10.3390/s23115242 - 31 May 2023
Cited by 11 | Viewed by 2527
Abstract
Individual identification of pigs is a critical component of intelligent pig farming. Traditional pig ear-tagging requires significant human resources and suffers from issues such as difficulty in recognition and low accuracy. This paper proposes the YOLOv5-KCB algorithm for non-invasive identification of individual pigs. Specifically, the algorithm utilizes two datasets (pig faces and pig necks), which are divided into nine categories. Following data augmentation, the total sample size reached 19,680. The distance metric used for K-means anchor clustering is changed from the original Euclidean distance to 1 − IoU, which improves how well the model's anchor boxes fit the targets. Furthermore, the algorithm introduces the SE, CBAM, and CA attention mechanisms, with the CA attention mechanism selected for its superior feature extraction. Finally, CARAFE, ASFF, and BiFPN are compared for feature fusion, with BiFPN selected for its superior ability to improve the algorithm's detection performance. The experimental results indicate that the YOLOv5-KCB algorithm achieved the highest accuracy in individual pig recognition, surpassing all other improved algorithms in average accuracy (IoU = 0.5). The accuracy of pig head and neck recognition was 98.4%, and that of pig face recognition was 95.1%, improvements of 4.8% and 13.8%, respectively, over the original YOLOv5 algorithm. Notably, the average accuracy of identifying the pig head and neck was consistently higher than that of pig face recognition across all algorithms, with YOLOv5-KCB showing a 2.9% advantage. These results emphasize the potential of the YOLOv5-KCB algorithm for precise individual pig identification, facilitating subsequent intelligent management practices.
(This article belongs to the Special Issue Intelligent Sensing and Machine Vision in Precision Agriculture)
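The 1 − IoU clustering distance mentioned in the abstract treats two boxes as close when their shapes overlap well, rather than when their width/height coordinates are near in Euclidean terms. A minimal sketch of the distance matrix used in this style of anchor clustering (a common YOLO formulation; the paper's exact implementation may differ):

```python
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N,2) box width/height pairs and (K,2) anchor pairs,
    assuming all boxes share a common top-left corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

boxes = np.array([[12.0, 20.0], [40.0, 36.0]])     # ground-truth w/h (example)
anchors = np.array([[10.0, 16.0], [42.0, 40.0]])   # candidate anchors (example)
distance = 1.0 - iou_wh(boxes, anchors)  # K-means assigns each box to argmin
print(distance)
```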
Figures:
Figure 1: Acquisition flow chart.
Figure 2: The quantity of similar images removed at different SSIM thresholds.
Figure 3: Examples of annotated images of live pig faces and necks, collected from different angles.
Figure 4: Example comparisons between the features of faces and necks of live pigs.
Figure 5: Data enhancement legend.
Figure 6: Distribution map of examples of pig categories.
Figure 7: YOLOv5 network structure.
Figure 8: YOLOv5-KCB network structure.
Figure 9: Coordinate attention mechanism.
Figure 10: Introduction of the CA attention module.
Figure 11: Feature-fusion graph of PANet (a) and BiFPN (b).
Figure 12: BiFPN feature-fusion module process.
Figure 13: Improved BiFPN feature-fusion module.
Figure 14: Increase in recognition performance for pig necks compared to pig faces.
Figure 15: Loss curve.
Figure 16: Detection of a single pig in a pen.
Figure 17: Detection of multiple pigs in a pen.
Figure 18: Detection of dense and distant live pigs.
18 pages, 19377 KiB  
Article
Automatic Cow Location Tracking System Using Ear Tag Visual Analysis
by Thi Thi Zin, Moe Zet Pwint, Pann Thinzar Seint, Shin Thant, Shuhei Misawa, Kosuke Sumi and Kyohiro Yoshida
Sensors 2020, 20(12), 3564; https://doi.org/10.3390/s20123564 - 23 Jun 2020
Cited by 29 | Viewed by 13820
Abstract
Nowadays, for numerous reasons, smart farming systems focus on the use of image processing technologies and 5G communications. In this paper, we propose a tracking system for individual cows based on ear tag visual analysis. By using ear tags, farmers can track data specific to each cow, such as body condition score, genetic abnormalities, etc. Specifically, a four-digit identification number is used, so a farm can accommodate up to 9999 cows. In our proposed system, we develop an individual cow tracker to provide effective management with real-time updating enforcement. For this purpose, head detection is first carried out to determine the cow's position in the relevant camera view. The head detection process incorporates an object detector called You Only Look Once (YOLO) and is followed by ear tag detection. The steps involved in ear tag recognition are (1) finding the four-digit area, (2) digit segmentation using image processing techniques, and (3) digit recognition using a convolutional neural network (CNN) classifier. Finally, a location-searching system for individual cows is established by entering ID numbers through the application's user interface. The proposed searching system was confirmed by real-time experiments at a feeding station on a farm in Hokkaido prefecture, Japan. In combination with our decision-making process, the proposed system achieved an accuracy of 100% for head detection and 92.5% for ear tag digit recognition. These results demonstrate the system's effectiveness.
(This article belongs to the Special Issue Advanced Sensors in Agriculture)
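The preprocessing chain feeding digit segmentation (grayscale, inversion, histogram equalization, binarization; see Figures 9 and 16 below) maps directly onto basic OpenCV calls. A minimal sketch of those steps on a cropped tag image; Otsu thresholding is an assumption here, as the abstract does not name the binarization rule:

```python
import cv2

tag = cv2.imread("ear_tag_crop.jpg")              # cropped ear tag region (placeholder path)
gray = cv2.cvtColor(tag, cv2.COLOR_BGR2GRAY)      # (b) grayscale image
inverted = cv2.bitwise_not(gray)                  # (c) inverted image
equalized = cv2.equalizeHist(inverted)            # (d) histogram-equalized image
_, binary = cv2.threshold(equalized, 0, 255,      # (e) binarized image;
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu is an assumption
```

Connected components in the binarized image would then be filtered by size to isolate the four digit blobs before CNN classification.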
Figures:
Figure 1: A sample ear tag attached to a cow's ear in Japan.
Figure 2: The architecture of the proposed system.
Figure 3: (a) Flowchart of cow head detection; (b) detected and localized cow heads.
Figure 4: Ear tag region detection from the head image.
Figure 5: Sample ear tag images after ear tag extraction.
Figure 6: Ear tag filtering.
Figure 7: Normalization of the ear tag image.
Figure 8: Illustration of skew image correction.
Figure 9: Illustration of step-by-step preprocessing: (a) original red, green, and blue (RGB) image, (b) grayscale image, (c) inverted image, (d) histogram-equalized image, and (e) binarized image.
Figure 10: Removal of unnecessary borders: (a) preprocessed image with two horizontal crop lines, (b) horizontally cropped original image, (c) horizontally cropped preprocessed image with two vertical crop lines, (d) unnecessary border removed from the original image, and (e) preprocessed image with unnecessary borders removed.
Figure 11: Barcode area detection process.
Figure 12: Determination of the digit start point: (a) correctly taken as a start point; (b) not taken as a start point.
Figure 13: Digit area start point calculation.
Figure 14: Segmentation of individual digits.
Figure 15: Digit object determination: (a) elimination of mini digits and (b) division of closed digits.
Figure 16: Step-by-step preprocessing of training data: (a) manually cropped RGB image, (b) grayscale image, (c) complemented image, (d) histogram-equalized image, and (e) resized image.
Figure 17: Checklists using the three types of ground-truth data: (a) four-digit list, (b) one-digit list, and (c) three-digit list.
Figure 18: Cutting process for ear tags with a length of more than '3'.
Figure 19: System flow for ear tag confirmation.
Figure 20: Design of the user interface for the proposed system.
Figure 21: Finished process view of the system.
Figure 22: Design of the search system user interface.
Figure 23: The search system form with detailed information.