Research on Cattle Behavior Recognition and Multi-Object Tracking Algorithm Based on YOLO-BoT
<p>Figure 1. Schematic diagram of the cowshed. Camera 1, positioned near the barn entrance, collects behavioral data of the cattle in the blue area; Camera 2, located farther from the entrance, collects behavioral data of the cattle in the red area.</p>
<p>Figure 2. Examples of cattle data in different activity areas: (<b>a</b>) morning scene, (<b>b</b>) well-lit environment, (<b>c</b>) light interference, (<b>d</b>) night scene, (<b>e</b>) outdoor activity area, and (<b>f</b>) indoor activity area. The time in the top-left corner of each image is the capture time.</p>
<p>Figure 3. Analysis of the cattle behavior dataset: (<b>a</b>) distribution of cattle behavior labels, and (<b>b</b>) distribution of cattle count per image.</p>
<p>Figure 4. iRMB structure and C2f-iRMB structure.</p>
<p>Figure 5. ADown downsampling structure.</p>
<p>Figure 6. DyHead structure.</p>
<p>Figure 7. Dynamic convolution. The “*” denotes element-wise multiplication of each convolution output with its attention weight.</p>
<p>Figure 8. The improved YOLOv8n network architecture.</p>
<p>Figure 9. Flowchart of multi-object tracking of cattle.</p>
<p>Figure 10. Schematic of the tracking process leading to object loss due to occlusion: the red solid line denotes the detection box, and the yellow dashed line the predicted box.</p>
<p>Figure 11. Ablation experiment results.</p>
<p>Figure 12. Comparison of cattle instance detection before and after the algorithm improvement. In scenario 1, standing cattle are mistakenly detected as walking; in scenario 2, some behavioral features of lying cattle are missed and walking behavior is detected twice; in scenario 3, some features of walking behavior are missed.</p>
<p>Figure 13. Accuracy curve of the re-identification model.</p>
<p>Figure 14. Comparison of results after replacing the matching distance with DIoU: (<b>a</b>,<b>c</b>) tracking results of the original algorithm; (<b>b</b>,<b>d</b>) tracking results of the improved algorithm. The green circle marks the part of the target extending beyond the detection box, and the red circle marks a detection box containing extra background.</p>
<p>Figure 15. Comparison before and after the tracking algorithm improvement at frames 50, 652, and 916. The white dotted line indicates an untracked object.</p>
<p>Figure 16. Comparison before and after the tracking algorithm improvement at frames 22, 915, and 1504. The white dotted line indicates an untracked object.</p>
<p>Figure 17. Performance comparison of tracking algorithms.</p>
<p>Figure 18. Tracking results of multiple tracking algorithms. White dashed lines indicate untracked objects; red dashed lines indicate incorrectly tracked objects. The time in the top-left corner of each image is the capture time.</p>
<p>Figure 19. Behavioral duration of the herd over one minute, showing the incidence of each behavior (<b>a</b>) and the number of individual cattle (<b>b</b>), then expanded to the entire 10 min video (<b>c</b>) to show behavioral changes in the herd over time.</p>
<p>Figure 20. Time-series statistics for each cow over a one-minute period. Four cattle exhibiting both active and quiet behavior were chosen to demonstrate these variations. The numbers 2, 4, 7, and 10 are the IDs assigned to the selected cattle by the model in the initial frame.</p>
1. Introduction
2. Materials and Methods
2.1. Materials
2.1.1. Data Acquisition
2.1.2. Data Preprocessing
2.2. Cattle Object Detection
2.2.1. YOLOv8 Object Detection
2.2.2. Feature Extraction C2f-iRMB Module
2.2.3. ADown Downsampling
2.2.4. DyHead, a Dynamic Detection Head Based on Attention Mechanism
2.2.5. Dynamic Convolution DyConv
2.2.6. Object Detection Evaluation Metrics
2.3. Cattle Multi-Object Tracking
2.3.1. Multi-Object Tracking Algorithm for Cattle
2.3.2. Improved Tracking Algorithm
2.3.3. Multi-Object Tracking Evaluation Metrics
3. Experimental Results
3.1. Experimental Platform and Parameter Settings
3.2. Analysis of Cattle Testing Results and Accuracy Evaluation
3.2.1. Cross-Validation Experiments
3.2.2. Comparison Experiments of Different Models
3.2.3. Network Improvement Ablation Experiments
3.3. Experiments and Results Analysis of Multi-Object Tracking of Cattle
3.3.1. Results and Analysis of Re-Identification Experiments
3.3.2. Comparison of Effects before and after Improvement of the Algorithm
3.3.3. Comparison of Different Multi-Object Tracking Algorithms
3.3.4. Analysis of the Duration of Long-Term Behavior of Cattle
3.4. Discussion
4. Conclusions
- The improved YOLOv8n algorithm is designed to address uneven cattle distribution, occlusion, significant object-scale changes, and low tracking accuracy caused by frequent identity switches. To handle uneven distribution and scale variation, a dynamic convolution (DyConv) module is integrated into the model’s backbone, and the C2f-iRMB structure is used in the neck to enhance feature characterization while reducing computational complexity. The ADown downsampling module is introduced to enlarge the receptive field for better feature fusion, and the original detection head is replaced with the dynamic detection head (DyHead) to better integrate contextual information and improve multi-scale object detection.
- The proposed enhanced BoTSORT algorithm reclassifies high-confidence detection boxes and eliminates low-scoring ones, improving matching accuracy while reducing false alarms and identity switches. The introduction of the DIoU distance further refines matching accuracy. To address inaccurate trajectory predictions caused by object loss, a virtual trajectory update mechanism minimizes the accumulation of prediction errors. Once the cattle’s activity trajectories are obtained, their long-term behavioral changes are statistically analyzed.
- The experimental results demonstrate that the proposed model achieves a mean average precision (mAP) of 91.7% on the cattle object detection dataset, improving precision (P) by 4.4% and recall (R) by 1% over the original algorithm. Regarding tracking performance, the model achieves a 4.4% improvement in HOTA, a 7% increase in MOTA, a 1.7% rise in MOTP, a 4.3% gain in IDF1, and a 30.9% reduction in identity switches (IDS), while running at 31.2 frames per second (FPS). These results indicate that the proposed method effectively handles multi-object tracking of cattle in complex environments and provides technical support for long-term behavior analysis and non-contact automatic monitoring.
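The backbone modification above centers on dynamic convolution, which replaces a single static kernel with an attention-weighted combination of K parallel kernels. Below is a minimal NumPy sketch of the kernel-aggregation step only; it illustrates the general technique, not the authors' implementation, and the attention logits would in practice come from a small learned attention branch:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dynamic_conv_kernel(kernels, attention_logits):
    """Aggregate K parallel kernels into one input-dependent kernel:
    W = sum_k pi_k * W_k, the core idea of dynamic convolution."""
    pi = softmax(attention_logits)          # attention weights, sum to 1
    # weighted sum over the kernel axis (axis 0 of `kernels`)
    return np.tensordot(pi, kernels, axes=(0, 0))

# toy example: K = 3 candidate kernels, each 3x3
rng = np.random.default_rng(0)
kernels = rng.standard_normal((3, 3, 3))
logits = np.array([2.0, 0.5, -1.0])        # hypothetical attention-branch output
W = dynamic_conv_kernel(kernels, logits)
print(W.shape)                             # (3, 3)
```

Because the weights depend on the input, the effective kernel adapts per image while keeping the cost of a single convolution at inference.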
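The DIoU distance used in the improved matching step penalizes center-point separation in addition to box overlap. A self-contained sketch of the box-level computation follows (the standard DIoU definition; how the cost matrix is assembled inside BoTSORT is an assumption here):

```python
def diou(box_a, box_b):
    """DIoU between two boxes in (x1, y1, x2, y2) form:
    DIoU = IoU - (center distance)^2 / (enclosing-box diagonal)^2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0
    # squared distance between box centers
    rho2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
         + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    # squared diagonal of the smallest enclosing box
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 \
       + (max(ay2, by2) - min(ay1, by1)) ** 2
    return iou - (rho2 / c2 if c2 > 0 else 0.0)

# association cost between a predicted and a detected box: lower = better match
cost = 1.0 - diou([0, 0, 10, 10], [2, 2, 12, 12])
```

Unlike plain IoU, DIoU still yields a meaningful gradient of "closeness" for non-overlapping boxes, which is what tightens the matching of predicted and detected boxes in the improved tracker.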
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
No. | Weather | Period | Sparse | Dense | Interference Factors |
---|---|---|---|---|---|
01 | Cloudy | Sunrise | √ | | Wall shading, low light, cattle occlusion (medium) |
02 | Sunny | Sunrise | √ | | Moderate light, manual grass-pushing disturbance, cattle occlusion (medium) |
03 | Sunny | Morning | √ | | Manual grass-pushing disturbance, cattle occlusion (light) |
04 | Cloudy | Evening | √ | | Walls, behavioral changes, cattle occlusion (medium) |
05 | Sunny | Night (with lights) | √ | | Strong light, cattle occlusion (light) |
06 | Sunny | Night (no lights) | √ | | Low light, slight camera wobble, cattle occlusion (light) |
07 | Sunny | Evening | | √ | Varying behavior, cattle occlusion (heavy) |
08 | Cloudy | Morning | | √ | Varying behavior, cattle occlusion (heavy) |
09 | Sunny | Night (with lights) | | √ | Strong light, cattle occlusion (heavy) |
10 | Cloudy | Night (no lights) | | √ | Low light, cattle occlusion (heavy) |
Fold Number | P/% | R/% | mAP/% | F1/% |
---|---|---|---|---|
Fold 1 | 87.0 | 92.0 | 93.1 | 89.4 |
Fold 2 | 89.6 | 88.9 | 93.4 | 89.2 |
Fold 3 | 92.0 | 87.5 | 92.0 | 89.7 |
Fold 4 | 91.2 | 86.9 | 92.1 | 89.0 |
Fold 5 | 90.6 | 86.9 | 90.9 | 88.7 |
Average | 90.1 | 88.4 | 92.3 | 89.2 |
Standard deviation | 1.7 | 1.9 | 0.9 | 0.3 |
Dataset | P/% | R/% | mAP/% | F1/% |
---|---|---|---|---|
Fold 2 | 89.6 | 88.9 | 93.4 | 89.2 |
Test Set | 90.9 | 84.7 | 91.7 | 87.7 |
Models | P/% | R/% | mAP/% | FPS | Params/10⁶ | GFLOPs | Model Size/MB |
---|---|---|---|---|---|---|---|
YOLOv3-tiny | 88.8 | 82.1 | 89.9 | 123.4 | 12.1 | 18.9 | 24.4 |
YOLOv5n | 87.0 | 83.0 | 90.0 | 74.6 | 2.5 | 7.1 | 5.3 |
YOLOv6n | 88.1 | 85.4 | 90.7 | 83.3 | 4.2 | 11.8 | 8.7 |
YOLOv7-tiny | 88.6 | 87.5 | 91.3 | 27.8 | 6.0 | 13.3 | 12.3 |
RTDETR-r18 | 90.1 | 88.4 | 90.0 | 36.2 | 19.8 | 57.0 | 40.5 |
YOLOv8n | 86.5 | 83.7 | 90.2 | 76.9 | 3.0 | 8.1 | 6.2 |
YOLOv9t | 82.8 | 84.8 | 90.7 | 35.3 | 1.8 | 7.1 | 4.5 |
YOLOv10n | 87.0 | 79.3 | 88.3 | 48.3 | 2.7 | 8.2 | 5.5 |
Improved YOLOv8n | 90.9 | 84.7 | 91.7 | 55.1 | 3.4 | 7.5 | 7.2 |
Models | mAP/% | AP/% (Feeding) | AP/% (Drinking) | AP/% (Standing) | AP/% (Lying) | AP/% (Walking) | AP/% (Climbing) | AP/% (Fighting) |
---|---|---|---|---|---|---|---|---|
YOLOv8n | 90.2 | 85.8 | 94.6 | 83.3 | 92.4 | 82.2 | 97.0 | 96.2 |
YOLOv8n + C2f-DyConv | 90.5 | 83.8 | 94.5 | 83.3 | 90.2 | 83.3 | 99.5 | 99.5 |
YOLOv8n + C2f-iRMB | 90.5 | 93.4 | 94.1 | 84.5 | 90.0 | 84.3 | 97.5 | 99.5 |
YOLOv8n + DyHead | 91.3 | 85.6 | 96.6 | 85.0 | 89.3 | 83.9 | 99.4 | 99.5 |
YOLOv8n + ADown | 91.2 | 86.5 | 94.4 | 85.3 | 90.4 | 85.4 | 97.1 | 99.5 |
Improved YOLOv8n | 91.7 | 87.4 | 95.5 | 85.1 | 88.4 | 86.8 | 99.1 | 99.5 |
No. | HOTA/% (↑) | MOTA/% (↑) | MOTP/% (↑) | IDF1/% (↑) | MTR/% (↑) | MLR/% (↓) | IDS (↓) | FPS(f/s) (↑) |
---|---|---|---|---|---|---|---|---|
01 | 66.4 | 69.2 | 85.2 | 79.5 | 43.5 | 21.7 | 10 | 30.8 |
02 | 83.4 | 96.5 | 84.9 | 97.9 | 90 | 0 | 1 | 32.3 |
03 | 80.9 | 85.5 | 89.5 | 92.2 | 85.7 | 14.3 | 0 | 31.1 |
04 | 47.6 | 53.1 | 74.6 | 69.4 | 44.4 | 22.2 | 0 | 31.8 |
05 | 81.7 | 88.7 | 87.3 | 94.7 | 85.7 | 0 | 0 | 31.4 |
06 | 83.6 | 96.5 | 84.9 | 98.2 | 90 | 0 | 0 | 29.9 |
07 | 74.5 | 84.3 | 83.1 | 88.3 | 69.0 | 3.4 | 44 | 33.4 |
08 | 71.1 | 76.9 | 83.8 | 86.1 | 72.7 | 13.6 | 2 | 29.4 |
09 | 70.9 | 77.6 | 84.6 | 84.3 | 70.8 | 25 | 18 | 30.7 |
10 | 70.7 | 75.3 | 85.8 | 83.9 | 73.7 | 26.3 | 3 | 30.8 |
Overall | 73.1 | 80.4 | 84.4 | 87.5 | 72.6 | 12.7 | 78 | 31.2 |
Models (▲ denotes Improved YOLOv8n + BoTSORT) | HOTA/% (↑) | MOTA/% (↑) | MOTP/% (↑) | IDF1/% (↑) | MTR/% (↑) | MLR/% (↓) | IDS (↓) | FPS(f/s) (↑) |
---|---|---|---|---|---|---|---|---|
▲ + IoU | 72.9 | 80.1 | 82.4 | 87.2 | 72.0 | 12.2 | 82 | 33.7 |
▲ + BIoU | 72.9 | 80.1 | 82.5 | 86.9 | 72.5 | 11.9 | 82 | 33.2 |
▲ + GIoU | 73.0 | 80.2 | 83.1 | 87.5 | 72.2 | 12.7 | 79 | 31.7 |
▲ + DIoU | 73.1 | 80.4 | 84.4 | 87.5 | 72.6 | 12.7 | 78 | 31.2 |
Models | HOTA/% (↑) | MOTA/% (↑) | MOTP/% (↑) | IDF1/% (↑) | MTR/% (↑) | MLR/% (↓) | IDS (↓) | FPS(f/s) (↑) |
---|---|---|---|---|---|---|---|---|
YOLOv8n + BoTSORT | 68.7 | 73.4 | 82.7 | 83.2 | 66.3 | 16.8 | 113 | 63.9 |
Improved YOLOv8n + BoTSORT | 72.9 | 80.1 | 82.4 | 87.2 | 72.0 | 12.2 | 82 | 33.7 |
YOLO-BoT | 73.1 | 80.4 | 84.4 | 87.5 | 72.6 | 12.7 | 78 | 31.2 |
Models | HOTA/% (↑) | MOTA/% (↑) | MOTP/% (↑) | IDF1/% (↑) | MTR/% (↑) | MLR/% (↓) | IDS (↓) | FPS(f/s) (↑) |
---|---|---|---|---|---|---|---|---|
YOLOv8n + ByteTrack | 68.1 | 72.2 | 82.6 | 81.8 | 65.9 | 20 | 104 | 57.2 |
YOLOv8n + BoTSORT | 68.7 | 73.4 | 82.7 | 83.2 | 66.3 | 16.8 | 113 | 63.9 |
YOLOv8n + StrongSORT | 67.7 | 72.2 | 82.7 | 80.8 | 65.5 | 21.8 | 258 | 44.7 |
YOLOv8n + OCSORT | 67.4 | 72.2 | 82.6 | 80.1 | 65.5 | 19.7 | 476 | 45.0 |
YOLOv8n + DeepOCSORT | 66.8 | 72.2 | 82.6 | 79.1 | 65.5 | 19.7 | 428 | 50.6 |
YOLOv8n + C-BIoU Tracker | 67.0 | 70.5 | 82.8 | 81.9 | 64.8 | 22.8 | 64 | 11.8 |
YOLO-BoT | 73.1 | 80.4 | 84.4 | 87.5 | 72.6 | 12.7 | 78 | 31.2 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tong, L.; Fang, J.; Wang, X.; Zhao, Y. Research on Cattle Behavior Recognition and Multi-Object Tracking Algorithm Based on YOLO-BoT. Animals 2024, 14, 2993. https://doi.org/10.3390/ani14202993