Detection of Camellia oleifera Fruit in Complex Scenes by Using YOLOv7 and Data Augmentation
Figure 1. Examples of *Camellia oleifera* fruit images acquired under different conditions: (a) slight occlusion; (b) heavy occlusion; (c) overlapped *Camellia oleifera* fruit; (d) natural light angle; (e) sidelight angle; (f) backlight angle.
Figure 2. Examples of labeled *Camellia oleifera* fruit images.
Figure 3. Distribution of the augmented training set.
Figure 4. Architecture of the YOLOv7 network.
Figure 5. Structure of each module: (a) ELAN module; (b) RepConv module; (c) SPPCSPC module.
Figure 6. Workflow of the proposed study.
Figure 7. Visualisation of the CIoU calculation between the model prediction box and the ground truth; the yellow box is the calibration box and the blue box is the prediction box (see the CIoU sketch below).
Figure 8. Training and validation loss.
Figure 9. Examples of the detection results of the four network models in a variety of complex scenes in the test set: (a) sidelight angle; (b) backlight angle; (c) slight occlusion; (d) heavy occlusion.
Figure 10. Examples of the detection results of the YOLOv7 and DA-YOLOv7 models: (a) sidelight angle; (b) backlight angle; (c) slight occlusion; (d) heavy occlusion.
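Figure 7 illustrates the CIoU term used for bounding-box regression (Zheng et al., 2019). The following is a minimal sketch of that calculation for two axis-aligned boxes in (x1, y1, x2, y2) pixel coordinates; the function name and the example box values are hypothetical and the code is not taken from the authors' implementation.

```python
import math

def ciou(box_p, box_g):
    """Complete IoU (CIoU) between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2(b_p, b_g) / c^2 - alpha * v, where rho is the distance
    between the box centres, c is the diagonal of the smallest enclosing box
    and v measures aspect-ratio inconsistency. Illustrative sketch only.
    """
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # Intersection and union areas.
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union

    # Squared distance between box centres.
    rho2 = ((px1 + px2) / 2 - (gx1 + gx2) / 2) ** 2 + \
           ((py1 + py2) / 2 - (gy1 + gy2) / 2) ** 2
    # Squared diagonal of the smallest enclosing box.
    c2 = (max(px2, gx2) - min(px1, gx1)) ** 2 + \
         (max(py2, gy2) - min(py1, gy1)) ** 2

    # Aspect-ratio consistency term and its trade-off weight.
    v = (4 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1)) -
                              math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / (1 - iou + v + 1e-7)

    return iou - rho2 / c2 - alpha * v

# Hypothetical prediction (blue) and calibration/ground-truth (yellow) boxes.
print(ciou((50, 60, 150, 160), (55, 70, 160, 175)))
```

The corresponding regression loss would be 1 - CIoU, which penalises poor overlap, centre offset and aspect-ratio mismatch simultaneously.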
Abstract
1. Introduction
2. Materials and Methods
2.1. Acquisition of Camellia oleifera Fruit Images
2.2. Image Preprocessing and Dataset Partitioning
2.3. Data Augmentation
2.4. YOLOv7 Network Architecture
2.5. Training Platform and Parameter Settings
2.6. Establishment and Evaluation Indicators of Model
2.6.1. Establishment of Model
2.6.2. Evaluation Indicators of Model
3. Results and Discussion
3.1. Dataset Training of YOLOv7
3.2. Comparison of Models
3.3. Influence of Data Augmentation
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Target Detection Network | mAP (%) | Precision (%) | Recall (%) | F1 Score (%) | Average Detection Speed (s/image) |
|---|---|---|---|---|---|
| Faster R-CNN | 91.50 | 59.35 | 93.59 | 73.00 | 5.167 |
| YOLOv3-spp | 84.30 | 89.70 | 87.50 | 86.90 | 0.072 |
| YOLOv5s | 94.76 | 93.81 | 89.18 | 91.44 | 0.054 |
| YOLOv7 | 95.74 | 94.21 | 93.13 | 93.67 | 0.025 |
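The table above reports mAP, precision, recall, F1 score and average detection speed for the four detectors. As a reminder of how the threshold-based indicators relate to matched-box counts, the short sketch below applies the standard definitions precision = TP/(TP+FP), recall = TP/(TP+FN) and F1 = 2PR/(P+R); the function name and the counts passed in are hypothetical and this is not the authors' evaluation script.

```python
def detection_metrics(tp: int, fp: int, fn: int) -> dict:
    """Standard single-class detection indicators from matched-box counts.

    tp: predictions matched to a ground-truth box (right objects)
    fp: predictions with no matching ground truth (wrong objects)
    fn: ground-truth boxes with no matching prediction (missed objects)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts, not the operating point used for the table above.
print(detection_metrics(tp=90, fp=10, fn=5))
```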
| Object Count | Number of Actual Objects | Faster R-CNN | YOLOv3-spp | YOLOv5s | YOLOv7 |
|---|---|---|---|---|---|
| Number of detected objects | 1401 | 1426 | 1437 | 1577 | 1588 |
| Number of right objects | 1401 | 1256 | 1081 | 1308 | 1327 |
| Number of wrong objects | 0 | 170 | 356 | 269 | 261 |
| Number of missed objects | 0 | 145 | 320 | 93 | 74 |
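The table above breaks the detections down into right, wrong and missed objects against the 1401 annotated fruits in the test set. The matching rule behind these counts is not spelled out in this excerpt; the sketch below assumes a common convention, greedy one-to-one matching of predictions to ground-truth boxes at an IoU threshold of 0.5, and should be read as an illustration rather than the authors' protocol.

```python
def count_detections(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes.

    preds, gts: lists of (x1, y1, x2, y2) boxes for a single image and class.
    Returns (right, wrong, missed) counts; the matching rule is an assumption,
    not the authors' documented procedure.
    """
    def iou(a, b):
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = ((a[2] - a[0]) * (a[3] - a[1]) +
                 (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    unused = list(range(len(gts)))   # ground-truth boxes not yet matched
    right = 0
    for p in preds:
        best_j, best_iou = None, iou_thr
        for j in unused:
            score = iou(p, gts[j])
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j is not None:
            right += 1
            unused.remove(best_j)
    wrong = len(preds) - right       # predictions with no match
    missed = len(unused)             # ground truths never matched
    return right, wrong, missed
```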
| Model | mAP (%) | Precision (%) | Recall (%) | F1 Score (%) | Detected Objects | Right Objects | Wrong Objects | Missed Objects |
|---|---|---|---|---|---|---|---|---|
| YOLOv7 | 95.74 | 94.21 | 93.13 | 93.67 | 1588 | 1327 | 261 | 74 |
| DA-YOLOv7 | 96.03 | 94.76 | 95.54 | 95.15 | 1560 | 1340 | 220 | 61 |
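The table above shows that DA-YOLOv7, the YOLOv7 model trained with the data augmentation of Section 2.3, improves on the baseline in every indicator. The exact augmentation pipeline is not reproduced in this excerpt; the sketch below illustrates typical offline augmentations for orchard images (horizontal flip, brightness/contrast change, Gaussian noise) with OpenCV and NumPy, and both the operations and their parameters are assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return a few augmented variants of a BGR image.

    Illustrative choices only; the augmentations, parameters and their
    combination used for DA-YOLOv7 may differ.
    """
    variants = []

    # Horizontal flip (box labels would need their x-coordinates mirrored).
    variants.append(cv2.flip(image, 1))

    # Brightness/contrast changes to mimic sidelight and backlight scenes.
    variants.append(cv2.convertScaleAbs(image, alpha=1.2, beta=15))
    variants.append(cv2.convertScaleAbs(image, alpha=0.8, beta=-15))

    # Additive Gaussian noise.
    noise = np.random.normal(0, 12, image.shape).astype(np.float32)
    noisy = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    variants.append(noisy)

    return variants

# Usage: img = cv2.imread("camellia.jpg"); augmented = augment(img)
```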
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wu, D.; Jiang, S.; Zhao, E.; Liu, Y.; Zhu, H.; Wang, W.; Wang, R. Detection of Camellia oleifera Fruit in Complex Scenes by Using YOLOv7 and Data Augmentation. Appl. Sci. 2022, 12, 11318. https://doi.org/10.3390/app122211318
Wu D, Jiang S, Zhao E, Liu Y, Zhu H, Wang W, Wang R. Detection of Camellia oleifera Fruit in Complex Scenes by Using YOLOv7 and Data Augmentation. Applied Sciences. 2022; 12(22):11318. https://doi.org/10.3390/app122211318
Chicago/Turabian Style: Wu, Delin, Shan Jiang, Enlong Zhao, Yilin Liu, Hongchun Zhu, Weiwei Wang, and Rongyan Wang. 2022. "Detection of Camellia oleifera Fruit in Complex Scenes by Using YOLOv7 and Data Augmentation" Applied Sciences 12, no. 22: 11318. https://doi.org/10.3390/app122211318
APA Style: Wu, D., Jiang, S., Zhao, E., Liu, Y., Zhu, H., Wang, W., & Wang, R. (2022). Detection of Camellia oleifera Fruit in Complex Scenes by Using YOLOv7 and Data Augmentation. Applied Sciences, 12(22), 11318. https://doi.org/10.3390/app122211318