Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot
Figure 1. Intersection over Union for fruit detection. (a) Ground-truth fruit bounding box and detected fruit bounding box; (b) intersection; (c) union.
Figure 2. Examples of apple detection results using You Only Look Once (YOLOv3) without pre- and post-processing.
Figure 3. Detecting apples with shadows, glare, and overlapping leaves by YOLOv3 without pre-processing (a) and with pre-processing (b).
Figure 4. Detecting apples in images with backlight by YOLOv3 without pre-processing (a) and with pre-processing (b).
Figure 5. Examples of detected apples with dark spots and overlapping thin branches.
Figure 6. Examples of yellow leaves and gaps between leaves mistaken for apples.
Figure 7. Examples of red apple detection.
Figure 8. Examples of green apple detection.
Figure 9. Two (a) and four (b) apples detected in far-view canopy images without pre-processing.
Figure 10. Fifty-seven (a) and 48 (b) apples found in far-view canopy images after pre-processing.
Figure 11. Precision–Recall curve.
Figure 12. Examples of partial detection of apples in clusters.
Figure 13. Detecting oranges in images by YOLOv3 without pre-processing (a) and with pre-processing (b).
Figure 14. Detecting tomatoes in images by YOLOv3 without pre-processing (a) and with pre-processing (b).
Abstract
1. Introduction
1.1. Color-Based Fruit Detection Techniques
1.2. Shape-Based Fruit Detection Techniques
1.3. Texture-Based Fruit Detection Techniques
1.4. Early Stage of Using Machine Learning Algorithms for Fruit Detection
1.5. Using Deep Neural Networks for Fruit Detection
2. Materials and Methods
2.1. Apple Harvesting Robot Design
2.2. Image Acquisition
- 553 far-view canopy images (4365 apples in total, 7.89 apples per image on average);
- 274 close-up images (533 apples in total, 1.95 apples per image on average).
2.3. Apple Detection Quality Evaluation
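Detection quality is evaluated with Intersection over Union (IoU, Figure 1) alongside precision and recall. A minimal sketch of IoU for axis-aligned boxes (the corner-coordinate box format and the example threshold are illustrative assumptions, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes.

    Boxes are given as (x_min, y_min, x_max, y_max); this corner
    format is an assumption for illustration.
    """
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    # Union = sum of the two areas minus the doubly counted intersection
    return inter / (area_a + area_b - inter)

# Half-overlapping unit-height boxes: intersection 50, union 150
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

A detection is then typically counted as correct when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice).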
2.4. Using YOLOv3 without Pre- and Post-Processing for Apple Detection
2.5. Basic Pre- and Post-Processing of Images for YOLOv3-Based Apple Detection Efficiency Improvement
- increasing contrast by applying histogram normalization and contrast-limited adaptive histogram equalization (CLAHE) [73] with a 4 × 4 grid size and the clip limit set to 3;
- slight blurring by applying a median filter with a 3 × 3 kernel;
- thickening of borders by applying morphological opening with a flat 5 × 5 square structuring element.
- backlight;
- presence of dark spots on apples and/or noticeable perianths;
- empty gaps between leaves, which the network mistook for small apples;
- the closeness of the green apples' shade to the shade of the leaves;
- apples overlapped by other apples, branches, and leaves.
2.6. Special Pre-Processing for Detecting Apples in Far-View Canopy Images
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Bechar, A.; Vigneault, C. Agricultural robots for field operations: Concepts and components. Biosyst. Eng. 2016, 149, 94–111. [Google Scholar] [CrossRef]
- Sistler, F.E. Robotics and intelligent machines in agriculture. IEEE J. Robot. Autom. 1987, 3, 3–6. [Google Scholar] [CrossRef]
- Ceres, R.; Pons, J.; Jiménez, A.; Martín, J.; Calderón, L. Design and implementation of an aided fruit-harvesting robot (Agribot). Industrial Robot. 1998, 25, 337–346. [Google Scholar] [CrossRef]
- Edan, Y.; Han, S.F.; Kondo, N. Automation in Agriculture. In Springer Handbook of Automation; Springer: Berlin/Heidelberg, Germany, 2009; pp. 1095–1128. [Google Scholar]
- Grift, T.; Zhang, Q.; Kondo, N.; Ting, K.C. A review of automation and robotics for the bio-industry. J. BioMechatron. Eng. 2008, 1, 37–54. [Google Scholar]
- Kuznetsova, A.; Maleva, T.; Soloviev, V. Detecting Apples in Orchards using YOLO-v3. In Proceedings of the 20th International Conference on Computational Science and Its Applications—ICCSA 2020, Cagliari, Italy, 1–4 July 2020; pp. 1–12. [Google Scholar]
- Huang, L.W.; He, D.J. Ripe Fuji apple detection model analysis in natural tree canopy. TELKOMNIKA 2012, 10, 1771–1778. [Google Scholar] [CrossRef]
- Yin, H.; Chai, Y.; Yang, S.X.; Mittal, G.S. Ripe Tomato Extraction for a Harvesting Robotic System. In Proceedings of the 2009 IEEE International Conference on Systems, Man and Cybernetics—SMC 2009, San Antonio, TX, USA, 11–14 October 2009; pp. 2984–2989. [Google Scholar]
- Yin, H.; Chai, Y.; Yang, S.X.; Mittal, G.S. Ripe Tomato Recognition and Localization for a Tomato Harvesting Robotic System. In Proceedings of the International Conference on Soft Computing and Pattern Recognition—SoCPaR 2009, Malacca, Malaysia, 4–7 December 2009; pp. 557–562. [Google Scholar]
- Mao, W.H.; Ji, B.P.; Zhan, J.C.; Zhang, X.C.; Hu, X.A. Apple Location Method for the Apple Harvesting Robot. In Proceedings of the 2nd International Congress on Image and Signal Processing—CISP 2009, Tianjin, China, 17–19 October 2009; pp. 17–19. [Google Scholar]
- Bulanon, D.M.; Kataoka, T. A fruit detection system and an end effector for robotic harvesting of Fuji apples. Agric. Eng. Int. CIGR J. 2010, 12, 203–210. [Google Scholar]
- Wei, X.; Jia, K.; Lan, J.; Li, Y.; Zeng, Y.; Wang, C. Automatic method of fruit object extraction under complex agricultural background for vision system of fruit picking robot. Optik 2014, 125, 5684–5689. [Google Scholar] [CrossRef]
- Zhao, Y.S.; Gong, L.; Huang, Y.X.; Liu, C.L. A review of key techniques of vision-based control for harvesting robot. Comput. Electron. Agric. 2016, 127, 311–323. [Google Scholar] [CrossRef]
- Bulanon, D.M.; Burks, T.F.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng. 2009, 103, 12–22. [Google Scholar] [CrossRef]
- Wachs, J.P.; Stern, H.I.; Burks, T.; Alchanatis, V. Apple Detection in Natural Tree Canopies from Multimodal Images. In Proceedings of the 7th Joint International Agricultural Conference—JIAC 2009, Wageningen, The Netherlands, 6–8 July 2009; pp. 293–302. [Google Scholar]
- Wachs, J.P.; Stern, H.I.; Burks, T.; Alchanatis, V. Low and high-level visual feature-based apple detection from multi-modal images. Precis. Agric. 2010, 11, 717–735. [Google Scholar] [CrossRef]
- Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15. [Google Scholar] [CrossRef]
- Illingworth, J.; Kittler, J. A survey of the Hough transform. Comput. Vis. Graph. Image Process. 1988, 44, 87–116. [Google Scholar] [CrossRef]
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 679–698. [Google Scholar] [CrossRef] [PubMed]
- Whittaker, A.D.; Miles, G.E.; Mitchell, O.R. Fruit location in a partially occluded image. Trans. Am. Soc. Agric. Eng. 1987, 30, 591–596. [Google Scholar] [CrossRef]
- Xie, Z.Y.; Zhang, T.Z.; Zhao, J.Y. Ripened strawberry recognition based on Hough transform. Trans. Chin. Soc. Agric. Mach. 2007, 38, 106–109. [Google Scholar]
- Xie, Z.; Ji, C.; Guo, X.; Zhu, S. An object detection method for quasi-circular fruits based on improved Hough transform. Trans. Chin. Soc. Agric. Mach. 2010, 26, 157–162. [Google Scholar]
- Kelman, E.E.; Linker, R. Vision-based localization of mature apples in tree images using convexity. Biosyst. Eng. 2014, 118, 174–185. [Google Scholar] [CrossRef]
- Xie, Z.; Ji, C.; Guo, X.; Zhu, S. Detection and location algorithm for overlapped fruits based on concave spots searching. Trans. Chin. Soc. Agric. Mach. 2011, 42, 191–196. [Google Scholar]
- Patel, H.N.; Jain, R.K.; Joshi, M.V. Fruit detection using improved multiple features based algorithm. Int. J. Comput. Appl. 2011, 13, 1–5. [Google Scholar] [CrossRef]
- Hannan, M.W.; Burks, T.F.; Bulanon, D.M. A machine vision algorithm combining adaptive segmentation and shape analysis for orange fruit detection. Agric. Eng. Int. CIGR J. 2009, 11, 1–17. [Google Scholar]
- Lu, J.; Sang, N.; Hu, Y. Detecting citrus fruits with highlight on tree based on fusion of multi-map. Optik 2014, 125, 1903–1907. [Google Scholar] [CrossRef]
- OpenCV—Open Source Computer Vision Library. Available online: https://opencv.org (accessed on 30 April 2020).
- Jian, L.; Chengyan, Z.; Shujuan, C. Positioning Technology of Apple-Picking Robot Based on OpenCV. In Proceedings of the 2012 Third International Conference on Digital Manufacturing and Automation, Guilin, China, 31 July—2 August 2012; pp. 618–621. [Google Scholar]
- Zhang, Q.R.; Peng, P.; Jin, Y.M. Cherry Picking Robot Vision Recognition System Based on OpenCV. In Proceedings of the 2016 International Conference on Mechatronics, Manufacturing and Materials Engineering—MMME 2016, Hong Kong, China, 11–12 June 2016; pp. 1–4. [Google Scholar]
- Zhao, J.; Tow, J.; Katupitiya, J. On-Tree Fruit Recognition Using Texture Properties and Color Data. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 263–268. [Google Scholar]
- Rakun, J.; Stajnko, D.; Zazula, D. Detecting fruits in natural scenes by using spatial-frequency based texture analysis and multiview geometry. Comput. Electron. Agric. 2011, 76, 80–88. [Google Scholar] [CrossRef]
- Kurtulmus, F.; Lee, W.S.; Vardar, A. Green citrus detection using ‘eigenfruit’, color and circular Gabor texture features under natural outdoor conditions. Comput. Electron. Agric. 2011, 78, 140–149. [Google Scholar] [CrossRef]
- Kurtulmus, F.; Lee, W.S.; Vardar, A. An advanced green citrus detection algorithm using color images and neural networks. J. Agric. Mach. Sci. 2011, 7, 145–151. [Google Scholar]
- Parrish, E.A.; Goksel, J.A.K. Pictorial pattern recognition applied to fruit harvesting. Trans. Am. Soc. Agric. Eng. 1977, 20, 822–827. [Google Scholar] [CrossRef]
- Sites, P.W.; Delwiche, M.J. Computer vision to locate fruit on a tree. Trans. Am. Soc. Agric. Eng. 1988, 31, 257–263. [Google Scholar] [CrossRef]
- Bulanon, D.M.; Kataoka, T.; Okamoto, H.; Hata, S. Development of a Real-Time Machine Vision System for Apple Harvesting Robot. In Proceedings of the Society of Instrument and Control Engineers Annual Conference, Sapporo, Japan, 4–6 August 2004; pp. 595–598. [Google Scholar]
- Seng, W.C.; Mirisaee, S.H. A New Method for Fruits Recognition System. In Proceedings of the 2009 International Conference on Electrical Engineering and Informatics—ICEEI 2009, Selangor, Malaysia, 5–7 August 2009; Volume 1, pp. 130–134. [Google Scholar]
- Linker, R.; Cohen, O.; Naor, A. Determination of the number of green apples in RGB images recorded in orchards. Comput. Electron. Agric. 2011, 81, 45–57. [Google Scholar] [CrossRef]
- Ji, W.; Zhao, D.; Cheng, F.Y.; Xu, B.; Zhang, Y.; Wang, J. Automatic recognition vision system guided for apple harvesting robot. Comput. Electr. Eng. 2012, 38, 1186–1195. [Google Scholar] [CrossRef]
- Tao, Y.; Zhou, J. Automatic apple recognition based on the fusion of color and 3D feature for robotic fruit picking. Comput. Electron. Agric. 2017, 142, 388–396. [Google Scholar] [CrossRef]
- Zhan, W.T.; He, D.J.; Shi, S.L. Recognition of kiwifruit in field based on Adaboost algorithm. Trans. Chin. Soc. Agric. Eng. 2013, 29, 140–146. [Google Scholar]
- Zhao, Y.S.; Gong, L.; Huang, Y.X.; Liu, C.L. Robust tomato recognition for robotic harvesting using feature images fusion. Sensors 2016, 16, 173. [Google Scholar] [CrossRef] [Green Version]
- Zhao, Y.S.; Gong, L.; Huang, Y.X.; Liu, C.L. Detecting tomatoes in greenhouse scenes by combining AdaBoost classifier and colour analysis. Biosyst. Eng. 2016, 148, 127–137. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems Conference—NIPS 2012, Harrahs and Harveys, Lake Tahoe, NV, USA, 3–8 December 2012; pp. 1–9. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of the International Conference on Learning Representations—ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
- Williams, H.A.M.; Jones, M.H.; Nejati, M.; Seabright, M.J.; MacDonald, B.A. Robotic kiwifruit harvesting using machine vision, convolutional neural networks, and robotic arms. Biosyst. Eng. 2019, 181, 140–156. [Google Scholar] [CrossRef]
- Liu, Z.; Wu, J.; Fu, L.; Majeed, Y.; Feng, Y.; Li, R.; Cui, Y. Improved kiwifruit detection using pre-trained VGG16 with RGB and NIR information fusion. IEEE Access 2020, 8, 2327–2336. [Google Scholar] [CrossRef]
- Mureşan, H.; Oltean, M. Fruit recognition from images using deep learning. Acta Univ. Sapientiae. Inform. 2018, 10, 26–42. [Google Scholar] [CrossRef] [Green Version]
- Fruits 360 Dataset. Available online: https://github.com/Horea94/Fruit-Images-Dataset (accessed on 30 April 2020).
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision—ICCV 2015, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the 2017 IEEE International Conference on Computer Vision—ICCV 2017, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- He, K.X.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition—CVPR 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
- Zhang, J.; He, L.; Karkee, M.; Zhang, Q.; Zhang, X.; Gao, Z. Branch detection for apple trees trained in fruiting wall architecture using depth features and Regions-Convolutional Neural Network (R-CNN). Comput. Electron. Agric. 2018, 155, 386–393. [Google Scholar] [CrossRef]
- Sa, I.; Ge, Z.; Dayoub, F.; Upcroft, B.; Perez, T.; McCool, C. DeepFruits: A fruit detection system using deep neural networks. Sensors 2016, 16, 1222. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Bargoti, S.; Underwood, J. Deep Fruit Detection in Orchards. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation—ICRA 2017, Singapore, 29 May–3 June 2017; pp. 1–8. [Google Scholar]
- Peebles, M.; Lim, S.H.; Duke, M.; McGuinness, B. Investigation of optimal network architecture for asparagus spear detection in robotic harvesting. IFAC PapersOnLine 2019, 52, 283–287. [Google Scholar] [CrossRef]
- ACFR-Multifruit-2016: ACFR Orchard Fruit Dataset. Available online: http://data.acfr.usyd.edu.au/ag/treecrops/2016-multifruit/ (accessed on 30 April 2020).
- Yu, Y.; Zhang, K.; Yang, L.; Zhang, D. Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN. Comput. Electron. Agric. 2019, 163, 104846. [Google Scholar] [CrossRef]
- Jia, W.; Tian, Y.; Luo, R.; Zhang, Z.; Zheng, Y. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot. Comput. Electron. Agric. 2020, 172, 105380. [Google Scholar] [CrossRef]
- Gené-Mola, J.; Gregorio, E.; Cheein, F.A.; Guevara, J.; Llorens, J.; Sanz-Cortiella, R.; Escolà, A.; Rosell-Polo, J.R. Fruit detection, yield prediction and canopy geometric characterization using LiDAR with forced air flow. Comput. Electron. Agric. 2020, 168, 105121. [Google Scholar] [CrossRef]
- Gan, H.; Lee, W.S.; Alchanatis, V.; Ehsani, R.; Schueller, R. Immature green citrus fruit detection using color and thermal images. Comput. Electron. Agric. 2018, 152, 117–125. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition—CVPR 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Tian, Y.; Yang, G.; Wang, Z.; Wang, H.; Li, E.; Liang, Z. Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Comput. Electron. Agric. 2019, 157, 417–426. [Google Scholar] [CrossRef]
- Tian, Y.; Yang, G.; Wang, Z.; Li, E.; Liang, Z. Detection of apple lesions in orchards based on deep learning methods of CycleGAN and YOLO-V3-Dense. J. Sens. 2019, 1–14. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition—CVPR 2017, Honolulu, HI, USA, 22–25 July 2017; pp. 1–9. [Google Scholar]
- Kang, H.; Chen, C. Fruit detection, segmentation and 3D visualization of environments in apple orchards. Comput. Electron. Agric. 2020, 171, 105302. [Google Scholar] [CrossRef] [Green Version]
- Wan, S.; Goudos, S. Faster R-CNN for multi-class fruit detection using a robotic vision system. Comput. Netw. 2020, 168, 107036. [Google Scholar] [CrossRef]
- COCO: Common Objects in Context Dataset. Available online: http://cocodataset.org/#overview (accessed on 30 April 2020).
- Ferguson, P.D.; Arslan, T.; Erdogan, A.T.; Parmley, A. Evaluation of Contrast Limited Adaptive Histogram Equalization (CLAHE) Enhancement on a FPGA. In Proceedings of the 2008 IEEE International SOC Conference, Newport Beach, CA, USA, 17–20 September 2008; pp. 119–122. [Google Scholar]
- Kang, H.; Chen, C. Fast implementation of real-time fruit detection in apple orchards using deep learning. Comput. Electron. Agric. 2020, 168, 105108. [Google Scholar] [CrossRef]
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
No. of Images | No. of Apples | Average No. of Apples Per Image | No. of Detected Apples | No. of Not Detected Apples | No. of Objects Mistaken for Apples | Precision | Recall | FNR | FPR |
---|---|---|---|---|---|---|---|---|---|
878 | 5142 | 5.86 | 469 | 4673 | 52 | 90.0% | 9.1% | 90.9% | 10.0% |
No. of Images | No. of Apples | Average No. of Apples Per Image | No. of Detected Apples | No. of Not Detected Apples | No. of Objects Mistaken for Apples | IoU | Precision | Recall | F1 | FNR | FPR |
---|---|---|---|---|---|---|---|---|---|---|---|
**Whole Set of Images** | | | | | | | | | | | |
878 | 5142 | 5.86 | 4671 | 471 | 394 | 88.9% | 92.2% | 90.8% | 91.5% | 9.2% | 7.8% |
**Far-View Canopy Images** | | | | | | | | | | | |
552 | 4358 | 7.89 | 4068 | 290 | 345 | 89.7% | 92.2% | 93.3% | 92.8% | 6.7% | 7.8% |
**Close-Up Images** | | | | | | | | | | | |
274 | 533 | 1.95 | 446 | 87 | 30 | 86.1% | 93.7% | 83.7% | 88.4% | 16.3% | 6.3% |
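The precision, recall, F1, FNR, and FPR values in the tables follow the standard definitions computed from the raw detection counts. A sketch that reproduces the whole-set row from the table above (counts are from the table; the function itself is illustrative):

```python
def detection_metrics(detected, missed, false_positives):
    """Standard detection metrics from raw counts: TP = correctly
    detected apples, FN = missed apples, FP = objects mistaken
    for apples."""
    tp, fn, fp = detected, missed, false_positives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # FNR and FPR are the complements of recall and precision here
    return precision, recall, f1, 1 - recall, 1 - precision

# Whole set of images: 4671 detected, 471 missed, 394 false positives
p, r, f1, fnr, fpr = detection_metrics(4671, 471, 394)
print(f"{p:.1%} {r:.1%} {f1:.1%} {fnr:.1%} {fpr:.1%}")
# prints: 92.2% 90.8% 91.5% 9.2% 7.8%
```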
Model | No. of Images | IoU | Precision | Recall | F1 | FNR | FPR |
---|---|---|---|---|---|---|---|
YOLOv3-Dense [67] | 480 | 89.6% | – | – | 81.7% | – | – |
DaSNet-v2 [70] | 560 | 86.1% | 88.0% | 86.8% | 87.3% | 12.0% | 13.2% |
YOLOv3 [70] | 560 | 85.1% | 87.0% | 85.2% | 86.0% | 13.0% | 14.8% |
YOLOv3 [74] | 150 | 84.2% | – | 80.1% | 80.3% | 9.9% | – |
Faster-RCNN [74] | 150 | 86.3% | – | 81.4% | 81.4% | 8.6% | – |
LedNet [74] | 150 | 87.2% | – | 84.1% | 84.9% | 5.9% | – |
Proposed technique (Whole set of images) | 878 | 88.9% | 92.2% | 90.8% | 91.5% | 9.2% | 7.8% |
Proposed technique (Far-view canopy images) | 552 | 89.7% | 92.2% | 93.3% | 92.8% | 6.7% | 7.8% |
Proposed technique (Close-up images) | 274 | 86.1% | 93.7% | 83.7% | 88.4% | 16.3% | 6.3% |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Kuznetsova, A.; Maleva, T.; Soloviev, V. Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot. Agronomy 2020, 10, 1016. https://doi.org/10.3390/agronomy10071016