Supervised Image Classification by Scattering Transform with Application to Weed Detection in Culture Crops of High Density
"> Figure 1
<p>Global view of the imaging system fixed on a robot moving above mache salads of high density. RGB images are captured by a JAI manufactured camera of 20 M pixels with a spatial resolution of 5120 × 3840 pixels, mounted with a 35 mm objective. The typical distance of plants to camera is of 1 m.</p> "> Figure 2
<p>Set of 10 RGB images from top view for the detection of weed out of plant used as testing data-set in this study.</p> "> Figure 3
<p>Illustration of different types of weeds used for the experiment.</p> "> Figure 4
<p>Simulation pipeline for the creation of images of plant with weed of <a href="#remotesensing-11-00249-f003" class="html-fig">Figure 3</a> similar to the one presented in <a href="#remotesensing-11-00249-f002" class="html-fig">Figure 2</a>.</p> "> Figure 5
<p>Anatomical scales where (<math display="inline"><semantics> <msub> <mi>W</mi> <mi>i</mi> </msub> </semantics></math>,<math display="inline"><semantics> <msub> <mi>P</mi> <mi>i</mi> </msub> </semantics></math>) presents the scales of weeds and plants respectively; <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>W</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>P</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> </semantics></math> points toward the texture of the limb, <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>W</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>P</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> </semantics></math> indicates the typical size of leaflet and <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>W</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>P</mi> <mn>3</mn> </msub> <mo>)</mo> </mrow> </semantics></math> stands for the width of the veins. <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>w</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>p</mi> </mrow> </semantics></math> show the size of a leaf of weed and plant, respectively. The classification of weed and plant is done at the scale of a patch taken as <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>×</mo> <mo movablelimits="true" form="prefix">max</mo> <mo>(</mo> <mi>S</mi> <mi>p</mi> <mo>,</mo> <mi>S</mi> <mi>w</mi> <mo>)</mo> </mrow> </semantics></math> in agreement with a Shannon-like criteria.</p> "> Figure 6
<p>Schematic layout of the weed/plant classifier based on the scattering transform with three layers. The feature vector transmitted to the principal component analysis (PCA) step consists in the scatter vector <math display="inline"><semantics> <mrow> <msub> <mi>Z</mi> <mi>m</mi> </msub> <mi>f</mi> </mrow> </semantics></math> of the last layer of Equation (<a href="#FD2-remotesensing-11-00249" class="html-disp-formula">2</a>) after transposition.</p> "> Figure 7
<p>Output images for each class (weed on left and plant on right) and for each layer <span class="html-italic">m</span> of the scatter transform.</p> "> Figure 8
<p>Energy similarity, <math display="inline"><semantics> <mrow> <msub> <mi>Q</mi> <mi>m</mi> </msub> <mrow> <mo>(</mo> <mi>J</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, between energy of weeds and plants data sets based on <a href="#remotesensing-11-00249-t002" class="html-table">Table 2</a> and <a href="#remotesensing-11-00249-t003" class="html-table">Table 3</a>.</p> "> Figure 9
<p>Architecture of the deep network optimized for the task on classification.</p> "> Figure 10
<p>Comparison of the recognition accuracy between scatter transform and deep learning when the number of samples increases.</p> "> Figure 11
<p>Visual comparison of the best and the worst recognition of weeds and plants by scatter transform.</p> ">
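The classifier of Figure 6 chains a scattering transform, a PCA projection, and an affine classifier. Below is a minimal sketch of that kind of pipeline, assuming the kymatio and scikit-learn libraries, 128 × 128 grayscale patches, and a linear SVM as the final classifier; note that kymatio computes scattering up to order 2, whereas the network of Figure 6 has three layers, and that J, the PCA dimension, and the SVM are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Scattering front end: J dyadic scales on 128 x 128 patches, order <= 2.
J = 4
scattering = Scattering2D(J=J, shape=(128, 128), max_order=2)

def scatter_features(patches):
    """Flatten each patch's scattering coefficients into one feature vector."""
    return np.stack([scattering(p).ravel() for p in patches])

# Assumed inputs: X_train/X_test of shape (n, 128, 128), labels 0 = weed, 1 = plant.
Z_train = scatter_features(X_train)
Z_test = scatter_features(X_test)

# PCA step of Figure 6, then an affine (linear SVM) classifier.
pca = PCA(n_components=100).fit(Z_train)
clf = LinearSVC().fit(pca.transform(Z_train), y_train)
print("test accuracy:", clf.score(pca.transform(Z_test), y_test))
```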
Abstract
1. Introduction
2. Material and Methods
2.1. Images and Challenges
2.2. Scales
2.3. Data-Set
2.4. Classifiers
2.4.1. Scatter Transform
2.4.2. Other Methods
3. Results
4. Discussion
5. Conclusions and Perspectives
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Energy (%) | m = 0 | m = 1 | m = 2 | m = 3 | m = 4 |
|---|---|---|---|---|---|
| J = 1 | 96.18 | 2.35 | - | - | - |
| J = 2 | 91.81 | 4.61 | 0.28 | - | - |
| J = 3 | 85.81 | 8.46 | 0.89 | 0.03 | - |
| J = 4 | 85.81 | 13.15 | 1.97 | 0.17 | 0.006 |
| J = 5 | 81.46 | 15.36 | 3 | 0.36 | 0.024 |
| J = 6 | 79.04 | 16.81 | 3.44 | 0.53 | 0.048 |
| J = 7 | 80.74 | 17.05 | 3.49 | 0.63 | 0.071 |
| Energy (%) | m = 0 | m = 1 | m = 2 | m = 3 | m = 4 |
|---|---|---|---|---|---|
| J = 1 | 99.90 | 0.0985 | - | - | - |
| J = 2 | 99.71 | 0.2798 | 0.0098 | - | - |
| J = 3 | 99.07 | 0.8832 | 0.0443 | 0.0016 | - |
| J = 4 | 97.55 | 2.2669 | 0.1663 | 0.0080 | 0.0003 |
| J = 5 | 95.10 | 4.3892 | 0.4667 | 0.0343 | 0.0020 |
| J = 6 | 92.07 | 6.8696 | 0.9522 | 0.0983 | 0.0076 |
| J = 7 | 89.26 | 9.0102 | 1.5049 | 0.1979 | 0.0196 |
| Energy (%) | m = 0 | m = 1 | m = 2 | m = 3 | m = 4 |
|---|---|---|---|---|---|
| J = 1 | 99.92 | 0.0711 | - | - | - |
| J = 2 | 99.76 | 0.2339 | 0.0040 | - | - |
| J = 3 | 99.17 | 0.7984 | 0.0281 | 0.0003 | - |
| J = 4 | 97.75 | 2.0899 | 0.1380 | 0.0041 | 0.00003 |
| J = 5 | 95.41 | 4.1411 | 0.4215 | 0.0254 | 0.0006 |
| J = 6 | 92.34 | 6.6553 | 0.9078 | 0.0892 | 0.005 |
| J = 7 | 89.37 | 8.9341 | 1.4817 | 0.1944 | 0.0171 |
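Figure 8 plots the energy similarity $Q_m(J)$ computed from the weed and plant energy tables (the second and third tables above, assuming that ordering; the excerpt does not label them). The exact definition of $Q_m(J)$ is not reproduced here, so the sketch below substitutes a simple min/max ratio per $(J, m)$ cell as an illustrative stand-in, equal to 1 when the two energies coincide.

```python
import numpy as np

def energy_similarity(e_weed, e_plant):
    """Assumed stand-in for Q_m(J): min/max ratio of two energy
    percentages (1.0 = identical energy at that layer m and scale J)."""
    return np.minimum(e_weed, e_plant) / np.maximum(e_weed, e_plant)

# m = 1 column of the two tables above, rows J = 1..7.
e_weed = np.array([0.0985, 0.2798, 0.8832, 2.2669, 4.3892, 6.8696, 9.0102])
e_plant = np.array([0.0711, 0.2339, 0.7984, 2.0899, 4.1411, 6.6553, 8.9341])
print(energy_similarity(e_weed, e_plant))  # ratio rises toward 1 as J grows
```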
| | J = 1 | J = 2 | J = 3 | J = 4 | J = 5 | J = 6 | J = 7 | J = 8 |
|---|---|---|---|---|---|---|---|---|
| m = 1 | 70.37% | 77.89% | 82.74% | 86.17% | 88.96% | 91.94% | 94.14% | 95.05% |
| m = 2 | - | 91.95% | 95.26% | 95.54% | 95.86% | 95.82% | 95.73% | 95.55% |
| m = 3 | - | - | 95.41% | 95.44% | 95.21% | 95.07% | 95.03% | 96.00% |
| m = 4 | - | - | - | 96.31% | 96.02% | 96.05% | 96.16% | 96.11% |
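The accuracy grid above varies the maximal scale J and the scattering order m. A hedged sketch of such a sweep, training one cross-validated classifier per (J, m) pair, follows; kymatio caps max_order at 2, so only the first two rows of the grid are reachable this way, and X, y, the 256 × 256 patch size, and the classifier settings are illustrative assumptions.

```python
import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Assumed inputs: X of shape (n, 256, 256) (256 = 2^8, so every J in 1..8
# divides the patch evenly) and labels y.
for J in range(1, 9):
    for m in (1, 2):  # kymatio supports scattering orders 1 and 2 only
        scat = Scattering2D(J=J, shape=(256, 256), max_order=m)
        Z = np.stack([scat(x).ravel() for x in X])
        acc = cross_val_score(
            make_pipeline(PCA(n_components=50), LinearSVC()), Z, y, cv=5
        ).mean()
        print(f"J = {J}, m = {m}: {acc:.2%}")
```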
| Method | 5 Folds | 6 Folds | 7 Folds | 8 Folds | 9 Folds | 10 Folds | Average Std |
|---|---|---|---|---|---|---|---|
| Scatter Transform ( samples) | 94.9% | 95.2% | 95.3% | 95.7% | 95.8% | 95.8% | ±1.1 |
| LBP ( samples) | 85.5% | 86.1% | 86.3% | 85.8% | 86.9% | 86.7% | ±0.4 |
| GLCM ( samples) | 87.4% | 91.6% | 90.9% | 92.1% | 92.4% | 92.3% | ±0.7 |
| Gabor Filter ( samples) | 88.0% | 88.2% | 88.7% | 88.6% | 89.4% | 89.3% | ±1.3 |
| Deep Learning ( samples) | 89.4% | 89.9% | 91.1% | 91.5% | 91.9% | 92.1% | ±1.4 |
| Deep Learning ( samples) | 97.6% | 97.9% | 97.9% | 98.2% | 98.1% | 98.3% | ±0.9 |
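The fold counts in the comparison above run from 5 to 10. A minimal sketch of that evaluation protocol with scikit-learn, assuming a precomputed feature matrix Z (for instance, the scattering features from the earlier sketch) and labels y:

```python
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Assumed inputs: feature matrix Z of shape (n, d) and labels y.
for k in range(5, 11):  # 5 to 10 folds, matching the table columns
    scores = cross_val_score(LinearSVC(), Z, y, cv=k)
    print(f"{k} folds: {scores.mean():.1%} accuracy (std {scores.std():.1%})")
```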
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).