Optimization of Trash Identification on the House Compound Using a Convolutional Neural Network (CNN) and Sensor System
Figure 1. A sample detection object (mouse) in 2D and 3D.
Figure 2. A sample of the detection object (mouse). (a) A photo of a mouse on paper (2D); (b) a real mouse photo (3D).
Figure 3. (a) Sample images of our own trash dataset. (b) Sample images of the TrashNet dataset. (c) Sample images of the LeafSnap dataset.
Figure 4. Block diagram.
Figure 5. Simple transfer learning using CNN architectures (AlexNet, VGG16, GoogleNet, and ResNet18).
Figure 6. Modifying the GoogleNet architecture using Deep Network Designer.
Figure 7. Flowchart of the proposed study.
Figure 8. Converting the bounding box to x, y, and z coordinates.
Figure 9. TF40 LiDAR.
Figure 10. Fast scanning image process path.
Figure 11. The position of the LiDAR sensor during the scanning process.
Figure 12. Example of LiDAR sensor data.
Figure 13. Flowchart of fast scanning image.
Figure 14. Detail scanning image process path.
Figure 15. Flowchart of detail scanning image.
Figure 16. Training progress for the trash objects using GoogleNet.
Figure 17. Confusion matrix of trash classification using VGG16.
Figure 18. IP address of the IP camera displayed on the mobile device (iPhone).
Figure 19. Results of trash identification using a real-time camera. (a) Cardboard; (b) food packaging.
Figure 20. Results of trash identification using a real-time camera. (a) Metal; (b) paper.
Figure 21. Results of trash identification using a real-time camera. (a) Glass; (b) plastic.
Figure 22. Result of an erroneous trash identification.
Figure 23. Result of the fast scanning image of the trash object (box).
Figure 24. Result of the detail scanning image of the trash object (box).
Figure 25. Results of the fast scanning image and detail scanning image of the trash object.
Abstract
1. Introduction
2. Materials and Methods
2.1. TF40 LiDAR
2.2. Fast Scanning Image Using LiDAR
- If sensor_value = hypotenuse, then the scanned line/point of the image lies on a flat plane;
- If sensor_value > hypotenuse, then the scanned line/point of the image is concave;
- If sensor_value < hypotenuse, then the scanned line/point of the image is convex (the sketch below illustrates this rule).
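As a minimal sketch of this rule, assume the sensor sits at a known height h above the floor and the beam is tilted theta degrees from the vertical, so the expected flat-floor reading is the hypotenuse h / cos(theta). The tolerance and the function/parameter names are illustrative, not from the paper:

```python
import math

def classify_reading(sensor_value: float, theta_deg: float,
                     height: float, tol: float = 0.2) -> str:
    """Label one LiDAR reading using the three rules above."""
    # Expected distance when the beam hits a flat plane (the hypotenuse)
    hypotenuse = height / math.cos(math.radians(theta_deg))
    if abs(sensor_value - hypotenuse) <= tol:
        return "flat"      # sensor_value = hypotenuse
    if sensor_value > hypotenuse:
        return "concave"   # the beam travels past the flat plane (a dip)
    return "convex"        # the beam is intercepted early (a raised object)

# Example: sensor 50 cm above the floor, beam tilted 30 degrees from vertical
print(classify_reading(54.0, 30.0, 50.0))  # -> "convex" (54 < 57.7)
```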
2.3. Detail Scanning Image Using LiDAR
3. Results
3.1. Confusion Matrix for Trash Classification Testing
3.2. Trash Identification Test Using a Real-Time Camera
3.3. Result of a Fast Scanning Image
3.4. Result of Detail Scanning Image
3.5. Result of Scanning Time Comparison between Fast Scanning Image and Detail Scanning Image
4. Discussion
- 1. If the object is straight in front of the LiDAR sensor, then the y-coordinate:
- 2. If the object is on the front left side/the front right side of the LiDAR sensor, the y-coordinate value can be calculated by the formula:
- 3. The x-coordinate value can be calculated by the formula. The value of 90 is due to servo motor 1 being set at 90 degrees. After all the x values are read and stored in the matrix variable, each x value is added to the maximum x value with the formula (see the sketch after this list).
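The printed formulas themselves did not survive in this version, so the sketch below reconstructs only the geometry they describe, under stated assumptions: servo motor 1 reports an angle in degrees with 90 degrees meaning straight ahead, the LiDAR reports a distance d, and the final offset by the maximum x keeps all matrix coordinates non-negative. The trigonometric form and all names are assumptions, not the authors' exact equations:

```python
import math

CENTER_DEG = 90.0  # servo motor 1 faces straight ahead at 90 degrees

def reading_to_xy(distance: float, servo_deg: float) -> tuple:
    """Convert one (distance, servo angle) LiDAR reading to planar x, y."""
    offset = math.radians(servo_deg - CENTER_DEG)
    y = distance * math.cos(offset)  # depth; equals `distance` when straight ahead
    x = distance * math.sin(offset)  # lateral offset; negative on the left side
    return x, y

def shift_by_max(xs: list) -> list:
    """Add the maximum x magnitude to every x so all coordinates are non-negative."""
    x_max = max(abs(v) for v in xs)
    return [v + x_max for v in xs]

readings = [(57.0, 80.0), (55.0, 90.0), (57.0, 100.0)]  # (distance, servo angle)
xs = shift_by_max([reading_to_xy(d, a)[0] for d, a in readings])
```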
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Baud, I.; Post, J.; Furedy, C. (Eds.) Solid Waste Management and Recycling; Springer: Dordrecht, The Netherlands, 2004; Volume 76.
- Kshirsagar, P.R.; Kumar, N.; Almulihi, A.H.; Alassery, F.; Khan, A.I.; Islam, S.; Rothe, J.P.; Jagannadham, D.B.V.; Dekeba, K. Artificial Intelligence-Based Robotic Technique for Reusable Waste Materials. Comput. Intell. Neurosci. 2022, 2022, 2073482.
- Owusu-Ansah, P.; Obiri-Yeboah, A.A.; Kwesi Nyantakyi, E.; Kwame Woangbah, S.; Idris Kofi Yeboah, S.I. Ghanaian inclination towards household waste segregation for sustainable waste management. Sci. Afr. 2022, 17, e01335.
- Sheng, T.J.; Islam, M.S.; Misran, N.; Baharuddin, M.H.; Arshad, H.; Islam, M.R. An Internet of Things Based Smart Waste Management System Using LoRa and Tensorflow Deep Learning Model. IEEE Access 2020, 8, 148793–148811.
- Raza, S.M.; Hassan, S.M.G.; Hassan, S.A.; Shin, S.Y. Real-Time Trash Detection for Modern Societies using CCTV to Identifying Trash by utilizing Deep Convolutional Neural Network. arXiv 2021, arXiv:2109.09611.
- Alsubaei, F.S.; Al-Wesabi, F.N.; Hilal, A.M. Deep Learning-Based Small Object Detection and Classification Model for Garbage Waste Management in Smart Cities and IoT Environment. Appl. Sci. 2022, 12, 2281.
- Longo, E.; Sahin, F.A.; Redondi, A.E.C.; Bolzan, P.; Bianchini, M.; Maffei, S. A 5G-Enabled Smart Waste Management System for University Campus. Sensors 2021, 21, 8278.
- Treiber, M.A. Introduction. In Optimization for Computer Vision; Springer: London, UK, 2013; pp. 1–16.
- Fuchikawa, Y.; Nishida, T.; Kurogi, S.; Kondo, T.; Ohkawa, F.; Suehiro, T.; Kihara, Y. Development of a Vision System for an Outdoor Service Robot to Collect Trash on Streets. In Proceedings of the Eighth IASTED International Conference on Computer Graphics and Imaging, CGIM 2005, Honolulu, HI, USA, 15–17 August 2005; pp. 100–105.
- Salvini, P.; Teti, G.; Spadoni, E.; Laschi, C.; Mazzolai, B.; Dario, P. The Robot DustCart. IEEE Robot. Autom. Mag. 2011, 18, 59–67.
- Yang, M.; Thung, G. Classification of Trash for Recyclability Status. CS229 Project Rep. 2016, 1–6.
- Mao, W.L.; Chen, W.C.; Wang, C.T.; Lin, Y.H. Recycling waste classification using optimized convolutional neural network. Resour. Conserv. Recycl. 2021, 164, 105132.
- Rahman, M.W.; Islam, R.; Hasan, A.; Bithi, N.I.; Hasan, M.M.; Rahman, M.M. Intelligent waste management system using deep learning with IoT. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 2072–2087.
- Adedeji, O.; Wang, Z. Intelligent waste classification system using deep learning convolutional neural network. Procedia Manuf. 2019, 35, 607–612.
- Hulyalkar, S.; Deshpande, R.; Makode, K. Implementation of Smartbin Using Convolutional Neural Networks. Int. Res. J. Eng. Technol. 2018, 5, 3352–3358.
- Zhao, Z.-Q.; Zheng, P.; Xu, S.-T.; Wu, X. Object Detection With Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3212–3232.
- Funch, O.I.; Marhaug, R.; Kohtala, S.; Steinert, M. Detecting glass and metal in consumer trash bags during waste collection using convolutional neural networks. Waste Manag. 2021, 119, 30–38.
- Ren, C.; Jung, H.; Lee, S.; Jeong, D. Coastal waste detection based on deep convolutional neural networks. Sensors 2021, 21, 7269.
- Liu, C.; Xie, N.; Yang, X.; Chen, R.; Chang, X.; Zhong, R.Y.; Peng, S.; Liu, X. A Domestic Trash Detection Model Based on Improved YOLOX. Sensors 2022, 22, 6974.
- Wu, X.; Sahoo, D.; Hoi, S.C.H. Recent advances in deep learning for object detection. Neurocomputing 2020, 396, 39–64.
- Dougherty, G. Classification. In Pattern Recognition and Classification: An Introduction; Springer: New York, NY, USA, 2013; pp. 9–26.
- Khan, M.A.U.; Nazir, D.; Pagani, A.; Mokayed, H.; Liwicki, M.; Stricker, D.; Afzal, M.Z. A Comprehensive Survey of Depth Completion Approaches. Sensors 2022, 22, 6969.
- Qiu, J.; Cui, Z.; Zhang, Y.; Zhang, X.; Liu, S.; Zeng, B.; Pollefeys, M. DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 3308–3317.
- Cheng, X.; Wang, P.; Guan, C.; Yang, R. CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 10615–10622.
- Van Gansbeke, W.; Neven, D.; De Brabandere, B.; Van Gool, L. Sparse and Noisy LiDAR Completion with RGB Guidance and Uncertainty. In Proceedings of the 2019 16th International Conference on Machine Vision Applications (MVA), Tokyo, Japan, 27–31 May 2019; pp. 1–6.
- Hu, M.; Wang, S.; Li, B.; Ning, S.; Fan, L.; Gong, X. PENet: Towards Precise and Efficient Image Guided Depth Completion. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 30 May–5 June 2021; pp. 13656–13662.
- Yan, Z.; Wang, K.; Li, X.; Zhang, Z.; Li, J.; Yang, J. RigNet: Repetitive Image Guided Network for Depth Completion. In Computer Vision-ECCV 2022; Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2022; pp. 214–230.
- Nazir, D.; Liwicki, M.; Stricker, D.; Afzal, M.Z. SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion. IEEE Access 2022, 10, 120781–120791.
- Liu, L.; Song, X.; Lyu, X.; Diao, J.; Wang, M.; Liu, Y.; Zhang, L. FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Depth Completion. Proc. AAAI Conf. Artif. Intell. 2021, 35, 2136–2144.
- Eldesokey, A.; Felsberg, M.; Holmquist, K.; Persson, M. Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning to End. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 12011–12020.
- Jaritz, M.; de Charette, R.; Wirbel, E.; Perrotton, X.; Nashashibi, F. Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation. In Proceedings of the 2018 IEEE International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 52–60.
- Barré, P.; Stöver, B.C.; Müller, K.F.; Steinhage, V. LeafNet: A computer vision system for automatic plant species identification. Ecol. Inform. 2017, 40, 50–56.
- Kumar, N.; Belhumeur, P.N.; Biswas, A.; Jacobs, D.W.; Kress, W.J.; Lopez, I.C.; Soares, J.V. Leafsnap: A Computer Vision System for Automatic Plant Species Identification. In Computer Vision-ECCV 2012; Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 502–516.
- Adetiba, E.; Ajayi, O.T.; Kala, J.R.; Badejo, J.A.; Ajala, S.; Abayomi, A. LeafsnapNet: An Experimentally Evolved Deep Learning Model for Recognition of Plant Species based on Leafsnap Image Dataset. J. Comput. Sci. 2021, 17, 349–363.
- Chen, J.; Zhang, H.; Lu, Y.; Zhang, Q. The Research on Control and Dynamic Property of Autonomous Vehicle Adaptive Lidar System. In Proceedings of the 2020 International Conferences on Internet of Things (iThings), IEEE Green Computing and Communications (GreenCom), IEEE Cyber, Physical and Social Computing (CPSCom), IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics), Rhodes, Greece, 2–6 November 2020; pp. 462–468.
- Benewake. TF40 Datasheet. 2022. Available online: https://my.mouser.com/datasheet/2/1099/Benewake_10152020_TF40-1954048.pdf (accessed on 20 June 2022).
- Iordan, D.; Popescu, G. The accuracy of LiDAR measurements for the different land cover categories. Environ. Eng. 2015, 4, 158–164.
- Wang, Y.; Che, J.; Zhang, L.; Ma, M. Research of garbage salvage system based on deep learning. In Proceedings of the International Conference on Computer Application and Information Security (ICCAIS 2021), Wuhan, China, 24 May 2022.
- Fan, Z.; Li, C.; Chen, Y.; Mascio, P.D.; Chen, X.; Zhu, G.; Loprencipe, G. Ensemble of Deep Convolutional Neural Networks for Automatic Pavement Crack Detection and Measurement. Coatings 2020, 10, 152.
References | Sensors | Method | Comment
---|---|---|---
[23,24,25,26,27,28,29] | RGB camera, LiDAR | Early fusion | Data from the RGB camera images and the LiDAR sensor are fed directly into the deep learning model and processed together. The result is a complete depth map.
[26,28,30] | RGB camera, LiDAR | Sequential fusion | The RGB camera image is first processed by deep learning, producing an RGB depth map. This result and the raw LiDAR sensor data then become the input to a subsequent deep learning stage, which processes both together. The result is a complete depth map.
[26,28,31] | RGB camera, LiDAR | Late fusion | The RGB camera image is processed by deep learning into an RGB depth map, and the LiDAR sensor data are separately processed by deep learning into a LiDAR depth map. The two outputs are then processed together, resulting in a complete depth map.
Proposed | RGB camera, LiDAR | Sequential_Camera_LiDAR (SCL): CNN + fast scanning image + detail scanning image | Image data from the RGB camera are processed using deep learning to detect the presence of trash objects. If a trash object is detected, a fast scanning image is performed to confirm that the object is in 3D form; the scanning uses the LiDAR sensor. If the object is 3D, a detail scanning image is then performed to determine the correct position for lifting the trash object with the robot gripper.
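As a concrete reading of the proposed SCL flow, the following minimal sketch arranges the three stages in sequence. The helper callables and the GripPose type are hypothetical stand-ins for the CNN and the two LiDAR sweeps, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

# The eleven trash types tested in Section 3.2
TRASH_CLASSES = {"cardboard", "fabric", "food packaging", "fruit", "glass",
                 "leaf", "metal", "paper", "plastic", "rubber", "wood"}

@dataclass
class GripPose:
    x: float  # lateral position of the grip point
    y: float  # forward depth
    z: float  # height

def scl_pipeline(frame,
                 cnn_classify: Callable,  # RGB frame -> class label
                 fast_scan: Callable,     # coarse LiDAR sweep -> True if object is 3D
                 detail_scan: Callable    # dense LiDAR sweep -> GripPose
                 ) -> Optional[Tuple[str, GripPose]]:
    """Sequential_Camera_LiDAR (SCL): CNN first, LiDAR only when needed."""
    label = cnn_classify(frame)           # step 1: identify the object
    if label not in TRASH_CLASSES:
        return None                       # not trash: no scanning at all
    if not fast_scan():                   # step 2: confirm 3D (not a 2D photo)
        return None
    return label, detail_scan()           # step 3: grip position for the robot
```

Running the LiDAR sweeps only after the CNN confirms a trash object is what distinguishes SCL from the fusion schemes above, which always process both modalities.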
Parameter | AlexNet | VGG16 | GoogleNet | ResNet18
---|---|---|---|---
Mini-batch size | 5 | 5 | 5 | 5
Learning rate | 0.0003 | 0.0003 | 0.0003 | 0.0003
Max epochs | 6 | 6 | 6 | 6
Data augmentation | Yes | Yes | Yes | Yes
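The paper trains with MATLAB's Deep Network Designer (Figure 6); as a hedged equivalent, the same hyperparameters map onto a PyTorch transfer-learning setup like the sketch below. The specific augmentation operations and the 11-class head (matching the eleven trash types in Section 3.2) are assumptions:

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models, transforms

# Hyperparameters taken from the table above
BATCH_SIZE, LEARNING_RATE, MAX_EPOCHS, NUM_CLASSES = 5, 3e-4, 6, 11

# "Data augmentation: Yes" -- the exact operations are an assumption
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# Transfer learning: load ImageNet weights, then swap the classifier head
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.CrossEntropyLoss()
```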
Category | Parameter Name | Standard Version
---|---|---
Product performance | Range | 0.04–40 m (90% reflectivity); 0.04–20 m (10% reflectivity)
 | Accuracy | ±2 mm
 | Distance resolution | 1 mm
 | Frame rate | 5 Hz
Optical parameters | Light source | LD
 | Wavelength | 635 nm
 | Laser class | Class 2 (EN 60825)
 | Detection angle | <1 mrad
Electrical parameters | Supply voltage | 3.3 V
 | Average current | ≤180 mA
 | Power consumption | ≤0.6 W
 | Communication voltage level | LVTTL (3.3 V)
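Since the table specifies an LVTTL (3.3 V) serial interface, reading the sensor from a host typically follows the usual pyserial pattern. This is only a sketch: the port name, baud rate, and the assumption that the module streams one ASCII distance value per line must be checked against the Benewake datasheet:

```python
import serial  # pyserial
from typing import Optional

# Port and baud rate are assumptions -- take the real values from the
# TF40 datasheet and your wiring.
PORT, BAUD = "/dev/ttyUSB0", 115200

def read_distance_mm(ser: serial.Serial) -> Optional[float]:
    """Read one measurement line and parse it as a distance in mm."""
    line = ser.readline().decode(errors="ignore").strip()
    try:
        return float(line)  # assumed frame format: one ASCII value per line
    except ValueError:
        return None         # garbled frame or read timeout

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    d = read_distance_mm(ser)
    if d is not None:
        print(f"distance: {d:.0f} mm")  # 1 mm resolution per the table above
```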
CNN Architecture | Training Time (s) | Validation Accuracy (%) |
---|---|---|
AlexNet | 229 | 77.54 |
VGG16 | 894 | 79.09 |
GoogleNet | 581 | 86.38 |
ResNet18 | 335 | 80.75 |
CNN Architecture | Accuracy (%) |
---|---|
AlexNet | 80.5 |
VGG16 | 95.6 |
GoogleNet | 98.3 |
ResNet18 | 97.5 |
Identification accuracy per trash type using AlexNet:

Types of Trash | Number of Real Trash Tests | Total Trash Correctly Detected | Accuracy (%)
---|---|---|---
Cardboard | 45 | 37 | 82.22
Fabric | 17 | 13 | 76.47
Food Packaging | 63 | 52 | 82.54
Fruit | 21 | 17 | 80.95
Glass | 34 | 27 | 79.41
Leaf | 56 | 46 | 82.14
Metal | 21 | 16 | 76.19
Paper | 61 | 51 | 83.61
Plastic | 52 | 42 | 80.77
Rubber | 14 | 10 | 71.43
Wood | 27 | 21 | 77.78
Overall | 411 | 332 | 79.41
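A note on the Overall rows in this and the following three tables: the pooled ratio for AlexNet would be 332/411, about 80.78%, while the printed 79.41% matches the unweighted (macro) mean of the eleven per-class accuracies. A quick check:

```python
tests   = {"Cardboard": 45, "Fabric": 17, "Food Packaging": 63, "Fruit": 21,
           "Glass": 34, "Leaf": 56, "Metal": 21, "Paper": 61, "Plastic": 52,
           "Rubber": 14, "Wood": 27}
correct = {"Cardboard": 37, "Fabric": 13, "Food Packaging": 52, "Fruit": 17,
           "Glass": 27, "Leaf": 46, "Metal": 16, "Paper": 51, "Plastic": 42,
           "Rubber": 10, "Wood": 21}

per_class = {k: 100 * correct[k] / tests[k] for k in tests}
macro = sum(per_class.values()) / len(per_class)           # 79.41 -- the Overall row
micro = 100 * sum(correct.values()) / sum(tests.values())  # 80.78 -- pooled accuracy
print(f"macro {macro:.2f}%, micro {micro:.2f}%")
```

The same relationship holds for the VGG16, GoogleNet, and ResNet18 tables below.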
Identification accuracy per trash type using VGG16:

Types of Trash | Number of Real Trash Tests | Total Trash Correctly Detected | Accuracy (%)
---|---|---|---
Cardboard | 45 | 40 | 88.89
Fabric | 17 | 14 | 82.35
Food Packaging | 63 | 57 | 90.48
Fruit | 21 | 19 | 90.48
Glass | 34 | 30 | 88.24
Leaf | 56 | 49 | 87.50
Metal | 21 | 18 | 85.71
Paper | 61 | 54 | 88.52
Plastic | 52 | 45 | 86.54
Rubber | 14 | 11 | 78.57
Wood | 27 | 23 | 85.19
Overall | 411 | 360 | 86.59
Identification accuracy per trash type using GoogleNet:

Types of Trash | Number of Real Trash Tests | Total Trash Correctly Detected | Accuracy (%)
---|---|---|---
Cardboard | 45 | 44 | 97.78
Fabric | 17 | 16 | 94.12
Food Packaging | 63 | 62 | 98.41
Fruit | 21 | 20 | 95.24
Glass | 34 | 33 | 97.06
Leaf | 56 | 55 | 98.21
Metal | 21 | 20 | 95.24
Paper | 61 | 60 | 98.36
Plastic | 52 | 51 | 98.08
Rubber | 14 | 13 | 92.86
Wood | 27 | 26 | 96.30
Overall | 411 | 400 | 96.51
Identification accuracy per trash type using ResNet18:

Types of Trash | Number of Real Trash Tests | Total Trash Correctly Detected | Accuracy (%)
---|---|---|---
Cardboard | 45 | 43 | 95.56
Fabric | 17 | 15 | 88.24
Food Packaging | 63 | 61 | 96.83
Fruit | 21 | 20 | 95.24
Glass | 34 | 33 | 97.06
Leaf | 56 | 54 | 96.43
Metal | 21 | 20 | 95.24
Paper | 61 | 59 | 96.72
Plastic | 52 | 50 | 96.15
Rubber | 14 | 13 | 92.86
Wood | 27 | 26 | 96.30
Overall | 411 | 394 | 95.15
CNN Architecture | Average Accuracy (%) |
---|---|
AlexNet | 79.410 |
VGG16 | 86.588 |
GoogleNet | 96.513 |
ResNet18 | 95.146 |