Real-Time Seed Detection and Germination Analysis in Precision Agriculture: A Fusion Model With U-Net and CNN on Jetson Nano
Abstract—Precision agriculture involves the strategic utilization of resources, precise application of inputs, and continuous monitoring of crop health with the aim of enhancing productivity and sustainability in agriculture. However, assessing seed quality is difficult because natural differences among seed batches may affect germination rates, vigor, and crop performance. Hence, in this article, a novel fusion model for seed detection and germination analysis is proposed. The proposed model combines the U-Net and CNN architectures for seed segmentation and classification, respectively. By harnessing U-Net's capabilities in image segmentation and CNN's strengths in classification, the proposed approach enables effective seed germination analysis. In addition, the model is specifically optimized for real-time processing and applications by implementing it on the NVIDIA Jetson Nano embedded GPU platform. The proposed fusion model achieved 0.91 pixel accuracy, 0.84 intersection over union, and 0.90 precision. The proposed model demonstrated excellent predictive ability when compared with ResNet50, Inception, and LeNet, and requires fewer trainable parameters than all of them except LeNet. Further, the proposed model was tested in real time and achieved 0.26 ms latency.

Index Terms—Convolutional neural network (CNN), deep learning (DL), NVIDIA Jetson Nano, precision agriculture, seed germination.

I. INTRODUCTION

IN AGRICULTURE, horticulture, and ecological studies, seed germination, the process by which a dormant seed sprouts into a new plant, is a crucial step in plant development [1]. In current agricultural research, there has been an increasing focus on enhancing the evaluation techniques for seed quality through sophisticated imaging technologies and machine learning algorithms. Work in this field is noteworthy because it showcases the potential of these methods to improve the precision and effectiveness of seed quality assessments, thereby increasing crop yields and overall agricultural productivity [2]. Accurate and effective seed germination detection is crucial for a number of applications, including seed quality evaluation [3], agricultural production optimization, and plant growth dynamics monitoring. Traditional techniques for detecting seed germination often depend on labor-intensive, inaccurate human counting or observation. In recent years, deep learning (DL) techniques based on convolutional neural networks (CNNs) have shown promising results in automating seed germination detection [4]. Seed research provides crucial knowledge on seed quality and management, covering seed germination, dormancy, and technology [5]. The temperature and water potential conditions necessary for the successful germination of various chickpea cultivars have been examined, thereby enhancing cultivation practices [6]. To enhance chickpea farming, the appropriate temperature and water potential for germination must be determined [7]. Various applications of DL in agriculture, such as crop monitoring, yield estimation, and precision farming, have been surveyed [8]. CNNs have been used to improve agricultural processes by improving reliability and efficiency [9]. Machine learning with contemporary artificial neural networks and region proposals can identify seed germination accurately in high-throughput tests [10]. A machine vision system for seed germination analysis has also been presented, describing the hardware configuration and image processing methods used to extract pertinent information from seed pictures; to distinguish between seeds that have germinated and those that have not, the authors use conventional techniques such as color analysis, texture analysis, and shape-based characteristics [11]. Another study uses a CNN-based architecture and shape descriptors for the same task, and its findings demonstrate that DL techniques are successful at precisely recognizing seed germination phases [12]. A two-stream DL architecture for segmenting and counting leaves from 2-D plant photos accommodates size and shape differences [13]. Different plant phenotype models are discussed in [14] and [15].

Manuscript received 27 July 2023; revised 11 October 2023; accepted 7 November 2023. Date of publication 5 December 2023; date of current version 19 December 2023. This work was supported by the MSME, Government of India under Grant 17(2)/MSME Innovative/PMAC/2021-22. This article was recommended by Associate Editor M. Sophocleous. (Corresponding author: Prakash Kodali.)
Ramesh Reddy Donapati is with the Department of Electronics and Communication Engineering, National Institute of Technology Warangal, Warangal 506004, India, and also with the Department of ECE, Vallurupalli Nageswara Rao Vignana Jyothi Institute of Engineering & Technology, Hyderabad 500090, India (e-mail: dramesh24@student.nitw.ac.in).
Ramalingaswamy Cheruku is with the Department of Computer Science and Engineering, National Institute of Technology Warangal, Warangal 506004, India (e-mail: rmlswamy@nitw.ac.in).
Prakash Kodali is with the Department of Electronics and Communication Engineering, National Institute of Technology Warangal, Warangal 506004, India (e-mail: kprakash@nitw.ac.in).
Digital Object Identifier 10.1109/TAFE.2023.3332495
2771-9529 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: VP's Kamalnayan Bajaj Institute of Eng. and Tech. Downloaded on July 26,2024 at 12:54:54 UTC from IEEE Xplore. Restrictions apply.
146 IEEE TRANSACTIONS ON AGRIFOOD ELECTRONICS, VOL. 1, NO. 2, DECEMBER 2023
Fig. 3. (a) Camera interfaced to Jetson Nano development board. (b) Growth chamber used for collecting dataset.
Fig. 5. (a) Proposed growth chamber. (b) Temperature and humidity monitoring. (c) Collected seed images. (d) Display of the proposed growth chamber.
Fig. 6. Learning curve for the U-Net model.

Fig. 7. Foldwise loss curve for the proposed CNN model.
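Learning curves like those in Figs. 6 and 7 come from minimizing a cross-entropy loss with the Adam optimizer, the recipe described in the Results subsection. As a self-contained illustration of that recipe, and not the authors' code, the sketch below runs Adam with binary cross-entropy on a single logistic neuron standing in for the much larger U-Net and CNN; the returned loss history plays the role of the plotted learning curve:

```python
import numpy as np

def train_logistic_adam(X, y, epochs=200, lr=0.05):
    """Adam optimizer minimizing binary cross-entropy on one logistic neuron."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.1, X.shape[1])
    b = 0.0
    m = np.zeros_like(w); v = np.zeros_like(w)   # Adam moments for w
    mb = vb = 0.0                                # Adam moments for b
    beta1, beta2, eps = 0.9, 0.999, 1e-8
    history = []
    for t in range(1, epochs + 1):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
        # Binary cross-entropy loss: the quantity plotted per epoch.
        loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        history.append(loss)
        gw = X.T @ (p - y) / len(y)              # gradient w.r.t. weights
        gb = np.mean(p - y)                      # gradient w.r.t. bias
        m = beta1 * m + (1 - beta1) * gw         # first-moment estimates
        v = beta2 * v + (1 - beta2) * gw ** 2    # second-moment estimates
        mb = beta1 * mb + (1 - beta1) * gb
        vb = beta2 * vb + (1 - beta2) * gb ** 2
        # Bias-corrected Adam updates.
        w -= lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)
        b -= lr * (mb / (1 - beta1 ** t)) / (np.sqrt(vb / (1 - beta2 ** t)) + eps)
    return w, b, history
```

On any separable toy problem the recorded losses fall epoch over epoch, reproducing the downward shape of the curves in Figs. 6 and 7.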
model that predicts the germination process based on the image patterns. By collecting images every five minutes, the model receives granular and comprehensive data that enables it to detect and learn even subtle changes during germination, increase the accuracy of its predictions, and record the entire growth process.

The collected dataset is used for the fusion model employing U-Net and CNN for seed detection and classification. Using the prepared dataset, the model is trained and then optimized with an appropriate loss function, optimizer, and performance metrics. The performance of the model is rigorously evaluated against the validation and test datasets, gauging its effectiveness and deployment suitability.

The U-Net model is evaluated using Keras and TensorFlow 2.12.0 on an HP Z6 G4 Workstation with 52 cores and 64 GB of DDR4 RAM. The U-Net model is trained using 35 871 seed images for 50 epochs on a GPU for semantic segmentation. Similarly, the proposed CNN model is trained for 50 epochs on the GPU using germination and nongermination seed images.

The proposed system is trained on a system with an Intel Xeon GOLD 6226R CPU @ 2.9 GHz and an NVIDIA RTX A4000 GPU with 64 GB internal memory. The proposed fusion model took roughly six hours for 50 epochs. A 1.3 MB model weight file is generated. The model weights are deployed on the Jetson Nano, which takes 35 s to process the input image for germination prediction.

To support real-time applications, the model has been optimized for deployment on the NVIDIA Jetson Nano GPU, capitalizing on its high computational capacity and energy efficiency, as shown in Fig. 5. To assure seamless operation on the Jetson Nano platform, the deployment procedure includes model optimization techniques, such as model pruning and quantization. Finally, the system is validated in real-world agricultural contexts, where the performance and robustness of the deployed model are rigorously evaluated, confirming its practical utility and contribution to precision agriculture.

D. Evaluation Metrics

The pixel accuracy metric given in (3) is a straightforward measure that quantifies the proportion of correctly predicted pixels to the total number of pixels. While the computation and comprehension of this method are relatively simple, it cannot provide detailed explanations regarding the prediction of individual classes. Moreover, it may lead to misleading results in situations where there is an imbalance among the classes. The intersection over union (IoU) metric given in (4) quantitatively measures the degree of overlap between the predicted segmentation area and the ground truth; the calculation divides the overlap area by the combined area of the two regions. A higher IoU value indicates a model with greater precision. Precision assesses the proportion of accurate positive predictions out of all predicted positive instances; it reflects the model's ability to accurately identify relevant instances. Recall, as given in (5), is the ratio of correctly identified positive instances to the total number of actual positive instances, signifying the model's proficiency in identifying all pertinent occurrences.

The F1-score, as given in (6), combines precision and recall in a balanced manner. It is a standardized evaluation of the model's accuracy, with particular significance for class-imbalance scenarios. A higher F1-score indicates a more accurate model.

These metrics provide a comprehensive assessment of the performance of a model and are crucial in adjusting and enhancing DL models:

Pixel accuracy = (TP + TN) / (TP + TN + FP + FN)    (3)
IoU = TP / (TP + FP + FN)    (4)
Recall = TP / (TP + FN)    (5)
F1-score = 2TP / (2TP + FP + FN).    (6)

E. Results

The U-Net model and the proposed CNN of the fusion model are trained for 50 epochs. The collected dataset is partitioned as per ten-fold cross validation (10-FCV) for training and testing. The training dataset is first fed into U-Net and trained with the Adam optimizer and a cross-entropy loss function. Next, the output of U-Net is fed into the CNN model and trained using the Adam optimizer with a cross-entropy loss function. The training performance of U-Net and CNN is depicted in Figs. 6 and 7, respectively. The individual binary masks of the given Petri dish generated by the U-Net model are shown in Fig. 8. These individual binary masks are combined to create a binary mask of the input image, shown in Fig. 9. These segmented images are used to extract the seeds, as shown in Fig. 10. All the extracted seeds are shown in Fig. 11. These seeds are fed into the proposed CNN for classification into the germination or no-germination state.

F. Tenfold Cross Validation

The 10-FCV is employed to assess the efficacy of the DL model in predicting seed germination. Initially, the dataset consisting of 1200 images is partitioned into ten subsets of equal size, 120 images each. During each iteration, nine subsets (1080 images) are used for training, whereas the remaining subset of 120 images is used for testing.
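The tenfold protocol just described (1200 images, ten folds of 120, 1080/120 train/test split per iteration) can be sketched as follows; this is a schematic, not the authors' code:

```python
import numpy as np

def ten_fold_splits(n_samples=1200, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) index pairs for 10-fold cross validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)      # ten subsets of 120 images each
    for k in range(n_folds):
        test_idx = folds[k]                   # one held-out subset (120 images)
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds) if j != k]  # remaining 1080 images
        )
        yield train_idx, test_idx
```

Each of the ten iterations trains on the 1080-image partition and evaluates on the disjoint 120-image partition, so every image is used for testing exactly once.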
TABLE IV
TENFOLD VALIDATION FOR THE PROPOSED FUSION MODEL
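The foldwise figures summarized in Table IV are instances of the metrics defined in (3)-(6), together with precision; from raw confusion-matrix counts they can be computed with a framework-free sketch:

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Evaluation metrics of (3)-(6) plus precision, from confusion-matrix counts."""
    return {
        "pixel_accuracy": (tp + tn) / (tp + tn + fp + fn),  # (3)
        "iou": tp / (tp + fp + fn),                         # (4) overlap / union
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),                           # (5)
        "f1": 2 * tp / (2 * tp + fp + fn),                  # (6) harmonic mean
    }
```

For example, with TP = 8, TN = 80, FP = 2, and FN = 2, precision, recall, and F1 all equal 0.8 while IoU is 8/12.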
G. Comparative Analysis

The proposed model is compared with three state-of-the-art models, namely, ResNet50, Inception, and LeNet. These results are furnished in Table V, with the best values highlighted. From the results, it is observed that the proposed model performs better than the other three state-of-the-art models overall. The proposed model predicts pixels more accurately (0.91 pixel accuracy) than ResNet50 (0.76), Inception (0.74), and LeNet (0.82). The proposed fusion model has an IoU of 0.84, which is much higher than the scores of ResNet50 (0.65), Inception (0.58), and LeNet (0.70). This shows more overlap between the predicted segmentation regions and the ground truth, further demonstrating the model's improved performance. The proposed model also outperforms ResNet50 (0.72), Inception (0.77), and LeNet (0.79) with a precision of 0.90, the metric measuring the model's ability to correctly identify relevant occurrences. Recall, a metric that gauges a model's capacity to find all relevant occurrences, gives the proposed model a score of 0.92; ResNet50 and Inception fall short with 0.89 and 0.77, respectively, although LeNet is slightly higher at 0.94.

Finally, the proposed fusion model outperforms ResNet50 (0.79), Inception (0.71), and LeNet (0.82) with an F1-score of 0.91. This higher F1-score suggests that the proposed model offers a superior precision/recall balance, which is crucial in situations of class imbalance. In conclusion, the proposed fusion model outperforms the competition in terms of effectiveness.

Fig. 10. Seeds extracted from Petri dishes using mask images of U-Net model.

Fig. 11. Collection of all the segmented masks from the proposed fusion model.
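The extraction-and-classification flow behind Figs. 10 and 11 (threshold the U-Net mask, find a bounding box per seed, classify each crop) can be sketched as follows; `unet_predict` and `cnn_predict` are hypothetical stand-ins for the trained models, and a naive connected-component pass supplies the per-seed boxes:

```python
import numpy as np

def connected_component_boxes(mask):
    """Naive 4-connected flood-fill labeling; returns one (t, l, b, r) box
    per connected blob of True pixels."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, t, l, b, r = [(i, j)], i, j, i, j
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    t, l, b, r = min(t, y), min(l, x), max(b, y), max(r, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((t, l, b + 1, r + 1))  # exclusive lower-right corner
    return boxes

def classify_seeds(image, unet_predict, cnn_predict, threshold=0.5):
    """Segment seeds with a U-Net-style model, crop each seed, and label
    each crop as germinated (True) or not (False)."""
    mask = unet_predict(image) > threshold            # binary seed mask
    results = []
    for (t, l, b, r) in connected_component_boxes(mask):
        crop = image[t:b, l:r]
        germinated = cnn_predict(crop) > threshold    # binary decision per seed
        results.append(((t, l, b, r), bool(germinated)))
    return results
```

In the paper's pipeline, the U-Net's combined binary mask (Fig. 9) plays the role of `mask`, and the crops in Fig. 10 are the inputs handed to the classification CNN.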
TABLE V
EVALUATION METRICS OF THE FUSION MODELS WITH STATE-OF-THE-ART PRETRAINED MODELS
Fig. 12. Comparison of the U-Net-only germination model and the fusion model.
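The ROC comparison reported below relies on the area under the ROC curve (AUC-ROC). AUC can be computed directly from classifier scores via the pair-counting (Mann-Whitney) identity: it is the probability that a randomly chosen positive outscores a randomly chosen negative. A minimal sketch:

```python
import numpy as np

def auc_roc(scores, labels):
    """Area under the ROC curve via the pair-counting identity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count wins (positive outscores negative) and half-credit ties
    # over all positive/negative pairs.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

A perfectly separating model scores 1.0, a perfectly inverted one 0.0, and a model with no discriminative power 0.5, which is why a consistently higher curve (as claimed for the proposed model in Fig. 13) implies better class separation across thresholds.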
The area under the ROC curve (AUC-ROC) measures a model's capacity to distinguish between positive and negative classes. The ROC curves of the proposed model and the state-of-the-art models are shown in Fig. 13. From the figure, it is noted that the performance of the proposed model is consistently superior to those of the LeNet, Inception, and ResNet50 models, indicating that the proposed model performs better across various threshold settings. The proposed model appears more capable of differentiating between positive and negative classes, resulting in a more reliable model for seed germination prediction.

TABLE VI
EVALUATION METRICS OF THE FUSION MODELS WITH STATE-OF-THE-ART MODELS

I. Model Deployment on Jetson Nano

A U-Net model with a 1.3 MB weight file and a CNN with a 22 KB weight file predict seed germination on the Jetson Nano in real time. A red bounding box indicates that the seed has not germinated, and a green bounding box indicates that it has germinated, as shown in Figs. 14 and 15.

Fig. 15. Germination prediction on Jetson Nano board for 100–150 seeds.

The U-Net's more complex design, which incorporates convolutional and max-pooling layers in its contracting path and an expanding path for exact localization, makes it the more capable model; the CNN model is lighter and quicker, but less precise.

These models continually analyze seed pictures to predict germination in less than 0.26 ms. Real-time analysis can give farmers and researchers rapid feedback, enhancing their operations.

IV. CONCLUSION

In this article, a novel fusion model for seed detection and germination classification is proposed that combines the U-Net and CNN architectures. By harnessing U-Net's capabilities in image segmentation and CNN's strengths in classification, the proposed approach enables effective seed germination classification. In addition, the model is specifically optimized for real-time processing by implementing it on the NVIDIA Jetson Nano embedded GPU platform; tested in a real-time environment, it achieved a measured latency of 0.26 ms. The proposed fusion model obtained 0.91 pixel accuracy, 0.84 IoU, and 0.90 precision, which are the best among the compared state-of-the-art models. Also, the proposed model requires fewer trainable parameters than all compared models except LeNet.

ACKNOWLEDGMENT

The authors would like to thank Dr. Mamta Pathak, a Principal Olericulturist at Punjab Agricultural University, Ludhiana, Punjab, India, for her continuous support.

REFERENCES

[1] J. Heslop-Harrison, "Plant development," Encyclopedia Britannica, 2022. Accessed: Jul. 2023. [Online]. Available: https://www.britannica.com/science/plant-development
[2] V. G. Dhanya et al., "Deep learning-based computer vision approaches for smart agricultural applications," Artif. Intell. Agriculture, vol. 6, pp. 211–229, 2022.
[3] R. P. S, "Testing seed for quality," in Seed Science and Technology, M. Dadlani and D. K. Yadava, Eds. Singapore: Springer, 2023.
[4] Q. Peng, L. Tu, Y. Wu, Z. Yu, G. Tang, and W. Song, "Automatic monitoring system for seed germination test based on deep learning," J. Elect. Comput. Eng., vol. 2022, 2022, Art. no. 4678316, doi: 10.1155/2022/4678316.
[5] M. L. Srivastava, "Seed germination, mobilization of food reserves, and seed dormancy," in Plant Growth and Development. San Diego, CA, USA: Academic Press, 2002, pp. 447–471.
[6] H. Khaeim, Z. Kende, M. Jolánkai, G. P. Kovács, C. Gyuricza, and Á. Tarnawa, "Impact of temperature and water on seed germination and seedling growth of maize (Zea mays L.)," Agronomy, vol. 12, no. 2, 2022, Art. no. 397, doi: 10.3390/agronomy12020397.
[7] R. Anju, D. Poonam, J. U. Chand, S. K. Dev, S. Kadambot, and H. M. Nayyar, "Developing climate-resilient chickpea involving physiological and molecular approaches with a focus on temperature and drought stresses," Front. Plant Sci., vol. 10, 2020, Art. no. 1759.
[8] A. Kamilaris and F. X. Prenafeta-Boldú, "Deep learning in agriculture: A survey," Comput. Electron. Agriculture, vol. 147, pp. 70–90, 2018, doi: 10.1016/j.compag.2018.02.016.
[9] N. Genze et al., "Accurate machine learning-based germination detection, prediction and quality assessment of three grain crops," Plant Methods, vol. 16, 2020, Art. no. 157, doi: 10.1186/s13007-020-00699-x.
[10] Y. Nehoshtan et al., "Robust seed germination prediction using deep learning and RGB image data," Sci. Rep., vol. 11, 2021, Art. no. 22030, doi: 10.1038/s41598-021-01712-6.
[11] L. Benjamaporn and C. Pornpanomchai, "Application of image processing and computer vision on rice seed germination analysis," Int. J. Appl. Eng. Res., vol. 11, pp. 6800–6807, 2016.
[12] Y. Gulzar, Y. Hamid, A. B. Soomro, A. A. Alwan, and L. Journaux, "A convolution neural network-based seed classification system," Symmetry, vol. 12, no. 12, 2020, doi: 10.3390/sym12122018.
[13] F. Xijian, Z. Rui, T. Tardi, D. C. Sruti, and Y. Qiaolin, "A segmentation-guided deep learning framework for leaf counting," Front. Plant Sci., vol. 13, 2022, Art. no. 844522.
[14] A. Walter, F. Liebisch, and A. Hund, "Plant phenotyping: From bean weighing to image analysis," Plant Methods, vol. 11, 2015, Art. no. 14, doi: 10.1186/s13007-015-0056-8.
[15] Q. Xiao, X. Bai, C. Zhang, and Y. He, "Advanced high-throughput plant phenotyping techniques for genome-wide association studies: A review," J. Adv. Res., vol. 35, pp. 215–230, 2022.
[16] F. Forcella, R. L. B. Arnold, R. Sanchez, and C. M. Ghersa, "Modeling seedling emergence," Field Crops Res., vol. 67, no. 2, pp. 123–139, 2000.
[17] Y. Zhang, J. Yu, Y. Chen, W. Yang, W. Zhang, and Y. He, "Real-time strawberry detection using deep neural networks on embedded system (RTSD-Net): An edge AI application," Comput. Electron. Agriculture, vol. 192, 2022, Art. no. 106586.
[18] N. A. George-Jones et al., "Automated detection of vestibular schwannoma growth using a two-dimensional U-Net convolutional neural network," Laryngoscope, vol. 131, no. 2, pp. E619–E624, 2021.
[19] K. Hadi et al., "Development of pixel-wise U-Net model to assess performance of cereal sowing," Biosyst. Eng., vol. 208, pp. 260–271, 2021.
[20] C. Qian et al., "An improved U-Net network-based quantitative analysis of melon fruit phenotypic characteristics," J. Food Meas. Characterization, vol. 16, no. 5, pp. 4198–4207, 2022.
[21] W. Choi and Y.-J. Cha, "SDDNet: Real-time crack segmentation," IEEE Trans. Ind. Electron., vol. 67, no. 9, pp. 8016–8025, Sep. 2020, doi: 10.1109/TIE.2019.2945265.
[22] R. Ali and Y.-J. Cha, "Attention-based generative adversarial network with internal damage segmentation using thermography," Automat. Construction, vol. 141, 2022, Art. no. 104412, doi: 10.1016/j.autcon.2022.104412.
[23] J. Lewis, Y.-J. Cha, and J. Kim, "Dual encoder-decoder-based deep polyp segmentation network for colonoscopy images," Sci. Rep., vol. 13, no. 1, 2023, Art. no. 1183, doi: 10.1038/s41598-023-28530-2.
[24] K. Cao and X. Zhang, "An improved res-convolutional model for tree species classification using airborne high-resolution images," Remote Sens., vol. 12, no. 7, 2020, Art. no. 1128.
[25] K. V. Suma, D. B. Koppad, K. Awasthi, A. K. Phani, and R. Vikas, "Application of AI models in agriculture," in Proc. 4th Int. Conf. Circuits, Control, Commun. Comput., 2022, pp. 387–390, doi: 10.1109/I4C57141.2022.10057718.
[26] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Proc. Med. Image Comput. Comput.-Assist. Interv. (MICCAI), Munich, Germany, 2015.
[27] R. Xin, J. Zhang, and Y. Shao, "Complex network classification with convolutional neural network," Tsinghua Sci. Technol., vol. 25, no. 4, pp. 447–457, Aug. 2020.
[28] D. Kolosov, V. Kelefouras, P. Kourtessis, and I. Mporas, "Anatomy of deep learning image classification and object detection on commercial edge devices: A case study on face mask detection," IEEE Access, vol. 10, pp. 109167–109186, 2022, doi: 10.1109/ACCESS.2022.3214214.

Ramesh Reddy Donapati (Member, IEEE) received the B.Tech. degree in ECE and the M.Tech. degree in embedded systems from Jawaharlal Nehru Technological University, Hyderabad, India, in 2010 and 2012, respectively. He is currently working toward the Ph.D. degree in AI and IoT in precision agriculture with the National Institute of Technology, Warangal, India.
He is currently an Assistant Professor with the Dept. of ECE, VNR VJIET, Hyderabad. To his credit, one Indian patent is granted and more than 20 research articles have been published. Moreover, he has four sanctioned research projects funded by various government bodies. His research interests include AI and IoT in agriculture.

Ramalingaswamy Cheruku (Member, IEEE) received the B.Tech. degree in CSE from Jawaharlal Nehru Technological University, Kakinada (State University), India, in 2008, the M.Tech. degree in CSE from the Atal Bihari Vajpayee Indian Institute of Information Technology and Management, Gwalior, India, in 2011, and the Ph.D. degree in CSE from the National Institute of Technology Goa, Ponda, India, in 2018.
He is currently an Assistant Professor (Gr-I) with the Department of CSE, National Institute of Technology Warangal, Hanamkonda, India. He has authored or coauthored more than 50 research articles in reputed venues. Further, he has one granted Indian patent and one granted Australian patent.

Kodali Prakash (Senior Member, IEEE) received the bachelor degree in electronics and communication engineering from Jawaharlal Nehru Technological University, Hyderabad, India, in 2007, and the master degree in electro-mechatronics and the Ph.D. degree in flexible electronics from the Indian Institute of Science, Bangalore, India, in 2010 and 2016, respectively.
He is currently an Assistant Professor with the Department of Electronics and Communication Engineering, National Institute of Technology, Warangal, India. He has three granted patents to his credit and has authored or coauthored nearly 35 research articles. Further, he has six funded projects from various government bodies.