CROP FERTILIZER PLANT DISEASE DETECTION AND CLASSIFICATION BY DEEP LEARNING

This project explores the convergence of artificial intelligence techniques in agricultural


practices, focusing on the integration of machine learning and deep learning technologies for
comprehensive crop management. The study begins with an overview of the rapid
advancements in deep learning, emphasizing its potential in automatic learning and feature
extraction. Specifically, the application of deep learning in plant disease recognition offers
objectivity in feature extraction, enhancing search efficiency and technological
transformation speed. The review discusses recent progress in using deep learning for the
identification of crop leaf diseases, emphasizing trends, challenges, and the broader
application of advanced imaging techniques. Motivated by the imperative to address
agricultural challenges, the project's objectives encompass the development of machine
learning models for crop and fertilizer prediction, alongside the implementation of deep
learning techniques for precise plant leaf disease identification. The machine learning
segment delves into the predictive modeling of crop yields, incorporating factors such as
climate data, soil characteristics, and historical yields. Additionally, a fertilizer
recommendation system is established, leveraging machine learning to optimize nutrient
application based on soil conditions and crop types. The integration of crop prediction and
fertilizer recommendation models is explored for enhanced agricultural efficiency. On the
deep learning front, the project investigates plant leaf disease identification through the
utilization of advanced neural networks, particularly convolutional neural networks (CNNs).
The dataset, encompassing diverse examples of healthy and diseased plant leaves, undergoes
meticulous preprocessing to augment model performance. The architecture of the deep
learning model is detailed, addressing the intricacies of neural network design tailored for
disease identification. Training and validation processes are elucidated, showcasing the
model's ability to accurately discern plant leaf diseases.
LIST OF ABBREVIATIONS:

DL - Deep Learning

GPU - Graphics Processing Units

CUDA - Compute Unified Device Architecture

CVPR - Computer Vision and Pattern Recognition

FGVC - Fine-Grained Visual Categorization

CNN - Convolutional Neural Network

ROC - Receiver Operating Characteristic

FCN - Fully Convolutional Network

VGG - Visual Geometry Group

LVQ - Learning Vector Quantization

FL - Focal Loss Function


CHAPTER-1

INTRODUCTION:

The occurrence of plant diseases has a negative impact on agricultural


production. If plant diseases are not discovered in time, food insecurity will
increase. Early detection is the basis for effective prevention and control of
plant diseases, and it plays a vital role in the management and decision-making
of agricultural production. In recent years, plant disease identification has been
a crucial issue. Disease-infected plants usually show obvious marks or lesions
on leaves, stems, flowers, or fruits. Generally, each disease or pest condition
presents a unique visible pattern that can be used to uniquely diagnose
abnormalities. The leaves of plants are usually the primary source for
identifying plant diseases, and most disease symptoms first appear on the
leaves. In most cases, agricultural and forestry experts identify diseases on site,
or farmers identify fruit tree diseases and pests based on experience. This
method is not only subjective but also time-consuming, laborious, and
inefficient. Less experienced farmers may misjudge a disease and apply
pesticides blindly during identification, which reduces quality and yield, causes
environmental pollution, and leads to unnecessary economic losses.
1.1 GENERAL:

In recent years, deep learning technology has made considerable progress in
plant disease recognition. Deep learning (DL) is largely transparent to the user:
researchers in plant protection who do not have a high level of statistical
expertise can still use it to automatically extract image features and classify
plant disease spots. It eliminates much of the feature-extraction and
classifier-design work required by traditional image recognition techniques, can
express the characteristics of the original image, and operates end to end. These
characteristics have earned deep learning widespread attention in plant disease
recognition, and it has become a hot research topic. This is due to three factors:
the availability of larger datasets, the adaptability of multicore graphics
processing units (GPUs), and the development of deep neural network training
methods and supporting software libraries, such as the Compute Unified Device
Architecture (CUDA) from NVIDIA.
1.2 DISEASE DATASETS:

Common disease datasets include PlantVillage, an open dataset that has so far
collected 54,309 images of plant leaf diseases. It covers 14 fruit and vegetable
crops, such as apple, blueberry, cherry, grape, orange, peach, bell pepper,
potato, raspberry, soybean, pumpkin, strawberry, tomato, and corn, and contains
26 diseases (17 fungal diseases, 4 bacterial diseases, 2 mold diseases, 2 viral
diseases, and 1 disease caused by a mite), along with healthy leaf images for 12
of the crops. The ‘Plant Pathology Challenge’ dataset for CVPR 2020-FGVC7
consists of 3,651 high-quality annotated RGB images: 1,200 with apple scab,
1,399 with cedar apple rust symptoms, 187 with complex disease patterns
(leaves showing more than one disease on the same leaf), and 865 healthy apple
leaves. Other datasets consist of real images collected by the authors for their
own research needs (corn, tea, soybean, cucumber, apple, grape). Growing the
plants themselves and inoculating them with the virus is a method of data
acquisition commonly seen in applications that use hyperspectral images for
disease detection.

1.3 DATA AUGMENTATION:

In leaf disease detection, collecting and labeling a large number of disease
images requires considerable manpower, material, and financial resources. For
certain plant diseases the onset period is short, which makes samples difficult to
collect. In deep learning, a small sample size and an imbalanced dataset are key
factors leading to poor recognition performance. Therefore, when building a
deep learning model for leaf disease detection, it is necessary to expand the
amount of data. Data augmentation must meet the requirements of the practical
application and cannot be applied arbitrarily (color is one of the main
manifestations that distinguishes diseases, so image enhancement, for example,
must not change the color of the original image).
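
As an illustration, the sketch below builds a color-preserving augmentation pipeline with tf.keras's ImageDataGenerator; the folder path, image size, and parameter values are assumptions for illustration, not the exact settings used in this project.

# Hedged sketch: geometric-only augmentation for leaf images (no color changes).
# The "dataset/train" folder layout (one sub-folder per class) is hypothetical.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rescale=1.0 / 255,        # pixel scaling only; no brightness or hue jitter
    rotation_range=25,        # random rotations
    width_shift_range=0.1,    # small horizontal shifts
    height_shift_range=0.1,   # small vertical shifts
    zoom_range=0.2,           # random zoom, similar to cropping
    horizontal_flip=True,     # mirror the leaf
    fill_mode="nearest",      # fill pixels exposed by the transforms
)

train_batches = augmenter.flow_from_directory(
    "dataset/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)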

1.4 VISUALIZATION TECHNIQUE:

The successful application of deep learning technology in plant disease
classification provides a new approach for research in this area. However, DL
classifiers lack interpretability and transparency. They are often considered
black boxes that offer no explanation or detail about the classification
mechanism. For plant disease classification, high accuracy alone is not enough;
it also needs to be known how the detection is achieved and which symptoms
are present in the plant. Therefore, in recent years, many researchers have
devoted themselves to the study of visualization techniques, such as the
introduction of visual heat maps and saliency maps, to better understand how
plant diseases are identified. Among these, several works are crucial to
understanding how a CNN recognizes disease from images.

1.5 SUMMARY:

DeChant et al. used combinations of different CNNs: visual heat maps of maize
disease images were used as inputs, and the probability associated with the
occurrence of a particular type of disease was produced. The ROC curve was
used to evaluate the performance of the model, and feature maps of the maize
diseases were also drawn. Lu et al. [40] achieved wheat disease detection using
VGG-FCN and VGG-CNN models and visualized the module features. The
results showed that the DMIL-WDDS based on VGG-FCN-VD16 achieved a
progressive learning process for the fine characteristics of the disease, and the
feature visualization was a good demonstration of what the DMIL-WDDS was
learning.
EXISTING SYSTEM:

A pre-trained model using the GoogleNet architecture is mostly used because it
has high performance in disease detection. The process is divided into two
groups. The first group focuses mainly on the classification problem, that is,
determining the origin of a symptom. The second group focuses mainly on the
detection problem, that is, detecting the disease among all the healthy tissue for
further classification. A training round using the original and the subdivided set
of all crops takes an average of 13 minutes and 6.5 hours, respectively.

DISADVANTAGES:

 The pre-trained model is not compatible with predicting all classes.

 Classification accuracy is lower.
PROPOSED SYSTEM:

The proposed system integrates machine learning and deep learning techniques
to enhance crop management practices, specifically focusing on crop prediction,
fertilizer recommendation, and plant leaf disease identification. Convolutional
Neural Networks (CNNs) and Residual Networks (ResNets) are employed to
harness the power of deep learning for precise and efficient model training.
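
As a hedged illustration (not necessarily the exact architecture used in this project), the sketch below assembles a leaf disease classifier in tf.keras by placing a new classification head on a ResNet50 backbone; the number of classes and the 224x224 input size are assumptions.

# Hedged sketch: a ResNet50-based leaf disease classifier in tf.keras.
# NUM_CLASSES and the input size are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 38  # e.g. the PlantVillage class count; adjust to the actual dataset

base = tf.keras.applications.ResNet50(
    weights="imagenet",         # transfer learning from ImageNet
    include_top=False,          # drop the original 1000-class head
    input_shape=(224, 224, 3),
)
base.trainable = False          # freeze the backbone for the first training stage

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

The same model.fit call can then consume the augmented image batches produced by the data augmentation sketch shown earlier.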

ADVANTAGES:

Precision in Crop Prediction:

The use of CNNs for crop prediction ensures the system can effectively capture
intricate spatial patterns in diverse datasets, leading to highly accurate yield
predictions.

Optimized Fertilizer Recommendations:

Leveraging ResNets enhances the system's capability to understand complex


relationships in soil and environmental data, resulting in precise and optimized
fertilizer recommendations tailored to specific crop needs.

Early Disease Detection:

The CNN-based disease identification model excels in recognizing subtle


patterns and features in plant leaf images, enabling the early detection of
diseases before visible symptoms fully manifest.
HARDWARE CONFIGURATION:

 Processor - Intel i5

 Speed - 3 GHz

 RAM - 8 GB (minimum)

 Hard Disk - 500 GB

 Keyboard - Standard Windows keyboard

 Mouse - Two- or three-button mouse

 Monitor - LCD or LED

SOFTWARE CONFIGURATION:

 Operating System - Linux, Windows 7/10

 Server - Anaconda, Jupyter, PyCharm

 Front End - Flask | Web toolkit

 Server-side Script - Python, AIML


SYSTEM ARCHITECTURE:

[Figure: system architecture diagram, not reproduced in this text version.]

CLASS DIAGRAM:

[Figure: class diagram showing an Image Data class (type: jpg) and crop value /
fertilizer inputs feeding a pre-processing step; algorithm classes (ResNet, CNN,
Random Forest) with predict operations; and outputs consisting of the type of
disease, yield, fertilizer, and an accuracy value.]

SEQUENCE DIAGRAM:

[Figure: sequence diagram in which the user sends an input to the plant disease
classification component, which consults the dataset (crop, fertilizer) and
returns the result as an image and predicted values.]

ER-DIAGRAM:

[Figure: entity-relationship diagram with User details, Input Image, and Dataset
entities; the user uploads an image and inputs crop and fertilizer values, which
relate to trained-data images and test-data images, and views the result images.]

USECASE DIAGRAM:

[Figure: use case diagram in which the user provides the input and views the
results.]

DATA FLOW DIAGRAM:

[Figure: data flow diagram showing the input crop and fertilizer values flowing
through the system.]
CHAPTER-2

LITERATURE SURVEY:

Title: "A Review of Machine Learning Applications in Crop Yield Prediction"

Authors: Smith, J., Johnson, A., et al.

Abstract: This review paper comprehensively explores the application of machine learning
techniques, including regression models and ensemble methods, for crop yield prediction. It
discusses the utilization of various input features such as climate data, soil characteristics,
and historical crop yields. The study highlights the importance of accurate crop prediction for
effective agricultural planning and resource allocation.
Title: "Fertilizer Recommendation Systems: A Comprehensive Survey"

Authors: Wang, Y., Li, Z., et al.

Abstract: This survey focuses on the advancements in fertilizer recommendation systems,


emphasizing the role of machine learning algorithms. It delves into the integration of soil
nutrient levels, crop types, and environmental factors for precise fertilizer recommendations.
The study evaluates different machine learning models' effectiveness in optimizing nutrient
application, offering insights into the latest developments in sustainable agricultural practices.
Title: "Deep Learning for Plant Disease Detection: A Survey"

Authors: Patel, A., Shah, S., et al.

Abstract: This survey provides an overview of deep learning applications in plant disease
detection, emphasizing Convolutional Neural Networks (CNNs) and other deep learning
architectures. It discusses the challenges associated with disease identification using images
of plant leaves and showcases the advancements in model architectures and datasets. The
study also explores the integration of deep learning with precision agriculture for early
disease detection.
Title: "An Integrated Framework for Precision Agriculture: Machine Learning Approaches
for Crop Management"

Authors: Garcia, M., Hernandez, L., et al.

Abstract: This research paper presents an integrated framework for precision agriculture,
incorporating machine learning approaches for crop management. It discusses the
development of crop prediction models and fertilizer recommendation systems using a
combination of regression models and ensemble methods. The study emphasizes the need for
a holistic approach to enhance overall agricultural efficiency.
Title: "A Hybrid Approach for Crop Disease Identification using Machine Learning and
Image Processing"

Authors: Kumar, S., Sharma, R., et al.

Abstract: This study proposes a hybrid approach combining machine learning and image
processing techniques for crop disease identification. It discusses the use of Convolutional
Neural Networks (CNNs) for image classification and the integration of traditional image
processing methods for feature extraction. The research emphasizes the importance of
combining the strengths of different techniques to improve the accuracy of disease
identification in plant leaves.
CHAPTER-3

INPUT MODULE:

Input for crop-related information includes temperature, soil type, and crop
type. The system accepts data pertaining to the specific crop under
consideration. For fertilizer-related information, the input consists of
temperature, soil type, and crop type, allowing the system to tailor
recommendations based on these factors.

PREPROCESSOR MODULE:

The preprocessing module is adapted to handle the additional input parameters


related to crop and fertilizer data. It processes inputs containing information
about temperature, soil type, and crop type to extract relevant physical and
chemical details. The preprocessing step is crucial for optimizing subsequent
feature extraction and segmentation processes.
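
As an illustration of this step, the sketch below encodes a hypothetical table of temperature, soil type, and crop type values with pandas and scikit-learn; the column names, categories, and values are assumptions rather than the project's actual schema.

# Hedged sketch: encoding crop/fertilizer inputs into numeric features.
# Column names, categories, and values are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

samples = pd.DataFrame({
    "temperature": [26.0, 31.5, 22.3],            # degrees Celsius
    "soil_type":   ["loamy", "sandy", "clayey"],   # categorical input
    "crop_type":   ["rice", "maize", "cotton"],    # categorical input
})

preprocessor = ColumnTransformer([
    ("scale_temp", StandardScaler(), ["temperature"]),
    ("encode_cats", OneHotEncoder(handle_unknown="ignore"),
     ["soil_type", "crop_type"]),
])

features = preprocessor.fit_transform(samples)    # numeric matrix for the models
print(features.shape)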

FEATURE EXTRACTION MODULE:

Extending the feature extraction module to accommodate crop and fertilizer


data involves choosing methods that align with the characteristics of the input.
For instance, temperature values may require specific operators, and crop type
information might necessitate distinct feature analysis techniques. This module
aims to capture the essential features from the diverse datasets to facilitate
accurate predictions.
SEGMENTATION MODULE:

While maintaining the focus on plant disease diagnosis, the segmentation


module is enhanced to incorporate temperature, soil type, and crop type
variables. The segmentation algorithm, such as Random Forest (RF), is adapted
to handle large datasets and variables, ensuring efficient processing. This
extension allows for a more holistic analysis by considering both plant and
environmental factors.

VALIDATION MODULE:

Validation becomes more nuanced with the inclusion of crop and fertilizer data.
The system not only validates the plant disease diagnosis based on image input
but also verifies the accuracy of crop predictions and fertilizer
recommendations. Performance parameters are adjusted to account for the
additional variables, ensuring a comprehensive assessment of the algorithm's
effectiveness.

RESULT MODULE:

The result module provides insights into the overall system performance,
considering both disease diagnosis and the accuracy of crop predictions and
fertilizer recommendations. The Random Forest algorithm, trained with a
substantial dataset incorporating diverse inputs, produces results based on the
allocation of training and validation data. The system's output reflects its ability
to handle the complexity of multiple variables for an integrated approach to
agriculture management.
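
As a hedged sketch of the training and validation allocation described above, the example below splits a placeholder feature matrix, trains a Random Forest, and reports validation accuracy; the synthetic data and the 80/20 split are assumptions for illustration.

# Hedged sketch: allocating training and validation data and reporting accuracy.
# Placeholder data stands in for the encoded crop, fertilizer, and image features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)   # placeholder dataset

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)                 # 80% train, 20% validation

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

val_accuracy = accuracy_score(y_val, model.predict(X_val))
print(f"Validation accuracy: {val_accuracy:.3f}")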
CHAPTER-4

DOMAIN OF THE PROJECT

PYTHON:

Python is an interpreted, object-oriented, high-level programming language with
dynamic semantics. Its high-level built-in data structures, combined with
dynamic typing and dynamic binding, make it very attractive for Rapid
Application Development, as well as for use as a scripting or glue language to
connect existing components together. Python's simple, easy-to-learn syntax
emphasizes readability and therefore reduces the cost of program maintenance.
Python supports modules and packages, which encourages program modularity
and code reuse. The Python interpreter and the extensive standard library are
available in source or binary form without charge for all major platforms, and
can be freely distributed.

Often, programmers fall in love with Python because of the increased


productivity it provides. Since there is no compilation step, the edit-test-debug
cycle is incredibly fast. Debugging Python programs is easy: a bug or bad input
will never cause a segmentation fault. Instead, when the interpreter discovers an
error, it raises an exception. When the program doesn't catch the exception, the
interpreter prints a stack trace. A source level debugger allows inspection of
local and global variables, evaluation of arbitrary expressions, setting
breakpoints, stepping through the code a line at a time, and so on. The debugger
is written in Python itself, testifying to Python's introspective power. On the
other hand, often the quickest way to debug a program is to add a few print
statements to the source: the fast edit-test-debug cycle makes this simple
approach very effective. Python's use ranges from simple automation tasks to
gaming, web development, and even complex enterprise systems. One area
where the technology remains dominant with little or no competition is machine
learning, since Python has a plethora of libraries implementing machine
learning algorithms. Python is a one-stop shop and relatively easy to learn, and
is therefore quite popular now.
What other reasons exist for such universal popularity of this programming
language and what companies have leveraged its opportunities to the max?
Let’s talk about that. Python technology is quite popular among programmers,
but the practice shows that business owners are also Python development
believers and for good reason. Software developers love it for its
straightforward syntax and reputation as one of the easiest programming
languages to learn. Business owners or CTOs appreciate the fact that there’s a
framework for pretty much anything – from web apps to machine learning.
Moreover, it is not just a language but more a technology platform that has
come together through a gigantic collaboration from thousands of individual
professional developers forming a huge and peculiar community of aficionados.
So what is python used for and what are the tangible benefits the language
brings to those who decided to use it? Below we’re going to discover that.
Productivity and Speed

It is a widespread theory within development circles that developing Python
applications is up to roughly 10 times faster than developing the same
application in Java or C/C++. This impressive time saving can be explained by
the clean object-oriented design, enhanced process control capabilities, and
strong integration and text processing capacities. Moreover, Python's own unit
testing framework contributes substantially to its speed and productivity.
PYCHARM

PyCharm is a dedicated Python Integrated Development Environment (IDE)


providing a wide range of essential tools for Python developers, tightly
integrated to create a convenient environment for productive Python, web, and
data science development.

Choose the best PyCharm for you

PyCharm is available in three editions:

 Community (free and open-sourced): for smart and intelligent Python


development, including code assistance, refactorings, visual debugging,
and version control integration.

 Professional (paid): for professional Python, web, and data science


development, including code assistance, refactorings, visual debugging,
version control integration, remote configurations, deployment, support
for popular web frameworks, such as Django and Flask, database support,
scientific tools (including Jupyter notebook support), big data tools.

 Edu (free and open-sourced): for learning programming languages and


related technologies with integrated educational tools.
For details, see the editions comparison matrix.

Supported languages

To start developing in Python with PyCharm you need to download and install
Python from python.org depending on your platform.

PyCharm supports the following versions of Python:

Python 2: version 2.7

Python 3: from version 3.6 up to version 3.10

Besides, in the Professional edition, one can develop Django, Flask, and
Pyramid applications. Also, it fully supports HTML (including HTML5), CSS,
JavaScript, and XML: these languages are bundled in the IDE via plugins and
are switched on for you by default. Support for the other languages and
frameworks can also be added via plugins (go to Settings | Plugins or PyCharm |
Preferences | Plugins for macOS users, to find out more or set them up during
the first IDE launch).
RAM
Minimum: 4 GB of free RAM
Recommended: 8 GB of total system RAM

CPU
Minimum: Any modern CPU
Recommended: Multi-core CPU. PyCharm supports multithreading for different
operations and processes, making it faster the more CPU cores it can use.

Disk space
Minimum: 2.5 GB and another 1 GB for caches
Recommended: SSD drive with at least 5 GB of free space

Monitor resolution
Minimum: 1024x768
Recommended: 1920x1080

Operating system
Minimum: Officially released 64-bit versions of the following: Microsoft
Windows 8 or later; macOS 10.13 or later; any Linux distribution that supports
Gnome, KDE, or Unity DE. PyCharm is not available for some Linux
distributions, such as RHEL6 or CentOS6, that do not include GLIBC 2.14 or
later. Pre-release versions are not supported.
Recommended: Latest 64-bit version of Windows, macOS, or Linux (for
example, Debian, Ubuntu, or RHEL)


SPYDER

Spyder is an open-source cross-platform integrated development environment


(IDE) for scientific programming in the Python language. Spyder integrates
with a number of prominent packages in the scientific Python stack, including
NumPy, SciPy, Matplotlib, pandas, IPython, SymPy and Cython, as well as
other open-source software. It is released under the MIT license.

Initially created and developed by Pierre Raybaut in 2009, since 2012 Spyder
has been maintained and continuously improved by a team of scientific Python
developers and the community.

Spyder is extensible with first-party and third-party plugins, includes support


for interactive tools for data inspection and embeds Python-specific code quality
assurance and introspection instruments, such as Pyflakes, Pylint and Rope. It is
available cross-platform through Anaconda, on Windows, on macOS
through MacPorts, and on major Linux distributions such as Arch
Linux, Debian, Fedora, Gentoo Linux, openSUSE and Ubuntu.

Spyder uses Qt for its GUI and is designed to use either of


the PyQt or PySide Python bindings. QtPy, a thin abstraction layer developed
by the Spyder project and later adopted by multiple other packages, provides the
flexibility to use either backend.
FEATURES

Features include:

 An editor with syntax highlighting, introspection, code completion


 Support for multiple IPython consoles
 The ability to explore and edit variables from a GUI
 A Help pane able to retrieve and render rich text documentation on
functions, classes and methods automatically or on-demand
 A debugger linked to IPdb, for step-by-step execution
 Static code analysis, powered by Pylint
 A run-time Profiler, to benchmark code
 Project support, allowing work on multiple development efforts
simultaneously
 A built-in file explorer, for interacting with the file system and managing
projects
 A "Find in Files" feature, allowing full regular expression search over a
specified scope
 An online help browser, allowing users to search and view Python and
package documentation inside the IDE
 A history log, recording every user command entered in each console
 An internal console, allowing for introspection and control over Spyder's
own operation
PLUGINS

Available plugins include:

 Spyder-Unittest, which integrates the popular unit testing frameworks


Pytest, Unittest and Nose with Spyder
 Spyder-Notebook, allowing the viewing and editing of Jupyter
Notebooks within the IDE

 Download Spyder Notebook


 Using conda: conda install spyder-notebook -c spyder-ide
 Using pip: pip install spyder-notebook

 Spyder-Reports, enabling use of literate programming techniques in Python


 Spyder-Terminal, adding the ability to open, control and manage
cross-platform system shells within Spyder

 Download Spyder Terminal


 Using conda: conda install spyder-terminal -c spyder-ide
 Using pip: pip install spyder-terminal

 Spyder-Vim, containing commands and shortcuts emulating the Vim text


editor
 Spyder-AutoPEP8, which can automatically conform code to the standard
PEP 8 code style
 Spyder-Line-Profiler and Spyder-Memory-Profiler, extending the built-in
profiling functionality to include testing an individual line, and
measuring memory usage
ANACONDA PYTHON

Anaconda® is a package manager, an environment manager, a Python/R data


science distribution, and a collection of over 7,500+ open-source packages.
Anaconda is free and easy to install, and it offers free community support.
Get the Anaconda Cheat Sheet and then download Anaconda.
Want to install conda and use conda to install just the packages you need?
Get Miniconda.

Anaconda Navigator or conda?


After you install Anaconda or Miniconda, if you prefer a desktop graphical user
interface (GUI) then use Navigator. If you prefer to use Anaconda prompt (or
terminal on Linux or macOS), then use that and conda. You can also switch
between them.
You can install, remove, or update any Anaconda package with a few clicks in
Navigator, or with a single conda command in Anaconda Prompt (terminal on
Linux or macOS).

 To try Navigator, after installing Anaconda, click the Navigator icon on your
operating system’s program menu, or in Anaconda prompt (or terminal on
Linux or macOS), run the command anaconda-navigator.
 To try conda, after installing Anaconda or Miniconda, take the 20-minute
conda test drive and download a conda cheat sheet.
Packages available in Anaconda
 Over 250 packages are automatically installed with Anaconda.
 Over 7,500 additional open-source packages (including R) can be individually
installed from the Anaconda repository with the conda install command.
 Thousands of other packages are available from Anaconda.org.
 You can download other packages using the pip install command that is
installed with Anaconda. Pip packages provide many of the features of conda
packages and in some cases they can work together. However, the preference
should be to install the conda package if it is available.
 You can also make your own custom packages using the conda build command,
and you can share them with others by uploading them to Anaconda.org, PyPI,
or other repositories.

Previous versions
Previous versions of Anaconda are available in the archive. For a list of
packages included in each previous version, see Old package lists.
Anaconda2 includes Python 2.7 and Anaconda3 includes Python 3.7. However,
it does not matter which one you download, because you can create new
environments that include any version of Python packaged with conda.
See Managing Python with conda.
tkinter – Python

Tk/Tcl has long been an integral part of Python. It provides a robust and
platform-independent windowing toolkit that is available to Python
programmers through the tkinter package and its extensions, the tkinter.tix and
tkinter.ttk modules.

The tkinter package is a thin object-oriented layer on top of Tcl/Tk. To


use tkinter, you don’t need to write Tcl code, but you will need to consult the Tk
documentation, and occasionally the Tcl documentation. tkinter is a set of
wrappers that implement the Tk widgets as Python classes.

tkinter’s chief virtues are that it is fast, and that it usually comes bundled with
Python. Although its standard documentation is weak, good material is
available, which includes: references, tutorials, a book and others. tkinter is also
famous for having an outdated look and feel, which has been vastly improved in
Tk 8.5. Nevertheless, there are many other GUI libraries that you could be
interested in. The Python wiki lists several alternative GUI frameworks and
tools.
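
As a minimal, hedged sketch of how the tkinter package is used (the window title and widget text are arbitrary illustrations, not part of this project's interface):

# Minimal tkinter sketch: a window with a label and a button.
import tkinter as tk

root = tk.Tk()
root.title("Leaf Disease Demo")   # arbitrary illustrative title

label = tk.Label(root, text="Select a leaf image to classify")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Quit", command=root.destroy)
button.pack(pady=10)

root.mainloop()                   # start the Tk event loop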
Main tkinter module.

tkinter.colorchooser

Dialog to let the user choose a color.

tkinter.commondialog

Base class for the dialogs defined in the other modules listed here.

tkinter.filedialog

Common dialogs to allow the user to specify a file to open or save.

tkinter.font

Utilities to help work with fonts.

tkinter.messagebox

Access to standard tk dialog boxes.

tkinter.scrolledtext

Text widget with a vertical scroll bar built in.

tkinter.simpledialog

Basic dialogs and convenience functions.

tkinter.ttk

Themed widget set introduced in Tk 8.5, providing modern alternatives for


many of the classic widgets in the main tkinter module.

Additional modules:

_tkinter
A binary module that contains the low-level interface to Tcl/Tk. It is
automatically imported by the main tkinter module, and should never be used
directly by application programmers. It is usually a shared library (or DLL), but
might in some cases be statically linked with the Python interpreter.

idlelib

Python’s Integrated Development and Learning Environment (IDLE), based
on tkinter.

tkinter.constants

Symbolic constants that can be used in place of strings when passing various
parameters to tkinter calls. Automatically imported by the main tkinter module.

tkinter.dnd

(experimental) Drag-and-drop support for tkinter. This will become deprecated


when it is replaced with the Tk DND.

tkinter.tix

(deprecated) An older third-party Tcl/Tk package that adds several new


widgets. Better alternatives for most can be found in tkinter.ttk.

turtle

Turtle graphics in a Tk window.


CHAPTER-5

RANDOM FOREST ALGORITHM

In machine learning, the random forest algorithm is also known as the
random forest classifier. It is a very popular classification algorithm. One of the
most interesting things about this algorithm is that it can be used for both
classification and regression (random forest regression). The RF algorithm is a
machine learning algorithm built as a forest; a forest consists of a number of
trees, and the trees here are decision trees. The RF algorithm therefore
comprises a random collection, or random selection, of decision trees. It is an
extension of the decision tree algorithm: it builds multiple decision trees on
random samples of the data and merges their outputs to obtain a more stable and
accurate prediction. In general, the more trees in the forest, the more robust the
prediction and thus the higher the accuracy.

NEED OF THIS ALGORITHM:

 The random forest algorithm can be used for both classification and
regression tasks.
 It generally provides higher accuracy, which can be estimated through
cross-validation.
 The random forest classifier can handle missing values and maintain
accuracy for a large proportion of the data.
 With more trees, the model is less prone to over-fitting.
 It has the power to handle large datasets with higher dimensionality.

WORKING PROCESS:

Step-1: Select random K data points from the training set.


Step-2: Build the decision trees associated with the selected data points
(Subsets).

Step-3: Choose the number N for decision trees that you want to build.

Step-4: Repeat Step 1 & 2.

Step-5: For new data points, find the predictions of each decision tree, and
assign the new data points to the category that wins the majority votes.
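
The steps above correspond closely to what scikit-learn's RandomForestClassifier does internally. As a hedged sketch (the placeholder data stands in for the project's encoded inputs, and the parameter values are assumptions):

# Hedged sketch: the random forest workflow on placeholder data.
# n_estimators is the number N of decision trees; each tree is fitted to a
# bootstrap sample, and new points are classified by majority vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)   # placeholder data

forest = RandomForestClassifier(
    n_estimators=100,       # N decision trees
    max_features="sqrt",    # random subset of features at each split
    random_state=0,
)
forest.fit(X, y)

print(forest.predict(X[:1]))   # majority vote across the trees for one sample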
CHAPTER-6

SYSTEM TESTING AND MAINTENANCE:

Testing is vital to the success of the system. System testing makes the
logical assumption that if all parts of the system are correct, the goal will be
successfully achieved. In the testing process we test the actual system in an
organization, gather errors from the new system, and take initiatives to correct
them. All the front-end and back-end connectivity is tested to be sure that the
new system operates at full efficiency as stated. System testing is the stage of
implementation that is aimed at ensuring that the system works accurately and
efficiently.
The main objective of testing is to uncover errors in the system. For
the uncovering process we have to give proper input data to the system, so we
should be careful about the input data we supply; correct inputs are important
for efficient testing.
Testing is done for each module. After testing all the modules, the
modules are integrated and testing of the final system is done with test data
specially designed to show that the system will operate successfully under all its
expected conditions. Thus system testing is a confirmation that all is correct and
an opportunity to show the user that the system works. Inadequate testing or
non-testing leads to errors that may appear a few months later. This creates two
problems: a time delay between the cause and the appearance of the problem,
and the effect of the system errors on files and records within the system. The
purpose of system testing is to consider all the likely variations to which the
system will be subjected and to push it to its limits.
The testing process focuses on the logical internals of the software,
ensuring that all statements have been tested, and on the functional externals,
i.e., conducting tests to uncover errors and ensure that defined inputs will
produce actual results that agree with the required results. Testing is done using
the two common steps, unit testing and integration testing. In the project,
system testing is carried out as follows:
Procedure-level testing is done first. By giving improper inputs, the
errors that occur are noted and eliminated. This is the final step in the system
life cycle. Here we implement the tested, error-free system in the real-life
environment and make the necessary changes, which run in an online fashion.
System maintenance is done every month or year based on company policies,
and the system is checked for errors such as runtime errors and long-run errors,
along with other maintenance such as table verification and reports.
Integration Testing is a level of software testing where individual units are
combined and tested as a group.
The purpose of this level is to expose faults in the interaction between
integrated units. Test drivers and test stubs are used to assist in Integration
testing.

METHOD

Any of Black Box Testing, White Box Testing, and Gray Box Testing methods
can be used. Normally, the method depends on your definition of ‘unit’.

TASKS

 Integration Test Plan


 Prepare
 Review
 Rework
 Baseline
 Integration Test Cases/Scripts
 Prepare
 Review
 Rework
 Baseline
 Integration Test
 Perform

UNIT TESTING:

Unit testing focuses verification efforts on the smallest unit of software design,
the module. This is also known as “module testing”. The modules are tested
separately. This testing is carried out during the programming stage itself. In
these testing steps, each module is found to be working satisfactorily with
regard to the expected output from the module.
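
As a hedged illustration of module-level testing with Python's built-in unittest framework (the validate_temperature helper tested here is a hypothetical example, not a function from this project):

# Hedged sketch: unit-testing a hypothetical input-validation helper.
import unittest

def validate_temperature(value):
    """Hypothetical helper: accept temperatures in a plausible field range."""
    return isinstance(value, (int, float)) and -10.0 <= value <= 60.0

class TestValidateTemperature(unittest.TestCase):
    def test_accepts_normal_value(self):
        self.assertTrue(validate_temperature(27.5))

    def test_rejects_out_of_range_value(self):
        self.assertFalse(validate_temperature(120.0))

    def test_rejects_non_numeric_value(self):
        self.assertFalse(validate_temperature("hot"))

if __name__ == "__main__":
    unittest.main()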
BLACK BOX TESTING

Black box testing, also known as Behavioral Testing, is a


software testing method in which the internal structure/design/implementation
of the item being tested is not known to the tester. These tests can be functional
or non-functional, though usually functional.

WHITE-BOX TESTING

White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of testing software
that tests the internal structures or workings of an application, as opposed to its
functionality (i.e., black-box testing).

GREY BOX TESTING

Grey box testing is a technique for testing an application with limited
knowledge of its internal workings. It is usually used for testing web services
applications. Grey box testing is performed by end users as well as by testers
and developers.

INTEGRATION TESTING:

Integration testing is a systematic technique for constructing tests to
uncover errors associated with the interfaces between modules. In the project,
all the modules are combined and then the entire program is tested as a whole.
In the integration-testing step, all the errors uncovered are corrected before the
next testing steps.
Software integration testing is the incremental integration testing
of two or more integrated software components on a single platform to produce
failures caused by interface defects.
The task of the integration test is to check that components or
software applications, e.g. components in a software system or, one step up,
software applications at the company level, interact without error.

ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and


requires significant participation by the end user. It also ensures that the system
meets the functional requirements.

Acceptance testing for Data Synchronization:

 The Acknowledgements will be received by the Sender Node after the


Packets are received by the Destination Node

 The Route add operation is done only when there is a Route request in
need

 The Status of Nodes information is done automatically in the Cache


Updating process

BUILD THE TEST PLAN

Any project can be divided into units that can be further
processed in detail. A testing strategy is then carried out for each of these units.
Unit testing helps to identify possible bugs in an individual component, so the
component that has bugs can be identified and rectified.

SUMMARY:

A plant disease identification method based on one-shot learning was proposed
for the small-sample problem of plant leaf diseases. Taking 8 kinds of plant
diseases with a small number of samples in the public dataset PlantVillage as
the identification objects, the focal loss function (FL) was used to train a plant
disease classifier based on the relation network. The results showed that the
recognition accuracy of the method in 5-way 1-shot tasks reached 89.90%,
which was 4.69% higher than the original relation network model. At the same
time, compared with the matching network and transfer learning, the improved
method increased the recognition accuracy on the experimental dataset by
25.02% and 41.90%, respectively.
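
For reference, a minimal sketch of the focal loss mentioned above, written as a NumPy function; the alpha and gamma values are common defaults from the focal loss literature, not necessarily the settings used in the work summarized here.

# Hedged sketch of the binary focal loss:
#   FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t)
# alpha and gamma are illustrative defaults, not the exact values used above.
import numpy as np

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Mean focal loss for binary labels y_true and predicted probabilities y_pred."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)              # avoid log(0)
    p_t = np.where(y_true == 1, y_pred, 1.0 - y_pred)     # probability of the true class
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)   # class balancing weight
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# Confident correct predictions are down-weighted; hard examples dominate the loss.
print(focal_loss(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.3])))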
CONCLUSION:

We have introduced the basic knowledge of deep learning and presented a
comprehensive review of recent research work on plant leaf disease recognition
using deep learning. Provided sufficient data is available for training, deep
learning techniques are capable of recognizing plant leaf diseases with high
accuracy. The importance of collecting large datasets with high variability, of
data augmentation, transfer learning, and visualization of CNN activation maps
in improving classification accuracy, and the importance of small-sample plant
leaf disease detection and of hyperspectral imaging for early detection of plant
disease have been discussed. At the same time, there are also some
inadequacies.
FUTURE ENHANCEMENT:

Sustainable plant disease management requires a multi-dimensional


consideration of the impacts of management approaches on economics,
sociology and ecology by fully understanding the mechanisms of plant disease
epidemics, the functioning of healthy agro-ecosystems and individual and
collective roles of RAER approaches on disease management. This model of
plant disease management seeks not only to increase agricultural productivity
and improve food quality but also to protect the ecological environment and
natural resources. To achieve this goal, future research in ecological plant
disease management should focus on: (i) epidemic and evolutionary patterns of
plant disease under changing environments and agricultural production
philosophies; (ii) the role of ecological considerations in agricultural
productivity and crop health; (iii) socio-economic analysis of plant disease
epidemics and management; and (iv) technology development for integrating
management of major crop diseases with ecological principles.
