Classification of Banana Leaf
A PROJECT REPORT
Submitted by
AFRIN FATHIMA K (Reg.No:2015104005)
ANCY A (Reg.No:2015104015)
ANU PRIYA R (Reg.No:2015104020)
SETHU INSTITUTE OF TECHNOLOGY
AN AUTONOMOUS INSTITUTION
BONAFIDE CERTIFICATE
SIGNATURE SIGNATURE
Mrs. Helina Rajini Suresh M.E., (Ph.D.) Mrs. R. Sivaranjani M.E., (Ph.D.)
HEAD OF THE DEPARTMENT SUPERVISOR
Department of ECE Professor/Department of ECE
Sethu Institute of Technology Sethu Institute of Technology
Pulloor, Kariapatti-626 115 Pulloor,Kariapatti-626 115
Submitted for the 15UEC804 - Project Work End Semester Examination held at
Sethu Institute of Technology on …………………….
ACKNOWLEDGEMENT
First, we would like to thank God the Almighty for giving us the talent
and the opportunity to complete this project.

We thank Dr. A. SENTHIL KUMAR M.E., Ph.D., for his kind guidance,
cooperative encouragement, inspiration and the keen interest shown
throughout the project.

We would like to express our deep sense of gratitude to our Head of the
Department, Mrs. HELINA RAJINI SURESH M.E., (Ph.D.), who extended her
heartiest encouragement, advice and valuable guidance throughout this
project.
ABSTRACT
TABLE OF CONTENTS
ABSTRACT
1 INTRODUCTION
1.1.1 DIGITIZER
2.3.4 IMAGE RECOGNITION & INTERPRETATION
3 LITERATURE SURVEY
5 PROPOSED WORK
5.5.1 GLCM
5.6 CLASSIFICATION
6 SOFTWARE DESCRIPTION
6.1 INTRODUCTION
8 CONCLUSION
REFERENCES
LIST OF FIGURES
FIGURE NO.  TITLE
4.3  ANTHRACNOSE
LIST OF ABBREVIATIONS
ENT - Entropy
E - Energy
CHAPTER 1
INTRODUCTION
1.1.1 Digitizer
1.1.2 Image Processing System
enhancing, removing noise, isolating regions, etc. Segmentation partitions an
image into its constituent parts or objects. The output of segmentation is usually
raw pixel data, which consists of either the boundary of the region or the pixels
in the region themselves. Representation is the process of transforming the raw
pixel data into a form useful for subsequent processing by the computer.
Description deals with extracting features that are basic in differentiating one
class of objects from another. Recognition assigns a label to an object based on
the information provided by its descriptors. Interpretation involves assigning
meaning to an ensemble of recognized objects. The knowledge about a problem
domain is incorporated into the knowledge base. The knowledge base guides the
operation of each processing module and also controls the interaction between
the modules. Not all modules need necessarily be present for a specific function.
The composition of the image processing system depends on its application.
The frame rate of the image processor is normally around 25
frames/second.
1.1.3 Digital Computer
Mathematical processing of the digitized image such as convolution,
averaging, addition, subtraction, etc. are done by the computer.
1.1.4 Mass Storage
The secondary storage devices normally used are floppy disks, CD ROMs
etc.
1.1.5 Hard Copy Device
The hard copy device is used to produce a permanent copy of the image
and for the storage of the software involved.
1.1.6 Operator console
The operator console consists of equipment and arrangements for
verification of intermediate results and for alterations in the software as and
when required. The operator is also able to check for any resulting errors
and to enter the requisite data.
1.2 Applications of Image Processing
Importance and necessity of digital image processing stems from two
principal application areas: Improvement of pictorial information for human
interpretation and Processing of scene data for autonomous machine perception.
Digital image processing has a broad spectrum of applications such as remote
sensing, image storage and transmission for business applications, medical
imaging, acoustic imaging, and automated inspection of industrial parts.
Images acquired by satellites are useful in tracking of earth resources,
geographical mapping, prediction of agricultural crops, urban growth, weather,
flood and fire control. Space imaging applications include recognition and
analysis of objects contained in images obtained from deep space-probe
missions. There are also medical applications such as processing of X-Rays,
Ultrasonic scanning, Magnetic Resonance Imaging, Nuclear Magnetic
Resonance Imaging, etc.
In addition to the above mentioned applications, digital image processing
is now being used in solving a wide variety of problems. Though unrelated,
these problems commonly require methods capable of enhancing information
for human interpretation and analysis. Image enhancement and restoration
procedures are used to process degraded images of unrecoverable objects.
Successful applications of image processing concepts are found in astronomy,
defense, biology and industrial applications. The images may be used in the
detection of tumors or for screening the patients. The current major area of
application of digital image processing techniques is in solving the problem of
machine vision.
CHAPTER 2
FUNDAMENTALS OF IMAGE PROCESSING
a. The amount of source light incident on the scene being viewed, and
b. The amount of light reflected by the objects in the scene.
The former is known as the Illumination and the latter is known as the
Reflectance components of the image.
2.2 Gray Scale
The intensity of a monochrome image f at coordinates (x, y) is known as
the gray level (l) of the image at that point. It is evident that

Lmin ≤ l ≤ Lmax --------- (2.1)

where Lmin is the minimum gray level and Lmax is the maximum gray level.
The only requirement is that Lmin be positive and Lmax be finite.

The interval [Lmin, Lmax] is called the gray scale of the image. Normally
the image is shifted to the interval [0, L], where l = 0 is considered black
and l = L is considered white.
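The shift of the gray scale to the interval [0, L] can be sketched as a simple linear mapping. The following Python fragment is illustrative only (the project's implementation is in MATLAB); the function name `rescale_gray`, the flat list input and L = 255 are assumptions:

```python
def rescale_gray(levels, L=255):
    """Linearly shift measured gray levels into the interval [0, L]."""
    lmin, lmax = min(levels), max(levels)
    if lmax == lmin:
        # constant image: every pixel maps to black (0)
        return [0 for _ in levels]
    # lmin -> 0 (black), lmax -> L (white)
    return [round((v - lmin) * L / (lmax - lmin)) for v in levels]

print(rescale_gray([10, 20, 30]))  # lowest level maps to 0, highest to 255
```
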
2.3 Classes in Image Processing
An image processing system may handle a number of problems and have
a number of applications but it mostly involves the following processes known
as the basic classes in image processing
1. Image Representation and Description
2. Image Enhancement
3. Image Restoration
4. Image Recognition and Interpretation
5. Image Segmentation
6. Image Reconstruction
7. Image Data Compression
2.3.1 Image Representation and Description
Any processed image must be represented and described in a form
suitable for further computer processing. Basically, representing a region
involves two choices
1. In terms of its external characteristics (its boundary), and
2. In terms of its internal characteristics (the pixels comprising the
region).
The next task is to describe the region based on the chosen representation.
Generally an external representation is chosen when the primary focus is on
shape characteristics. An internal representation is selected when the primary
focus is on reflectivity characteristics such as color and texture.
Some of the available representation approaches are,
1. Chain codes
2. Polygonal approximations
3. Signatures
4. Boundary segments
2.3.2 Image Enhancement
The principal objective of an enhancement technique is to process an image
so that the result is more suitable than the original for a specific application
[1]. Most enhancement techniques are very much problem oriented, and hence
enhancement for one problem may turn out to be degradation for another.

Enhancement approaches may be classified into two broad categories:
spatial domain methods and frequency domain methods. The former refers to
processing the image in the image plane (pixels) itself, while the latter
techniques are based on modifying the Fourier (or any other) transform of an
image. In general, enhancement techniques for practical problems involve
various combinations of methods from both categories.
Some examples of enhancement operations are edge enhancement,
pseudo coloring, histogram equalization, noise filtering, unsharp masking,
sharpening, magnifying, etc. The enhancement process does not increase the
inherent information content present in the image but only tries to present it in a
suitable manner. Enhancement operations may be either local or global. Global
operations operate on the entire image at a time, while local operations define
spatial masks (small sub-images) over which the operation is performed.
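One of the enhancement operations listed above, histogram equalization, can be sketched as follows. This is an illustrative Python fragment (the report's own tooling is MATLAB); the flat pixel-list input and the name `equalize` are assumptions:

```python
def equalize(pixels, L=255):
    """Histogram equalization: redistribute gray levels via the CDF."""
    n = len(pixels)
    # histogram of gray levels
    hist = {}
    for v in pixels:
        hist[v] = hist.get(v, 0) + 1
    # cumulative distribution function per level
    cdf, total = {}, 0
    for level in sorted(hist):
        total += hist[level]
        cdf[level] = total / n
    # map each pixel through the CDF, stretched to [0, L]
    return [round(cdf[v] * L) for v in pixels]
```

As a global operation, the same mapping is applied to every pixel of the image at once.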
2.3.3 Image Restoration
The ultimate goal of restoration techniques (as in image enhancement) is
to improve the image in some sense. However, unlike enhancement, restoration
is a process that attempts to recover an image that has been degraded by using
some apriori knowledge of the degradation phenomenon. Thus restoration
techniques are oriented towards modeling the degradation and applying the
inverse process in order to recover the original image. This approach usually
involves formulating a criterion of goodness that will yield some optimal
estimate of the desired result.
Early techniques for digital image restoration were derived mostly from
frequency domain concepts. However, modern methods take advantage of the
algebraic approach. Although a direct solution by algebraic methods generally
involves the manipulation of large systems of simultaneous equations, under
certain conditions computational complexities can be reduced to the same level
as required by traditional frequency domain restoration techniques. Restoration
techniques may be either linear or non-linear.
Image restoration may be classified into three major types:
1. Restoration models: image formation, detector and recorder, noise
model, sampled observation.
2. Linear filtering: inverse/pseudo-inverse filter, Wiener filter,
FIR filter, Kalman filter, semi-recursive filter.
3. Other methods: speckle noise reduction, maximum entropy
restoration, Bayesian methods, blind deconvolution, etc.
2.3.4 Image Recognition and Interpretation
Image recognition or analysis is the process of discovering, identifying and
understanding patterns that are relevant to the performance of an image-based
task. One of the principal goals of image analysis is to endow a machine with
visual capabilities approximating those of human beings. An automated image
analysis system is capable of exhibiting various degrees of intelligence.
Some of the associated characteristics are,
1. The ability to extract pertinent information from a background of
irrelevant details.
2. The capability to learn from examples and to generalize this
knowledge.
3. The ability to make inferences from incomplete information.
2.3.5 Image Segmentation
Image segmentation is a technique for extracting information from an
image. This is generally the first step in image analysis. Segmentation
subdivides an image into its constituent parts or objects. The level to which this
subdivision is carried depends on the problem being solved. Segmentation is
stopped when the objects of interest in an application have been isolated.
In general, autonomous segmentation is one of the most difficult tasks in
image processing. This step determines the eventual success or failure of the
analysis. Effective segmentation rarely fails to lead to a successful solution.
Segmentation algorithms for monochrome images generally are based on one of
two basic properties of gray level values
1. Discontinuity
2. Similarity
In the first category, the approach is to partition an image based on abrupt
changes in gray level. The principal areas of interest within this category are
detection of isolated points and detection of lines and edges in an image. The
principal approaches in the second category are based on thresholding, region
growing, region splitting and region merging. The concept of segmenting an
image based on discontinuity or similarity of the gray level value of its pixels is
applicable to both static and dynamic images. In the latter case, motion can be
used as a powerful cue to improve the performance of segmentation
algorithms.
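The simplest similarity-based approach mentioned above, thresholding, can be sketched as below. This Python fragment is an illustrative assumption, not the segmentation method used later in this project:

```python
def threshold_segment(image, t):
    """Label each pixel: 1 (object) if its gray level exceeds t, else 0."""
    return [[1 if p > t else 0 for p in row] for row in image]

# bright pixels become object (1), dark pixels background (0)
mask = threshold_segment([[12, 200, 30],
                          [180, 25, 220]], t=100)
```
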
2.3.6 Image Reconstruction
An important problem in image processing is to reconstruct a cross
section of an object from several images of its trans-axial projections. A
projection is a shadowgram obtained by illuminating an object with penetrating
radiation. Each horizontal line is a one-dimensional projection of the horizontal
slice of the object. Each pixel on the projected image represents the total
absorption of the radiation along its path from the source to the detector. By
rotating the source detector assembly around the object, projection views for
several different angles can be obtained. Image systems that generate such slice
views are called computerized tomography (CT) scanners. These
reconstructions are of several types.
1. Transmission tomography
2. Reflection tomography
3. Emission tomography
4. Magnetic resonance imaging
5. Nuclear magnetic resonance imaging
If a three-dimensional object is scanned by a parallel beam, the entire
three-dimensional object can be reconstructed from a set of two-dimensional
slices, each of which can be reconstructed using several available algorithms.
2.3.7 Image Data Compression
An enormous amount of data is produced when a 2-D light intensity
function is sampled and quantized to create a digital image. The amount of data
generated may be so great that it results in impractical storage, processing and
communication requirements.
Image compression addresses the problem of reducing the amount of data
required to represent a digital image. The underlying basis of the reduction
process is the removal of redundant data. This amounts to transforming a 2-D
pixel array into a statistically uncorrelated data set. The transformation is
applied prior to storage or transmission of the image. Later the compressed image is
decompressed to reconstruct the original image or an approximation to it. Initial
focus in this field was on the development of methods for reducing video
transmission bandwidth, a process called bandwidth compression.
Image compression is the natural technology for handling the increased
spatial resolution of today’s imaging sensors and evolving broadcast television
standards. Applications of data compression are in broadcast television, remote
sensing via satellite, military communications via aircraft, radar and sonar,
teleconferencing, computer communications, facsimile transmission, document
and medical imaging, hazardous waste control applications and the like.
CHAPTER 3
LITERATURE SURVEY
In this paper, image processing techniques are used to detect plant leaf
diseases. The objective of this work is to implement image analysis and
classification techniques for the detection and classification of leaf diseases.
The proposed framework consists of four parts: (1) image preprocessing,
(2) segmentation of the leaf using K-means clustering to determine the diseased
areas, (3) feature extraction using statistical Gray-Level Co-Occurrence Matrix
(GLCM) features, and (4) classification using a Support Vector Machine
(SVM).
[5] Yogesh Dandawate and Radha Kokare, “An Automated
Approach for Classification of Plant Diseases Towards Development
of Futuristic Decision Support System in Indian Perspective”, IEEE,
2015.
This paper focuses on an image-processing-based approach for the detection
of diseases of soybean plants. The soybean images are captured using a mobile
camera with a resolution greater than 2 megapixels. The purpose of the
proposed project is to provide inputs for a Decision Support System (DSS),
developed to provide advice to farmers as and when required over the mobile
internet. The proposed work classifies images of soybean leaves as healthy or
diseased using a Support Vector Machine (SVM). The algorithm comprises
four major steps, including image acquisition, analysis and classification.
The SVM classifier proves its ability in the automatic and accurate
classification of images. Finally, it can be concluded from the experimental
results that this approach can classify the leaves with an average accuracy of
93.79%. The proposed system will enable farmers to get advice from
agriculture experts with minimal effort.
This paper investigated 1) using partial least squares regression
(PLSR), ν-support vector regression (ν-SVR) and Gaussian process regression
(GPR) methods for wheat leaf rust disease detection, 2) evaluating the impact of
training sample size on the results, 3) the influence of disease symptoms on
the prediction performance of the above-mentioned methods, and 4)
comparisons between the performance of SVIs and machine learning
techniques. In this study, the spectra of infected and non-infected leaves with
different disease symptoms were measured using a non-imaging
spectroradiometer in the electromagnetic region of 350 to 2500 nm. To
produce a ground-truth dataset, photos from a digital camera were used to
compute the disease severity and disease symptom fraction. The results
show that, in contrast to SVIs, the machine learning techniques are not
sensitive to different disease symptoms and their results are reliable.
[7] Halil Durmus, Ece Olcay Gunes and Murvet Kirci, “Disease
Detection on the Leaves of the Tomato Plants by Using Deep
Learning”, IEEE, 2017.
In this paper, physical changes in the leaves are detected with RGB cameras.
Previous studies used standard feature extraction methods on plant leaf images
to detect diseases; in this study, deep learning methods were used instead.
Deep learning architecture selection was the key issue for the implementation,
so two different deep learning network architectures were tested: first AlexNet
and then SqueezeNet. For both networks, training and validation were done on
the Nvidia Jetson TX1. Tomato leaf images from the Plant Village dataset were
used for training. Ten different classes, including healthy images, are used.
The trained networks were also tested on images from the internet.
[8] Jobin Francis, Anto Sahaya Dhas D and Anoop B K,
“Identification of Leaf Diseases in Pepper Plants Using Soft
Computing Techniques”, IEEE, 2016.
healthy leaves are collectively trained with a Random Forest to classify
diseased and healthy images. For feature extraction, the Histogram of
Oriented Gradients (HOG) is used. Overall, using machine learning to
train on the large data sets available publicly gives us a clear way to detect
the disease present in plants at a colossal scale.
CHAPTER 4
Cigar End Tip Rot: A black necrosis spreads from the perianth into the
tip of immature fingers. The rotted portion of the banana finger is dry and tends
to adhere to fruits.
Crown Rot: The characteristic symptoms are blackening of the crown
tissues, which spreads to the pulp through the pedicel, resulting in rotting of
the infected portion and separation of fingers from the hand.
Stem-end Rot: The fungus enters through the cut stem or hand. The
invaded flesh becomes soft and water-soaked.
Pseudostem Heart Rot : The first indication of heart rot is the presence
of heart leaves with part of the lamina missing or decayed. In severe cases, the
inner leaves of the crown first turn yellow, then brown and finally die. In more
severe cases all the leaves and the plant die.
Head Rot: Newly planted suckers get affected, leading to rotting and the
emission of a foul odour. In older plants, rotting at the collar region and leaf
bases is seen. In advanced cases, the trunk base becomes swollen and splits.
Banana Bunchy Top Virus: The disease is transmitted to the plant by the
aphid vector Pentalonia nigronervosa, and dwarf bananas are very susceptible
to this disease. Primary symptoms of the disease are seen when infected
suckers are planted. Such infected suckers put forth narrow leaves, which are
chlorotic and exhibit mosaic symptoms. The affected leaves are brittle, with
their margins rolled upwards. A characteristic symptom of bunchy top virus is
the presence of interrupted dark green streaks along the secondary veins of the
lamina or the midrib of the petiole. The diseased plants remain stunted and do
not produce a bunch of any commercial value.
Fig 4.1 : Panama Wilt Fig 4.2 : Leaf Spot Fig 4.3 : Anthracnose
Fig 4.4 : Cigar End Tip Rot Fig 4.5: Crown Rot Fig 4.6: Stem End Rot
Fig 4.7: Pseudostem Heart Rot Fig 4.8: Moko Disease Fig 4.9: Banana Bunchy Top Virus
CHAPTER 5
PROPOSED WORK
5.2 IMAGE DATASET
In this step, the sample images are collected from the dataset, for which a
training set of 360 images and a testing set of 260 images is constructed. The
standard JPG format is used to store these images. Then each input image is
resized to 256×256 pixels.
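The resize step can be sketched with nearest-neighbour resampling. This Python fragment is an illustrative assumption (the project performs resizing in MATLAB), operating on a 2-D gray image stored as a list of rows:

```python
def resize_nn(image, out_h, out_w):
    """Nearest-neighbour resampling of a 2-D gray image (list of rows)."""
    in_h, in_w = len(image), len(image[0])
    # each output pixel copies the nearest source pixel
    return [[image[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

# e.g. resize_nn(img, 256, 256) for the 256x256 input size described above
```
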
them as far away from each other as possible. The next step is to take
each point belonging to a given data set and associate it to the nearest centre.

When no point is pending, the first step is completed and an early
grouping is done. At this point we need to re-calculate k new centroids as the
barycentres of the clusters resulting from the previous step. After we have
these k new centroids, a new binding has to be done between the same data set
points and the nearest new centre. A loop has been generated. As a result of this
loop we may notice that the k centres change their location step by step until
no more changes are made, in other words until the centres do not move any
more. Finally, this algorithm aims at minimizing an objective function known
as the squared error function, given by:

J = Σ_{i=1}^{k} Σ_{x ∈ S_i} ‖x − v_i‖² ------ (5.1)

where ‖x − v_i‖ is the Euclidean distance between a data point x in cluster S_i
and its cluster centre v_i.
1) Randomly select ‘c’ cluster centers.
2) Calculate the distance between each data point and cluster centers.
3) Assign the data point to the cluster centre whose distance from the data
point is minimum over all the cluster centres.
4) Recalculate the new cluster centres using:

v_i = (1/c_i) Σ_{j=1}^{c_i} x_j ------ (5.2)

where c_i represents the number of data points in the i-th cluster.
5) Recalculate the distance between each data point and new obtained cluster
centers.
6) If no data point was reassigned then stop, otherwise repeat from step 3).
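The steps above can be sketched for one-dimensional data as follows. This Python version is illustrative only (the project uses MATLAB), and the fixed initial centres in the usage example are an assumption:

```python
def kmeans_1d(points, centres, max_iter=100):
    """K-means clustering of scalar data, following the steps above."""
    clusters = [[] for _ in centres]
    for _ in range(max_iter):
        # steps 2-3: assign each point to its nearest centre
        clusters = [[] for _ in centres]
        for x in points:
            j = min(range(len(centres)), key=lambda i: abs(x - centres[i]))
            clusters[j].append(x)
        # recompute each centre as the mean of its cluster
        new = [sum(c) / len(c) if c else centres[i]
               for i, c in enumerate(clusters)]
        if new == centres:          # step 6: stop when no centre moves
            break
        centres = new
    return centres, clusters

centres, clusters = kmeans_1d([1.0, 2.0, 10.0, 11.0], [0.0, 12.0])
```
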
computing the co-occurrence matrix, and the second step is calculating texture
features based on the co-occurrence matrix. This technique is useful in a wide
range of image analysis applications, from biomedical to remote sensing.
5.5.2 WORKING OF GLCM
The basis of GLCM texture is the relation between two neighbouring
pixels at one offset, as a second-order texture statistic. The gray-value
relationships in a target are transformed into the co-occurrence matrix space by
a given kernel mask such as 3×3, 5×5, 7×7 and so forth. In the transformation
from the image space into the co-occurrence matrix space, the neighbouring
pixels in one or some of the eight defined directions can be used; normally, the
four directions 0°, 45°, 90° and 135° are initially regarded, and their reverse
(negative) directions can also be taken into account. The matrix contains
information about the positions of pixels having similar gray level values.
Each element (i, j) in the GLCM specifies the number of times that a pixel
with value i occurred horizontally adjacent to a pixel with value j. In Fig 5.4,
the computation is made such that element (1, 1) in the GLCM contains the
value 1 because there is only one instance in the image where two horizontally
adjacent pixels have the values 1 and 1. Element (1, 2) in the GLCM contains
the value 2 because there are two instances in the image where two horizontally
adjacent pixels have the values 1 and 2. The GLCM matrix has been extracted
for the input dataset imagery. Once the GLCM is computed, texture features of
the image are extracted successively.
Fig 5.4 : Creation of GLCM
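The construction illustrated in Fig 5.4 can be sketched for the horizontal (0°) offset as follows. This Python fragment is an illustrative assumption, using 0-based gray levels rather than the 1-based values in the worked example above:

```python
def glcm_horizontal(image, levels):
    """Count co-occurrences of gray levels one pixel to the right (0 deg)."""
    g = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # each horizontally adjacent pair
            g[a][b] += 1
    return g

g = glcm_horizontal([[0, 0, 1],
                     [0, 1, 1]], levels=2)
```

Here g[0][0] = 1 (one adjacent 0,0 pair) and g[0][1] = 2 (two adjacent 0,1 pairs), mirroring the element counts described in the text.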
5.5.3 HARALICK TEXTURE FEATURES
Haralick extracted thirteen texture features from GLCM for an image.
The important texture features for classifying the image into water body and
non-water body are Energy (E), Entropy (Ent), Contrast (Con), Inverse
Difference Moment (IDM) and Directional Moment (DM).
The thirteen texture features are,
Contrast
Correlation
Energy
Homogeneity
Mean
Standard Deviation
Entropy
RMS
Variance
Smoothness
Kurtosis
Skewness
IDM
Andrea Baraldi and Flavio Parmiggiani (1995) discussed the five
statistical parameters energy, entropy, contrast, IDM and DM, which are
considered the most relevant among the 14 texture features originally proposed
by Haralick et al. (1973). The complexity of the algorithm is also reduced by
using these texture features.

Let i and j be the coefficients of the co-occurrence matrix, M(i, j) be the
element in the co-occurrence matrix at the coordinates i and j, and N be the
dimension of the co-occurrence matrix.
a) CONTRAST
Contrast measures intensity contrast of a pixel and its neighbour pixel
over the entire image. If the image is constant, contrast is equal to 0. The
equation of the contrast is as follows,
Contrast = Σ_{i,j=0}^{N−1} p_ij (i − j)² ------ (5.3)
b) ENERGY
Energy is a measure of uniformity with squared elements summation in
the GLCM. Range is in between 0 and 1. Energy is 1 for a constant image. The
equation of the energy is given by equation,
Energy = Σ_{i,j=0}^{N−1} (p_ij)² ------ (5.4)
c) HOMOGENEITY
Homogeneity measures the similarity among the pixels. Its range is
between 0 and 1. Homogeneity is 1 for a diagonal GLCM. The equation of the
Homogeneity is as follows,
Homogeneity = Σ_{i,j=0}^{N−1} p_ij / [1 + (i − j)²] ------ (5.5)
d) CORRELATION
Correlation measures how correlated a pixel is to its neighbourhood. Its
range is in between -1 and 1.
Correlation = Σ_{i,j=0}^{N−1} p_ij (i − μ)(j − μ) / σ² ------ (5.6)
e) ENTROPY
This concept comes from thermodynamics. Entropy (Ent) is the measure
of randomness that is used to characterize the texture of the input image. Its
value will be maximum when all the elements of the co-occurrence matrix are
the same. It is defined as,

Entropy = −Σ_{i,j=0}^{N−1} p_ij log₂(p_ij) ------ (5.7)
g) SKEWNESS
Skewness is a measure of the asymmetry of the data around the sample
mean. If skewness is negative, the data spreads out more to the left of the mean
than to the right. If skewness is positive, the data spreads out more to the right.
The skewness of the normal distribution is zero.
The skewness of a distribution is defined as,
Skewness = E(x − μ)³ / σ³ ------ (5.9)

where μ is the mean and σ is the standard deviation of x.
h) KURTOSIS
Kurtosis is a measure of how outlier-prone a distribution is. The kurtosis
of the normal distribution is 3. Distributions that are more outlier-prone than the
normal distribution have kurtosis greater than 3; distributions that are less
outlier-prone have kurtosis less than 3. Some definitions of kurtosis subtract 3
from the computed value, so that the normal distribution has kurtosis of 0.
The kurtosis function does not use this convention.
Kurtosis = E(x − μ)⁴ / σ⁴ ------ (5.10)
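Equations (5.9) and (5.10) can be computed directly from a sample. The following Python sketch is illustrative only (the project computes these features in MATLAB) and uses the population standard deviation:

```python
import math

def skewness_kurtosis(xs):
    """Third and fourth standardized moments of a sample (eqs 5.9, 5.10)."""
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    skew = sum((x - mu) ** 3 for x in xs) / (n * sigma ** 3)
    kurt = sum((x - mu) ** 4 for x in xs) / (n * sigma ** 4)
    return skew, kurt
```

Note that, as stated above, this convention gives the normal distribution a kurtosis of 3 (no subtraction of 3 is performed).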
i) RMS

The RMS block computes the true root mean square (RMS) value of the
input signal. The true RMS value of the input signal is calculated over a running
average window of one cycle of the specified fundamental frequency,

RMS = √[ (1/T) ∫_{t−T}^{t} f(t)² dt ] ------ (5.11)

where f(t) is the input signal and T is 1/(fundamental frequency).
j) STANDARD DEVIATION
S = √[ (1/(N − 1)) Σ_{i=1}^{N} (A_i − μ)² ] ------ (5.12)

where A_i are the N data values and μ is their mean.
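Given a normalized GLCM p (entries summing to 1), the contrast, energy, homogeneity and entropy features defined above can be sketched as follows. This is illustrative Python, not the MATLAB implementation used in the project:

```python
import math

def haralick_features(p):
    """Contrast, energy, homogeneity and entropy of a normalized GLCM p."""
    N = len(p)
    idx = [(i, j) for i in range(N) for j in range(N)]
    contrast = sum(p[i][j] * (i - j) ** 2 for i, j in idx)
    energy = sum(p[i][j] ** 2 for i, j in idx)
    homogeneity = sum(p[i][j] / (1 + (i - j) ** 2) for i, j in idx)
    # skip zero entries: 0 * log(0) is taken as 0
    entropy = -sum(p[i][j] * math.log2(p[i][j])
                   for i, j in idx if p[i][j] > 0)
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}
```

For a diagonal GLCM such as [[0.5, 0], [0, 0.5]] this yields contrast 0 and homogeneity 1, matching the ranges stated in the text.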
Fig 5.5 SVM Classifier
5.6.1 CONFUSION MATRIX
A confusion matrix is a table that is often used to describe the
performance of a classification model (or "classifier") on a set of test data for
which the true values are known. The confusion matrix itself is relatively simple
to understand, but the related terminology can be confusing.
True positives (TP): These are cases in which we predicted yes (they
have the disease), and they do have the disease.

True positive rate = TP/actual yes ------ (5.14)

True negatives (TN): We predicted no, and they don't have the disease.

True negative rate = TN/actual no ------ (5.15)
False positives (FP): We predicted yes, but they don't actually have the
disease. (Also known as a "Type I error.")

False positive rate = FP/actual no ------ (5.16)

False negatives (FN): We predicted no, but they actually do have the
disease. (Also known as a "Type II error.")

False negative rate = FN/actual yes ------ (5.17)
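The rates above can be computed directly from the four counts. A small illustrative Python sketch, using the standard definitions (the false positive rate divides by the actual-no count and the false negative rate by the actual-yes count); the example counts are assumptions:

```python
def confusion_rates(tp, fp, fn, tn):
    """Rates from a 2x2 confusion matrix, plus overall accuracy."""
    actual_yes = tp + fn          # cases that truly have the disease
    actual_no = tn + fp           # cases that truly do not
    return {
        "TPR": tp / actual_yes,   # true positive rate
        "TNR": tn / actual_no,    # true negative rate
        "FPR": fp / actual_no,    # Type I error rate
        "FNR": fn / actual_yes,   # Type II error rate
        "accuracy": (tp + tn) / (actual_yes + actual_no),
    }

r = confusion_rates(tp=40, fp=5, fn=10, tn=45)
```
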
CHAPTER 6
SOFTWARE DESCRIPTION
6.1 INTRODUCTION
6.1.1 THE MATLAB SYSTEM:
MATLAB is an abbreviation of matrix laboratory. It is multi-paradigm
numerical computing software which uses a fourth-generation programming
language (the MATLAB programming language) developed by MathWorks.
MATLAB is able to do plotting of functions and data, matrix manipulations,
creation of user interfaces, implementation of algorithms, and interfacing with
programs written in other languages. It is widely used by people from
different backgrounds of field and knowledge, such as science, engineering,
economics, and research.
Problems can often be solved in a fraction of the time it would take to write a
program in a scalar non-interactive language such as C or FORTRAN.
CHAPTER 7
HEALTHY LEAF
Fig 7.1 : (a),(e),(i) and (m) Healthy leaf , (b),(f),(j) and (n) Cluster 1 , (c),(g),(k)
and (o) cluster 2 ,(d),(h),(l) and (p) Cluster 3
AFFECTED LEAF
Fig 7.2 : (a),(e),(i) and (m) Affected Leaf , (b),(f),(j) and (n) Cluster 1 ,
(c),(g),(k) and (o) Cluster 2 ,(d),(h),(l) and (p) Cluster 3
In the GLCM method, both the color and texture of an image are taken into
account to arrive at unique features which represent that image. The affected
and the clustered images of banana leaves are shown in Fig 7.2. The manual
feeding of the datasets, in the form of digitized RGB color photographs, was
implemented for feature extraction and training. After training, the test data
sets were used to analyze the performance of the classification. The main
requirements of disease detection are speed and accuracy. Hence, this work
develops an automatic, efficient, fast and accurate system for detecting
disease on unhealthy leaves. To segment the leaf area, the K-means clustering
technique is used; feature extraction is then done using both texture and color
features. Finally, the SVM
classification technique is used to detect the type of leaf disease. This
algorithm helps in identifying the presence of diseases by observing the visual
symptoms seen on the leaves of the plant. Our results are presented as a table
of confusion (sometimes also called a confusion matrix): a table with two
rows (predicted class) and two columns (actual class) that reports the number
of false positives, false negatives, true positives, and true negatives. This allows
more detailed analysis than the mere proportion of correct classifications
(accuracy).
OUTPUT
This confusion matrix shows an accuracy rate of 77% when we take
sample images for training (40) and testing (20) for the classification of
healthy and affected banana leaves.
CHAPTER 8
CONCLUSION
The agricultural sector is still one of the most important sectors on which the
majority of the Indian population relies. Detection of diseases in banana leaves
is hence critical to the growth of the economy. The detection of banana leaf
diseases has been developed using the MATLAB application. The
segmentation of the diseased part is done using K-means segmentation. Then,
GLCM texture features are extracted and classification is done using SVM.
The SVM classifier was used for the accurate classification of the diseases,
which will help farmers to reduce pesticide usage as well as to increase
crop yield. Future work can consider different Support Vector Machine
kernels to obtain higher accuracy with lower execution time, and more
features can be considered to increase the accuracy.
REFERENCES
[2] Yogesh Dandawate and Radha Kokare, “An Automated Approach for
Classification of Plant Diseases Towards Development of Futuristic Decision
Support System in Indian Perspective”, IEEE, Page No: 794-799, 2015.
[7] Pranjali B. Padol and Anjali A. Yadav, “SVM Classifier Based Grape Leaf
Diseases Detection”, Conference on Advances in Signal Processing (CASP),
Page No: 175-179, June 2016.
[8] Vijay Singh and A. K. Misra, “Detection of Plant Leaf Disease using Image
Segmentation and Soft Computing Techniques”, Information Processing in
Agriculture, Page No: 41-49, November 2016.
[9] Halil Durmus, Ece Olcay Gunes and Murvet Kirci, “Disease Detection on
the Leaves of the Tomato Plants by Using Deep Learning”, I.T.U. TARBIL
Environmental Agriculture Informatics Applied Research Centre, 2017.
[13] Melike Sardogan, Adem Tuncer and Yunes Ozen, “Plant Leaf Disease
Detection and Classification based on CNN with LVQ Algorithm”,
International Conference on Computer Science and Engineering, Page No: 382-
385, 2018.
[15] Seema Ramesh et al., “Plant Disease Detection using Machine Learning”,
International Conference on Design Innovation for 3Cs Compute Communicate
Control, DOI 10.1109/ICDI3C.2018.00017, Page No: 41-45, 2018.
[16] Prajwala TM, Alla Pranadhi et al., “Tomato Leaf Disease Detection using
Convolutional Neural Networks”, Eleventh International Conference on
Contemporary Computing (IC3), August 2018.