CN112598031A - Vegetable disease detection method and system - Google Patents
- Publication number
- CN112598031A (application number CN202011444655.XA)
- Authority
- CN
- China
- Prior art keywords
- feature
- vegetable
- convolution
- convolution kernels
- image
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/253 — Fusion techniques of extracted features
- G06N3/045 — Neural networks; combinations of networks
- G06V20/68 — Scenes; food, e.g. fruit or vegetables
Abstract
The invention provides a vegetable disease detection method and system, comprising the following steps: acquiring an image of a vegetable leaf to be detected; inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result; and determining a vegetable disease detection result of the vegetable leaf image to be detected according to the recognition prediction result. The mask residual convolution network includes ResNet, FPN, RPN and FCN sub-networks. According to the vegetable disease detection method and system provided by the invention, the mask residual convolution network is used to extract features from the lesion image, and by further combining environmental features, the difficulty of high-precision disease diagnosis on small-sample vegetable images is overcome in a complex production environment that combines lesion images with environmental parameters.
Description
Technical Field
The invention relates to the field of agricultural intelligent detection, in particular to a vegetable disease detection method and system.
Background
Diseases are an important factor affecting vegetable quality, so real-time monitoring and effective prevention and control of diseases during agricultural production are receiving increasing attention. For example, solanaceous vegetables suffer from many disease types at high frequency; if diseases are not found in time and diagnosed accurately, timely, effective and targeted control measures cannot be applied, which can lead to large-scale yield reduction or even total loss of production.
At present, with the rapid development of artificial intelligence technology, research on intelligent plant disease identification in agricultural science is deepening worldwide, but most of it is based on pure image recognition, and only a small part performs comprehensive diagnosis from both environmental parameters and image parameters. Image-based disease identification mainly falls into two categories: traditional machine-learning image recognition based on prior disease image features, and disease image recognition based on convolutional neural networks.
On the one hand, prior-art vegetable disease identification methods based on machine learning mainly form a classification feature vector of the disease image from the colour, shape and texture features of the lesion image. Disease features are then screened with algorithms such as rough set theory, genetic algorithms, local discriminant mapping and locally linear embedding. The image lesions are then segmented using multi-feature segmentation, the maximum between-class variance (Otsu) threshold method, k-means hard clustering, watershed segmentation, or global and adaptive thresholds. Features are subsequently extracted with grey-level co-occurrence matrices and stepwise discriminant analysis. Finally, classification and recognition are performed with a BP neural network, a support vector machine, Bayesian discriminant analysis and the like. Although such methods perform well on specific data sets and in specific production environments, the image segmentation process is too complicated, robustness is poor, and the feature extraction method lacks universality, so the overall generalization ability of the model is poor.
On the other hand, the prior art also trains models on fused features by combining graphic features of disease images with environmental and meteorological features; however, the image segmentation process is complicated, and feature extraction depends on prior knowledge and lacks universality, so the overall generalization ability of the model is low, recognition of images from complex environments is not robust, the coupling is high, and overfitting occurs.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides a vegetable disease detection method and a vegetable disease detection system.
The invention provides a vegetable disease detection method, which comprises the following steps: acquiring an image of a vegetable leaf to be detected; inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result; and determining a vegetable disease detection result of the vegetable leaf image to be detected according to the recognition prediction result. The mask residual convolution network includes ResNet, FPN, RPN and FCN sub-networks.
Optionally, according to the vegetable disease detection method provided by the present invention, the recognition prediction result includes a mask prediction result, a frame prediction result and a classification prediction result;
the step of inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result comprises the following steps:
identifying the vegetable leaf image to be detected by utilizing the ResNet sub-network to obtain image characteristics related to the vegetable leaf image to be detected; generating a plurality of feature maps according to the image features by utilizing the FPN sub-network; determining a detection candidate region related to the vegetable leaf image to be detected according to the plurality of feature maps by utilizing the RPN sub-network; mapping the detection candidate region to the to-be-detected vegetable leaf image based on a RoI Align algorithm to obtain a classification feature map, a frame regression feature map and a mask feature map; performing mask feature calculation on the mask feature map by using the FCN sub-network to obtain a mask prediction result; expanding the frame regression feature map into a one-dimensional feature vector; performing frame prediction on the one-dimensional feature vector based on a positive and negative sample calculation method to obtain a frame prediction result; connecting the one-dimensional characteristic vector with the environmental parameter vector to obtain a comprehensive characteristic matrix; and carrying out classification and identification on the comprehensive characteristic matrix based on the trained classifier to obtain a classification prediction result.
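The end-to-end flow of the steps above can be sketched as follows. This is only an orchestration sketch: `net` is assumed to bundle the sub-networks as callables, every stage name (`net.resnet`, `net.rpn`, `net.roi_align`, …) is a hypothetical stand-in rather than an API from the patent, and real sub-networks would operate on tensors rather than Python lists.

```python
from types import SimpleNamespace  # handy for stubbing the sub-networks

def detect(image, env_params, net):
    """End-to-end flow of the steps above. `net` is assumed to bundle the
    sub-networks as callables; all stage names are hypothetical."""
    feats = net.resnet(image)                 # backbone feature extraction
    fmaps = net.fpn(feats)                    # multi-scale feature maps
    rois = net.rpn(fmaps)                     # detection candidate regions
    cls_map, bbox_map, mask_map = net.roi_align(fmaps, rois)
    mask_pred = net.fcn(mask_map)             # mask prediction result
    v_shared = [x for row in bbox_map for x in row]   # flatten to VShared
    bbox_pred = net.bbox_head(v_shared)       # frame prediction result
    cls_pred = net.classifier(v_shared + env_params)  # concat with Venv
    return {"mask": mask_pred, "bbox": bbox_pred, "class": cls_pred}
```

The return dictionary mirrors the three prediction results the method outputs: mask, frame and classification.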
Optionally, according to the vegetable disease detection method provided by the invention, in the process of performing frame prediction on the one-dimensional feature vector based on the positive and negative sample calculation method, a dropout module is added to the fully connected layer of the bounding-box prediction; and in the process of classifying the comprehensive feature matrix based on the trained classifier, a dropout module is added to the fully connected layer of the classifier prediction.
Optionally, according to the vegetable disease detection method provided by the present invention, before inputting the vegetable leaf image to be detected into the pre-trained mask residual convolution network, the method further includes pre-training the mask residual convolution network with vegetable leaf image samples; and in the process of pre-training the mask residual convolution network, increasing the loss weight of the mask prediction and reducing the loss weight of the classification prediction.
Optionally, before pre-training the mask residual convolution network with vegetable leaf image samples, the vegetable disease detection method provided by the present invention further includes: performing flipping, brightness-change processing and contrast-change processing on the vegetable leaf image samples to expand the sample set.
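The sample-expansion step above can be sketched in NumPy as follows. The patent names the transformations (flips, brightness change, contrast change) but not their magnitudes, so the 1.2 and 1.5 factors below are illustrative assumptions:

```python
import numpy as np

def augment(image):
    """Expand one leaf image sample into several variants via the
    transformations named in the text: flips plus brightness and contrast
    changes. The 1.2 / 1.5 factors are assumptions, not patent values."""
    x = image.astype(np.float32)
    return [
        image,                                          # original sample
        np.flip(image, axis=1),                         # horizontal flip
        np.flip(image, axis=0),                         # vertical flip
        np.clip(x * 1.2, 0, 255).astype(image.dtype),   # brightness change
        np.clip((x - 128) * 1.5 + 128, 0, 255).astype(image.dtype),  # contrast
    ]
```

Each input sample thus yields five training samples; further factor values would multiply the expansion accordingly.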
According to the vegetable disease detection method provided by the invention, the ResNet subnetwork comprises:
a first feature extraction module, comprising a convolutional layer with 64 convolution kernels of 7 × 7 × 3 and a stride of 2, and 1 pooling layer with a stride of 2; a second feature extraction module, comprising 1 first scale layer and 2 first feature layers connected in series in sequence, where the first scale layer comprises a convolutional layer with 64 convolution kernels of 1 × 1 × 64, a convolutional layer with 64 convolution kernels of 3 × 3 × 64, and a convolutional layer with 256 convolution kernels of 1 × 1 × 64, and each first feature layer comprises a convolutional layer with 64 convolution kernels of 1 × 1 × 256, a convolutional layer with 64 convolution kernels of 3 × 3 × 64, and a convolutional layer with 256 convolution kernels of 1 × 1 × 64; a third feature extraction module, comprising a second scale layer and 3 second feature layers connected in series in sequence, where the second scale layer comprises a convolutional layer with 128 convolution kernels of 1 × 1 × 256 and a stride of 2, a convolutional layer with 128 convolution kernels of 3 × 3 × 128, and a convolutional layer with 512 convolution kernels of 1 × 1 × 128, and each second feature layer comprises a convolutional layer with 128 convolution kernels of 1 × 1 × 512, a convolutional layer with 128 convolution kernels of 3 × 3 × 128, and a convolutional layer with 512 convolution kernels of 1 × 1 × 128; a fourth feature extraction module, comprising a third scale layer and 22 third feature layers connected in series in sequence, where the third scale layer comprises a convolutional layer with 256 convolution kernels of 1 × 1 × 512 and a stride of 2, a convolutional layer with 256 convolution kernels of 3 × 3 × 256, and a convolutional layer with 1024 convolution kernels of 1 × 1 × 256, and each third feature layer comprises a convolutional layer with 256 convolution kernels of 1 × 1 × 1024, a convolutional layer with 256 convolution kernels of 3 × 3 × 256, and a convolutional layer with 1024 convolution kernels of 1 × 1 × 256; and a fifth feature extraction module, comprising a fourth scale layer and 2 fourth feature layers connected in series in sequence, where the fourth scale layer comprises a convolutional layer with 512 convolution kernels of 1 × 1 × 1024 and a stride of 2, a convolutional layer with 512 convolution kernels of 3 × 3 × 512, and a convolutional layer with 2048 convolution kernels of 1 × 1 × 512, and each fourth feature layer comprises a convolutional layer with 512 convolution kernels of 1 × 1 × 2048, a convolutional layer with 512 convolution kernels of 3 × 3 × 512, and a convolutional layer with 2048 convolution kernels of 1 × 1 × 512.
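A quick arithmetic check on the module sizes above: the (1 scale + 2 feature), (1 + 3), (1 + 22) and (1 + 2) bottleneck layers give 3, 4, 23 and 3 blocks, which is the standard ResNet-101 configuration. The sketch below counts the weighted layers; the "+ 2" covers the 7 × 7 stem convolution and the fully connected layer of the original classification network (the Mask R-CNN backbone itself drops that fc layer), so the depth claim is an interpretation rather than a statement from the patent:

```python
# Block counts from the five feature extraction modules described above.
blocks = {"conv2_x": 1 + 2, "conv3_x": 1 + 3, "conv4_x": 1 + 22, "conv5_x": 1 + 2}

def weighted_layer_count(block_counts):
    # each bottleneck block holds three convolutions (1x1 reduce, 3x3,
    # 1x1 expand); add the stem conv and the original fc layer
    return sum(block_counts.values()) * 3 + 2

print(weighted_layer_count(blocks))
```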
Optionally, in the vegetable disease detection method provided by the invention, generating a plurality of feature maps from the image features by using the FPN sub-network includes:
passing the output result of the fifth feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 2048 to obtain a fifth feature map; passing the output result of the fourth feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 1024 and fusing it with the up-sampling result of the fifth feature map to obtain a fourth feature map; passing the output result of the third feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 512 and fusing it with the up-sampling result of the fourth feature map to obtain a third feature map; and passing the output result of the second feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 256 and fusing it with the up-sampling result of the third feature map to obtain a second feature map.
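The lateral-plus-top-down fusion described above can be sketched as follows. The patent does not fix the interpolation mode of the up-sampling, so nearest-neighbour is assumed, and the lateral 1 × 1 convolution is represented by its already-computed output:

```python
import numpy as np

def upsample2x(f):
    # nearest-neighbour 2x up-sampling of a (C, H, W) feature map
    # (the interpolation mode is an assumption, not fixed by the patent)
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fpn_merge(lateral, coarser):
    # element-wise fusion of a lateral 1x1-convolution output with the
    # up-sampled coarser-level map, as in the chain fifth -> fourth ->
    # third -> second feature map above
    return lateral + upsample2x(coarser)
```

Applied repeatedly from the fifth feature map downward, this produces the second to fifth feature maps, all with 256 channels.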
Optionally, in the vegetable disease detection method provided by the invention, determining, by using the RPN sub-network and according to the plurality of feature maps, a detection candidate region related to the vegetable leaf image to be detected includes:
passing the second feature map, the third feature map, the fourth feature map and the fifth feature map respectively through a convolutional layer with 256 convolution kernels of 3 × 3 × 256, and respectively acquiring a second detection candidate region, a third detection candidate region, a fourth detection candidate region and a fifth detection candidate region;
and performing max pooling with a stride of 2 on the fifth feature map to obtain a sixth detection candidate region.
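The stride-2 subsampling that produces the sixth level can be sketched in one line; the kernel size of 1 is the usual FPN choice and is assumed here, not stated by the patent:

```python
import numpy as np

def sixth_from_fifth(p5):
    """Subsample the fifth feature map with stride-2 max pooling
    (kernel size 1, assumed) to obtain the extra coarser level used
    for the sixth set of detection candidates."""
    return p5[:, ::2, ::2]
```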
The invention also provides a vegetable disease detection system, comprising:
the image acquisition unit is used for acquiring an image of the vegetable leaf to be detected;
the network prediction unit is used for inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result;
the result output unit is used for determining a vegetable disease detection result of the to-be-detected vegetable leaf image according to the identification prediction result;
the masked residual convolution networks include ResNet, FPN, RPN, and FCN subnetworks.
The invention also provides an electronic device, which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor executes the program to realize the steps of any one of the vegetable disease detection methods.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the vegetable disease detection methods described above.
According to the vegetable disease detection method and system provided by the invention, the mask residual convolution network is used to extract features from the lesion image, and by further combining environmental features, the difficulty of high-precision disease diagnosis on small-sample vegetable images is overcome in a complex production environment that combines lesion images with environmental parameters.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a vegetable disease detection method provided by the present invention;
FIG. 2 is a schematic flow chart of a vegetable disease detection model provided by the present invention;
FIG. 3 is a schematic structural diagram of a ResNet sub-network provided by the present invention;
FIG. 4 is a schematic diagram of the structure of the FPN sub-network provided by the present invention;
FIG. 5 is a schematic diagram of a vegetable disease detection model provided by the invention for fusing environmental, biological and image characteristics;
FIG. 6 is a schematic structural diagram of a vegetable disease detection system provided by the present invention;
fig. 7 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method and system for detecting vegetable diseases provided by the embodiment of the invention are described below with reference to fig. 1-6.
As shown in fig. 1, the method for detecting vegetable diseases provided by the present invention includes, but is not limited to, the following steps:
step S1: acquiring an image of a vegetable leaf to be detected;
step S2: inputting the vegetable leaf image to be detected into a mask residual error convolution network trained in advance to obtain a recognition prediction result;
step S3: determining a vegetable disease detection result of the vegetable leaf image to be detected according to the identification prediction result;
the masked residual convolution networks include ResNet, FPN, RPN, and FCN subnetworks.
A convolutional neural network requires no prior feature construction, supports many recognition categories, is less prone to overfitting, and has strong expressive power for disease image features. However, current studies of disease identification with convolutional neural networks are mainly based on object recognition technology: lesions are segmented with prior knowledge and then classified by the convolutional network, so such disease identification has not, in essence, freed itself from human intervention.
In the following, the vegetable disease detection method is described in detail taking as an example high-precision disease detection of tomatoes in a complex production environment that combines tomato leaf lesion images with environmental parameters.
First, leaf images of the vegetable to be tested (such as a tomato) are collected. Each vegetable leaf image is detected separately with the vegetable disease detection method provided by the invention, and the overall vegetable disease detection result is determined comprehensively from the detection results of all the detected images.
Further, any to-be-detected vegetable leaf image is input into a mask residual convolution network model trained in advance, and a recognition prediction result output by the network model is obtained.
The mask residual convolution network is essentially a Mask R-CNN.
Specifically, the mask residual convolution network mainly adopts a residual network (ResNet) as its backbone, and ResNet performs image feature extraction on the received vegetable leaf image to be detected. A Feature Pyramid Network (FPN) is added on top of ResNet to solve the multi-scale problem in image feature extraction; with almost no increase in the computation of the original model, a simple change in network connectivity greatly improves the detection of small lesion patches in the vegetable leaf image to be detected. The invention therefore extracts feature maps at multiple scales by adding an FPN sub-network onto the ResNet sub-network.
Further, according to the vegetable disease detection method provided by the invention, the Region Proposal Network (RPN) sub-network takes the plurality of feature maps produced by the FPN sub-network and, for each point on a feature map (called an anchor point), generates anchor boxes with different scales and aspect ratios; the anchor-box coordinates [m, n, a, b] are coordinates on the original image (where m, n, a and b are respectively the centre horizontal coordinate, the centre vertical coordinate, the length and the width).
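Anchor generation in the [m, n, a, b] convention above can be sketched as follows. The concrete scales and aspect ratios are illustrative assumptions (the patent does not list values); the construction keeps each box's area equal to scale squared while varying its shape:

```python
import numpy as np

def anchors_at(m, n, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Anchor boxes [m, n, a, b] (centre x, centre y, length, width) for
    one anchor point. Scales and ratios are assumptions for illustration."""
    boxes = []
    for s in scales:
        for r in ratios:
            a = s * np.sqrt(r)  # length scales with the aspect ratio...
            b = s / np.sqrt(r)  # ...while the area stays at s * s
            boxes.append([float(m), float(n), a, b])
    return np.array(boxes)
```

With three scales and three ratios, every anchor point contributes nine candidate boxes.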
Optionally, the IoU between each anchor box and the label box in the ground truth (Ground Truth) is computed to distinguish positive and negative samples (i.e. each anchor box is assigned as a foreground box or a background box). For each foreground box, 4 position offsets from the real label box are calculated for position-offset labelling. These labelled anchor boxes (carrying the foreground/background category labels and the position offsets) are then compared against the two outputs of the convolutional layer to compute losses (the region proposal classification loss rpn_class_loss and the region proposal regression loss rpn_bbox_loss), so the network learns how to extract foreground boxes.
Further, after learning how to extract foreground boxes, the foreground boxes are determined from the output probability values of the rpn_cls_score layer; the position offsets are folded into the anchor-box coordinates to obtain the actual anchor-box coordinates, so that all detection candidate regions related to the vegetable leaf image to be detected can be acquired.
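The IoU-based foreground/background split can be sketched as below. Corner coordinates (x1, y1, x2, y2) are used for simplicity (the centre-size [m, n, a, b] form converts to corners straightforwardly), and the 0.7/0.3 thresholds are the common RPN defaults, assumed rather than taken from the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def label_anchor(anchor, gt_boxes, pos_thr=0.7, neg_thr=0.3):
    """1 = foreground, 0 = background, -1 = ignored during training.
    Thresholds are assumed defaults, not values from the patent."""
    best = max(iou(anchor, g) for g in gt_boxes)
    if best >= pos_thr:
        return 1
    if best < neg_thr:
        return 0
    return -1
```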
In a common two-stage detection framework, ROI Pooling pools the corresponding regions of the feature map into a fixed-size feature map according to the position coordinates of the preselected boxes, for the subsequent classification and bounding-box regression operations. Since the positions of the preselected boxes are usually produced by model regression, they are generally floating-point numbers, while the pooled feature map requires a fixed size; ROI Pooling therefore involves two quantization steps. After these two quantizations, the candidate box deviates somewhat from the position to which it was originally regressed, and this deviation affects the accuracy of detection and segmentation.
In view of this, to overcome the above shortcomings of ROI Pooling, the vegetable disease detection method provided by the invention uses RoI Align to map the detection candidate regions: the quantization operation is cancelled, and bilinear interpolation is used to obtain image values at pixel locations whose coordinates are floating-point numbers, turning the whole feature aggregation process into a continuous operation.
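The bilinear interpolation that RoI Align substitutes for the two quantization steps can be sketched on a single 2-D feature map as follows:

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample a 2-D feature map at floating-point coordinates (y, x) by
    bilinear interpolation: a weighted mix of the four surrounding
    integer-coordinate values, with no rounding of (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feat.shape[0] - 1)
    x1 = min(x0 + 1, feat.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * feat[y0, x0]
            + (1 - dy) * dx * feat[y0, x1]
            + dy * (1 - dx) * feat[y1, x0]
            + dy * dx * feat[y1, x1])
```

RoI Align applies this sampling at several points per output bin and averages them; the single-point version above shows the core continuous operation.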
And mapping all the detection candidate regions through ROI Align to obtain a corresponding classification feature map, a frame regression feature map and a mask feature map.
The fully convolutional sub-network (FCN) converts the fully connected layers of a traditional CNN into convolutional layers one by one; it differs from a classic CNN, which uses fully connected layers after the convolutional layers to obtain a fixed-length feature vector for classification.
Specifically, when target detection is performed on the recommended candidate boxes, the sizes of the regions of interest need to be unified: the RoI Align algorithm maps feature map coordinates of different scales back onto the original input image and extracts feature maps of a fixed size.
Further, the invention uses the fully convolutional sub-network (FCN) to perform mask calculation on the mask feature map to obtain a mask prediction result; meanwhile, after the frame regression feature map is expanded into a one-dimensional vector, frame prediction is performed on it with the positive and negative sample calculation method to obtain a frame prediction result.
Further, after the frame regression feature map is obtained and expanded into a one-dimensional feature vector (VShared), VShared can be concatenated with the environmental parameter vector (Venv) to obtain the comprehensive feature matrix; the comprehensive feature matrix is passed, together with the real classification result, into a softmax classifier, and the classifier parameters are corrected through back-propagation to obtain the corrected classifier and the classification prediction result.
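The concatenation of VShared with Venv followed by the softmax classifier can be sketched as below; `W` and `b` stand in for the classifier parameters that back-propagation would learn, and their values here are purely illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def classify(v_shared, v_env, W, b):
    """Concatenate the flattened box-regression features VShared with the
    environmental parameter vector Venv, then apply a softmax classifier.
    W and b are hypothetical learned parameters."""
    v = np.concatenate([v_shared, v_env])  # the comprehensive feature vector
    return softmax(W @ v + b)
```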
According to the vegetable disease detection method provided by the invention, the mask residual convolution network is utilized to extract the characteristics of the disease spot image, and the environment characteristics are combined, so that the problem of high-precision disease diagnosis for the small sample vegetable image is solved under the complex production environment combining the disease spot image and the environment parameters.
Based on the content of the foregoing embodiment, as an alternative embodiment, the recognition prediction result includes a mask prediction result, a frame prediction result, and a classification prediction result;
the step of inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result comprises the following steps:
identifying the vegetable leaf image to be detected by utilizing the ResNet sub-network to obtain image characteristics related to the vegetable leaf image to be detected;
generating a plurality of feature maps according to the image features by utilizing the FPN sub-network;
determining a detection candidate region related to the vegetable leaf image to be detected according to the plurality of feature maps by utilizing the RPN sub-network;
mapping the detection candidate region to the to-be-detected vegetable leaf image based on a RoI Align algorithm to obtain a classification feature map, a frame regression feature map and a mask feature map;
performing mask feature calculation on the mask feature map by using the FCN sub-network to obtain a mask prediction result;
expanding the frame regression feature map into a one-dimensional feature vector;
performing frame prediction on the one-dimensional feature vector based on a positive and negative sample calculation method to obtain a frame prediction result;
connecting the one-dimensional characteristic vector with the environmental parameter vector to obtain a comprehensive characteristic matrix; and carrying out classification and identification on the comprehensive characteristic matrix based on the trained classifier to obtain a classification prediction result.
Specifically, as shown in fig. 2, the method for detecting vegetable diseases specifically clarifies the structure of Mask R-CNN and the process of image analysis, and includes:
ResNet (such as ResNet-101) is used as a backbone, and convolution and pooling of the vegetable leaf image to be detected are performed through ResNet, so that feature extraction of the disease image is achieved, and image features related to the vegetable leaf image to be detected are obtained.
On the basis, an FPN structure is added on ResNet to convert the acquired image features into a plurality of feature maps.
Then, taking the obtained feature maps as input, the RPN recommends detection candidate regions through training and prediction.
Further, mapping the detection candidate region to the to-be-detected vegetable leaf image by using a RoI Align algorithm respectively, and obtaining a classification feature map, a frame regression feature map and a mask feature map respectively.
Further, the FCN sub-network is used to directly identify the mask feature map, obtaining a mask prediction result; after the frame regression feature map is expanded into a one-dimensional feature vector, frame prediction is performed on it with the positive and negative sample calculation method to obtain a frame prediction result. Meanwhile, to incorporate Internet-of-Things environmental characteristics, the one-dimensional feature vector is concatenated with the environmental parameter vector to obtain a comprehensive feature matrix, and the classifier is used to classify the comprehensive feature matrix to obtain a classification prediction result.
Specifically, the regression feature map is expanded to output a one-dimensional feature vector (VShared); in the bounding-box calculation branch, bounding-box regression is performed on VShared. In the Classifier branch, VShared is concatenated with the environmental parameters Venv (env0, env1, …, envK) and passed, together with the real classification result, into a softmax classifier; the classifier parameters are corrected through back-propagation to obtain the corrected classifier and the classification prediction result.
Further, mask calculation is performed by the fully convolutional sub-network (FCN). The feature map processed by RoIAlign is passed through four 3 × 3 convolutions with 256 kernels (conv + BN + ReLU), then an up-sampling transposed convolution with ReLU activation is applied to obtain the feature map. Finally, a 1 × 1 convolution with N kernels and sigmoid activation is performed (N − 1 disease types plus 1 leaf target; here this can be simplified to N = 2), so that each pixel of the corresponding feature map produces a predicted value between 0 and 1 for each class.
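The shape flow through the mask branch just described can be traced as follows. The 14 × 14 RoIAlign output size is the usual Mask R-CNN default and is an assumption here; N = 2 (lesion vs. leaf) follows the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_head_shapes(roi_hw=14, channels=256, n_classes=2):
    """Trace tensor shapes through the mask branch: four padded 3x3 convs
    keep (channels, H, W), the transposed convolution doubles H and W,
    and the final 1x1 conv yields one map per class. The 14x14 input size
    is an assumed default, not stated by the patent."""
    shapes = [(channels, roi_hw, roi_hw)]
    for _ in range(4):                                  # 3x3 conv+BN+ReLU x4
        shapes.append((channels, roi_hw, roi_hw))       # padding keeps H, W
    shapes.append((channels, roi_hw * 2, roi_hw * 2))   # 2x transposed conv
    shapes.append((n_classes, roi_hw * 2, roi_hw * 2))  # final 1x1 conv
    return shapes
```

The sigmoid then squeezes every pixel of the final (N, 28, 28) map into (0, 1), giving the per-class, per-pixel predicted values described above.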
Finally, according to the mask prediction result, the frame prediction result, and the classification prediction result of the vegetable leaf image to be detected, the position, size, and type of the lesions present on the leaf can be accurately determined.
Compared with prior-art methods that only perform disease classification, the vegetable disease detection method provided by the invention obtains the mask, frame, and classification prediction results simultaneously from a single input of the vegetable leaf image; because the size and exact position of each lesion are determined, the analysis is more accurate. This also provides a basis for comparative experiments, for example, sampling images of the same vegetable leaf at different times to analyze disease-control efficacy over that period (such as a pesticide application interval).
Based on the content of the foregoing embodiment, as an optional embodiment, the ResNet sub-network includes:
a first feature extraction module, comprising a convolutional layer with 64 convolution kernels of 7 × 7 × 3 and a step size of 2, and 1 pooling layer with a step size of 2;
a second feature extraction module, comprising 1 first scale layer and 2 first feature layers connected in series, where the first scale layer comprises 64 convolution kernels of 1 × 1 × 64, 64 convolution kernels of 3 × 3 × 64, and 256 convolution kernels of 1 × 1 × 64, and each first feature layer comprises 64 convolution kernels of 1 × 1 × 256, 64 convolution kernels of 3 × 3 × 64, and 256 convolution kernels of 1 × 1 × 64;
a third feature extraction module, comprising 1 second scale layer and 3 second feature layers connected in series, where the second scale layer comprises 128 convolution kernels of 1 × 1 × 256 with a step size of 2, 128 convolution kernels of 3 × 3 × 128, and 512 convolution kernels of 1 × 1 × 128, and each second feature layer comprises 128 convolution kernels of 1 × 1 × 512, 128 convolution kernels of 3 × 3 × 128, and 512 convolution kernels of 1 × 1 × 128;
a fourth feature extraction module, comprising 1 third scale layer and 22 third feature layers connected in series, where the third scale layer comprises 256 convolution kernels of 1 × 1 × 512 with a step size of 2, 256 convolution kernels of 3 × 3 × 256, and 1024 convolution kernels of 1 × 1 × 256, and each third feature layer comprises 256 convolution kernels of 1 × 1 × 1024, 256 convolution kernels of 3 × 3 × 256, and 1024 convolution kernels of 1 × 1 × 256;
a fifth feature extraction module, comprising 1 fourth scale layer and 2 fourth feature layers connected in series, where the fourth scale layer comprises 512 convolution kernels of 1 × 1 × 1024 with a step size of 2, 512 convolution kernels of 3 × 3 × 512, and 2048 convolution kernels of 1 × 1 × 512, and each fourth feature layer comprises 512 convolution kernels of 1 × 1 × 2048, 512 convolution kernels of 3 × 3 × 512, and 2048 convolution kernels of 1 × 1 × 512.
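As an illustrative sketch (not the patented implementation itself), a bottleneck block of the kind the scale and feature layers describe can be written in PyTorch; the class and variable names here are hypothetical, and the C3 module is assembled as one stride-2 scale layer plus 3 feature layers, with a smaller spatial size than the 256 × 256 the patent states, for brevity:

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """One residual bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand, plus a shortcut."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Projection shortcut when shape changes (the "scale layer" case);
        # identity shortcut otherwise (the "feature layer" case).
        self.shortcut = (
            nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                          nn.BatchNorm2d(out_ch))
            if (stride != 1 or in_ch != out_ch) else nn.Identity()
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

# C3 module: one scale layer (stride 2, 256 -> 512) followed by 3 feature layers.
c3 = nn.Sequential(Bottleneck(256, 128, 512, stride=2),
                   *[Bottleneck(512, 128, 512) for _ in range(3)])
x = torch.randn(1, 256, 64, 64)   # stand-in for the C2 module output
y = c3(x)
print(tuple(y.shape))             # channels doubled, length/width halved
```

The shortcut addition before the final ReLU is what keeps gradients flowing through deep stacks, as the "shortcut connection" remark below describes.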
Specifically, as shown in fig. 3, a first feature extraction is performed on the vegetable leaf image to be detected in the first feature extraction module (C1 module). Much as a human observer does, the neural network starts from salient characteristics such as shape, proportion, and contour; the observation range is large and the number of features small, so to save computing resources the convolution kernel size can take a large value and the number of kernels a small value. Specifically, the invention uses 64 3-channel convolution kernels of 7 × 7, down-samples with a step size of 2 to filter redundant features, and applies max pooling with a step size of 2 for feature abstraction to reduce subsequent computational complexity, outputting 64 feature maps of 256 × 256.
The subsequent modules (the second through fifth feature extraction modules) each include one scale layer and several feature layers. The scale layer mainly scales the feature map output by the previous module, while the feature layers further abstract the feature map and extract finer image details. It should be noted that both the scale layers and the feature layers adopt "shortcut connections" between layers for linear correction, so gradient vanishing does not occur during deep convolution.
Further, in the second feature extraction module (C2 module), the scale layer (first scale layer) performs no feature fusion or image-size change; the feature dimension is raised to 256 through 256 convolution kernels of 1 × 1 × 64 while the original size is kept. Then 2 feature layers (first feature layers) are used: feature fusion through 64 convolution kernels of 1 × 1 × 256, feature extraction through 64 convolution kernels of 3 × 3 × 64, and dimension expansion through 256 convolution kernels of 1 × 1 × 64.
Further, the feature maps output by the C2 module undergo feature fusion in the scale layer (second scale layer) of the third feature extraction module (C3 module): the feature dimension is halved to 128, and the feature maps are reduced 50% in length and width through convolution with a step size of 2. After convolution by 512 kernels of 1 × 1 × 128, the feature number is expanded to 2 times the C3 module's input feature number, i.e., to 512. Then, using 3 feature layers (second feature layers), the feature maps are reduced to 128 dimensions, convolved by 128 kernels of 3 × 3 × 128 as in the scale layer, and finally raised to 2 times the module's input dimension, i.e., to 512, outputting 512 feature maps of 128 × 128.
Further, the fourth feature extraction module (C4 module) is similar to the C3 module: the scale layer (third scale layer) first fuses features to 50%, reducing the dimension from 512 to 256, and reduces the image size by 50%, i.e., length and width from 128 to 64; the post-convolution feature dimension is then expanded to 2 times the C4 module's input feature number, i.e., 1024. Then, 22 feature layers (third feature layers) fuse features down to the same dimension as the scale layer, reducing from 1024 to 256; after convolution identical to the scale layer, the feature dimension is expanded to 2 times the C4 module's input, i.e., from 256 back up to 1024.
Further, the fifth feature extraction module (C5 module) operates similarly to the C4 module: after processing the 64 × 64 × 1024 feature map input by C4, it outputs a 32 × 32 × 2048 feature map, with length and width reduced by 50% and the number of features doubled.
The vegetable disease detection method provided by the invention specifies a concrete ResNet sub-network structure that can effectively extract complete features from the input vegetable leaf image to be detected. Increasing the network depth improves the accuracy of feature extraction, and the skip connections of the inner residual blocks solve the gradient-vanishing problem that increased depth causes in deep neural networks.
Based on the content of the foregoing embodiment, as an alternative embodiment, the generating, by using the FPN sub-network, a plurality of feature maps according to the image features includes:
passing the output result of the fifth feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 2048 to obtain a fifth feature map;
passing the output result of the fourth feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 1024, and fusing it with the up-sampling result of the fifth feature map to obtain a fourth feature map;
passing the output result of the third feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 512, and fusing it with the up-sampling result of the fourth feature map to obtain a third feature map;
and passing the output result of the second feature extraction module through a convolutional layer with 256 convolution kernels of 1 × 1 × 256, and fusing it with the up-sampling result of the third feature map to obtain a second feature map.
Based on the above embodiment, the determining, by using the RPN subnetwork and according to the plurality of feature maps, a detection candidate region related to the to-be-detected vegetable leaf image includes:
respectively passing the second, third, fourth, and fifth feature maps through a convolutional layer with 256 convolution kernels of 3 × 3 × 256, thereby obtaining a second, third, fourth, and fifth detection candidate region respectively;
and performing max pooling with a step size of 2 on the fifth feature map to obtain a sixth detection candidate region.
As shown in FIG. 4, the vegetable disease detection method provided by the invention adds an FPN structure on top of ResNet-101. The output of the C5 module is passed through a 1 × 1 convolution to reduce its channel count to 256, yielding M5. A 3 × 3 convolution of M5 with the same channel count generates the fifth feature map (P5).
Further, M5 is up-sampled by a factor of 2 and added to the output of the C4 module after a 1 × 1 convolution reducing its channels to 256, yielding M4. A 3 × 3 convolution of M4 with the same channel count generates the fourth feature map (P4).
Further, M4 is up-sampled by a factor of 2 and added to the output of the C3 module after a 1 × 1 convolution reducing its channels to 256, yielding M3. A 3 × 3 convolution of M3 with the same channel count generates the third feature map (P3).
Further, M3 is up-sampled by a factor of 2 and added to the output of the C2 module after a 1 × 1 convolution reducing its channels to 256, yielding M2. A 3 × 3 convolution of M2 with the same channel count generates the second feature map (P2).
In summary, the invention generates 4 feature maps M2–M5 of different scales through the FPN sub-network, and generates feature maps P2–P5 of n × n × 256 through 256 convolution kernels of 3 × 3 × 256. P2–P5 are used to train and predict for the main network; P5 additionally undergoes one max pooling with a step size of 2 to generate a sixth feature map (P6), and P2–P6 serve the region-of-interest recommendation network as detection candidate regions for the vegetable leaf image to be detected.
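The FPN construction above (1 × 1 lateral convolutions to 256 channels, 2× up-sampling and addition, 3 × 3 smoothing, plus a pooled P6) can be sketched in PyTorch as follows; this is a generic FPN sketch under the stated channel counts, with hypothetical class and variable names, not the patent's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_ch=256):
        super().__init__()
        # 1x1 lateral convolutions: reduce C2..C5 channel counts to 256 (M2..M5)
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_channels)
        # 3x3 smoothing convolutions producing P2..P5 with the same channel count
        self.smooth = nn.ModuleList(nn.Conv2d(out_ch, out_ch, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        m5 = self.lateral[3](c5)
        m4 = self.lateral[2](c4) + F.interpolate(m5, scale_factor=2, mode="nearest")
        m3 = self.lateral[1](c3) + F.interpolate(m4, scale_factor=2, mode="nearest")
        m2 = self.lateral[0](c2) + F.interpolate(m3, scale_factor=2, mode="nearest")
        p2, p3, p4, p5 = (s(m) for s, m in zip(self.smooth, (m2, m3, m4, m5)))
        p6 = F.max_pool2d(p5, kernel_size=1, stride=2)  # extra level for the RPN
        return p2, p3, p4, p5, p6

fpn = FPN()
c2, c3, c4, c5 = (torch.randn(1, c, s, s)
                  for c, s in zip((256, 512, 1024, 2048), (64, 32, 16, 8)))
ps = fpn(c2, c3, c4, c5)
print([tuple(p.shape) for p in ps])
```

All five outputs share 256 channels, so the same RPN and head weights can scan every scale.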
Based on the content of the above embodiment, as an optional embodiment, a dropout module is added at the fully connected layer of bounding-box prediction in the process of performing frame prediction on the one-dimensional feature vector with the positive/negative sample calculation method; and a dropout module is added at the fully connected layer of classifier prediction in the process of classifying the comprehensive feature matrix with the trained classifier.
After the recommended detection candidate regions are obtained and before target detection is performed on them, the regions of interest must be unified in size: the RoI Align algorithm maps feature-map coordinates of different scales back into the original input image and extracts fixed-size feature maps. For example, the feature map size for classification and bounding-box regression is set to 7 × 7, and that for mask calculation to 14 × 14. The feature map then undergoes 2 rounds of Conv + BN + ReLU and is expanded to output a 1024-element one-dimensional feature vector VShared = (feature0, feature1, feature2, ..., feature1023).
Further, in the bounding-box prediction calculation branch, VShared is directly subjected to bounding-box regression.
Furthermore, in the classification prediction (Classifier) branch, VShared and the environmental parameter vector Venv = (env0, env1, ..., envK) are concatenated to form a comprehensive feature matrix (VShared, Venv), which is passed together with the true classification labels into a softmax classifier; the classifier parameters are corrected through back propagation to obtain a corrected classifier and a predicted classification result.
Optionally, the invention applies a dropout function to the fully connected layers of the Bounding box branch and the Classifier branch to reduce the overfitting caused by a small sample dataset.
In the vegetable disease detection method provided by the invention, on one hand, adding dropout to the fully connected layers effectively reduces the model's overfitting to the training samples; on the other hand, environmental feature vectors such as average temperature and humidity are introduced into the image-classification fully connected layer, concatenated with the expanded image feature vector, and passed into the classifier for comprehensive prediction combining environmental and image features. Introducing environmental parameters strengthens classification accuracy, compensates for insufficient image features, and effectively improves the accuracy of the final prediction.
Specifically, the advantage of deep convolutional neural networks in image processing rests on multi-feature extraction from large numbers of data samples. With few samples, training loss keeps decreasing while validation loss oscillates without falling, or falls and then rises again: overfitting occurs. The invention therefore performs data enhancement on the small existing sample set and reduces model overfitting through dropout.
Because the mask residual convolution network provided by the application has fully connected layers only in the classification branch ("Classifier head") and the frame branch ("BBox head"), dropout is applied in those fully connected layers with a drop rate of 0.5: half the hidden nodes are randomly ignored in each training batch. The fully connected network trained each time is thus slightly different, so each batch can be regarded as a different model; since any 2 hidden nodes do not necessarily participate in computation at the same time, weight updates do not over-depend on the interaction of fixed pairs of hidden nodes. Excessive dependence among features is thereby eliminated, effectively preventing the overfitting phenomenon.
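A hedged sketch of the classification branch described above — VShared concatenated with the environmental vector Venv, a dropout rate of 0.5 in the fully connected layer, and a softmax over disease classes — might look like this in PyTorch; the class name and any layer sizes beyond the stated 1024-element VShared are assumptions:

```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """FC head over (VShared, Venv): dropout p=0.5, then logits per disease class."""
    def __init__(self, n_classes, n_env, shared_dim=1024):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(shared_dim + n_env, shared_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),   # half the hidden nodes dropped each training batch
            nn.Linear(shared_dim, n_classes),
        )

    def forward(self, v_shared, v_env):
        x = torch.cat([v_shared, v_env], dim=1)  # comprehensive feature matrix
        return self.fc(x)                        # logits; softmax applied below / in loss

head = ClassifierHead(n_classes=4, n_env=3)      # e.g. 3 diseases + healthy, 3 env params
logits = head(torch.randn(2, 1024), torch.randn(2, 3))
probs = torch.softmax(logits, dim=1)
print(tuple(probs.shape))
```

During training the logits would be fed to a cross-entropy loss with the true labels, which is where the back-propagated correction described above happens.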
Based on the content of the above embodiment, as an optional embodiment, before inputting the vegetable leaf image to be tested to a mask residual convolution network trained in advance, the method further includes pre-training the mask residual convolution network by using a vegetable leaf image sample;
and in the process of pre-training the mask residual convolution network, increasing the loss weight when the mask prediction result is obtained and reducing the loss weight when the classification prediction result is obtained.
Specifically, inside the RPN sub-network, P2–P6 are first passed through 256 convolution kernels of 3 × 3 × 256 to eliminate the aliasing effects of up-sampling. The anchor sizes are [128, 256, 512] and the aspect ratios [1, 0.5, 2]. The detection candidate regions P2–P6 are then scanned by a 3 × 3 convolution kernel, and the center of each 3 × 3 region in each detection candidate region generates 3 × 3 = 9 anchor frames. For logical classification, each feature map [m, n, a, b] generates an a × b × 18 matrix through a 1 × 1 × 18 convolution; the 18-dimensional matrix stores exactly the 2 classification results (foreground/background) for each of the 9 frames. The predicted values are then compared with the actual values, for example with an intersection-over-union (IoU) calculation taking 0.3 as the positive/negative-sample threshold, and binary classification activation is performed through a softmax function, i.e., each point generates 9 × 2 foreground and background confidences.
For bounding-box correction, each feature map [m, n, a, b] generates an a × b × 36 matrix through a 1 × 1 × 36 convolution; the 36-dimensional matrix stores the 4 coordinates (center abscissa, center ordinate, length, width) for each of the 9 frames, and offset parameters are calculated. Two sets of results are output: the confidence that each frame is foreground or background, and the corrected coordinates [center abscissa, center ordinate, length, width] of each frame. As noted above, each point in the scanned candidate region generates 9 frames, i.e., 9 × 4 = 36 frame offsets per point.
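The two 1 × 1 convolutions just described — 18 channels for the 9 anchors' foreground/background scores and 36 channels for their 4 box offsets — can be sketched as a minimal RPN head in PyTorch; the class is illustrative, not the patent's code:

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Shared 3x3 conv, then two 1x1 convs: per-anchor scores and box deltas."""
    def __init__(self, in_ch=256, n_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, in_ch, 3, padding=1)   # anti-aliasing scan
        self.cls = nn.Conv2d(in_ch, n_anchors * 2, 1)       # the a x b x 18 matrix
        self.bbox = nn.Conv2d(in_ch, n_anchors * 4, 1)      # the a x b x 36 matrix

    def forward(self, feat):
        t = torch.relu(self.conv(feat))
        return self.cls(t), self.bbox(t)

head = RPNHead()
scores, deltas = head(torch.randn(1, 256, 32, 32))  # one FPN level as input
print(tuple(scores.shape), tuple(deltas.shape))
```

Because the head is fully convolutional, the same weights scan every position of every pyramid level P2–P6.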
In this way, one scan of a detection candidate region generates tens of thousands of target frames; analyzing and predicting all of them would be extremely expensive, so the invention uses a frame recommendation layer (bounding box recommendation layer) to take the top k frames by foreground confidence. Frame correction is performed with the calculated offset parameters, frames extending beyond the feature map are discarded, and a non-maximum suppression method selects the final 1000–2000 frames used during training and prediction, respectively.
Optionally, the invention provides an improved IoU calculation method to compute IoU between the above frames and each target region of interest (Target RoIs) and to decide whether a frame region is a positive or a negative sample, using the following algorithm:
and setting the frame set of the actual blade as L, the frame set of the actual scab example as D, and the frame set of the predicted result scab example as D'. The method for judging the predicted value of each lesion spot in D' to be positive/negative comprises the following steps:
(1) For any unmarked element x in D', determine whether a leaf ground-truth value y exists in L whose intersection with x exceeds 90%.
(2) If not, the predicted lesion extends beyond the leaf and must be a negative sample; if so, find the ground-truth lesion set Dy and the predicted lesion set D'y of all lesions whose intersection with y exceeds 90%.
(3) If x does not intersect Dy, x is a negative sample. If it does intersect, let I' = {I'1, ..., I'm, ..., I'n} be the set of all subsets of D'y containing x, and I = {I1, ..., Ij, ..., Ik} the set of all subsets of Dy. For each element Ij, starting from j = 1 with maximum j = k, check whether there exists an I'm (0 < m ≤ n) whose IoU with Ij exceeds 0.5; if not, increment j. If such elements exist, collect all eligible predicted-lesion combinations {I'm1, ..., I'mp, ..., I'mq} and take the combination I'mp with the maximum IoU against the ground truth; the components of I'mp are positive samples. If j = k and no I'm has an IoU with Ij greater than 0.5, x is a negative sample.
Further, after positive/negative labels are obtained for all frames, the samples are balanced according to a preset positive/negative ratio to obtain 200 sample frames, and the offset of each positive frame from its nearest actual region of interest is then calculated.
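As a simplified illustration of this positive/negative labeling (omitting the subset-combination step of the full algorithm above; the box format and threshold are illustrative), the core IoU test might be written as:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def label_sample(pred, truths, thresh=0.5):
    """Positive if the prediction overlaps some ground-truth lesion above thresh."""
    return any(iou(pred, t) > thresh for t in truths)

truths = [(10, 10, 50, 50)]                       # one ground-truth lesion box
print(label_sample((12, 12, 48, 52), truths))     # large overlap -> positive
print(label_sample((60, 60, 90, 90), truths))     # disjoint -> negative
```

The patent's algorithm additionally checks the 90% containment against the leaf frame and compares combinations of fragmented lesion predictions, which this sketch leaves out.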
Further, target detection is performed on the detection candidate regions with unified region-of-interest sizes: the RoI Align algorithm maps feature-map coordinates of different scales back into the original input image and extracts fixed-size feature maps — 7 × 7 for classification and bounding-box regression, 14 × 14 for mask computation. The feature map then passes through 2 rounds of Conv + BN + ReLU and is expanded to output the 1024-element vector VShared = (feature0, feature1, feature2, ..., feature1023). In the bounding-box calculation branch, VShared undergoes frame regression; in the Classifier branch, VShared and the environmental parameter vector Venv = (env0, env1, ..., envK) are concatenated into (VShared, Venv) and passed, together with the true classification labels, into a softmax classifier, whose parameters are corrected through back propagation to obtain a corrected classifier and a predicted classification result.
It should be noted that in the vegetable disease detection method provided by the invention, the loss weights of the three branches are adjusted during training (by default all three are 1): the mask prediction (Mask) loss weight is increased to 1.25, the bounding-box prediction loss weight is kept unchanged, and the classification prediction (Classifier) loss weight is reduced to 0.75, so that during training the model puts more emphasis on the more computation-heavy bounding-box and mask calculations. Meanwhile, because biological characteristics are combined with environmental parameters, the application maintains high classification accuracy even with the Classifier loss weight appropriately reduced.
Finally, mask calculation is performed by the FCN. The RoIAlign-processed mask feature map undergoes 4 rounds of 256-kernel 3 × 3 conv + bn + relu convolution, then a 256-kernel 2 × 2 up-sampling transposed convolution with a step size of 2 and relu activation, yielding a 28 × 28 × 256 feature map. Finally, an N-kernel 1 × 1 convolution with sigmoid activation is applied (N − 1 disease types plus 1 leaf target; here N = 2 may be taken), so that each pixel of the feature map generates a predicted value between 0 and 1 for each class.
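The mask branch as described — 4 rounds of 256-kernel 3 × 3 conv + bn + relu, a stride-2 transposed convolution from 14 × 14 to 28 × 28, and an N = 2 kernel 1 × 1 convolution with sigmoid — can be sketched in PyTorch; this is an illustrative reconstruction, not the patent's code:

```python
import torch
import torch.nn as nn

class MaskHead(nn.Module):
    """FCN mask branch: 4x (3x3 conv, 256 kernels, BN, ReLU), 2x2 stride-2
    transposed conv (14x14 -> 28x28), then 1x1 conv to N classes with sigmoid."""
    def __init__(self, n_classes=2, ch=256):
        super().__init__()
        convs = []
        for _ in range(4):
            convs += [nn.Conv2d(ch, ch, 3, padding=1),
                      nn.BatchNorm2d(ch), nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*convs)
        self.up = nn.ConvTranspose2d(ch, ch, 2, stride=2)
        self.out = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):                     # x: RoIAlign output, (rois, 256, 14, 14)
        x = torch.relu(self.up(self.convs(x)))
        return torch.sigmoid(self.out(x))     # per-pixel score in (0, 1) per class

head = MaskHead(n_classes=2)                  # 1 disease type + 1 leaf target
masks = head(torch.randn(3, 256, 14, 14))
print(tuple(masks.shape))
```

Each output channel is an independent per-pixel sigmoid map, so lesion and leaf masks do not compete as softmax classes would.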
According to the vegetable disease detection method provided by the invention, multi-task loss optimization is realized by adjusting the loss weights of the three branches to obtain a better instance-edge calculation result, specifically:
the Mask residual error convolution network provided by the vegetable disease detection method mainly comprises two sub-networks of RPN and Mask R-CNN. The loss calculation during training is mainly based on the classification loss (RPN _ class _ loss) and regression loss (RPN _ bbox _ los) of the recommended region of the RPN sub-network and the frame calculation loss of the Mask R-CNN sub-network, wherein the frame calculation loss mainly comprises: bezel classification penalty (mrnnn _ class _ loss), bezel regression penalty (mrnn _ bbox _ loss), and mask computed global penalty (mrnnn _ marsk _ loss). The loss value of the multi-task model calculated by the invention is obtained by weighting and summing the loss values of all tasks:
Loss = Σi (wi × Li), wherein Loss is the total loss value, Li is the loss value of each task's loss function, and wi is the weight corresponding to Li.
In the prior-art model above, the 5 task weights are identical by default; the 5 loss functions are simply summed, and end-to-end training yields a converged vegetable disease detection model. The invention, however, fully considers that the lesion shapes on the target leaves differ from the object shapes in common datasets and are strongly irregular, and that with a small sample dataset the shape-edge features of lesions are hard to extract, making the lesion range difficult to predict accurately. The model's loss weights are therefore adjusted: the weight of mrcnn_mask_loss is increased, mrcnn_bbox_loss, rpn_bbox_loss, and rpn_class_loss are kept unchanged, and mrcnn_class_loss is correspondingly reduced, so as to obtain a better instance-edge calculation result.
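Numerically, the weighted multi-task loss with the adjusted weights can be illustrated as follows; the individual loss values here are hypothetical, only the weights come from the description above:

```python
# Hypothetical per-task loss values for one training step
losses = {
    "rpn_class_loss": 0.20,
    "rpn_bbox_loss": 0.30,
    "mrcnn_class_loss": 0.40,
    "mrcnn_bbox_loss": 0.25,
    "mrcnn_mask_loss": 0.50,
}
# Adjusted weights: mask up-weighted, classification down-weighted
weights = {
    "rpn_class_loss": 1.0,
    "rpn_bbox_loss": 1.0,
    "mrcnn_class_loss": 0.75,
    "mrcnn_bbox_loss": 1.0,
    "mrcnn_mask_loss": 1.25,
}
# Loss = sum_i w_i * L_i
total = sum(weights[k] * losses[k] for k in losses)
print(round(total, 4))  # 0.20 + 0.30 + 0.30 + 0.25 + 0.625 = 1.675
```

With equal weights the same losses would sum to 1.65; the adjustment shifts the gradient budget toward the mask branch at the classifier's expense.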
Based on the content of the foregoing embodiment, as an optional embodiment, before pre-training the mask residual convolution network with vegetable leaf image samples, the method further includes: flipping, brightness-change, and contrast-change processing of the vegetable leaf image samples so as to expand the sample set.
Usually, training sample images for disease recognition models must be shot under moderate light, and data enhancement does not alter sample brightness; but light conditions in an actual production environment are hard to keep moderate, and most of the time the light is either too dark or too strong. In a pure image recognition task, the convolutional neural network outputs a set of feature maps that are expanded into a one-dimensional feature vector at the fully connected layer and passed directly into the classifier. In an actual greenhouse disease-recognition scene, however, the leaf image features of the same disease, such as shape, texture, and color, differ greatly under different light intensities and observation angles, while the feature differences between different diseases may be small. In view of this, the vegetable disease detection method provided by the application performs, during data enhancement, not only flip operations but also brightness/contrast changes to simulate as far as possible the varying illumination of the natural environment, thereby simulating diseased-leaf characteristics under different angles and light, expanding the vegetable leaf image samples, and increasing the robustness of the model to a certain extent.
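A minimal NumPy sketch of such enhancement — random horizontal flip plus brightness/contrast jitter on a normalized image — might look like this; the jitter ranges are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    """Random flip plus brightness/contrast jitter on a float image in [0, 1]."""
    if rng.random() < 0.5:
        img = img[:, ::-1]                  # horizontal flip (HWC layout)
    brightness = rng.uniform(-0.2, 0.2)     # simulate too-dark / too-bright light
    contrast = rng.uniform(0.8, 1.2)
    out = (img - 0.5) * contrast + 0.5 + brightness
    return np.clip(out, 0.0, 1.0)

leaf = rng.random((256, 256, 3))            # stand-in for a leaf image sample
aug = augment(leaf, rng)
print(aug.shape)
```

Applying several such randomized passes per sample multiplies the effective size of a small dataset without collecting new images.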
On this basis, in order to strengthen prediction of the lesion-area range, the loss weight of disease-type classification is weakened; if disease types were distinguished using crop disease leaf image features alone, final recognition accuracy would suffer.
Furthermore, because any crop disease requires certain environmental conditions to occur and develop, combining the occurrence conditions and occurrence rules of crop diseases can further improve the crop disease recognition rate.
Specifically, as shown in fig. 5, disease occurrence is closely related to environmental parameters such as air temperature and humidity, soil nutrient content, and illumination intensity, and to physiological parameters such as growth period and variety disease resistance. The environmental and physiological parameters affecting the studied disease are extracted and then quantized or discretized, forming an environmental parameter vector Env = (env1, ..., envX) and a physiological parameter vector Bio = (bio1, ..., bioY).
Before the fully connected layer, all image feature maps are expanded into a one-dimensional feature vector Img = (feature1, ..., featureN); the environmental and physiological feature vectors are concatenated with the image feature vector to obtain a one-dimensional comprehensive feature matrix FC = (Img, Env, Bio).
and taking the comprehensive characteristic matrix as a whole, and using a softmax classifier for disease classification, so that the environmental parameters only influence the classification of the disease spots in the detection process, but not influence the calculation of the disease spot area and the mask, and share the weight with the whole network in the training process, thereby realizing the fusion training and prediction of the image characteristics and the environmental parameters of the vegetable disease detection model.
To illustrate the difference in prediction precision between the instance segmentation network (Mask R-CNN) adopted in the vegetable disease detection method of the embodiment of the invention and a target detection network (Faster R-CNN), the invention trains and tests both on a small-sample dataset of 300 leaf images covering three tomato diseases: leaf blight, leaf mold, and powdery mildew.
Detection results show that the mean average precision (mAP) of Mask R-CNN disease detection is 58.08, which is 3.52 higher than the disease detection mAP of Faster R-CNN. However, for target detection with so few classes, this mAP is still low; in particular, the range prediction of the lesion bbox and mask is not ideal.
Furthermore, the invention enhances the image data through flipping and brightness changes, adds dropout at the fully connected layers of the bbox and classifier branches to reduce overfitting caused by few training samples, and adjusts the training loss weights, increasing the mask and frame training weights and reducing the classification weight. With the classification weight reduced, instance segmentation is used for comprehensive detection over multi-faceted data (tomato disease image data, environmental data, and plant physiological data), performing combined analysis of disease images and growth-environment data: in the classification-branch fully connected layer of the Mask R-CNN head network, the image feature vector is concatenated with the environmental parameter vector, and the new fully connected layer feeds the classifier for disease-type prediction. After these improvements, the test mAP of model lesion detection is 75.64, an improvement of 17.56 over the unimproved model, and the mAP of lesion mask prediction is 78.35, an improvement of 17.75.
In conclusion, the method addresses the problem of high-precision diagnosis of small-sample vegetable image diseases in complex scenes, realizing high-precision disease detection by combining tomato lesion images with environmental parameters.
As shown in fig. 6, the present invention provides a vegetable disease detection device, which mainly comprises an image acquisition unit 1, a network prediction unit 2 and a result output unit 3, wherein:
the image acquisition unit 1 is mainly used for acquiring an image of a vegetable leaf to be detected;
the network prediction unit 2 is mainly used for inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result;
the result output unit 3 is mainly used for determining a vegetable disease detection result of the to-be-detected vegetable leaf image according to the identification prediction result;
the masked residual convolution networks include ResNet, FPN, RPN, and FCN subnetworks.
It should be noted that, in specific implementation, the vegetable disease detection device provided in the embodiment of the present invention may be implemented based on the vegetable disease detection method described in any of the above embodiments, and the details are not repeated here.
The vegetable disease detection system provided by the invention extracts lesion-image features with the mask residual convolution network and combines them with environmental features, thereby solving the problem of high-precision disease diagnosis for small-sample vegetable images in complex production environments by combining lesion images with environmental parameters.
Fig. 7 is a schematic structural diagram of an electronic device provided by the present invention. As shown in fig. 7, the electronic device may include: a processor (processor)710, a communication interface (communication interface)720, a memory (memory)730, and a communication bus 740, wherein the processor 710, the communication interface 720, and the memory 730 communicate with each other via the communication bus 740. The processor 710 may invoke logic instructions in the memory 730 to perform a vegetable disease detection method comprising: acquiring an image of a vegetable leaf to be detected; inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result; and determining a vegetable disease detection result of the vegetable leaf image to be detected according to the recognition prediction result; wherein the mask residual convolution network comprises ResNet, FPN, RPN and FCN sub-networks.
In addition, the logic instructions in the memory 730 may be implemented in the form of software functional units and stored in a computer-readable storage medium when sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the vegetable disease detection method provided above, the method comprising: acquiring an image of a vegetable leaf to be detected; inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result; and determining a vegetable disease detection result of the vegetable leaf image to be detected according to the recognition prediction result; wherein the mask residual convolution network comprises ResNet, FPN, RPN and FCN sub-networks.
In still another aspect, the present invention also provides a non-transitory computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the vegetable disease detection method provided in the above embodiments, the method comprising: acquiring an image of a vegetable leaf to be detected; inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result; and determining a vegetable disease detection result of the vegetable leaf image to be detected according to the recognition prediction result; wherein the mask residual convolution network comprises ResNet, FPN, RPN and FCN sub-networks.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (11)
1. A vegetable disease detection method is characterized by comprising the following steps:
acquiring an image of a vegetable leaf to be detected;
inputting the vegetable leaf image to be detected into a mask residual error convolution network trained in advance to obtain a recognition prediction result;
determining a vegetable disease detection result of the vegetable leaf image to be detected according to the identification prediction result;
the masked residual convolution networks include ResNet, FPN, RPN, and FCN subnetworks.
2. A vegetable disease detection method according to claim 1, wherein the recognition prediction result includes a mask prediction result, a frame prediction result, and a classification prediction result;
the step of inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result comprises the following steps:
identifying the vegetable leaf image to be detected by utilizing the ResNet sub-network to obtain image characteristics related to the vegetable leaf image to be detected;
generating a plurality of feature maps according to the image features by utilizing the FPN sub-network;
determining a detection candidate region related to the vegetable leaf image to be detected according to the plurality of feature maps by utilizing the RPN sub-network;
mapping the detection candidate region to the to-be-detected vegetable leaf image based on a RoI Align algorithm to obtain a classification feature map, a frame regression feature map and a mask feature map;
performing mask feature calculation on the mask feature map by using the FCN sub-network to obtain a mask prediction result;
expanding the frame regression feature map into a one-dimensional feature vector;
performing frame prediction on the one-dimensional feature vector based on a positive and negative sample calculation method to obtain a frame prediction result;
connecting the one-dimensional characteristic vector with the environmental parameter vector to obtain a comprehensive characteristic matrix; and carrying out classification and identification on the comprehensive characteristic matrix based on the trained classifier to obtain a classification prediction result.
3. A vegetable disease detection method according to claim 2, wherein the ResNet subnetwork comprises:
a first feature extraction module, comprising a convolutional layer with 64 convolution kernels of 7×7×3 and a stride of 2, and 1 pooling layer with a stride of 2;
a second feature extraction module, comprising 1 first scale layer and 2 first feature layers connected in series in sequence, wherein the first scale layer comprises a convolutional layer with 64 convolution kernels of 1×1×64, a convolutional layer with 64 convolution kernels of 3×3×64, and a convolutional layer with 256 convolution kernels of 1×1×64, and each first feature layer comprises a convolutional layer with 64 convolution kernels of 1×1×256, a convolutional layer with 64 convolution kernels of 3×3×64, and a convolutional layer with 256 convolution kernels of 1×1×64;
a third feature extraction module, comprising a second scale layer and 3 second feature layers connected in series in sequence, wherein the second scale layer comprises a convolutional layer with 128 convolution kernels of 1×1×256 and a stride of 2, a convolutional layer with 128 convolution kernels of 3×3×128, and a convolutional layer with 512 convolution kernels of 1×1×128, and each second feature layer comprises a convolutional layer with 128 convolution kernels of 1×1×512, a convolutional layer with 128 convolution kernels of 3×3×128, and a convolutional layer with 512 convolution kernels of 1×1×128;
a fourth feature extraction module, comprising a third scale layer and 22 third feature layers connected in series in sequence, wherein the third scale layer comprises a convolutional layer with 256 convolution kernels of 1×1×512 and a stride of 2, a convolutional layer with 256 convolution kernels of 3×3×256, and a convolutional layer with 1024 convolution kernels of 1×1×256, and each third feature layer comprises a convolutional layer with 256 convolution kernels of 1×1×1024, a convolutional layer with 256 convolution kernels of 3×3×256, and a convolutional layer with 1024 convolution kernels of 1×1×256;
a fifth feature extraction module, comprising a fourth scale layer and 2 fourth feature layers connected in series in sequence, wherein the fourth scale layer comprises a convolutional layer with 512 convolution kernels of 1×1×1024 and a stride of 2, a convolutional layer with 512 convolution kernels of 3×3×512, and a convolutional layer with 2048 convolution kernels of 1×1×512, and each fourth feature layer comprises a convolutional layer with 512 convolution kernels of 1×1×2048, a convolutional layer with 512 convolution kernels of 3×3×512, and a convolutional layer with 2048 convolution kernels of 1×1×512.
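The five extraction modules above follow the standard ResNet bottleneck layout (each scale or feature layer stacking a 1×1, a 3×3 and a 1×1 convolution). As a quick sanity check, the block counts listed in this claim reproduce the 100 weighted convolutional layers of a ResNet-101 backbone:

```python
# Bottleneck blocks per extraction module, as listed in the claim:
# one scale layer plus the stated number of feature layers each.
blocks = {
    "module2": 1 + 2,   # second feature extraction module
    "module3": 1 + 3,   # third
    "module4": 1 + 22,  # fourth
    "module5": 1 + 2,   # fifth
}
# Each bottleneck block stacks three convolutions (1x1, 3x3, 1x1);
# the first module contributes a single 7x7 stem convolution.
conv_layers = 1 + sum(3 * n for n in blocks.values())
print(conv_layers)  # 100 conv layers; with the final fully connected layer
                    # of the original classifier, this matches ResNet-101
```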
4. A vegetable disease detection method as claimed in claim 3, wherein the generating a plurality of feature maps from the image features using the FPN subnetwork comprises:
passing the output result of the fifth feature extraction module through a convolutional layer with 256 convolution kernels of 1×1×2048 to obtain a fifth feature map;
passing the output result of the fourth feature extraction module through a convolutional layer with 256 convolution kernels of 1×1×1024, and fusing the result with the up-sampling result of the fifth feature map to obtain a fourth feature map;
passing the output result of the third feature extraction module through a convolutional layer with 256 convolution kernels of 1×1×512, and fusing the result with the up-sampling result of the fourth feature map to obtain a third feature map;
and passing the output result of the second feature extraction module through a convolutional layer with 256 convolution kernels of 1×1×256, and fusing the result with the up-sampling result of the third feature map to obtain a second feature map.
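The top-down fusion described above (1×1 lateral convolution, up-sampling, element-wise addition) can be illustrated with a minimal 1-D sketch. Real FPN features are 2-D maps with 256 channels, so the short lists below are only stand-ins for feature maps after the lateral convolutions:

```python
def upsample_nearest(v, factor=2):
    """Nearest-neighbour up-sampling of a 1-D feature sequence."""
    out = []
    for x in v:
        out.extend([x] * factor)
    return out

def fuse(lateral, top_down):
    """Element-wise addition of a lateral feature with the up-sampled
    coarser feature, as in the FPN top-down pathway."""
    up = upsample_nearest(top_down)
    return [a + b for a, b in zip(lateral, up)]

# Toy 1-D stand-ins: a coarse fifth-level map and a finer fourth-level map.
p5 = [1.0, 2.0]
c4 = [0.5, 0.5, 0.5, 0.5]
p4 = fuse(c4, p5)
print(p4)  # [1.5, 1.5, 2.5, 2.5]
```

The finer map keeps its spatial resolution while inheriting the semantics of the coarser one through the added up-sampled values.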
5. A vegetable disease detection method as claimed in claim 4, wherein the determining, by the RPN subnetwork, a detection candidate region related to the vegetable leaf image to be detected from the plurality of feature maps includes:
respectively passing the second feature map, the third feature map, the fourth feature map and the fifth feature map through a convolutional layer with 256 convolution kernels of 3×3×256, so as to respectively obtain a second detection candidate region, a third detection candidate region, a fourth detection candidate region and a fifth detection candidate region;
and performing max pooling with a stride of 2 on the fifth feature map to obtain a sixth detection candidate region.
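Deriving the extra coarse candidate level from the fifth feature map corresponds to the stride-2 max pooling used for the P6 level in standard Mask R-CNN. A minimal 1-D sketch with toy values:

```python
def max_pool_stride2(v):
    """1-D max pooling with window 2 and stride 2: the operation used
    to derive the extra coarse RPN level from the fifth feature map."""
    return [max(v[i], v[i + 1]) for i in range(0, len(v) - 1, 2)]

p5 = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
p6 = max_pool_stride2(p5)
print(p6)  # [3.0, 4.0, 9.0] -- half the resolution, local maxima kept
```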
6. The vegetable disease detection method according to claim 2, wherein, in the process of performing frame prediction on the one-dimensional feature vector based on the positive and negative sample calculation method, a dropout module is added to the fully connected layer of the bounding box (bbox) prediction;
and in the process of classifying the comprehensive characteristic matrix based on the trained classifier, a dropout module is added to the fully connected layer of the classifier.
7. The vegetable disease detection method of claim 2, wherein before inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance, the method further comprises pre-training the mask residual convolution network with a vegetable leaf image sample;
and in the process of pre-training the mask residual convolution network, increasing the loss weight when the mask prediction result is obtained and reducing the loss weight when the classification prediction result is obtained.
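The loss re-weighting in this claim amounts to a weighted sum of the head losses. The weight values below are hypothetical illustrations only: the text states that the mask (and, in the description, bounding-box) weight is increased and the classification weight reduced, but gives no numbers:

```python
def total_loss(l_cls, l_bbox, l_mask, w_cls=0.5, w_bbox=1.5, w_mask=1.5):
    """Weighted sum of the classification, bounding-box and mask losses.
    The weights are assumed values for illustration: classification
    down-weighted, mask and bbox up-weighted."""
    return w_cls * l_cls + w_bbox * l_bbox + w_mask * l_mask

# Toy per-head loss values for one training step.
loss = total_loss(0.8, 0.4, 0.6)
print(round(loss, 2))  # 0.5*0.8 + 1.5*0.4 + 1.5*0.6 = 1.9
```

With such weighting, gradient pressure shifts from the (easier) classification task toward the mask and box localization tasks that the small-sample results showed to be weakest.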
8. The vegetable disease detection method of claim 7, further comprising, before pre-training the mask residual convolution network with the vegetable leaf image samples: performing flipping, brightness-change and contrast-change processing on the vegetable leaf image samples to expand the sample set.
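The flipping, brightness-change and contrast-change augmentations of this claim can be sketched on a toy 2-D grayscale image. The 0-255 clipping range and the contrast pivot of 128 are assumptions for an 8-bit image; the patent does not specify the transform parameters:

```python
def hflip(img):
    """Horizontal flip of a 2-D grayscale image (list of pixel rows)."""
    return [row[::-1] for row in img]

def adjust_brightness(img, delta):
    """Shift every pixel by delta, clipped to the 0-255 range."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def adjust_contrast(img, factor, pivot=128):
    """Scale each pixel's deviation from a pivot value, clipped to 0-255."""
    return [[min(255, max(0, int(pivot + factor * (p - pivot))))
             for p in row] for row in img]

sample = [[10, 200], [120, 30]]  # a tiny hypothetical 2x2 leaf patch
augmented = [sample, hflip(sample),
             adjust_brightness(sample, 40), adjust_contrast(sample, 1.5)]
print(len(augmented))  # the original sample is expanded 4x
```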
9. A vegetable disease detection device, comprising:
the image acquisition unit is used for acquiring an image of the vegetable leaf to be detected;
the network prediction unit is used for inputting the vegetable leaf image to be detected into a mask residual convolution network trained in advance to obtain a recognition prediction result;
the result output unit is used for determining a vegetable disease detection result of the to-be-detected vegetable leaf image according to the identification prediction result;
the masked residual convolution networks include ResNet, FPN, RPN, and FCN subnetworks.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the vegetable disease detection method according to any one of claims 1 to 8 when executing the computer program.
11. A non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program when executed by a processor implements the vegetable disease detection method steps of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011444655.XA CN112598031A (en) | 2020-12-08 | 2020-12-08 | Vegetable disease detection method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112598031A true CN112598031A (en) | 2021-04-02 |
Family
ID=75192369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011444655.XA Pending CN112598031A (en) | 2020-12-08 | 2020-12-08 | Vegetable disease detection method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112598031A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111161275A (en) * | 2018-11-08 | 2020-05-15 | 腾讯科技(深圳)有限公司 | Method and device for segmenting target object in medical image and electronic equipment |
CN111369540A (en) * | 2020-03-06 | 2020-07-03 | 西安电子科技大学 | Plant leaf disease identification method based on mask convolutional neural network |
CN111400536A (en) * | 2020-03-11 | 2020-07-10 | 无锡太湖学院 | Low-cost tomato leaf disease identification method based on lightweight deep neural network |
CN111489327A (en) * | 2020-03-06 | 2020-08-04 | 浙江工业大学 | Cancer cell image detection and segmentation method based on Mask R-CNN algorithm |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113065569A (en) * | 2021-04-13 | 2021-07-02 | 广东省科学院智能制造研究所 | Fish quality estimation method, system, device and storage medium based on neural network |
CN113065569B (en) * | 2021-04-13 | 2023-11-24 | 广东省科学院智能制造研究所 | Fish quality estimation method, system, device and storage medium based on neural network |
CN114399480A (en) * | 2021-12-30 | 2022-04-26 | 中国农业大学 | Method and device for detecting severity of vegetable leaf disease |
CN114239756A (en) * | 2022-02-25 | 2022-03-25 | 科大天工智能装备技术(天津)有限公司 | Insect pest detection method and system |
CN114239756B (en) * | 2022-02-25 | 2022-05-17 | 科大天工智能装备技术(天津)有限公司 | Insect pest detection method and system |
CN115660291A (en) * | 2022-12-12 | 2023-01-31 | 广东省农业科学院植物保护研究所 | Plant disease occurrence and potential occurrence identification and evaluation method and system |
CN115660291B (en) * | 2022-12-12 | 2023-03-14 | 广东省农业科学院植物保护研究所 | Plant disease occurrence and potential occurrence identification and evaluation method and system |
CN116757332A (en) * | 2023-08-11 | 2023-09-15 | 北京市农林科学院智能装备技术研究中心 | Leaf vegetable yield prediction method, device, equipment and medium |
CN116757332B (en) * | 2023-08-11 | 2023-12-05 | 北京市农林科学院智能装备技术研究中心 | Leaf vegetable yield prediction method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764292B (en) | Deep learning image target mapping and positioning method based on weak supervision information | |
CN108830326B (en) | Automatic segmentation method and device for MRI (magnetic resonance imaging) image | |
CN112598031A (en) | Vegetable disease detection method and system | |
CN108416266B (en) | Method for rapidly identifying video behaviors by extracting moving object through optical flow | |
Al Bashish et al. | A framework for detection and classification of plant leaf and stem diseases | |
US20190164047A1 (en) | Object recognition using a convolutional neural network trained by principal component analysis and repeated spectral clustering | |
Nawaz et al. | AI-based object detection latest trends in remote sensing, multimedia and agriculture applications | |
Xia et al. | A multi-scale segmentation-to-classification network for tiny microaneurysm detection in fundus images | |
Shewale et al. | High performance deep learning architecture for early detection and classification of plant leaf disease | |
Liu et al. | Deep learning based research on quality classification of shiitake mushrooms | |
Su et al. | LodgeNet: Improved rice lodging recognition using semantic segmentation of UAV high-resolution remote sensing images | |
CN114648806A (en) | Multi-mechanism self-adaptive fundus image segmentation method | |
Pramunendar et al. | A Robust Image Enhancement Techniques for Underwater Fish Classification in Marine Environment. | |
Jenifa et al. | Classification of cotton leaf disease using multi-support vector machine | |
CN114332473A (en) | Object detection method, object detection device, computer equipment, storage medium and program product | |
CN104573701B (en) | A kind of automatic testing method of Tassel of Corn | |
CN118230166A (en) | Corn canopy organ identification method and canopy phenotype detection method based on improved Mask2YOLO network | |
CN116452820B (en) | Method and device for determining environmental pollution level | |
Abishek et al. | Soil Texture Prediction Using Machine Learning Approach for Sustainable Soil Health Management | |
CN117437691A (en) | Real-time multi-person abnormal behavior identification method and system based on lightweight network | |
Bose et al. | Leaf diseases detection of medicinal plants based on image processing and machine learning processes | |
CN116523934A (en) | Image segmentation model based on improved Swin-Unet, training method thereof and image segmentation method | |
Wang et al. | Strawberry ripeness classification method in facility environment based on red color ratio of fruit rind | |
Harinadha et al. | Tomato Plant Leaf Disease Detection Using Transfer Learning-based ResNet110 | |
Jaya et al. | Enhancing Accuracy in Detection and Counting of Islands Using Object-Based Image Analysis: A Case Study of Kepulauan Seribu, DKI Jakarta |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210402 |