
CN119091380A - A tea garden pest and disease identification method and system based on deep learning - Google Patents

A tea garden pest and disease identification method and system based on deep learning

Info

Publication number
CN119091380A
CN119091380A (application number CN202411211043.4A)
Authority
CN
China
Prior art keywords
pest
disease
image
tea garden
tea
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411211043.4A
Other languages
Chinese (zh)
Inventor
姜兆亮
杨玉通
朱薪瑞
郭云龙
丁海超
牛平平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University Rizhao Research Institute
Shandong University
Original Assignee
Shandong University Rizhao Research Institute
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University Rizhao Research Institute, Shandong University filed Critical Shandong University Rizhao Research Institute
Priority to CN202411211043.4A priority Critical patent/CN119091380A/en
Publication of CN119091380A publication Critical patent/CN119091380A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/188 Vegetation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a tea garden pest and disease identification method and system based on deep learning. The method comprises: acquiring tea garden tea images and environmental information; preprocessing the tea garden tea images; performing preliminary pest and disease identification on the preprocessed images with a pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types; inputting the concentration area images and types into a region distribution analysis model for deep pest and disease identification, obtaining pest and disease target images and generating a pest and disease distribution map; and formulating control suggestions by combining the environmental information, the concentration area images, the pest and disease types, and the distribution map. The invention monitors tea garden pests and diseases in real time and speeds up information feedback, and also allows control measures to be applied precisely, effectively safeguarding tea garden yield and quality.

Description

Tea garden pest and disease identification method and system based on deep learning
Technical Field
The invention relates to the technical field of image processing, and in particular to a tea garden pest and disease identification method and system based on deep learning.
Background
As the tea industry has grown, pest and disease problems in tea gardens have become increasingly prominent. Pests and diseases have long been a major factor affecting tea yield and quality: they reduce yield, degrade the appearance, flavor, and quality of the tea, and in severe cases kill the tea trees. Early detection and accurate identification of pests and diseases are therefore essential for tea garden health management and timely implementation of control measures.
At present, monitoring of tea garden pests and diseases relies mainly on manual observation and simple traditional monitoring equipment. Manual observation is time-consuming and labor-intensive, and because observers differ in experience, the accuracy and timeliness of identification are hard to guarantee. Traditional monitoring equipment typically covers only part of a tea garden, cannot monitor the whole garden in real time, and cannot feed back pest distribution promptly, so control measures lag. In addition, traditional methods rarely provide a fine-grained pest and disease distribution map and lack precise analysis of specific distribution areas and severity, making decision errors by managers likely.
Disclosure of Invention
To solve these problems, the invention provides a tea garden pest and disease identification method and system based on deep learning. The method first locates pest and disease concentration areas with a pest and disease target detection model, then obtains a pest and disease distribution map through finer segmentation by a region distribution analysis model, greatly improving localization accuracy and recognition precision.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In a first aspect, the invention provides a tea garden pest and disease identification method based on deep learning, comprising the following steps:
acquiring tea garden tea images and environmental information;
performing preliminary pest and disease identification on the preprocessed tea garden tea images using a pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
inputting the pest and disease concentration area images and pest and disease types into a region distribution analysis model for deep pest and disease identification, obtaining pest and disease target images, and generating a pest and disease distribution map;
formulating control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
Preferably, the environmental information comprises temperature, humidity, soil pH, wind speed and wind direction information of the tea garden.
Preferably, the preprocessing comprises unified brightness correction, histogram equalization, image resizing, and normalization of the tea garden tea images.
Preferably, the pest and disease target detection model comprises a CSPDarknet network, a multi-scale feature fusion network, and an Anchor-free prediction module;
the CSPDarknet network comprises a plurality of convolution modules, a C2F module, and an SPPF module, wherein the convolution modules extract low-level features, the C2F module fuses the low-level features output by the convolution modules, and the SPPF module performs multi-level interaction on the fused features output by the C2F module;
the multi-scale feature fusion network consists of a feature pyramid network and a path aggregation network and performs multi-scale fusion on the interaction features output by the SPPF module;
and the Anchor-free prediction module receives the multi-scale features output by the multi-scale feature fusion network and outputs the target's category loss and prediction box loss.
Preferably, inputting the pest and disease concentration area images and pest and disease types into a region distribution analysis model for deep pest and disease identification to obtain pest and disease target images specifically comprises:
extracting an initial feature map from the pest and disease concentration area image through successive convolution and activation layers;
extracting a high-level feature map from the initial feature map through a max pooling layer;
gradually restoring the high-level feature map to the resolution of the original image to obtain a restored image, and splicing and fusing the restored image through skip connections;
and according to the pest and disease type, invoking a classifier trained on the pest and disease database corresponding to that type to classify the spliced and fused features and extract the pest and disease target foreground, obtaining a pest and disease target image.
Preferably, the pest and disease distribution map is obtained by stitching all acquired tea garden tea images, marking pest and disease target images of the same pest and disease type with the same color based on their positions, marking different pest and disease types with different colors, and leaving background pixels unprocessed.
Preferably, formulating control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map specifically comprises:
calculating the area of the pest and disease concentration area image for each pest and disease to obtain its distribution proportion and estimate its severity;
retrieving the corresponding control suggestion from an agricultural pest and disease database according to the pest and disease type, the control suggestion comprising pesticide type, dosage, and application time;
and adjusting the control suggestion according to the environmental information and the pest and disease severity.
In a second aspect, the invention provides a tea garden pest and disease identification system based on deep learning, comprising:
a data acquisition module configured to acquire tea garden tea images and environmental information;
a preliminary identification module configured to perform preliminary pest and disease identification on the preprocessed tea garden tea images using the pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
a deep identification module configured to input the pest and disease concentration area images and pest and disease types into the region distribution analysis model for deep pest and disease identification, obtain pest and disease target images, and generate a pest and disease distribution map;
and an analysis feedback module configured to formulate control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
In a third aspect, the invention provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the tea garden pest and disease identification method based on deep learning of the first aspect.
In a fourth aspect, the invention provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the tea garden pest and disease identification method based on deep learning of the first aspect.
Compared with the prior art, the invention has the following beneficial effects:
(1) The invention acquires RGB image information of the tea garden and locates the areas where pests and diseases occur; to analyze their distribution, the pest and disease concentration area images are input into a region distribution analysis network, which analyzes the type and fine-grained distribution of the pests and diseases; finally, decision suggestions are given to managers by combining the pest and disease analysis report with environmental information fed back by environmental sensors. By quickly locating pest and disease areas, the invention monitors tea garden pests and diseases in real time and speeds up information feedback; combining sensor feedback with the analysis results makes the decision suggestions more scientific and reasonable, allowing managers to apply control measures precisely and effectively safeguard tea garden yield and quality.
(2) The two-stage pest and disease localization and identification method first locates pest and disease concentration areas and then obtains the pest and disease distribution map through finer segmentation, greatly improving localization accuracy and recognition precision.
(3) Based on information from multiple environmental sensors and the pest and disease analysis report, managers gain a more comprehensive understanding of the tea garden environment and can precisely control different pests and diseases at different severities, improving pesticide utilization and reducing environmental pollution.
Additional aspects of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
Fig. 1 is a main flowchart of the tea garden pest and disease identification method based on deep learning provided by an embodiment of the invention;
Fig. 2 is a detailed flowchart of the tea garden pest and disease identification method based on deep learning according to an embodiment of the invention;
Fig. 3 is a schematic diagram of the pest and disease target detection model according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the region distribution analysis model according to an embodiment of the invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
Embodiment 1
As shown in Fig. 1, this embodiment discloses a tea garden pest and disease identification method based on deep learning, comprising the following steps:
S1, acquiring tea garden tea images and environmental information, and preprocessing the tea garden tea images;
S2, performing preliminary pest and disease identification on the preprocessed tea garden tea images using a pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
S3, inputting the pest and disease concentration area images and pest and disease types into a region distribution analysis model for deep pest and disease identification, obtaining pest and disease target images, and generating a pest and disease distribution map;
S4, formulating control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
The tea garden pest and disease identification method disclosed in this embodiment is described in detail below with reference to Fig. 2.
In S1, 360-degree high-definition cameras are installed at multiple positions in the tea garden to capture images of the tea crop areas, obtaining tea garden tea images.
As a specific implementation, the purpose of collecting tea garden tea images is to obtain tea bud images suitable for training. The shooting angle should be as close to a top-down view as possible, with the camera's angle to the horizontal preferably greater than 45 degrees, and each camera covering a portion of the tea garden. If a tea garden is too large, multiple cameras divide the area and collect images separately.
Image brightness varies under different illumination conditions, which makes image details difficult to distinguish. This embodiment therefore uses the current ambient illumination fed back by an illumination sensor to perform brightness correction, histogram equalization, image resizing, and normalization, ensuring the effectiveness of the images, as in the sketch below.
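The following is a minimal sketch of such a preprocessing routine, assuming OpenCV and a linear brightness model driven by the illumination sensor; the function name, the reference illumination level, and the 640-pixel input size are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def preprocess(img_bgr: np.ndarray, sensor_lux: float,
               target_lux: float = 500.0, size: int = 640) -> np.ndarray:
    # Brightness correction: scale toward a reference illumination level,
    # assuming a simple linear relation between sensor lux and image gain.
    gain = np.clip(target_lux / max(sensor_lux, 1e-3), 0.5, 2.0)
    img = np.clip(img_bgr.astype(np.float32) * gain, 0, 255).astype(np.uint8)

    # Histogram equalization on the luminance channel only, preserving color.
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Resize to the model input size and normalize pixel values to [0, 1].
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_LINEAR)
    return img.astype(np.float32) / 255.0
```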
Temperature and humidity sensors, soil sensors, and wind speed and direction sensors are installed at multiple positions in the tea garden to collect environmental information such as temperature, humidity, soil pH, wind speed, and wind direction.
As a specific implementation: in an actual tea garden, tea fields are laid out in blocks over staggered, complicated terrain. The temperature-humidity sensor and the wind speed and direction sensor are therefore mounted on the pole that carries the camera. The pole is tall enough that the wind sensor, mounted near the top, is not affected by terrain obstruction; the temperature-humidity sensor is mounted lower and fitted with an opaque housing to block direct sunlight, measuring the field's temperature and humidity more accurately. Soil sensors are distributed randomly within each tea field to improve the robustness of soil pH detection. It should be understood that the specific installation and locations may be set by those skilled in the art according to actual circumstances.
In this embodiment, the preprocessed tea garden tea images are made into a target detection dataset for building the pest and disease target detection model. From more than 6,000 collected samples, 3,600 high-quality samples with distinct features were manually selected and adjusted. Labelme was used to annotate rectangular boxes for the common pest and disease types of tea buds on each image. Data augmentation was applied using geometric transformations (flipping, scaling, translation), noise injection (Gaussian and salt-and-pepper noise), affine transformations, and blurring and sharpening (Gaussian blur, motion blur, image sharpening), expanding the dataset to more than 7,800 samples and improving generalization. The dataset was split into training, validation, and test sets in an 8:1:1 ratio and used to fully train the pest and disease target detection model. A minimal augmentation-and-split sketch follows.
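This sketch covers the augmentation operations named above (flipping, affine scale/translation, Gaussian and salt-and-pepper noise, blurring, sharpening) and the 8:1:1 split, assuming OpenCV/NumPy. A real detection pipeline would also transform the bounding-box labels alongside the image; that bookkeeping is omitted here.

```python
import random
import cv2
import numpy as np

def augment(img: np.ndarray) -> np.ndarray:
    op = random.choice(["flip", "affine", "noise", "saltpepper", "blur", "sharpen"])
    if op == "flip":
        return cv2.flip(img, 1)                              # horizontal flip
    if op == "affine":
        h, w = img.shape[:2]
        m = np.float32([[1.05, 0, 10], [0, 1.05, -10]])      # slight scale + shift
        return cv2.warpAffine(img, m, (w, h))
    if op == "noise":
        noisy = img.astype(np.float32) + np.random.normal(0, 10, img.shape)
        return np.clip(noisy, 0, 255).astype(np.uint8)       # Gaussian noise
    if op == "saltpepper":
        out = img.copy()
        mask = np.random.rand(*img.shape[:2])
        out[mask < 0.01] = 0                                 # pepper
        out[mask > 0.99] = 255                               # salt
        return out
    if op == "blur":
        return cv2.GaussianBlur(img, (5, 5), 0)
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    return cv2.filter2D(img, -1, kernel)                     # sharpen

def split_8_1_1(samples: list) -> tuple:
    random.shuffle(samples)                                  # in-place shuffle
    n = len(samples)
    a, b = int(0.8 * n), int(0.9 * n)
    return samples[:a], samples[a:b], samples[b:]            # train / val / test
```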
In S2, preliminary pest and disease identification is performed on the preprocessed tea garden tea images using the pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types.
As shown in Fig. 3, the pest and disease target detection model consists of a CSPDarknet network, a multi-scale feature fusion network, and an Anchor-free prediction module.
The CSPDarknet network includes a plurality of convolution modules, a C2F module, and an SPPF module.
The convolution module extracts low-level image features such as edges and textures. It comprises a convolution layer, a batch normalization layer, and an activation function layer.
The convolution layer generates a series of feature maps by sliding filters over the input image; the batch normalization layer normalizes the feature maps to zero mean and unit variance, accelerating training; and the activation function layer applies a nonlinear function to the feature maps to enhance the model's expressive power.
Stacked convolution modules further extract fine-grained feature information and strengthen the model's ability to capture detail. A PyTorch sketch of such a module follows.
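The sketch below shows such a convolution module in PyTorch; SiLU is assumed as the activation, as is common in CSPDarknet-style backbones, though the patent does not name the activation function.

```python
import torch.nn as nn

class ConvModule(nn.Module):
    """Convolution -> batch normalization -> activation."""
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)  # normalizes to ~zero mean, unit variance
        self.act = nn.SiLU()             # nonlinearity for expressive power

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))
```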
The C2F module is a core component for deeper feature extraction and fusion. It consists of a convolution module and several residual units; the residual units pass two paths of input information through a normal convolution and a short-circuit (skip) connection, alleviating the vanishing-gradient problem in deep networks and improving training stability and model accuracy. Through partial convolution and the skip connections of its residual units, the C2F module fuses low-level features from different paths into fused features, reducing computation and parameter count while preserving the model's representational capacity; this structure improves feature diversity and expressiveness. A sketch follows below.
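This hedged sketch, reusing the ConvModule defined above, follows the publicly known YOLOv8-style C2f design (split after an input convolution, stacked residual units on one path, concatenation of all branches); the patented variant's exact channel arithmetic may differ.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    """Residual unit: two convolutions with a short-circuit connection."""
    def __init__(self, c: int):
        super().__init__()
        self.cv1 = ConvModule(c, c, 3)
        self.cv2 = ConvModule(c, c, 3)

    def forward(self, x):
        return x + self.cv2(self.cv1(x))       # identity skip eases gradients

class C2F(nn.Module):
    def __init__(self, c_in: int, c_out: int, n: int = 2):
        super().__init__()
        c = c_out // 2
        self.cv1 = ConvModule(c_in, 2 * c, 1)
        self.blocks = nn.ModuleList(Bottleneck(c) for _ in range(n))
        self.cv2 = ConvModule((2 + n) * c, c_out, 1)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))  # split channels into two paths
        for m in self.blocks:
            y.append(m(y[-1]))                 # each residual unit feeds the next
        return self.cv2(torch.cat(y, dim=1))   # fuse all branches
```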
The SPPF module consists of a convolution module, a concatenation layer, and max pooling layers. Max pooling is a downsampling operation that reduces the size of the feature map while preserving important information: a fixed-size sliding window is applied over the feature map and the maximum value in each window is taken as output, reducing resolution and computation. This helps extract salient features, enhances translational invariance, and reduces the risk of overfitting. The concatenation layer transfers and fuses feature information across scales; through skip and lateral connections, features interact at multiple levels, ensuring information flows fully through the network.
In the SPPF module, after the fused features are convolved, the concatenation layer and the max pooling layers perform multi-scale pooling on the input feature map and splice the results together, capturing multi-scale context information and realizing multi-level interaction. This improves the model's ability to recognize targets of different sizes while maintaining high efficiency, as in the sketch below.
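A sketch of an SPPF module in the same style: three chained max-pooling operations whose outputs are spliced with the input, approximating multi-scale spatial pyramid pooling at low cost. The 5x5 kernel is an assumption taken from common SPPF implementations, not from the patent.

```python
import torch
import torch.nn as nn

class SPPF(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int = 5):
        super().__init__()
        c = c_in // 2
        self.cv1 = ConvModule(c_in, c, 1)
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.cv2 = ConvModule(4 * c, c_out, 1)

    def forward(self, x):
        x = self.cv1(x)
        p1 = self.pool(x)      # each successive pooling widens the
        p2 = self.pool(p1)     # effective receptive field
        p3 = self.pool(p2)
        return self.cv2(torch.cat([x, p1, p2, p3], dim=1))  # multi-scale splice
```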
The multi-scale feature fusion network consists of a feature pyramid network (FPN) and a path aggregation network (PAN). The FPN comprises an upsampling module, a concatenation layer, and a C2F module; the PAN comprises a convolution module, a C2F module, and a concatenation layer. The upsampling module restores low-resolution feature maps to higher resolution so that high-level semantic features can be fused with low-level detail features, strengthening small-target detection; the concatenation layer, C2F module, and convolution module have been described above and are not repeated. Together, the FPN and PAN form the multi-scale feature fusion network, improving the model's ability to learn multi-scale feature information.
Anchor-free means that the model directly predicts the target's center point and bounding box size without predefined anchor boxes, reducing hyperparameter dependence and simplifying the network structure. The Anchor-free prediction module outputs detection results from three scales through three prediction heads; each head consists of a convolution module and a convolution layer, receives the fused multi-scale features, and outputs the target's category loss and prediction box loss through regression and classification heads, quantifying the quality of the trained model.
The CSPDarknet network in this embodiment follows a network design method that improves efficiency and performance by dividing the network into several stages and connecting only part of the feature maps within each stage. CSPDarknet is a Darknet variant based on the CSPNet structure: it applies CSPNet's design ideas on top of Darknet to improve feature extraction capability and network performance, especially for complex object detection tasks.
In this embodiment, the multi-scale feature fusion network adopts the combined FPN-PAN structure, further fusing and enhancing features at different scales and improving the detection of pest and disease areas of different sizes; the final detection result is generated through the Anchor-free mechanism, outputting the pest and disease detection image and yielding the pest and disease concentration areas and types.
In the FPN-PAN combination, the FPN fuses feature maps of different scales, and the PAN further strengthens the fusion through path aggregation. This combined structure fully integrates feature information from different levels at different scales and improves detection of pest and disease areas of different sizes, with particularly notable gains for small and distant targets.
Traditional Anchor-based detection requires a preset group of prior boxes, whereas the Anchor-free mechanism predicts the target's bounding box directly, reducing the influence of preset boxes on the detection result and improving accuracy and robustness. The mechanism also suits the complex and changeable tea garden environment, coping better with pest and disease areas of different shapes, sizes, and colors. An illustrative decoding step is sketched below.
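As an illustration of Anchor-free decoding, the sketch below converts per-cell predictions of a center offset and box size into image-space boxes; the tensor layout and the assumption that width and height are predicted in stride units are ours, not the patent's.

```python
import torch

def decode_anchor_free(pred: torch.Tensor, stride: int) -> torch.Tensor:
    # pred: (H, W, 4) with (dx, dy, w, h) per grid cell; offsets in [0, 1],
    # width/height in stride units (an assumed convention).
    h, w, _ = pred.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    cx = (xs + pred[..., 0]) * stride   # box center x in image pixels
    cy = (ys + pred[..., 1]) * stride   # box center y in image pixels
    bw = pred[..., 2] * stride          # box width
    bh = pred[..., 3] * stride          # box height
    return torch.stack([cx - bw / 2, cy - bh / 2,
                        cx + bw / 2, cy + bh / 2], dim=-1)  # (x1, y1, x2, y2)
```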
As a specific implementation, the pest and disease target detection model learns shallow pest and disease features, mainly for locating regions and identifying pest and disease types; it need not determine exact pest positions, only predict approximate regions, so its learning is biased toward large-scale pest and disease features. The tea garden tea images are input into the model to obtain the predicted pest and disease regions, i.e., rectangular boxes marking the various pests and diseases; the boxes are cropped to obtain pest and disease concentration area images, and the pest and disease types are obtained from the preset labels. The pests and diseases cover four diseases (tea leaf spot, tea anthracnose, tea blister blight, and tea white spot) and four pests (tea short-hair mite, leafhopper, tea moth, and black thorn whitefly).
In S3, the pest and disease concentration area images and pest and disease types are input into the region distribution analysis model for deep pest and disease identification, obtaining pest and disease target images. As shown in Fig. 4, the model comprises a downsampling operation and an upsampling operation, specifically:
S301, the downsampling operation extracts an initial feature map from the pest and disease concentration area image through successive convolution and activation layers;
S302, a high-level feature map is extracted from the initial feature map through a max pooling layer;
S303, the upsampling operation gradually restores the high-level feature map to the resolution of the original image to obtain a restored image, which is spliced and fused through skip connections.
According to the pest and disease type, a classifier trained on the corresponding pest and disease database is invoked to classify the spliced and fused features and extract the pest and disease target foreground, obtaining a pixel-level pest and disease target image.
In this embodiment, the region distribution analysis model learns deep pest and disease features; compared with the detection model, it learns small-scale feature information. The pest and disease concentration area images and types obtained in the previous step are input into the model, which, according to the pest and disease type, invokes a classifier (a Softmax multi-class activation function) trained on that type's pest and disease database to perform pixel-level semantic segmentation of the concentration area image and extract the pest and disease target foreground, obtaining the pest and disease target image. A compact sketch follows.
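The sketch below is a minimal U-Net-style model matching S301-S303 (convolutional downsampling, max pooling, upsampling back to input resolution, skip-connection splicing, and a Softmax pixel classifier); the depth and channel counts are illustrative, not the patent's configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class MiniUNet(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.enc1 = conv_block(3, 32)    # S301: initial feature map
        self.pool = nn.MaxPool2d(2)      # S302: downsample, keep key information
        self.enc2 = conv_block(32, 64)   # high-level feature map
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # S303: restore resolution
        self.dec = conv_block(64, 32)    # fuse skip + upsampled features
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):                        # x: (N, 3, H, W), H and W even
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.up(e2)
        d = self.dec(torch.cat([d, e1], dim=1))  # skip-connection splice
        return self.head(d).softmax(dim=1)       # per-pixel class probabilities
```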
It should be appreciated that the pest and disease database contains common pest and disease types and their corresponding characteristics, and can be configured by those skilled in the art according to actual circumstances.
All acquired tea garden tea images are stitched together; pest and disease target images belonging to the same pest and disease type are marked with the same color based on their positions, different types are marked with different colors, and background pixels are left unprocessed, yielding the pest and disease distribution map, as in the sketch below.
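A sketch of that coloring rule: one fixed color per pest and disease type, painted over the pixels of each target mask, with background pixels left untouched. The palette entries and mask format are placeholders.

```python
import numpy as np

# Placeholder palette: one fixed BGR color per pest and disease type.
PALETTE = {"tea_leaf_spot": (0, 0, 255), "leafhopper": (0, 255, 0)}

def paint_distribution(stitched: np.ndarray, masks: dict) -> np.ndarray:
    # masks: {pest_type: boolean (H, W) array aligned with the stitched image}
    out = stitched.copy()
    for pest_type, mask in masks.items():
        out[mask] = PALETTE[pest_type]  # same color for every target of one type
    return out                          # background pixels left unprocessed
```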
In this embodiment, the downsampling stage uses successive convolution and activation layers so the model can learn deep features of the pest and disease images, which represent the nature of pests and diseases better than raw pixel values. The max pooling layer further extracts key information from the features, removing redundancy and noise and improving recognition accuracy. The upsampling stage not only restores the image resolution but also splices and fuses feature maps of different levels through skip connections. This multi-scale feature fusion lets the model capture both local details and global structure, strengthening recognition of pests and diseases of different types and severities and improving generalization.
As an alternative implementation, 1,200 images rich in pests, diseases, and types were manually screened from the collected tea garden images, and the pest and disease areas in them were manually annotated to create a training dataset with pixel-level labels. Data augmentation expanded the dataset to 2,600 semantic segmentation images. A U-Net-based semantic segmentation network was trained on this dataset, and hyperparameters were tuned to optimize performance, yielding the region distribution analysis model.
In S4, control suggestions are formulated by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
In this embodiment, the environmental information, concentration area images, pest and disease types, and distribution map form a feedback report, according to which managers perform targeted control.
As a specific implementation, the distribution proportion of each pest and disease is obtained from the distribution map by calculating the area of the concentration area image where it occurs, and severity is estimated from that area: area proportions of 0-10%, 10-25%, 25-50%, and above 50% are graded as slight, moderate, severe, and extremely severe, respectively. A small helper illustrating this grading follows.
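The grading thresholds above as a small helper; the area ratio is assumed to be pest-region pixels over total stitched-image pixels, consistent with the pixel-ratio calculation described in this embodiment.

```python
def severity(area_ratio: float) -> str:
    # area_ratio: pest-region pixels / total stitched-image pixels (assumed)
    if area_ratio <= 0.10:
        return "slight"
    if area_ratio <= 0.25:
        return "moderate"
    if area_ratio <= 0.50:
        return "severe"
    return "extremely severe"
```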
According to the pest and disease type, the corresponding control measures are retrieved from the agricultural pest and disease database; the control suggestion comprises pesticide type, dosage, and application time.
Managers can apply pesticide or other controls in a targeted manner according to these measures, determine the dosage according to pest and disease severity, and adjust the measures in combination with environmental data, as sketched below.
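A hypothetical lookup-and-adjust sketch: retrieve the base recommendation for a pest and disease type, scale the dosage with severity, and flag unsuitable weather. The database schema, dose factors, and thresholds are illustrative assumptions, not values from the patent.

```python
# Illustrative database rows and dose factors; all values are assumptions.
PEST_DB = {
    "leafhopper": {"pesticide": "A", "base_dose": 30.0, "timing": "morning"},
}
DOSE_FACTOR = {"slight": 0.5, "moderate": 1.0, "severe": 1.5,
               "extremely severe": 2.0}

def control_advice(pest_type: str, level: str,
                   temperature_c: float, wind_speed_ms: float) -> dict:
    plan = dict(PEST_DB[pest_type])                 # copy the base recommendation
    plan["dose"] = plan.pop("base_dose") * DOSE_FACTOR[level]
    if wind_speed_ms > 5.0:                         # assumed spraying limit
        plan["warning"] = "wind too strong for spraying"
    if temperature_c > 30.0:                        # volatile pesticide at heat
        plan["timing"] = "early morning"
    return plan
```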
As an alternative embodiment, suppose a feedback report shows a pest and disease concentration area on part of ridge 4 in tea field No. 5; the distribution map shows the specific position and distribution of leafhoppers on that ridge; pixel-ratio calculation grades the leafhopper severity as moderate; the control suggestion for leafhoppers is automatically retrieved from the agricultural pest and disease database; and the sensors of that tea field report its current temperature, humidity, soil pH, wind speed, and wind direction.
The manager reviews this information and the suggestions. Suppose the database recommends pesticide A for leafhoppers; since the severity is moderate, the dosage is matched accordingly. Suppose pesticide A is volatile at high temperature: the manager then chooses the application time according to the temperature, and judges from the wind speed and direction whether spraying is suitable (spraying is unsuitable in strong wind, and for manual spraying the operator should stay upwind in the field). Finally, the manager synthesizes this information to give or implement specific control measures.
It should be understood that the agricultural pest and disease database is a relational database containing common pest and disease types and corresponding control measures such as pesticide type, dosage, and application time.
In summary, tea garden tea image information is acquired by 360-degree cameras and environmental information by environmental sensors; the illumination intensity fed back by the illumination sensor is used for unified brightness correction and histogram equalization of the collected images, ensuring their effectiveness; the pest and disease target detection network identifies pests and diseases in the tea garden images and quickly locates the affected areas and their types; the concentration area images are input into the region distribution analysis model to identify the areas and degrees of pest and disease distribution; a pest and disease distribution map and feedback report are generated by combining the environmental information with the distribution results; and decision suggestions are given according to the corresponding control methods. This approach monitors tea garden pests and diseases in real time, speeds up information feedback, and enables precise control measures, effectively safeguarding tea garden yield and quality and solving the problems of untimely pest information feedback and unclear pest distribution.
Embodiment 2
This embodiment provides a tea garden pest and disease identification system based on deep learning, comprising:
a data acquisition module configured to acquire tea garden tea images and environmental information;
a preliminary identification module configured to perform preliminary pest and disease identification on the preprocessed tea garden tea images using the pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
a deep identification module configured to input the pest and disease concentration area images and pest and disease types into the region distribution analysis model for deep pest and disease identification, obtain pest and disease target images, and generate a pest and disease distribution map;
and an analysis feedback module configured to formulate control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
Embodiment 3
This embodiment provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the tea garden pest and disease identification method based on deep learning described in the above embodiment.
Embodiment 4
This embodiment provides a computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the steps of the tea garden pest and disease identification method based on deep learning described in the above embodiment are implemented.
The steps or modules in Embodiments 2 to 4 correspond to Embodiment 1; for details, refer to the related description of Embodiment 1. The term "computer-readable storage medium" shall be taken to include a single medium or multiple media containing one or more sets of instructions, and any medium capable of storing, encoding, or carrying a set of instructions for execution by the processor that cause the processor to perform any one of the methods of the invention.
The above description covers only the preferred embodiments of the invention and is not intended to limit it; those skilled in the art may make various modifications and variations. Any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its scope of protection.

Claims (10)

1. A tea garden pest and disease identification method based on deep learning, characterized by comprising the following steps:
acquiring tea garden tea images and environmental information;
performing preliminary pest and disease identification on the preprocessed tea garden tea images using a pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
inputting the pest and disease concentration area images and pest and disease types into a region distribution analysis model for deep pest and disease identification, obtaining pest and disease target images, and generating a pest and disease distribution map;
formulating control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
2. The tea garden pest and disease identification method based on deep learning of claim 1, wherein the environmental information comprises temperature, humidity, soil pH, wind speed, and wind direction information of the tea garden.
3. The tea garden pest and disease identification method based on deep learning of claim 1, wherein the preprocessing comprises unified brightness correction, histogram equalization, image resizing, and normalization of the tea garden tea images.
4. The tea garden pest and disease identification method based on deep learning of claim 1, wherein the pest and disease target detection model comprises a CSPDarknet network, a multi-scale feature fusion network, and an Anchor-free prediction module;
the CSPDarknet network comprises a plurality of convolution modules, a C2F module, and an SPPF module, wherein the convolution modules extract low-level features, the C2F module fuses the low-level features output by the convolution modules, and the SPPF module performs multi-level interaction on the fused features output by the C2F module;
the multi-scale feature fusion network consists of a feature pyramid network and a path aggregation network and performs multi-scale fusion on the interaction features output by the SPPF module;
and the Anchor-free prediction module receives the multi-scale features output by the multi-scale feature fusion network and outputs the target's category loss and prediction box loss.
5. The tea garden pest and disease identification method based on deep learning of claim 1, wherein inputting the pest and disease concentration area images and pest and disease types into a region distribution analysis model for deep pest and disease identification to obtain pest and disease target images specifically comprises:
extracting an initial feature map from the pest and disease concentration area image through successive convolution and activation layers;
extracting a high-level feature map from the initial feature map through a max pooling layer;
gradually restoring the high-level feature map to the resolution of the original image to obtain a restored image, and splicing and fusing the restored image through skip connections;
and according to the pest and disease type, invoking a classifier trained on the pest and disease database corresponding to that type to classify the spliced and fused features and extract the pest and disease target foreground, obtaining a pest and disease target image.
6. The tea garden pest and disease identification method based on deep learning of claim 1, wherein the pest and disease distribution map is obtained by stitching all acquired tea garden tea images, marking pest and disease target images of the same pest and disease type with the same color based on their positions, marking different pest and disease types with different colors, and leaving background pixels unprocessed.
7. The tea garden pest and disease identification method based on deep learning of claim 1, wherein formulating control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map specifically comprises:
calculating the area of the pest and disease concentration area image for each pest and disease to obtain its distribution proportion and estimate its severity;
retrieving the corresponding control suggestion from an agricultural pest and disease database according to the pest and disease type, the control suggestion comprising pesticide type, dosage, and application time;
and adjusting the control suggestion according to the environmental information and the pest and disease severity.
8. A tea garden pest and disease identification system based on deep learning, characterized by comprising:
a data acquisition module configured to acquire tea garden tea images and environmental information;
a preliminary identification module configured to perform preliminary pest and disease identification on the preprocessed tea garden tea images using the pest and disease target detection model to obtain pest and disease concentration area images and pest and disease types;
a deep identification module configured to input the pest and disease concentration area images and pest and disease types into the region distribution analysis model for deep pest and disease identification, obtain pest and disease target images, and generate a pest and disease distribution map;
and an analysis feedback module configured to formulate control suggestions by combining the environmental information, the pest and disease concentration area images, the pest and disease types, and the pest and disease distribution map.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, performs the steps of the tea garden pest and disease identification method based on deep learning of any one of claims 1-7.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, performs the steps of the tea garden pest and disease identification method based on deep learning of any one of claims 1-7.
CN202411211043.4A (priority and filing date 2024-08-30) — A tea garden pest and disease identification method and system based on deep learning — Pending — CN119091380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411211043.4A CN119091380A (en) 2024-08-30 2024-08-30 A tea garden pest and disease identification method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411211043.4A CN119091380A (en) 2024-08-30 2024-08-30 A tea garden pest and disease identification method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN119091380A true CN119091380A (en) 2024-12-06

Family

ID=93696719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411211043.4A Pending CN119091380A (en) 2024-08-30 2024-08-30 A tea garden pest and disease identification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN119091380A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN119359476A (en) * 2024-12-26 2025-01-24 贵州大学 Disease pesticide knowledge graph-based long tail disease identification and medication integrated method


Similar Documents

Publication Publication Date Title
Jia et al. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot
Chen et al. A YOLOv3-based computer vision system for identification of tea buds and the picking point
Balram et al. Crop field monitoring and disease detection of plants in smart agriculture using internet of things
CN111046880A (en) Infrared target image segmentation method and system, electronic device and storage medium
de Silva et al. Towards agricultural autonomy: crop row detection under varying field conditions using deep learning
Ji et al. In-field automatic detection of maize tassels using computer vision
Keller et al. Soybean leaf coverage estimation with machine learning and thresholding algorithms for field phenotyping
CN115661650A (en) Farm management system based on data monitoring of Internet of things
CN119091380A (en) A tea garden pest and disease identification method and system based on deep learning
CN117876823B (en) Tea garden image detection method and model training method and system thereof
CN116543386A (en) Agricultural pest image identification method based on convolutional neural network
Yuan et al. Sensitivity examination of YOLOv4 regarding test image distortion and training dataset attribute for apple flower bud classification
CN118762360A (en) A maturity identification system for field melons
Yan et al. High-resolution mapping of paddy rice fields from unmanned airborne vehicle images using enhanced-TransUnet
CN115861686A (en) Litchi key growth period identification and detection method and system based on edge deep learning
Sheng et al. Automatic detection and counting of planthoppers on white flat plate images captured by AR glasses for planthopper field survey
CN117640898B (en) Monitoring self-adaptive adjusting method based on agricultural Internet of things technology
AHM et al. A deep convolutional neural network based image processing framework for monitoring the growth of soybean crops
Li et al. Image processing for crop/weed discrimination in fields with high weed pressure
CN112418112A (en) A kind of orchard disease and insect pest monitoring and early warning method and system
CN117612031A (en) A remote sensing recognition method for abandoned land based on semantic segmentation
Ashok Kumar et al. A review on crop and weed segmentation based on digital images
Habib et al. Wavelet frequency transformation for specific weeds recognition
Mayuri et al. Artificial Neural Network (ANN) With Chan-Vese (CV) Algorithm-Based Plant Disease Detection And Classification
CN117152609A (en) A crop appearance feature detection system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination