CN118736436A - A crop recognition method based on multispectral satellite images - Google Patents
A crop recognition method based on multispectral satellite images
- Publication number
- CN118736436A (application number CN202410846076.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- remote sensing
- sample set
- multispectral
- sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V 20/13 — Scenes; Terrestrial scenes; Satellite images
- G06N 3/0464 — Computing arrangements based on biological models; Neural networks; Convolutional networks [CNN, ConvNet]
- G06N 3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
- G06V 10/40 — Arrangements for image or video recognition or understanding; Extraction of image or video features
- G06V 10/7715 — Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
- G06V 10/774 — Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
- G06V 10/82 — Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
- G06V 20/188 — Scenes; Terrestrial scenes; Vegetation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a crop recognition method based on multispectral satellite images and relates to the technical field of image processing. The method first acquires an original sample set, preprocesses it with an adaptive filtering method, uses the hyperspectral remote sensing image as the label image, and performs spatial and spectral downsampling on the hyperspectral and multispectral images to obtain a training data set. By combining hyperspectral and multispectral images, the invention makes full use of spectral and spatial information and can improve the accuracy of crop recognition.
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to a crop recognition method based on multispectral satellite images.
Background Art
Hyperspectral remote sensing images (HSI) are widely used in environmental monitoring, land classification, disaster detection, and other fields because of their rich spectral resolution. However, limited by the physical conditions of the sensor, HSI usually has low spatial resolution.
In comparison, multispectral remote sensing images (MSI) have lower spectral resolution but higher spatial resolution and can describe the morphology and distribution of land cover more completely.
The invention patent with application number 201810901457.8 discloses a crop identification method based on multispectral satellite images, comprising the following steps: S1, collecting crop samples; S2, obtaining multispectral satellite image data of the crop samples; S3, determining, from the collection positions of the crop samples, the pixels corresponding to the crop samples in the multispectral satellite image data; S4, training a machine learning model with the time-series spectral information of those pixels and the crop types of the crop samples as input; S5, classifying crops in other sampled areas with the trained machine learning model. By using the time-series spectral information of the pixels as the training input, that patent greatly expands the amount of crop spectral information, solves the problem of insufficient crop spectral information at a single moment, and identifies crops from spectral information over the entire growth cycle, which is more accurate than identification at a single moment and therefore improves crop identification efficiency. However, it uses only multispectral satellite image data and does not take advantage of the rich spectral resolution of hyperspectral remote sensing images, so its identification results are not sufficiently accurate.
Summary of the Invention
In order to overcome the deficiencies of the prior art, the object of the present invention is to provide a crop identification method based on multispectral satellite images.
To achieve the above object, the present invention provides the following solution:
A crop identification method based on multispectral satellite images, comprising:
acquiring an original sample set, the original sample set including hyperspectral remote sensing images and multispectral remote sensing images corresponding to each sample crop area;
preprocessing the original sample set using an adaptive filtering method to obtain a processed sample set;
using the hyperspectral remote sensing images in the processed sample set as label images, and downsampling the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set spatially and spectrally, respectively, to obtain a training data set for network parameter tuning;
inputting the training data set into an initial network model for training to obtain a trained crop recognition model; and
inputting the data to be tested into the crop recognition model to obtain a recognition result.
Preferably, preprocessing the original sample set using the adaptive filtering method to obtain the processed sample set includes:
taking a square window centered on each pixel of the hyperspectral and multispectral remote sensing images in the original sample set, and detecting the amount of change of each pixel under the square window; the amount of change of each pixel under the square window is computed as r = √(fx(xi, yj)² + fy(xi, yj)²), where fx(xi, yj) denotes the gradient of the hyperspectral and multispectral remote sensing images in the original sample set in the x direction, fy(xi, yj) denotes their gradient in the y direction, f(xi, yj) denotes the grayscale value of those images at position (xi, yj), and r denotes the amount of change of the pixel f(xi, yj);
when the amount of change of a pixel is greater than a preset threshold, smoothing the hyperspectral and multispectral remote sensing images within the corresponding square window using the median of the pixels in that window to obtain the smoothed pixel;
moving the square window repeatedly until the smoothing of all hyperspectral and multispectral remote sensing images in the original sample set is complete, obtaining smoothed images; the output of a smoothed pixel is I_denoised(x, y) = median(I_original(x−h, y−h), ..., I_original(x+h, y+h)), where I_denoised(x, y) is the smoothed pixel, h is the side length of the square window, and I_original(x, y) is the value of the pixel at position (x, y) in the hyperspectral and multispectral remote sensing images of the original sample set;
constructing a pixel enhancement function based on the convolution values of the smoothed images; and
performing image enhancement on the images in the original sample set using the pixel enhancement function to obtain the processed sample set.
Preferably, constructing the pixel enhancement function based on the convolution values of the smoothed images includes:
performing convolution on the smoothed images to obtain convolved images;
determining an enhancement coefficient based on the pixel values of the convolved images; and
constructing the pixel enhancement function based on the enhancement coefficient, where S(x, y) denotes an image in the processed sample set, E(x, y) denotes the enhancement coefficient, G(x, y) denotes the pixel value of the convolved image at (x, y), I(x, y) denotes the smoothed image, and σ denotes the mean square error between the smoothed image and the convolved image.
Preferably, using the hyperspectral remote sensing images in the processed sample set as label images, and downsampling the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set spatially and spectrally, respectively, to obtain the training data set for network parameter tuning, includes:
performing Gaussian filtering on the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set according to a preset Wald protocol to obtain filtered images; and
then downsampling the filtered images by the corresponding factor using bilinear interpolation, so that the processed images serve as the hyperspectral and multispectral remote sensing images that simulate low-resolution inputs, while the original hyperspectral images are retained as reference images.
Preferably, the method for constructing the crop recognition model includes:
performing image processing on the acquired training data set to obtain sample images;
inputting the sample images into a CNN network with multiple layers of residual blocks for feature extraction, and using a max pooling layer for dimensionality reduction to obtain dimensionality-reduced local feature maps;
inputting the dimensionality-reduced local feature maps into a feature-map embedding module to obtain one-dimensional vectors carrying position information;
inputting the one-dimensional vectors into a Transformer block repeated 24 times to obtain global feature fusion data, and reshaping the global feature fusion data to obtain a two-dimensional global feature map;
inputting the two-dimensional global feature map into a feature pyramid module for multi-scale feature extraction and fusion to obtain a feature map that fuses features of different scales;
concatenating, along the channel dimension, the local feature map extracted by the last residual-block CNN layer with the feature map that fuses features of different scales, aggregating the result by two-dimensional convolution, and then enlarging the aggregated feature map to the same spatial size as the sample image by bilinear-interpolation upsampling to obtain the predicted image segmentation mask; and
training the neural network model based on the CNN-Transformer and the feature pyramid module with the goal of minimizing a multi-class loss function to obtain the trained crop recognition model.
Preferably, the multi-class loss function is an asymmetric loss function.
According to the specific embodiments provided by the present invention, the present invention discloses the following technical effects:
The present invention provides a crop recognition method based on multispectral satellite images, comprising: acquiring an original sample set, the original sample set including hyperspectral remote sensing images and multispectral remote sensing images corresponding to each sample crop area; preprocessing the original sample set using an adaptive filtering method to obtain a processed sample set; using the hyperspectral remote sensing images in the processed sample set as label images, and downsampling the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set spatially and spectrally, respectively, to obtain a training data set for network parameter tuning; inputting the training data set into an initial network model for training to obtain a trained crop recognition model; and inputting the data to be tested into the crop recognition model to obtain a recognition result. The present invention uses a combination of hyperspectral and multispectral remote sensing images: hyperspectral images provide rich spectral information that can finely distinguish the spectral characteristics of different substances, while multispectral images provide higher spatial resolution. Combining these two data types makes full use of the detailed spectral information of hyperspectral images and the spatial detail of multispectral images, thereby improving the accuracy and reliability of crop recognition.
Brief Description of the Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of the method provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of those embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.
The purpose of the present invention is to provide a crop identification method based on multispectral satellite images that improves the accuracy and reliability of crop identification.
In order to make the above objects, features, and advantages of the present invention more obvious and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
FIG. 1 is a flow chart of the method provided by an embodiment of the present invention. As shown in FIG. 1, the present invention provides a crop identification method based on multispectral satellite images, comprising:
Step 100: acquiring an original sample set, the original sample set including hyperspectral remote sensing images and multispectral remote sensing images corresponding to each sample crop area;
Step 200: preprocessing the original sample set using an adaptive filtering method to obtain a processed sample set;
Step 300: using the hyperspectral remote sensing images in the processed sample set as label images, and downsampling the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set spatially and spectrally, respectively, to obtain a training data set for network parameter tuning;
Step 400: inputting the training data set into an initial network model for training to obtain a trained crop recognition model;
Step 500: inputting the data to be tested into the crop recognition model to obtain a recognition result.
Preferably, preprocessing the original sample set using the adaptive filtering method to obtain the processed sample set includes:
taking a square window centered on each pixel of the hyperspectral and multispectral remote sensing images in the original sample set, and detecting the amount of change of each pixel under the square window; the amount of change of each pixel under the square window is computed as r = √(fx(xi, yj)² + fy(xi, yj)²), where fx(xi, yj) denotes the gradient of the hyperspectral and multispectral remote sensing images in the original sample set in the x direction, fy(xi, yj) denotes their gradient in the y direction, f(xi, yj) denotes the grayscale value of those images at position (xi, yj), and r denotes the amount of change of the pixel f(xi, yj);
when the amount of change of a pixel is greater than a preset threshold, smoothing the hyperspectral and multispectral remote sensing images within the corresponding square window using the median of the pixels in that window to obtain the smoothed pixel;
moving the square window repeatedly until the smoothing of all hyperspectral and multispectral remote sensing images in the original sample set is complete, obtaining smoothed images; the output of a smoothed pixel is I_denoised(x, y) = median(I_original(x−h, y−h), ..., I_original(x+h, y+h)), where I_denoised(x, y) is the smoothed pixel, h is the side length of the square window, and I_original(x, y) is the value of the pixel at position (x, y) in the hyperspectral and multispectral remote sensing images of the original sample set;
constructing a pixel enhancement function based on the convolution values of the smoothed images; and
performing image enhancement on the images in the original sample set using the pixel enhancement function to obtain the processed sample set.
Specifically, in practical remote sensing applications of this embodiment, noise is unavoidable during image acquisition, and noisy pixels generally differ markedly from the surrounding pixels. By detecting the amount of change of each pixel under the square window, this embodiment can locate the noisy pixels in the image, and by using the median of the pixel sequence under the square window as the smoothed pixel, it removes noise while better preserving the detail of the spectral remote sensing images.
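A minimal NumPy sketch of this gradient-gated median smoothing, assuming a single image band, a user-chosen change threshold, and h interpreted as the window half-width so that the window spans (x−h, y−h) to (x+h, y+h), matching the median formula above:

```python
import numpy as np

def adaptive_median_smooth(img, h=1, threshold=30.0):
    """Gradient-gated median smoothing over a (2h+1) x (2h+1) window.

    img: 2-D array (one band of an HSI/MSI image).
    h: window half-width (assumed interpretation of the formula above).
    threshold: preset change-amount threshold (assumed value).
    """
    # Gradients in the y and x directions, respectively.
    fy, fx = np.gradient(img.astype(float))
    # Change amount r = sqrt(fx^2 + fy^2) at every pixel.
    r = np.sqrt(fx ** 2 + fy ** 2)

    out = img.astype(float).copy()
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            if r[i, j] > threshold:
                # Replace only "changed" (likely noisy) pixels with the window median.
                window = img[max(i - h, 0):i + h + 1, max(j - h, 0):j + h + 1]
                out[i, j] = np.median(window)
    return out
```

In practice the same routine would be applied band by band to each hyperspectral and multispectral image in the sample set.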
Preferably, constructing the pixel enhancement function based on the convolution values of the smoothed images includes:
performing convolution on the smoothed images to obtain convolved images;
determining an enhancement coefficient based on the pixel values of the convolved images; and
constructing the pixel enhancement function based on the enhancement coefficient, where S(x, y) denotes an image in the processed sample set, E(x, y) denotes the enhancement coefficient, G(x, y) denotes the pixel value of the convolved image at (x, y), I(x, y) denotes the smoothed image, and σ denotes the mean square error between the smoothed image and the convolved image.
Specifically, by computing the mean square error between the original image and the convolved image, the present invention can quantify the degree of difference between the two. When the mean square error is large, the original image differs greatly from the convolved image and enhancement is needed; when the mean square error is small, the difference is small and enhancement can be skipped. In image enhancement, whether an image needs to be enhanced can therefore be decided according to the magnitude of the mean square error, which gives the image a better visual result.
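The enhancement function itself is not reproduced in the text above, so the following is only a rough sketch of the decision logic it describes (enhance when the mean square error σ between the smoothed image I(x, y) and its convolved version G(x, y) is large, otherwise skip); the Gaussian kernel, the σ threshold, and the simple boost rule are illustrative assumptions, not the patented formula:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_if_needed(smoothed, sigma_threshold=25.0, boost=1.2):
    """Enhance only when the mean square error between I(x, y) and G(x, y) is large.

    The convolution kernel, threshold, and boost factor are placeholders.
    """
    img = smoothed.astype(float)                       # I(x, y)
    convolved = gaussian_filter(img, sigma=1.0)        # G(x, y), assumed Gaussian kernel
    sigma = np.mean((img - convolved) ** 2)            # mean square error between I and G

    if sigma <= sigma_threshold:
        return img                                     # small difference: skip enhancement
    # Large difference: apply an illustrative boost of the detail component.
    return convolved + boost * (img - convolved)
```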
Preferably, using the hyperspectral remote sensing images in the processed sample set as label images, and downsampling the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set spatially and spectrally, respectively, to obtain the training data set for network parameter tuning, includes:
performing Gaussian filtering on the hyperspectral remote sensing images and the multispectral remote sensing images in the processed sample set according to a preset Wald protocol to obtain filtered images; and
then downsampling the filtered images by the corresponding factor using bilinear interpolation, so that the processed images serve as the hyperspectral and multispectral remote sensing images that simulate low-resolution inputs, while the original hyperspectral images are retained as reference images.
Specifically, because the fused image does not exist in practice, this embodiment adopts the sample construction method specified in the Wald protocol: the hyperspectral image is used as the label image, and the hyperspectral and multispectral images are then downsampled spatially and spectrally, respectively, to obtain the data set needed for network parameter tuning.
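A sketch of this Wald-protocol sample construction, assuming band-stacked arrays of shape (bands, H, W), a Gaussian blur before spatial downsampling, simple band averaging as a stand-in for the spectral degradation, and OpenCV bilinear resizing; the scale factor, blur width, band-grouping size, and the assignment of the spectral degradation are assumed, not taken from the patent:

```python
import cv2
import numpy as np

def spatial_downsample(cube, scale=4, blur_sigma=1.0):
    """Gaussian-filter each band, then downsample by `scale` with bilinear interpolation."""
    out = []
    for band in cube.astype(np.float32):
        blurred = cv2.GaussianBlur(band, (0, 0), blur_sigma)
        lr = cv2.resize(blurred,
                        (band.shape[1] // scale, band.shape[0] // scale),
                        interpolation=cv2.INTER_LINEAR)
        out.append(lr)
    return np.stack(out)

def spectral_downsample(cube, group=4):
    """Average adjacent bands to simulate a coarser spectral resolution."""
    usable = (cube.shape[0] // group) * group
    return cube[:usable].reshape(-1, group, *cube.shape[1:]).mean(axis=1)

def build_wald_pair(hsi, msi, scale=4):
    """Return (simulated low-res HSI, simulated low-res MSI, reference label).

    Both inputs are Gaussian-filtered and spatially downsampled; the
    spectral_downsample helper can additionally be applied where the
    protocol calls for spectral degradation (an assumption here).
    The original hyperspectral image is retained as the reference.
    """
    return spatial_downsample(hsi, scale), spatial_downsample(msi, scale), hsi
```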
Preferably, the method for constructing the crop recognition model includes:
performing image processing on the acquired training data set to obtain sample images;
inputting the sample images into a CNN network with multiple layers of residual blocks for feature extraction, and using a max pooling layer for dimensionality reduction to obtain dimensionality-reduced local feature maps;
inputting the dimensionality-reduced local feature maps into a feature-map embedding module to obtain one-dimensional vectors carrying position information;
inputting the one-dimensional vectors into a Transformer block repeated 24 times to obtain global feature fusion data, and reshaping the global feature fusion data to obtain a two-dimensional global feature map;
inputting the two-dimensional global feature map into a feature pyramid module for multi-scale feature extraction and fusion to obtain a feature map that fuses features of different scales;
concatenating, along the channel dimension, the local feature map extracted by the last residual-block CNN layer with the feature map that fuses features of different scales, aggregating the result by two-dimensional convolution, and then enlarging the aggregated feature map to the same spatial size as the sample image by bilinear-interpolation upsampling to obtain the predicted image segmentation mask; and
training the neural network model based on the CNN-Transformer and the feature pyramid module with the goal of minimizing a multi-class loss function to obtain the trained crop recognition model.
Optionally, this embodiment stacks 24 Transformer layers, each performing self-attention computation and multilayer perceptron (MLP) processing. This deep feature fusion enables the model to capture more complex relationships and patterns in the data and improves its representational power. In addition, each Transformer layer of this embodiment uses inter-layer residual connections (for example, the global feature data of the first layer is added to and fused with the data fed into the first LayerNorm layer), which avoids the vanishing-gradient problem common in deep networks while allowing information to keep flowing through the deep structure; a sketch of this stage is given below.
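A compact PyTorch sketch of the feature-map embedding and 24-layer Transformer stage described above; the embedding width, head count, MLP width, and token grid size are illustrative assumptions, and nn.TransformerEncoderLayer (pre-norm, with its built-in residual connections around each LayerNorm sub-block) stands in for the hand-built blocks:

```python
import torch
import torch.nn as nn

class GlobalFeatureStage(nn.Module):
    """Flatten a CNN feature map into a token sequence with position
    information, run 24 Transformer blocks, and reshape the fused tokens
    back into a two-dimensional global feature map."""

    def __init__(self, channels=256, grid_size=(16, 16),
                 num_layers=24, num_heads=8, mlp_dim=1024):
        super().__init__()
        n_tokens = grid_size[0] * grid_size[1]
        self.pos_embed = nn.Parameter(torch.zeros(1, n_tokens, channels))
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, dim_feedforward=mlp_dim,
            batch_first=True, norm_first=True)   # self-attention + MLP with residual adds
        self.blocks = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, x):                        # x: (B, C, H, W); H*W must equal the grid area
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # one-dimensional vectors, shape (B, H*W, C)
        tokens = tokens + self.pos_embed         # add position information
        tokens = self.blocks(tokens)             # global feature fusion over 24 layers
        return tokens.transpose(1, 2).reshape(b, c, h, w)  # two-dimensional global feature map
```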
Further, the feature pyramid pooling module of this embodiment consists of four MCBR layers (MaxPool, convolution, batch normalization, ReLU), an upsampling layer, a 1×1 convolution layer, and a skip connection. This structure helps capture features at different scales and strengthens the model's adaptability to scale changes. In addition, the 1×1 convolution adjusts the number of channels, the upsampling layer restores the spatial resolution, and the skip connection helps retain more of the original feature information.
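The pyramid pooling module could look roughly like the following PyTorch sketch; the four pooling scales and the branch channel width are assumed values rather than figures from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCBR(nn.Module):
    """MaxPool -> Conv -> BatchNorm -> ReLU branch of the pyramid module."""
    def __init__(self, in_ch, out_ch, pool):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=pool, stride=pool)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(self.pool(x))))

class PyramidPooling(nn.Module):
    """Four MCBR branches at different scales, bilinear upsampling back to the
    input size, a 1x1 conv to adjust channels, and a skip connection."""
    def __init__(self, in_ch=256, branch_ch=64, pools=(2, 4, 8, 16)):
        super().__init__()
        self.branches = nn.ModuleList(MCBR(in_ch, branch_ch, p) for p in pools)
        self.fuse = nn.Conv2d(in_ch + branch_ch * len(pools), in_ch, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]  # skip connection keeps the original feature information
        for branch in self.branches:
            y = branch(x)
            feats.append(F.interpolate(y, size=(h, w), mode='bilinear',
                                       align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))
```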
Preferably, the multi-class loss function is an asymmetric loss function.
Specifically, this embodiment considers that when the total number of labels is large, positive and negative samples become imbalanced, which leads to inaccurate classification. Most images in the data set contain only a small fraction of the label classes, which means that for every class the number of positive samples is far smaller than the number of negative samples. Multi-label classification often uses the binary cross-entropy as its loss function, but this cannot resolve the imbalance between positive and negative samples, so this embodiment adopts a simplified asymmetric loss function. Through the asymmetric loss function, the imbalance between positive and negative samples in the data set can be balanced as much as possible, allowing the model to learn the characteristics of the positive samples better.
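A minimal sketch of a simplified asymmetric loss of the kind referred to here, written for multi-label {0, 1} targets; the focusing exponents, probability clipping margin, and mean reduction are illustrative defaults rather than the values used in the patent:

```python
import torch

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    """Simplified asymmetric loss for multi-label targets in {0, 1}.

    Negative terms are down-weighted more strongly (gamma_neg > gamma_pos)
    and easy negatives are clipped away, which counteracts the
    positive/negative imbalance described above. Parameter values are assumptions.
    """
    p = torch.sigmoid(logits)
    p_neg = (p - clip).clamp(min=0)                      # probability shifting for negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=eps))
    loss_neg = (1 - targets) * p_neg.pow(gamma_neg) * torch.log((1 - p_neg).clamp(min=eps))
    return -(loss_pos + loss_neg).mean()
```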
Further, the recognition results of this embodiment include:
Crop type recognition: the most direct recognition result, i.e., identifying the species of each crop. For example, the model can distinguish crops such as wheat, corn, rice, and cotton.
Crop health status: by analyzing multispectral and hyperspectral images, the crop recognition model can not only identify crop species but also assess crop health. For example, the model can detect disease-affected areas of a crop and nutrient deficiencies or excesses.
Crop coverage area: the model of this embodiment can estimate the area covered by each crop within a specific region, which is important for crop yield prediction and land-use efficiency analysis.
Crop growth stage: by combining time-series multispectral and hyperspectral image analysis, this embodiment can also identify the different growth stages of a crop, such as sowing, growth, and maturity, which helps agricultural producers make sound irrigation, fertilization, and harvesting decisions.
Soil conditions: although not a direct crop recognition result of this embodiment, through analysis of multispectral and hyperspectral images related to crop growth the model can indirectly provide information about soil conditions, such as soil moisture and pH, which are important factors affecting crop growth.
The beneficial effects of the present invention are as follows:
(1) The present invention uses a combination of hyperspectral remote sensing images and multispectral remote sensing images. Hyperspectral images provide rich spectral information that can finely distinguish the spectral characteristics of different substances, while multispectral images provide higher spatial resolution. By combining these two data types, the present invention can make full use of the detailed spectral information of hyperspectral images and the spatial detail of multispectral images, thereby improving the accuracy and reliability of crop recognition.
(2) When processing the original sample set, the present invention uses an adaptive filtering method for preprocessing. The adaptive filtering dynamically adjusts the filter parameters according to the image content, effectively reducing noise while retaining important edge and detail information. This preprocessing step provides clearer and more accurate input data for subsequent image analysis and feature extraction.
(3) In preparing the training data set, the present invention uses the hyperspectral remote sensing images as label images. The rich spectral information of the hyperspectral images is used to directly guide the learning process on the multispectral images, ensuring that the learning targets are highly accurate and relevant, which significantly improves the performance of multispectral images in crop recognition tasks.
(4) The present invention downsamples the images in the processed sample set spatially and spectrally in order to reduce the size of the training data set and thus the computing resources and time required for model training. This downsampling strategy is carefully designed so that, while the amount of data is reduced, enough information is retained for effective learning.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments can be referred to each other.
Specific examples are used herein to explain the principles and implementations of the present invention. The description of the above embodiments is only intended to help understand the method of the present invention and its core idea; at the same time, a person of ordinary skill in the art may make changes to the specific implementation and the scope of application in accordance with the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410846076.XA CN118736436A (en) | 2024-06-27 | 2024-06-27 | A crop recognition method based on multispectral satellite images |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410846076.XA CN118736436A (en) | 2024-06-27 | 2024-06-27 | A crop recognition method based on multispectral satellite images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118736436A | 2024-10-01 |
Family
ID=92863499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410846076.XA Pending CN118736436A (en) | 2024-06-27 | 2024-06-27 | A crop recognition method based on multispectral satellite images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118736436A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119226715A (en) * | 2024-12-04 | 2024-12-31 | 杭州瑞盛电气有限公司 | A method and system for detecting safety hazards of distribution boxes |
- 2024-06-27: CN application CN202410846076.XA filed; published as CN118736436A (status: active, Pending)
Similar Documents
Publication | Title |
---|---|
Palacios et al. | Automated grapevine flower detection and quantification method based on computer vision and deep learning from on-the-go imaging using a mobile sensing platform under field conditions |
CN114120037B | An image recognition method of sprouted potato based on improved yolov5 model |
CN113657294B | Crop disease and insect pest detection method and system based on computer vision |
CN108416774A | A Fabric Type Recognition Method Based on Fine-grained Neural Network |
CN106650812A | City water body extraction method for satellite remote sensing image |
CN116071560A | A method of fruit recognition based on convolutional neural network |
Kazi | Fruit grading, disease detection, and an image processing strategy |
CN116703932A | CBAM-HRNet model wheat spike grain segmentation and counting method based on convolution attention mechanism |
CN114965501A | Peanut disease detection and yield prediction method based on canopy parameter processing |
CN118736436A | A crop recognition method based on multispectral satellite images |
CN116205879A | Unmanned aerial vehicle image and deep learning-based wheat lodging area estimation method |
Ozguven et al. | A new approach to detect mildew disease on cucumber (Pseudoperonospora cubensis) leaves with image processing |
CN116844053A | Wheat planting area identification method, system, electronic equipment and storage medium |
CN116385717A | Foliar disease identification method, device, electronic equipment, storage medium and product |
Zahan et al. | A deep learning-based approach for mushroom diseases classification |
Kumar et al. | Deep Learning-Based Web Application for Real-Time Apple Leaf Disease Detection and Classification |
Rony et al. | BottleNet18: Deep Learning-Based Bottle Gourd Leaf Disease Classification |
CN115273131A | Animal identification method based on dual-channel feature fusion |
Thakur et al. | ELSET: design of an ensemble deep learning model for improving satellite image classification efficiency via temporal analysis |
Singh et al. | Automated detection of plant leaf diseases using image processing techniques |
Sood et al. | AI-driven mustard disease identification: A multiclass and binary classification approach for advanced crop health monitoring |
CN115565168A | A Sugarcane Disease Recognition Method Based on Residual Capsule Network of Attention Mechanism |
CN115828181A | Potato Disease Type Identification Method Based on Deep Learning Algorithm |
CN114723952A | A Model Construction Method and System for Recognition of Dark Stripe Noise in High-speed TDI CCD Camera Images |
CN115620131A | A glacial lake extraction method that combines spectral features and multi-scale spatial features |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |