
CN118691989A - A method for detecting aquatic plant coverage based on airborne hyperspectral - Google Patents


Info

Publication number
CN118691989A
CN118691989A (application CN202410687368.3A)
Authority
CN
China
Prior art keywords
sample
image
band
hyperspectral
aquatic weed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410687368.3A
Other languages
Chinese (zh)
Inventor
朱启兵
余子健
黄敏
赵鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangnan University
Original Assignee
Jiangnan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangnan University filed Critical Jiangnan University
Priority to CN202410687368.3A
Publication of CN118691989A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/17Terrestrial scenes taken from planes or by drones
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/7715Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/194Terrestrial scenes using hyperspectral data, i.e. more or other wavelengths than RGB
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Remote Sensing (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

This application discloses a method for detecting waterweed coverage based on airborne hyperspectral imaging, relating to the field of hyperspectral technology. The method acquires sample hyperspectral images with a hyperspectral imaging device carried by an unmanned aerial vehicle, converts them into sample reflectance spectral curves, and performs band selection to screen out characteristic bands for dimensionality reduction. Combined with a spectral continuum removal transformation, it constructs fusion features that merge spectral features and spectral indices, and uses these fusion features to train a waterweed recognition model. The trained model performs pixel-level classification on hyperspectral images, so the waterweed coverage result is obtained quickly. Hyperspectral images capture rich spectral information, and the fusion features enable accurate, fast pixel-level classification even against complex backgrounds, automatically distinguishing waterweeds from other aquatic plants of similar color; the method therefore detects waterweed coverage with a high degree of both automation and accuracy.

Description

A Method for Detecting Waterweed Coverage Based on Airborne Hyperspectral Imaging

Technical Field

This application relates to the field of hyperspectral technology, and in particular to a method for detecting waterweed coverage based on airborne hyperspectral imaging.

Background Art

Waterweeds play an important role in crab-pond farming. They not only serve as a key food source and shelter for crabs; lush waterweeds also help increase the dissolved oxygen content of the water and purify its quality, effectively preventing soil erosion and reducing water pollution, thereby providing solid support for the healthy development of the crab-pond ecosystem. Monitoring waterweed growth is therefore useful for assessing the state of that ecosystem.

Traditional waterweed growth detection relies mainly on manual observation, which is time-consuming, labor-intensive, and strongly affected by subjective judgment. As technology has developed, remote-sensing methods based on spectral analysis have gradually been applied to waterweed growth detection. However, the chlorophyll in aquatic vegetation gives the water body an overall green appearance, and in RGB or multispectral remote-sensing images the colors of different aquatic plants are very similar, so waterweeds are often hard to identify accurately from the image; detection accuracy is consequently low and the approach is difficult to popularize.

Summary of the Invention

In response to the above problems and technical requirements, this application proposes a method for detecting waterweed coverage based on airborne hyperspectral imaging. The technical solution of this application is as follows:

A method for detecting waterweed coverage based on airborne hyperspectral imaging, comprising:

capturing sample hyperspectral images of a water area where waterweeds grow with a hyperspectral imaging device carried by an unmanned aerial vehicle, and annotating the classification result of each pixel in the sample hyperspectral images;

converting the radiance of each pixel in the sample hyperspectral image into surface reflectance to obtain the corresponding sample reflectance spectral curve;

screening out characteristic bands according to the curve characteristics of the sample reflectance spectral curve;

calculating spectral indices after applying a continuum removal transformation to the sample reflectance spectral curve;

dividing the sample hyperspectral image into several training sample images, and extracting, for each training sample image, its spectral features in the characteristic bands and its spectral indices; concatenating the spectral features and spectral indices of each training sample image along the channel dimension to obtain the fusion features of that training sample image;

training a neural network model with the fusion features of each training sample image as input and the classification results of the pixels in that image as output, to obtain a waterweed recognition model;

using the waterweed recognition model to determine the classification result of each pixel in a hyperspectral image of the water area to be inspected, and computing the proportion of pixels belonging to the waterweed class among all pixels to obtain the waterweed coverage of that water area.

The beneficial technical effects of this application are as follows:

This application discloses a method for detecting waterweed coverage based on airborne hyperspectral imaging. The method collects sample hyperspectral images with a drone-borne hyperspectral imaging device, performs band selection on them to screen out characteristic bands for dimensionality reduction, and, combined with a spectral continuum removal transformation, constructs fusion features that merge spectral features and spectral indices. These fusion features are used to train a waterweed recognition model, which performs pixel-level classification on hyperspectral images so that waterweed coverage results are obtained quickly. Hyperspectral images capture rich spectral information, so this hyperspectral detection method can accurately distinguish waterweeds from other aquatic plants of similar color, and the fusion features make pixel-level classification accurate and fast even against complex backgrounds. The method offers a high degree of automation and high detection accuracy, and therefore considerable practical value.

The method screens characteristic bands with a step-by-step band selection procedure and, following the idea that class separability is inversely proportional to feature correlation, introduces an improved comprehensive JM distance to evaluate how distinguishable the classes are. This enhances the difference and separability between waterweeds and the other classes, improving classification accuracy and hence the accuracy of waterweed coverage detection.

Brief Description of the Drawings

FIG. 1 is a flow chart of the method for detecting waterweed coverage based on airborne hyperspectral imaging according to one embodiment of this application.

FIG. 2 is a flow chart of the step-by-step band selection procedure used to screen characteristic bands in one embodiment.

FIG. 3 is a structure diagram of the neural network model used to train the waterweed recognition model.

Detailed Description

Specific embodiments of this application are further described below with reference to the accompanying drawings.

This application discloses a method for detecting waterweed coverage based on airborne hyperspectral imaging. Referring to the flow chart of FIG. 1, the method comprises:

Step 1: capture sample hyperspectral images of a water area where waterweeds grow with the hyperspectral imaging device carried by a drone, and annotate the classification result of each pixel in the sample hyperspectral images. The classes a pixel may belong to include the waterweed class and several other classes, mainly the classes of various other aquatic plants, aquatic animals, underwater obstacles, and so on.

Water areas are generally large, and a hyperspectral imaging device often cannot directly capture a sample hyperspectral image covering the whole required area. In one embodiment, therefore, the drone is controlled to fly along multiple parallel routes in sequence; while flying along each route, the onboard hyperspectral imaging device photographs the water area vertically to obtain a single-route remote-sensing image, and the single-route images captured along the routes are stitched together in order to obtain the sample hyperspectral image. The specific image stitching method is a common image-processing technique and is not repeated here.

Step 2: convert the radiance of each pixel in the sample hyperspectral image into surface reflectance to obtain the corresponding sample reflectance spectral curve. Converting radiance into surface reflectance is a common method in the field and is not repeated here.

Step 3: screen out characteristic bands according to the curve characteristics of the sample reflectance spectral curve.

Because a sample hyperspectral image contains many bands, processing the full data volume is expensive, so characteristic bands are screened first to reduce the dimensionality of the sample hyperspectral image. In one embodiment, a step-by-step band selection procedure is used, comprising the following steps (see the flow chart of FIG. 2):

(1) Screen out, according to the curve characteristics of the sample reflectance spectral curve, several representative bands that carry a large amount of spectral information, as follows:

(a) Divide the sample reflectance spectral curve into several band subsets according to its curve characteristics:

First, coarsely divide the bands based on the peaks and troughs of the sample reflectance spectral curve to obtain several original bands, each covering the band range between adjacent peaks and troughs.

Then, according to the slope of the sample reflectance spectral curve within each original band, subdivide the original band into several band subsets: the positions where the curve slope reaches a slope threshold are taken as division points, so each band subset is a contiguous band range within which the slope of the curve does not exceed the threshold. The slope threshold is a user-defined preset value.

Where the slope of the sample reflectance spectral curve is small, adjacent bands are strongly correlated and can be merged into the same band subset; conversely, where the slope is large, the bands are weakly correlated and must be assigned to different subsets. In this way each original band is further divided into multiple band subsets according to how much the curve varies.
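As a concrete illustration of this subdivision rule, the sketch below splits one coarse band wherever the absolute slope of the reflectance curve reaches the threshold. This is a minimal NumPy sketch: the function name, the use of the absolute first difference as the slope, and the index-based return format are illustrative assumptions, not taken from the filing.

```python
import numpy as np

def subdivide_band(reflectance, slope_threshold):
    """Split one coarse band (reflectance sampled over contiguous
    wavelengths) into band subsets at every point where the curve
    slope reaches the preset threshold (step (a) of the text)."""
    slopes = np.abs(np.diff(reflectance))
    # indices where the slope reaches the threshold become cut points
    cuts = np.flatnonzero(slopes >= slope_threshold) + 1
    return np.split(np.arange(len(reflectance)), cuts)

# flat region, one sharp jump, flat region -> two band subsets
curve = np.array([0.10, 0.11, 0.12, 0.40, 0.41, 0.42])
subsets = subdivide_band(curve, slope_threshold=0.1)
```

Bands whose slopes stay below the threshold remain merged, matching the correlation argument above.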

(b) Compute the standard deviation of the sample reflectance spectral curve within each band subset. This standard deviation reflects how spread out the reflectance values are within the subset: the larger it is, the more information the subset carries and the wider the range of pixel values. The several band subsets with the largest reflectance standard deviations are therefore kept as candidate bands; the number of candidates retained can be set by the user.

(c) Compute the correlation coefficients between different candidate bands, keep the several candidate bands with the smallest correlation coefficients as representative bands, and filter out the candidates with large correlation coefficients. The number of representative bands retained can be set by the user.
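Steps (b) and (c) can be sketched together: rank band subsets by reflectance standard deviation, then greedily keep weakly correlated candidates. All names, the greedy strategy, and the 0.95 correlation cutoff are illustrative assumptions; the filing only states that high-std subsets are kept and highly correlated candidates are filtered out.

```python
import numpy as np

def select_representative_bands(curves, subsets, n_candidates, n_repr):
    """curves: (N_pixels, B) reflectance matrix; subsets: lists of
    band indices. Keeps the n_candidates subsets with the largest
    reflectance standard deviation (step (b)), then greedily keeps
    up to n_repr weakly correlated subsets (step (c))."""
    # (b) information content ~ reflectance standard deviation
    stds = np.array([curves[:, s].std() for s in subsets])
    cand = list(np.argsort(stds)[::-1][:n_candidates])
    # (c) keep candidates weakly correlated with those already kept
    means = {i: curves[:, subsets[i]].mean(axis=1) for i in cand}
    kept = [cand[0]]
    for i in cand[1:]:
        if len(kept) == n_repr:
            break
        r = max(abs(np.corrcoef(means[i], means[j])[0, 1]) for j in kept)
        if r < 0.95:  # correlation cutoff is an assumed hyperparameter
            kept.append(i)
    return [subsets[i] for i in kept]
```

A duplicate of an already-kept subset (correlation 1.0) is dropped, while an independent subset survives the filter.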

(2) Enumerate all combinations of the representative bands to obtain multiple band combinations, each containing several representative bands.

(3) Extract the multispectral image of the sample hyperspectral image under each band combination, and compute the OIF value and the comprehensive JM distance J_new of that multispectral image.

The OIF (Optimal Index Factor) of a multispectral image is computed with the standard formula, which is not repeated in this embodiment. The comprehensive JM distance J_new of the multispectral image characterizes how distinguishable the waterweed class is from the other classes: the larger J_new, the more distinguishable they are.

J_new is computed from the JM distances between the waterweed class and each of the other classes, so the JM distance JM_j between the waterweed class and the j-th other class is first computed from the multispectral image, using the standard formula, which is likewise not repeated in this embodiment.
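The OIF of step (3) has a standard definition (sum of band standard deviations divided by the sum of absolute pairwise correlations), and the band combinations of step (2) can be enumerated with itertools. Function names and data layout are illustrative.

```python
import numpy as np
from itertools import combinations

def oif(bands):
    """Optimal Index Factor of one band combination.
    bands: (N_pixels, K) matrix, one column per band.
    Standard definition: sum of band standard deviations over the
    sum of absolute pairwise correlation coefficients."""
    stds = bands.std(axis=0).sum()
    corr = sum(abs(np.corrcoef(bands[:, i], bands[:, j])[0, 1])
               for i, j in combinations(range(bands.shape[1]), 2))
    return stds / corr

# step (2): e.g. all 3-band combinations of five representative bands
combos = list(combinations(range(5), 3))
```

For two identical bands the correlation sum is 1, so the OIF reduces to twice the band standard deviation.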

When handling multi-class problems, traditional methods typically characterize class separability by averaging the JM distances between the different classes. Averaging, however, evaluates separability only in aggregate and masks class pairs whose JM distance, and hence mutual separability, is small. To address this, this embodiment computes the comprehensive JM distance J_new following the idea that class separability is inversely proportional to feature correlation, combining each pairwise distance JM_j with the corresponding Pearson correlation coefficient R_j (the closed-form expression is given as a formula in the original filing).

Here n is the total number of classes contained in the sample hyperspectral image, the i-th class is the waterweed class (so j ≠ i), and R_j is the Pearson correlation coefficient between the waterweed class and the j-th class determined from the multispectral image:

R_j = Σ_k (X_ik − X̄_i)(X_jk − X̄_j) / √( Σ_k (X_ik − X̄_i)² · Σ_k (X_jk − X̄_j)² )

where X_ik is the sum of the pixel values belonging to the waterweed class in the k-th band of the multispectral image, X_jk is the corresponding sum for the j-th class, X̄_i and X̄_j are the means of X_ik and X_jk over the bands, K is the total number of bands in the multispectral image, and k runs from 1 to K.
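The standard JM distance the text refers to (for Gaussian class distributions) can be sketched directly; for J_new, the filing's exact closed form is not reproduced in this text, so weighting each JM_j by 1/R_j and averaging is an assumption that merely follows the stated inverse-proportionality idea.

```python
import numpy as np

def jm_distance(x1, x2):
    """Jeffries-Matusita distance between two classes, assuming
    Gaussian class distributions (the standard formula):
    JM = 2 * (1 - exp(-B)) with B the Bhattacharyya distance.
    x1, x2: (N_i, K) pixel samples of each class."""
    m1, m2 = x1.mean(0), x2.mean(0)
    c1 = np.cov(x1, rowvar=False)
    c2 = np.cov(x2, rowvar=False)
    c = (c1 + c2) / 2
    d = m1 - m2
    b = (d @ np.linalg.solve(c, d)) / 8 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2 * (1 - np.exp(-b))

def comprehensive_jm(jm, r):
    """Hedged sketch of J_new: each JM_j is weighted by 1/R_j and
    the results averaged. This form is an assumption; the patent's
    exact expression may differ."""
    jm, r = np.asarray(jm), np.asarray(r)
    return float(np.mean(jm / np.abs(r)))
```

Well-separated classes approach the JM saturation value of 2, while identical classes give 0.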

(4) Among the band combinations whose multispectral images have a comprehensive JM distance J_new reaching the distance threshold, take the representative bands of the combination whose multispectral image has the largest OIF value as the characteristic bands.

Step 4: apply a continuum removal transformation to the sample reflectance spectral curve and then compute the spectral indices. Continuum removal enhances the reflectance spectral differences between waterweeds and the other classes. Analyzing the characteristics of the waterweed growth environment, this application selects the vegetation indices with the largest contributions: the enhanced vegetation index (EVI), the photochemical reflectance index (PRI), the normalized difference water index (NDWI), and the green normalized difference vegetation index (GNDVI) are computed separately and merged into a high-dimensional matrix that serves as the spectral indices.
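Continuum removal and the four named indices all have standard definitions, sketched below. Which wavelengths stand in for the blue/green/red/NIR bands and the 531/570 nm PRI bands are the usual remote-sensing conventions, not choices taken from the filing.

```python
import numpy as np

def continuum_removal(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull
    (standard continuum removal; deepens absorption features)."""
    hull = [0]
    for i in range(1, len(wl)):
        # pop the last hull point while it lies on or below the
        # chord from the previous hull point to the new point
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (refl[b] - refl[a]) * (wl[i] - wl[a]) <= \
               (refl[i] - refl[a]) * (wl[b] - wl[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    continuum = np.interp(wl, wl[hull], refl[hull])
    return refl / continuum

def spectral_indices(blue, green, red, nir, r531, r570):
    """Standard formulas for the four indices named in the text."""
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
    pri = (r531 - r570) / (r531 + r570)
    ndwi = (green - nir) / (green + nir)
    gndvi = (nir - green) / (nir + green)
    return np.stack([np.asarray(evi), np.asarray(pri),
                     np.asarray(ndwi), np.asarray(gndvi)])
```

After continuum removal the curve equals 1 along the hull and dips below 1 inside absorption features, which is what sharpens the class differences mentioned above.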

Step 5: divide the sample hyperspectral image into several training sample images, extract for each one its spectral features in the characteristic bands and its spectral indices, then concatenate the two along the channel dimension to obtain the fusion features of that training sample image.

In one embodiment, the training sample images are obtained with a sliding cutting window: the side length of the cutting window and its sliding stride are both defined as x. Rows and columns of background pixels are then appended to the sample hyperspectral image (which has H rows and W columns of pixels) until both dimensions are multiples of x, i.e. ⌈H/x⌉·x − H rows and ⌈W/x⌉·x − W columns are added, where ⌈·⌉ denotes rounding up; the added background pixels are given a surface reflectance of 0. Finally the cutting window slides over the expanded sample hyperspectral image, and the image inside the window at each position is extracted as a training sample image.
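Since the window side and the stride are both x, the windows tile the padded image without overlap. A minimal NumPy sketch of this padding-and-tiling step (the function name and the (H, W, C) layout are assumptions):

```python
import numpy as np

def tile_image(cube, x):
    """Zero-pad an (H, W, C) hyperspectral cube so both spatial
    dimensions are multiples of the window side x (padding pixels
    get reflectance 0), then cut non-overlapping x-by-x training
    sample images (side length == stride == x)."""
    H, W, C = cube.shape
    pad_h = -H % x   # == ceil(H / x) * x - H
    pad_w = -W % x   # == ceil(W / x) * x - W
    padded = np.pad(cube, ((0, pad_h), (0, pad_w), (0, 0)))
    return [padded[r:r + x, c:c + x]
            for r in range(0, padded.shape[0], x)
            for c in range(0, padded.shape[1], x)]
```

A 5×7 image with x = 4 is padded to 8×8 and yields four 4×4 tiles, with the original pixel values preserved.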

Step 6: train a neural network model with the fusion features of each training sample image as input and the classification results of its pixels as output, obtaining the waterweed recognition model.

In one embodiment, the structure of the neural network model used to train the waterweed recognition model is shown in FIG. 3. The model comprises a downsampling module, a feature extraction module, an upsampling feature fusion module, and a segmentation prediction module.

The downsampling module comprises three cascaded convolutional layers. It extracts features from the input image and produces three levels of feature maps at successively reduced fractions of the input size (the exact ratios are given in the original figures). The first two levels of feature maps, together with the original input image, feed the three branches of the upsampling feature fusion module, while the smallest, last-level feature map is fed to the feature extraction module.

The feature extraction module comprises multiple cascaded transformer modules. The core of each transformer module is a multi-head self-attention module fused with channel attention: the channel attention module mainly extracts the pixel relations among different bands, while the multi-head self-attention module mainly improves the model's ability to express global pixel relations. The outputs of the channel attention module and the multi-head self-attention module are fused with the input feature map to generate an intermediate feature map, which then passes through an MLP module to produce the transformer module's output feature map.
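The filing does not give the exact form of the channel attention. As one common realization, a squeeze-and-excitation style gate over the band/channel dimension can be sketched in NumPy; the bottleneck weights, the ReLU, and the sigmoid gate are assumptions, not the patent's specified design.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention, one common
    way to model inter-band relations. feat: (H, W, C) feature map;
    w1: (C, C_r) and w2: (C_r, C) bottleneck weights (illustrative)."""
    # squeeze: global average over spatial positions -> (C,)
    z = feat.mean(axis=(0, 1))
    # excitation: bottleneck MLP with ReLU, then a sigmoid gate
    s = 1 / (1 + np.exp(-(np.maximum(z @ w1, 0) @ w2)))
    return feat * s  # reweight each band/channel
```

With zero weights the gate is 0.5 everywhere, so every channel is simply halved, which makes the gating behavior easy to verify.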

The upsampling feature fusion module integrates multi-scale feature map information to raise the resolution of the high-level feature maps while reconstructing image detail. It is divided into a local branch and a global branch, which learn the local and global pixel relations of the feature maps respectively. The local branch applies a linear transformation along the width dimension, after which identical feature points are densely distributed within a local region; the global branch applies a linear transformation along the height dimension, after which the feature points are evenly distributed over the whole map. The feature maps of the two branches are fused, the channel dimension is adjusted, and the module outputs its feature map.

The segmentation prediction module adjusts the channel dimension of the feature map to the number of predicted classes through a convolutional layer and outputs the pixel classification results.

Step 7: use the waterweed recognition model to determine the classification result of each pixel in the hyperspectral image of the water area to be inspected, and compute the proportion of pixels belonging to the waterweed class among all pixels to obtain the waterweed coverage of that water area.

The hyperspectral image of the water area to be inspected is acquired in the same way as the sample hyperspectral image in Step 1, and its fusion features are extracted in the same way as the fusion features of each training sample image during training; neither is repeated in this embodiment. The fusion features are then fed into the waterweed recognition model to obtain the predicted class of each pixel in the image.
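The final coverage computation of Step 7 is a one-liner over the predicted class map (the label value for the waterweed class is illustrative):

```python
import numpy as np

def waterweed_coverage(class_map, weed_label=1):
    """Coverage = number of pixels predicted as the waterweed class
    divided by the total pixel count. weed_label is an assumed
    encoding of the waterweed class in the prediction map."""
    return float((class_map == weed_label).mean())
```

For a 2x2 map with three waterweed pixels this returns a coverage of 0.75.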

The above is only a preferred embodiment of the present application, and the present application is not limited to the above embodiment. It should be understood that other improvements and variations directly derived or conceived by those skilled in the art without departing from the spirit and concept of the present application are all deemed to fall within the scope of protection of the present application.

Claims (10)

1. An aquatic weed coverage detection method based on airborne hyperspectral imaging, characterized by comprising the following steps:
shooting a sample hyperspectral image of a water area in which aquatic weeds grow through a hyperspectral imaging device carried by an unmanned aerial vehicle, and marking classification results of all pixel points in the sample hyperspectral image;
converting the radiance of each pixel point in the sample hyperspectral image into surface reflectance to obtain a corresponding sample reflectance spectrum curve;
screening out characteristic bands according to curve characteristics of the sample reflectance spectrum curve;
calculating spectral indices after applying a continuum-removal transformation to the sample reflectance spectrum curve;
dividing the sample hyperspectral image into a plurality of training sample images, and respectively extracting spectral features of each training sample image under the characteristic bands and the spectral indices of each training sample image; splicing the spectral features and the spectral indices of each training sample image in the channel dimension to obtain a fusion feature of the training sample image;
taking the fusion feature of each training sample image as input and the classification results of the pixel points in the training sample image as output, and training a neural-network-based model to obtain an aquatic weed recognition model;
and determining a classification result of each pixel point in the hyperspectral image of a water area to be detected by using the aquatic weed recognition model, and calculating the ratio of the number of pixel points belonging to the aquatic weed category to the total number of pixel points to obtain the aquatic weed coverage of the water area to be detected.
2. The aquatic weed coverage detection method according to claim 1, characterized in that screening out the characteristic bands according to the curve characteristics of the sample reflectance spectrum curve comprises:
screening out a plurality of representative bands with high spectral information content according to the curve characteristics of the sample reflectance spectrum curve;
performing full permutation and combination on the plurality of representative bands to obtain a plurality of band combinations, each band combination comprising several representative bands;
extracting a multispectral image of the sample hyperspectral image under each band combination, and calculating an OIF value and a comprehensive JM distance J_new of the multispectral image, the comprehensive JM distance J_new characterizing the degree of separability between the aquatic weed category and the other categories, a greater comprehensive JM distance indicating a higher degree of separability;
and determining, as the characteristic bands, the representative bands in the band combination corresponding to the multispectral image that has the largest OIF value among the multispectral images whose comprehensive JM distance J_new reaches a distance threshold.
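The claim does not define the OIF value; the standard Optimum Index Factor divides the sum of the selected bands' standard deviations by the sum of the absolute pairwise correlation coefficients, which is the assumption in this sketch:

```python
import itertools
import numpy as np

def oif(bands):
    """Optimum Index Factor for one band combination (standard form,
    assumed here since the claim leaves it undefined).
    bands: array of shape (B, N) -- B bands, N pixels each.
    OIF = (sum of band standard deviations) /
          (sum of |r_ij| over all band pairs)."""
    stds = bands.std(axis=1)
    corr = np.corrcoef(bands)
    pair_sum = sum(abs(corr[i, j])
                   for i, j in itertools.combinations(range(len(bands)), 2))
    return stds.sum() / pair_sum
```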
3. The aquatic weed coverage detection method according to claim 2, characterized in that calculating the comprehensive JM distance J_new for each multispectral image comprises:
calculating, according to the multispectral image, the JM distance JM_j between the aquatic weed category and any other j-th category;
calculating the comprehensive JM distance J_new of the multispectral image from the JM distances JM_j and the correlation coefficients R_j, wherein N is the total number of categories contained in the sample hyperspectral image, R_j is the Pearson correlation coefficient between the aquatic weed category determined based on the multispectral image and any other j-th category, and the i-th category is the aquatic weed category.
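The text does not expand the pairwise JM distance JM_j; a common choice for Gaussian-distributed classes derives it from the Bhattacharyya distance, which is the assumption in this sketch:

```python
import numpy as np

def jm_distance(x1, x2):
    """Jeffries-Matusita distance between two classes of pixel spectra.
    x1, x2: arrays of shape (n_samples, n_bands).
    Assumes Gaussian class distributions: compute the Bhattacharyya
    distance B, then JM = 2*(1 - exp(-B)), which saturates at 2 for
    fully separable classes."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1 = np.cov(x1, rowvar=False)
    c2 = np.cov(x2, rowvar=False)
    c = (c1 + c2) / 2
    d = m1 - m2
    b = (d @ np.linalg.solve(c, d) / 8
         + 0.5 * np.log(np.linalg.det(c)
                        / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))))
    return 2 * (1 - np.exp(-b))
```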
4. The aquatic weed coverage detection method according to claim 3, characterized in that the Pearson correlation coefficient R_j between the aquatic weed category determined based on the multispectral image and any other j-th category is:
R_j = Σ_{k=1..K} (X_ik − X̄_i)(X_jk − X̄_j) / sqrt( Σ_{k=1..K} (X_ik − X̄_i)² · Σ_{k=1..K} (X_jk − X̄_j)² ),
wherein X_ik is the sum of the pixel values of the pixel points belonging to the aquatic weed category in the k-th band of the multispectral image, X_jk is the sum of the pixel values of the pixel points belonging to the j-th category in the k-th band of the multispectral image, X̄_i is the average of X_ik over the bands of the multispectral image, X̄_j is the average of X_jk over the bands of the multispectral image, and K is the total number of bands contained in the multispectral image.
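A direct transcription of claim 4 into code, using the per-band class sums defined there (the Pearson formula itself is standard):

```python
import numpy as np

def pearson_between_classes(cube, labels, i, j):
    """Pearson correlation R_j between class i (aquatic weed) and class j
    across the K bands, as defined in claim 4.
    cube: multispectral image, shape (K, H, W); labels: (H, W) class map.
    X_ik = sum of pixel values of class i in band k."""
    K = cube.shape[0]
    xi = np.array([cube[k][labels == i].sum() for k in range(K)])
    xj = np.array([cube[k][labels == j].sum() for k in range(K)])
    di, dj = xi - xi.mean(), xj - xj.mean()  # deviations from band means
    return (di * dj).sum() / np.sqrt((di**2).sum() * (dj**2).sum())
```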
5. The aquatic weed coverage detection method according to claim 2, characterized in that screening out a plurality of representative bands with high spectral information content according to the curve characteristics of the sample reflectance spectrum curve comprises:
dividing the sample reflectance spectrum curve into a plurality of band subsets according to its curve characteristics;
calculating the standard deviation of the reflectance of the sample reflectance spectrum curve within each band subset, and screening out the band subsets with the largest reflectance standard deviations as candidate bands;
and calculating correlation coefficients between different candidate bands, and screening out the candidate bands with small mutual correlation coefficients as the representative bands.
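The screening steps of claim 5 can be sketched as follows; the greedy correlation filter and the use of per-subset mean reflectance are illustrative assumptions, not the embodiment's exact procedure:

```python
import numpy as np

def pick_representatives(spectra, subsets, n_candidates, corr_thresh):
    """spectra: (n_samples, n_wavelengths) sample reflectance curves.
    subsets: list of (start, end) wavelength-index ranges (band subsets).
    Keep the n_candidates subsets whose mean reflectance varies most
    across samples (largest standard deviation), then greedily drop any
    candidate whose |correlation| with an already kept one reaches
    corr_thresh."""
    # per-subset mean reflectance of every sample: shape (n_subsets, n_samples)
    means = np.array([spectra[:, a:b].mean(axis=1) for a, b in subsets])
    stds = means.std(axis=1)
    order = np.argsort(stds)[::-1][:n_candidates]   # candidate bands
    kept = []
    for idx in order:
        if all(abs(np.corrcoef(means[idx], means[k])[0, 1]) < corr_thresh
               for k in kept):
            kept.append(idx)
    return kept
```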
6. The aquatic weed coverage detection method of claim 5, wherein band division of the sample reflectance spectrum curve into a plurality of band subsets comprises:
Performing band coarse division based on peaks and troughs of the sample reflectivity spectrum curve to obtain a plurality of original bands, wherein each original band comprises a band range between adjacent peaks and troughs;
and according to the curve slope of the sample reflectivity spectrum curve in each original wave band, carrying out wave band subdivision on the original wave band into a plurality of wave band subsets.
7. The aquatic weed coverage detection method according to claim 6, characterized in that the band subdivision of each original band comprises:
dividing the original band into a plurality of band subsets by taking the positions at which the slope of the sample reflectance spectrum curve within the original band reaches a slope threshold as division positions, each band subset comprising consecutive bands over which the slope of the sample reflectance spectrum curve does not exceed the slope threshold.
8. The aquatic weed coverage detection method according to claim 1, characterized in that calculating the spectral index comprises:
respectively calculating an enhanced vegetation index, a photochemical reflectance index, a normalized difference water index and a green normalized difference vegetation index, and then merging them into a high-dimensional matrix as the spectral index.
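The four indices of claim 8 are not written out in the text; the sketch below assumes their standard definitions (EVI, PRI, McFeeters NDWI, GNDVI) and stacks them into one matrix:

```python
import numpy as np

def spectral_indices(blue, green, red, nir, r531, r570):
    """Stack four common indices into one (4, H, W) matrix.
    Standard formulas (assumed; the patent does not spell them out):
      EVI   = 2.5*(NIR-Red)/(NIR + 6*Red - 7.5*Blue + 1)
      PRI   = (R531 - R570)/(R531 + R570)
      NDWI  = (Green - NIR)/(Green + NIR)
      GNDVI = (NIR - Green)/(NIR + Green)"""
    evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
    pri = (r531 - r570) / (r531 + r570)
    ndwi = (green - nir) / (green + nir)
    gndvi = (nir - green) / (nir + green)
    return np.stack([evi, pri, ndwi, gndvi])
```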
9. The aquatic weed coverage detection method according to claim 1, characterized in that shooting the sample hyperspectral image of the water area in which aquatic weeds grow through the hyperspectral imaging device carried by the unmanned aerial vehicle comprises:
controlling the unmanned aerial vehicle to fly sequentially along a plurality of parallel flight lines, vertically shooting single-flight-line remote sensing images of the water area through the carried hyperspectral imaging device during flight along each flight line, and sequentially stitching the single-flight-line remote sensing images shot while flying along the plurality of flight lines to obtain the sample hyperspectral image.
10. The aquatic weed coverage detection method according to claim 1, wherein dividing the sample hyperspectral image into a number of training sample images comprises:
defining both the side length of the cutting window and the sliding step length as x;
adding ⌈H/x⌉·x − H rows of background pixel points to a sample hyperspectral image containing H rows of pixel points, and adding ⌈W/x⌉·x − W columns of background pixel points to a sample hyperspectral image containing W columns of pixel points, so as to obtain an expanded sample hyperspectral image, the surface reflectance of the added background pixel points being 0, wherein ⌈·⌉ denotes rounding up;
and performing sliding cutting on the expanded sample hyperspectral image with the cutting window, and extracting the image within the cutting window as a training sample image.
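The padding and sliding-window cutting of claim 10 (window side = step = x, so the windows do not overlap) can be sketched as:

```python
import numpy as np

def crop_training_samples(cube, x):
    """Pad a (C, H, W) hyperspectral cube with zero-reflectance background
    so that H and W become multiples of x, then cut non-overlapping
    x-by-x windows (window side = sliding step = x, as in claim 10)."""
    C, H, W = cube.shape
    pad_h = -H % x          # = ceil(H/x)*x - H rows to add
    pad_w = -W % x          # = ceil(W/x)*x - W columns to add
    padded = np.pad(cube, ((0, 0), (0, pad_h), (0, pad_w)))  # zeros
    samples = []
    for r in range(0, H + pad_h, x):
        for c in range(0, W + pad_w, x):
            samples.append(padded[:, r:r + x, c:c + x])
    return samples
```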
CN202410687368.3A 2024-05-30 2024-05-30 A method for detecting aquatic plant coverage based on airborne hyperspectral Pending CN118691989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410687368.3A CN118691989A (en) 2024-05-30 2024-05-30 A method for detecting aquatic plant coverage based on airborne hyperspectral

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410687368.3A CN118691989A (en) 2024-05-30 2024-05-30 A method for detecting aquatic plant coverage based on airborne hyperspectral

Publications (1)

Publication Number Publication Date
CN118691989A true CN118691989A (en) 2024-09-24

Family

ID=92771045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410687368.3A Pending CN118691989A (en) 2024-05-30 2024-05-30 A method for detecting aquatic plant coverage based on airborne hyperspectral

Country Status (1)

Country Link
CN (1) CN118691989A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118918483A (en) * 2024-10-10 2024-11-08 江西省煤田地质勘察研究院 Water quality prediction method and system based on vegetation image and mixed water quality model

Similar Documents

Publication Publication Date Title
CN112101271B (en) Hyperspectral remote sensing image classification method and device
CN109325431B (en) Method and device for detecting vegetation coverage in grassland grazing sheep feeding paths
CN106951836B (en) Crop Coverage Extraction Method Based on Prior Threshold Optimizing Convolutional Neural Network
Baron et al. Combining image processing and machine learning to identify invasive plants in high-resolution images
CN111462223B (en) Recognition Method of Soybean and Corn Planting Area in Jianghuai Region Based on Sentinel-2 Image
CN110889394A (en) Rice lodging recognition method based on deep learning UNet network
CN111161362A (en) Tea tree growth state spectral image identification method
CN114965501A (en) Peanut disease detection and yield prediction method based on canopy parameter processing
CN118691989A (en) A method for detecting aquatic plant coverage based on airborne hyperspectral
Meng et al. Fine hyperspectral classification of rice varieties based on attention module 3D-2DCNN
CN117197450A (en) SAM model-based land parcel segmentation method
CN117392382A (en) Single tree fruit tree segmentation method and system based on multi-scale dense instance detection
Shao et al. Quantifying effect of tassels on near-ground maize canopy RGB images using deep learning segmentation algorithm
Suzuki et al. Image segmentation between crop and weed using hyperspectral imaging for weed detection in soybean field
CN107944413A (en) Aquatic vegetation Classification in Remote Sensing Image threshold value calculation method based on spectral index ranking method
CN112528726B (en) A method and system for monitoring cotton aphid pests based on spectral imaging and deep learning
CN118396422A (en) Intelligent peach blossom recognition and fruit yield prediction method based on fusion model
CN117765370A (en) Leaf vegetable crop detection method and device based on Mask R-CNN
CN117392535A (en) Fruit tree flower bud target detection and white point rate estimation method oriented to complex environment
CN112580504B (en) Method and device for tree species classification and counting based on high-resolution satellite remote sensing images
CN116189005A (en) Automatic extraction method of aquatic vegetation and algal blooms in eutrophic lakes based on Landsat images
López et al. Multi-Spectral Imaging for Weed Identification in Herbicides Testing
CN113705454A (en) Method for extracting forest land containing infrared spectrum remote sensing image
Ji et al. Automated extraction of Camellia oleifera crown using unmanned aerial vehicle visible images and the ResU-Net deep learning model
Tejasri et al. Drought stress segmentation on drone captured maize using ensemble u-net framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination