
CN112561883B - Method for reconstructing hyperspectral image by crop RGB image - Google Patents

Method for reconstructing hyperspectral image by crop RGB image

Info

Publication number
CN112561883B
Authority
CN
China
Prior art keywords
image
hyperspectral
pixel
matrix
rgb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011494670.5A
Other languages
Chinese (zh)
Other versions
CN112561883A (en)
Inventor
易强
王政
于洪志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Asionstar Technology Co ltd
Original Assignee
Chengdu Asionstar Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Asionstar Technology Co ltd filed Critical Chengdu Asionstar Technology Co ltd
Priority to CN202011494670.5A
Publication of CN112561883A
Application granted
Publication of CN112561883B
Legal status: Active (current)
Anticipated expiration

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06T 3/4007 - Geometric image transformations in the plane of the image; scaling of whole images or parts thereof, e.g. expanding or contracting, based on interpolation, e.g. bilinear interpolation
    • G06T 7/11 - Image analysis; segmentation; edge detection; region-based segmentation
    • G06T 2207/10004 - Image acquisition modality: still image; photographic image
    • G06T 2207/10024 - Image acquisition modality: color image
    • G06T 2207/30188 - Subject of image: Earth observation; vegetation; agriculture
    • Y02A 40/10 - Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for reconstructing a hyperspectral image from a crop RGB image. Images are acquired in the same environment with an ordinary camera and a hyperspectral camera, and marker points are used to convert the coordinate systems of the two types of images into one common coordinate system; data cleaning then makes the two types of images consistent in size, with their pixel points in one-to-one correspondence. A matrix equation is established from the data acquired by the two types of images, and the conversion matrix from the ordinary-camera RGB image to the hyperspectral image is computed; this conversion matrix converts the RGB image into a hyperspectral image. Piecewise Hermite polynomial interpolation is further applied to extend and fit the spectral curve, yielding spectral images at extended hyperspectral frequency points. Using image-processing technology, the invention greatly reduces the cost of crop disease detection, overcomes the limitation that a hyperspectral camera cannot acquire a spectral image at an arbitrary specified frequency, and is suitable for detecting new diseases at extended frequency points.

Description

Method for reconstructing hyperspectral image by crop RGB image
Technical Field
The invention relates to a method for reconstructing a hyperspectral image from a crop RGB image, and belongs to the technical field of intelligent crop monitoring.
Background
Judging from the current development of agriculture, smart agriculture uses science and technology to steer the agricultural sector toward intelligent, high-quality development. As the concrete embodiment of the intelligent economy in agriculture, smart agriculture applies computer and Internet-of-Things technology to traditional farming, so that agricultural production gradually breaks free of the constraint of farming at the mercy of the weather. In a greenhouse, suitable air temperature and humidity, soil moisture, carbon dioxide concentration and other crop growth-environment parameters can be monitored and adjusted from a computer screen; on open fields, advanced agricultural machinery completes sowing and harvesting work quickly, well and economically. Smart agriculture promotes the high-quality development of agriculture and lays a solid foundation for the development of the agricultural industry.
Crop disease refers to the pathological changes in morphology, physiology and biochemistry that occur in crops under the influence of biotic or abiotic factors; such changes hinder the normal growth, development and fruiting of the plants. Precision management is an inevitable trend in global agriculture, and its technical basis is the acquisition of farmland and crop information; acquiring crop disease information rapidly and in real time is therefore a key problem in realizing precision agricultural management and improving crop yield and quality.
Image-based crop disease detection mainly includes early detection of leaf diseases based on vein features, detection of leaf diseases based on image texture features, and detection of leaf diseases based on spectral features. Among these, detecting leaf diseases from images at specified spectral frequencies by exploiting spectral features is currently the mainstream approach.
In actual agricultural production, hyperspectral cameras are costly, bulky and heavy; in particular, when carried by an unmanned aerial vehicle for imaging they increase its payload and further raise production costs. On the other hand, a hyperspectral camera images only on a series of spectral bands preset by the specific design of its optical and electronic components. When a specific disease needs to be analyzed, the camera cannot acquire spectral characteristics at a non-preset frequency point, so spectral imaging at a specified frequency, and the corresponding disease analysis, cannot be realized.
Therefore, reconstructing the spectral images that a hyperspectral camera would capture from the RGB images taken by an ordinary camera, so as to achieve spectral continuity and meet the need for spectral imaging at specific frequencies, is one of the key technologies for crop disease detection based on spectral characteristics. A method for reconstructing hyperspectral images from crop RGB images can greatly reduce the cost of disease detection and can be widely applied in smart agricultural production practice.
Disclosure of Invention
The invention aims to overcome the above problems in the prior art and provide a method for reconstructing a hyperspectral image from a crop RGB image. Using image technology, the invention greatly reduces the cost of crop disease detection, overcomes the limitation that a hyperspectral camera cannot acquire a spectral image at a specific frequency, and can be used to detect new diseases at extended frequency points.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
a method for reconstructing a hyperspectral image from an RGB image of a crop, comprising the steps of:
a. crop image data acquisition:
Under the same environment, an ordinary camera and a hyperspectral camera are used to collect crop images respectively;
b. image pixel coordinate system conversion:
calculating coordinate-system conversion parameters with the Molodensky seven-parameter model, so that the coordinate systems of the two types of images are converted into one another;
c. data cleaning:
the two types of pictures are consistent in size and the pixel points are in one-to-one correspondence through data cleaning;
d. conversion of RGB image to hyperspectral image:
Establishing an RGB acquisition matrix for the ordinary camera and an intensity matrix for the hyperspectral camera, and using the least-squares method to compute the conversion matrix that minimizes the mean square error, so as to convert the RGB image into a hyperspectral image;
e. hyperspectral curve expansion fitting:
Performing polynomial interpolation with a piecewise Hermite interpolation algorithm to extend and fit the spectral curve, so as to obtain spectral images at extended hyperspectral frequency points.
In step a, four marker points Q1, Q2, Q3 and Q4 are deployed around the crops being imaged, for subsequent image positioning and camera coordinate-system conversion; the acquired images are the RGB image of the ordinary camera and a set of hyperspectral images on different frequency bands; the ordinary-camera image is denoted A and the hyperspectral image B, and with n hyperspectral frequency bands the images acquired on the n bands are B_i (i = 1, 2, ..., n).
In step b, on the image A acquired by the ordinary camera, the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_A = (x_{A,i}, y_{A,i}), where i = 1, 2, 3, 4 is the marker-point index; in the images B_i (i = 1, 2, ..., n) acquired by the hyperspectral camera, one image is selected arbitrarily and the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_B = (x_{B,i}, y_{B,i}), where i = 1, 2, 3, 4 is the marker-point index.
The Molodensky seven-parameter coordinate conversion formula is X_B = X_0 + (1 + α)R(X_A - X_0) + dX, where X_0 is the transition-point coordinate, taken as the geometric centroid of the points Q1, Q2, Q3 and Q4 and therefore a known value in the equation; X_A, X_B are the coordinate vectors of the points Q1, Q2, Q3 and Q4 in the two image coordinate systems; dX is the translation of the three coordinates; α is the scale parameter of the scaling; and R is the rotation matrix generated by the rotation angles about the three coordinate axes. The translation dX, the scale α and the three rotation angles ω_x, ω_y, ω_z of the rotation matrix R are the seven parameters to be solved in the Molodensky model, and the vector of the seven unknowns can be written Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}. The rotation matrix can be expressed as the product of three sub-matrices, one elementary rotation about each coordinate axis.
It follows that the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX is nonlinear; it is solved iteratively with the Gauss-Newton method, finally yielding the seven-parameter vector Y = {dx, dy, dz, ω_x, ω_y, ω_z, α} and completing the coordinate conversion.
The specific calculation steps of the coordinate conversion are as follows:
1. give an initial value Y_0;
2. from the residual form of the Molodensky equation, f(Y) = X_B - [X_0 + (1 + α)R(X_A - X_0) + dX], obtain the Jacobian matrix expression J by differentiating f with respect to the parameters, and substitute Y_0 to compute J(Y_0);
3. compute H(Y_0) = J^T(Y_0)·J(Y_0) and B(Y_0) = -J^T(Y_0)·f(Y_0);
4. solve the equation H·ΔY = B for ΔY;
5. if ΔY is smaller than the set threshold, stop the iteration; the current value of Y gives the seven parameter values to be solved. Otherwise set Y = Y_0 + ΔY and repeat steps 2, 3 and 4 iteratively.
Having computed the seven coordinate-conversion parameters Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}, high-precision conversion of coordinates between the two images is performed with the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX in the actual coordinate conversion calculation.
The specific steps of step c are as follows:
1. basic image selection: select image A as the base image;
2. image clipping: on image A, select the pixel-coordinate range to be analyzed, X_{A,min} and X_{A,max}, according to the extent of the target crop, forming a rectangular frame; the pixel coordinate conversion and pixel convergence below are performed inside this frame. Substitute X_{A,min} and X_{A,max} into the coordinate conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX to obtain X_{B,min} and X_{B,max}, which form the corresponding rectangular frame on image B. Collect the pixel data inside the rectangular frames of images A and B to form the image data sets P_A = {R, G, B, X_A} and P_B = {I, X_B}, where R, G, B are the three channel values of a pixel, I is the spectral intensity value, and X_A, X_B are the coordinate vectors in the respective coordinate systems;
3. pixel coordinate conversion: for P_B = {I, X_B}, first rearrange the conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX into X_A = X_0 + R^{-1}(X_B - dX - X_0)/(1 + α), and use it to convert the coordinates X_B of the set P_B into coordinates in the image-A coordinate system, forming the coordinate-converted B-image sampling data set P_{B->A} = {I, X_{B->A}};
4. image pixel convergence: round the coordinates X_{B->A} of P_{B->A} = {I, X_{B->A}} to integers and, for samples that share the same integer coordinates, take their average as the data value of that integer pixel.
Through the above steps, the image data acquired by A and B are placed in one-to-one correspondence by pixel coordinate, and the two types of images correspond to the same physical position at the same pixel coordinate point. After this conversion is applied to the pixel samples of the B image set, the coordinate-converted image set D_i (i = 1, 2, ..., n) is formed.
Step d specifically comprises the following steps:
1. for the coordinate-converted and data-cleaned images C and D_i (i = 1, 2, ..., n), the pixel positions and coordinates are in one-to-one correspondence, which identifies the samples taken at the same position point. For image C, traverse all pixels and build the RGB acquisition matrix S of size m × 3, in which each row holds the sampled R, G, B values of the three channels of one pixel and m is the total number of pixels. For the image set D_i (i = 1, 2, ..., n), use the same pixel traversal as for image C and extract the intensity value on each image to form the hyperspectral intensity sample matrix I of size m × n, whose entry I_{i,j} is the intensity sample of pixel i in frequency band j, with m the total number of pixels and n the number of hyperspectral frequency bands. Then establish the conversion equation between the S and I matrices as I = S·T, where T is the 3 × n conversion matrix to be solved. Transforming this equation gives (S^T S)^{-1} S^T I I^T = T I^T, where the superscript T denotes the matrix transpose and the superscript -1 the matrix inverse; setting R = (S^T S)^{-1} S^T I I^T and P = I^T, both of which can be computed from the sample matrices S and I, converts it into the standard matrix equation R = T·P, from which the conversion matrix T is solved;
2. with the computed conversion matrix T, the RGB channel values of the pixels of an RGB image and I = S·T give the intensity values on the different frequency bands, completing the fitted calculation from the RGB image to the hyperspectral band intensities;
3. traverse every pixel of the RGB image and compute I = S·T to obtain the intensity values of the n hyperspectral bands at each pixel; arrange the pixels by hyperspectral band and reconstruct a spectral image for each band, completing the reconstructed spectral images of the n bands.
In step e, the conversion matrix T calculated from the data of the n frequency bands generated by the hyperspectral camera and the RGB image photographed in the same environment is used to reconstruct hyperspectral images of the n frequency bands from the RGB image.
In step e, for the RGB channel sample (R, G, B) of a pixel on the RGB image, the spectral intensities (I_1, I_2, ..., I_n) on the n frequency bands are computed with I = S·T; the center frequencies of the n bands are (f_1, f_2, ..., f_n), so a discrete I-f curve I(f) is constructed. A smooth I(f) curve is then generated by interpolation with the piecewise curve-fitting method of Hermite interpolation; substituting the required frequency value f into this curve gives the spectral intensity at that frequency, completing the fitted calculation of the spectral intensity at frequency f for the RGB pixel.
The fitted calculation of the spectral intensity at frequency f from an RGB pixel proceeds as follows:
For the piecewise fitting of the spectral intensity I, two-point cubic Hermite piecewise interpolation is performed on each pair of adjacent data points I_i, I_{i+1} of (I_1, I_2, ..., I_n); the interpolation function is the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3, where f is the frequency variable. In the Hermite piecewise interpolation, the function H_i(f) satisfies:
1. I_i and I_{i+1} lie on the function H_i(f);
2. to keep the fitted function smooth, at the endpoint I_i the derivative of H_i(f) equals the derivative of H_{i-1}(f), and at the endpoint I_{i+1} the derivative of H_i(f) equals the derivative of H_{i+1}(f).
These two conditions provide four known constraints; substituting them into the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3 and solving segment by segment determines the coefficients a_i (i = 0, 1, 2, 3), fitting a cubic polynomial curve H_i(f) (i = 1, 2, ..., n-1) between each pair of points (I_1, I_2), ..., (I_{n-1}, I_n);
For a frequency f requiring a fitted value, locate k with f_k < f ≤ f_{k+1} in the frequency set (f_1, f_2, ..., f_n) and compute the intensity at f with the piecewise-fitted function H_k(f), completing the fitted calculation of the intensity;
Applying this procedure to every pixel of the RGB image, the fitted spectral intensity at frequency f is computed for each pixel, the conversion from the RGB image to the spectral image at frequency f is completed, and the spectral image at frequency f is reconstructed.
The invention has the advantages that:
1. The invention provides a method for reconstructing a hyperspectral image from a crop RGB image taken by an ordinary camera. By using advanced image-processing techniques and algorithms to reconstruct the hyperspectral image, it effectively reduces the cost of disease detection, is suitable for large-area detection of crop diseases, and has the characteristics of low cost, advancement, practicality, intelligence, extensibility and high efficiency.
2. The invention uses the Molodensky seven-parameter model to convert between the coordinate systems of the ordinary camera and the hyperspectral camera. Borrowing from map coordinate-conversion technology, it converts pixel coordinates between different cameras quickly and with high precision, providing an efficient, high-accuracy method for image pixel-coordinate conversion.
3. Using the image data collected by the ordinary camera and the hyperspectral camera, the invention builds a conversion equation from the cleaned data-acquisition matrices and solves the conversion matrix by the least-squares method, effectively converting the ordinary-camera RGB image into the hyperspectral spectral image. This markedly reduces the cost of crop disease detection and can be widely extended to large-area disease detection in smart agricultural production.
4. The invention further extends the discrete spectral intensities into a continuous spectral-intensity curve using piecewise Hermite interpolation, obtaining spectral images at extended hyperspectral frequency points. This overcomes the limitation that existing hyperspectral cameras can image only at preset frequency points and is suitable for detecting new diseases at extended frequency points.
In conclusion, the method greatly reduces the cost of crop disease detection through image technology, overcomes the limitation that a hyperspectral camera cannot acquire a spectral image at a specific frequency, and can be used to detect new diseases at extended frequency points. The technical method of the invention can therefore be widely applied to disease detection in smart agriculture.
Detailed Description
The invention first acquires images in the same environment with an ordinary camera and a hyperspectral camera, and uses marker points and the Molodensky seven-parameter model to convert the two types of images into a common coordinate system. Data cleaning then makes the two types of images consistent in size with their pixel points in one-to-one correspondence. Next, a matrix equation is established from the data acquired by the two types of images, and the conversion matrix from the ordinary-camera RGB image to the hyperspectral image is computed by the least-squares method; with this conversion matrix the RGB image can be converted into a hyperspectral image. Finally, a piecewise Hermite interpolation algorithm performs polynomial interpolation to extend and fit the spectral curve, yielding spectral images at extended hyperspectral frequency points.
The present invention will be described in further detail below.
A method for reconstructing a hyperspectral image from a crop RGB image comprises the following steps:
Crop image data acquisition:
Under the same environment, an ordinary camera and a hyperspectral camera each collect images. Four markers are deployed around the crops being imaged for subsequent image positioning and camera coordinate-system conversion; the acquired images are the RGB image of the ordinary camera and a set of hyperspectral images on different frequency bands.
Image pixel coordinate system conversion:
Because the ordinary camera and the hyperspectral camera differ in camera parameters such as image resolution, aspect ratio and focal length, the two images differ in displacement, rotation and scale. The invention therefore uses the Molodensky seven-parameter model to compute the coordinate-system conversion parameters and convert the two types of images into a common coordinate system, which facilitates the subsequent data cleaning and the extraction and conversion of data at the same positions in the two images.
Data cleaning:
Because the ordinary camera and the hyperspectral camera have different image resolutions, data cleaning is used to make the two types of images consistent in size with their pixel points in one-to-one correspondence, facilitating the subsequent extraction and conversion of data at the same positions in the two images.
Conversion of RGB image to hyperspectral image:
An RGB acquisition matrix for the ordinary camera and an intensity matrix for the hyperspectral camera are established, and the least-squares method is used to compute the conversion matrix that minimizes the mean square error, realizing the conversion from the RGB image to a hyperspectral image.
Hyperspectral curve expansion fitting:
For a frequency point f that the hyperspectral camera does not provide, the spectral intensity samples at the existing frequency points are used for polynomial interpolation with a two-point cubic piecewise Hermite interpolation algorithm, extending and fitting the hyperspectral curve and yielding the spectral image at the extended frequency point f of the hyperspectral camera.
The crop image data acquisition part:
Under the same environment, the ordinary camera and the hyperspectral camera each acquire images. Around the crops being imaged, four markers (marker points Q1, Q2, Q3 and Q4) are deployed for subsequent image positioning and camera coordinate-system conversion.
The acquired images are the RGB image of the ordinary camera and a set of hyperspectral images on different frequency bands. The ordinary-camera image is denoted A and the hyperspectral image B; with n hyperspectral frequency bands, the images acquired on the n bands are B_i (i = 1, 2, ..., n).
The image pixel coordinate system conversion section:
On the image A acquired by the ordinary camera, the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_A = (x_{A,i}, y_{A,i}), where i = 1, 2, 3, 4 is the marker index. Because the images B_i (i = 1, 2, ..., n) acquired on the n bands of the hyperspectral camera all come from the same acquisition, classified by frequency band, their pixel positions are identical; therefore one of the images B_i is selected arbitrarily and the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_B = (x_{B,i}, y_{B,i}), where i = 1, 2, 3, 4 is the marker index. Because the two cameras differ in resolution, aspect ratio, focal length and other camera parameters, the two images differ in displacement, rotation and scale, and the invention uses the Molodensky seven-parameter model to convert between the coordinate systems of the two images.
The Molodensky seven-parameter coordinate conversion formula is X_B = X_0 + (1 + α)R(X_A - X_0) + dX, where X_0 is the transition-point coordinate; in the actual calculation it is taken as the geometric centroid of the points Q1, Q2, Q3 and Q4 and is therefore a known value in the formula. X_A and X_B are the coordinate vectors of the points Q1, Q2, Q3 and Q4 in the two image coordinate systems, dX is the translation of the three coordinates, α is the scale parameter, and R is the rotation matrix generated by the rotation angles about the three coordinate axes. The translation dX, the scale α and the three rotation angles ω_x, ω_y, ω_z of the rotation matrix R are the seven parameters to be solved in the Molodensky model, and the vector of unknowns can be written Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}. The rotation matrix can be expressed as the product of three sub-matrices, one elementary rotation about each coordinate axis, as written out below.
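The figure giving the three sub-matrices is not reproduced in this text. A standard decomposition, assuming the elementary rotations are composed in the order R = R_z(ω_z) R_y(ω_y) R_x(ω_x) (the exact order used in the patent's figure is not recoverable from the text), is:

```latex
R = R_z(\omega_z)\,R_y(\omega_y)\,R_x(\omega_x),\quad
R_x(\omega_x)=\begin{pmatrix}1&0&0\\ 0&\cos\omega_x&-\sin\omega_x\\ 0&\sin\omega_x&\cos\omega_x\end{pmatrix},\quad
R_y(\omega_y)=\begin{pmatrix}\cos\omega_y&0&\sin\omega_y\\ 0&1&0\\ -\sin\omega_y&0&\cos\omega_y\end{pmatrix},\quad
R_z(\omega_z)=\begin{pmatrix}\cos\omega_z&-\sin\omega_z&0\\ \sin\omega_z&\cos\omega_z&0\\ 0&0&1\end{pmatrix}
```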
It can be seen that the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX is nonlinear. It is usually solved by assuming that the rotation angles ω_x, ω_y, ω_z are small, linearizing the equation and then applying the least-squares method. However, because the rotation between the two camera shots may be large, the linearized method cannot be used here. The technical solution therefore adopts the Gauss-Newton method and solves the nonlinear equation iteratively, finally obtaining the seven-parameter vector Y = {dx, dy, dz, ω_x, ω_y, ω_z, α} and achieving high-precision coordinate conversion. The specific calculation steps are as follows:
1. give an initial value Y_0;
2. from the residual form of the Molodensky equation, f(Y) = X_B - [X_0 + (1 + α)R(X_A - X_0) + dX], obtain the Jacobian matrix expression J by differentiating f with respect to the parameters, and substitute Y_0 to compute J(Y_0);
3. compute H(Y_0) = J^T(Y_0)·J(Y_0) and B(Y_0) = -J^T(Y_0)·f(Y_0);
4. solve the equation H·ΔY = B for ΔY;
5. if ΔY is smaller than the set threshold, stop the iteration; the current value of Y gives the seven parameter values to be solved. Otherwise set Y = Y_0 + ΔY and repeat steps 2, 3 and 4 iteratively.
The Gauss-Newton iteration above yields the seven coordinate-conversion parameters Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}. In the actual coordinate conversion calculation, high-precision conversion of coordinates between the two images is then performed with the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX; a sketch of the solver is given below.
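The patent gives no implementation; the following is a minimal numerical sketch of the Gauss-Newton procedure, assuming the 2-D marker pixel coordinates are padded with z = 0 so the three-dimensional seven-parameter model applies, a central-difference Jacobian in place of an analytic one, the transition point X_0 taken as the centroid of the A-image markers, and the rotation order R = Rz Ry Rx. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def rotation_matrix(wx, wy, wz):
    """Rotation matrix as the product of three axis rotations (assumed order R = Rz @ Ry @ Rx)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(wx), -np.sin(wx)],
                   [0, np.sin(wx),  np.cos(wx)]])
    Ry = np.array([[ np.cos(wy), 0, np.sin(wy)],
                   [ 0,          1, 0         ],
                   [-np.sin(wy), 0, np.cos(wy)]])
    Rz = np.array([[np.cos(wz), -np.sin(wz), 0],
                   [np.sin(wz),  np.cos(wz), 0],
                   [0,           0,          1]])
    return Rz @ Ry @ Rx

def residuals(Y, XA, XB, X0):
    """f(Y) = X_B - [X_0 + (1 + alpha) R (X_A - X_0) + dX], flattened over all marker points."""
    dX, angles, alpha = Y[:3], Y[3:6], Y[6]
    R = rotation_matrix(*angles)
    pred = X0 + (1.0 + alpha) * (XA - X0) @ R.T + dX
    return (XB - pred).ravel()

def solve_seven_parameters(XA, XB, tol=1e-10, max_iter=100):
    """Gauss-Newton estimate of Y = {dx, dy, dz, wx, wy, wz, alpha} from matched markers.

    XA, XB : (4, 3) coordinates of Q1..Q4 in the two image systems (pixel x, y padded with z = 0).
    """
    X0 = XA.mean(axis=0)                 # transition point: geometric centroid of Q1..Q4
    Y = np.zeros(7)                      # initial value Y_0
    eps = 1e-7
    for _ in range(max_iter):
        f = residuals(Y, XA, XB, X0)
        J = np.empty((f.size, 7))        # numerical Jacobian of f with respect to Y
        for k in range(7):
            d = np.zeros(7); d[k] = eps
            J[:, k] = (residuals(Y + d, XA, XB, X0)
                       - residuals(Y - d, XA, XB, X0)) / (2 * eps)
        H = J.T @ J                      # H(Y) = J^T J
        B = -J.T @ f                     # B(Y) = -J^T f
        dY = np.linalg.solve(H, B)       # solve H dY = B
        Y = Y + dY
        if np.linalg.norm(dY) < tol:     # stop once the update falls below the threshold
            break
    return Y
```

Passing the two 4 x 3 marker arrays to solve_seven_parameters returns the seven parameters, which can then be plugged into X_B = X_0 + (1 + α)R(X_A - X_0) + dX to map any pixel coordinate between the two images.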
The data cleaning part:
For images A and B, data cleaning makes the two types of images consistent in size with their pixels in one-to-one correspondence. The calculation steps are as follows:
1. Base image selection: because image A is a directly viewable RGB image, the position of the target crop is easy to determine, so image A is selected as the base image.
2. Image clipping: on image A, select the pixel-coordinate range to be analyzed, X_{A,min} and X_{A,max}, according to the extent of the target crop, forming a rectangular frame; the pixel coordinate conversion and pixel convergence below are performed inside this frame. Substitute X_{A,min} and X_{A,max} into the coordinate conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX to obtain X_{B,min} and X_{B,max}, which form the corresponding rectangular frame on image B. Collect the pixel data inside the rectangular frames of images A and B to form the image data sets P_A = {R, G, B, X_A} and P_B = {I, X_B}, where R, G, B are the three channel values of a pixel, I is the spectral intensity value, and X_A, X_B are the coordinate vectors in the respective coordinate systems.
3. Pixel coordinate conversion: for P_B = {I, X_B}, first rearrange the conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX into X_A = X_0 + R^{-1}(X_B - dX - X_0)/(1 + α), and use it to convert the coordinates X_B of the set P_B into coordinates in the image-A coordinate system, forming the coordinate-converted B-image sampling data set P_{B->A} = {I, X_{B->A}}.
4. Image pixel convergence: round the coordinates X_{B->A} of P_{B->A} = {I, X_{B->A}} to integers and, for samples that share the same integer coordinates, take their average as the data value of that integer pixel.
Through the above steps, the image data acquired by A and B are placed in one-to-one correspondence by pixel coordinate, and the two types of images correspond to the same physical position at the same pixel coordinate point. After this conversion is applied to the pixel samples of the B image set, the coordinate-converted image set D_i (i = 1, 2, ..., n) is formed.
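As an illustration of step 4 (pixel convergence), a minimal sketch assuming the mapped B-image samples for one frequency band arrive as an N x 2 array of floating-point A-frame coordinates with one intensity value per sample; the names below are illustrative, not from the patent.

```python
import numpy as np

def converge_pixels(coords_BA, intensities):
    """Bin B-image samples onto integer A-frame pixel coordinates.

    coords_BA   : (N, 2) float coordinates X_{B->A} of the samples in the A coordinate system
    intensities : (N,) spectral intensity samples I for one band
    Returns the integer pixel coordinates and the mean intensity of the samples on each pixel.
    """
    pix = np.rint(coords_BA).astype(int)                       # round to the nearest integer pixel
    keys, inverse = np.unique(pix, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sums = np.bincount(inverse, weights=intensities)           # sum of samples per integer pixel
    counts = np.bincount(inverse)                              # number of samples per integer pixel
    return keys, sums / counts                                 # average value per integer pixel
```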
The conversion of the RGB image to a hyperspectral image:
For the coordinate-converted and data-cleaned images C (the cleaned RGB image) and D_i (i = 1, 2, ..., n), the pixel positions and coordinates are in one-to-one correspondence, which identifies the samples taken at the same position point. For image C, all pixels are traversed in a fixed order and the RGB acquisition matrix S of size m × 3 is built, in which each row holds the sampled R, G, B values of the three channels of one pixel and m is the total number of pixels. For the image set D_i (i = 1, 2, ..., n), the same pixel traversal as for image C is used and the intensity value on each image is extracted to form the hyperspectral intensity sample matrix I of size m × n, whose entry I_{i,j} is the intensity sample of pixel i in frequency band j, m being the total number of pixels and n the number of hyperspectral frequency bands. The conversion equation between the S and I matrices is then established as I = S·T, where T is the 3 × n conversion matrix to be solved. Transforming this equation gives (S^T S)^{-1} S^T I I^T = T I^T, where the superscript T denotes the matrix transpose and the superscript -1 the matrix inverse. Setting R = (S^T S)^{-1} S^T I I^T and P = I^T, both of which can be computed from the sample matrices S and I, converts this into the standard matrix equation R = T·P; applying the least-squares method to R = T·P yields the conversion matrix under the minimum-mean-square-error condition, T = R P^T (P P^T)^{-1}.
With the computed conversion matrix T, the RGB channel values of the pixels of any RGB image can be used with I = S·T to compute the intensity values on the different frequency bands, realizing the fitted mapping from the RGB image to the hyperspectral band intensities.
Each pixel of the RGB image is traversed and I = S·T is computed to obtain the intensity values of the n hyperspectral bands at that pixel; the pixels are then arranged by hyperspectral band and a spectral image is reconstructed for each band, producing the reconstructed spectral images of the n bands.
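A minimal sketch of this conversion-matrix fit and its application; for brevity it solves I = S·T directly in the least-squares sense, which coincides with the solution obtained through the R = T·P reformulation above whenever I^T I is invertible. Array names are illustrative.

```python
import numpy as np

def fit_conversion_matrix(S, I):
    """Least-squares fit of the 3 x n conversion matrix T in I = S @ T.

    S : (m, 3) RGB samples, one row per pixel of the cleaned image C
    I : (m, n) hyperspectral intensity samples, one column per band of D_1..D_n
    """
    T, *_ = np.linalg.lstsq(S, I, rcond=None)   # minimizes ||S @ T - I||^2 (minimum mean square error)
    return T

def rgb_to_band_intensities(rgb_image, T):
    """Apply I = S @ T to every pixel of an (H, W, 3) RGB image, giving (H, W, n) band intensities."""
    h, w, _ = rgb_image.shape
    S = rgb_image.reshape(-1, 3).astype(float)
    return (S @ T).reshape(h, w, -1)
```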
The hyperspectral curve extension fitting method comprises the following steps:
In the method above, the conversion matrix T is computed from the n bands of data generated by the hyperspectral camera and the RGB image shot in the same environment, so that hyperspectral images of the n frequency bands are reconstructed from the RGB image. That method, however, cannot yet provide a spectral image for a frequency point f that is not preset in the hyperspectral camera. The invention therefore further uses polynomial fitting to generate the spectral image at a non-preset frequency point f of the hyperspectral camera.
For the RGB channel sample (R, G, B) of a pixel on the RGB image, the spectral intensities (I_1, I_2, ..., I_n) on the n frequency bands are computed with I = S·T; the center frequencies of the n bands are (f_1, f_2, ..., f_n), so a discrete I-f curve I(f) can be constructed. A smooth I(f) curve is then generated by interpolation with the piecewise curve-fitting method of Hermite interpolation; substituting the required frequency value f into this curve gives the spectral intensity at that frequency, realizing the fitted calculation of the spectral intensity at frequency f for the RGB pixel. The calculation steps are as follows:
For the piecewise fitting of the spectral intensity I, two-point cubic Hermite piecewise interpolation is performed on each pair of adjacent data points I_i, I_{i+1} of (I_1, I_2, ..., I_n); the interpolation function is the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3, where f is the frequency variable. In the Hermite piecewise interpolation, the function H_i(f) satisfies:
1. I_i and I_{i+1} lie on the function H_i(f).
2. To keep the fitted function smooth, at the endpoint I_i the derivative of H_i(f) equals the derivative of H_{i-1}(f), and at the endpoint I_{i+1} the derivative of H_i(f) equals the derivative of H_{i+1}(f).
These two conditions provide four known constraints; substituting them into the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3 and solving segment by segment determines the coefficients a_i (i = 0, 1, 2, 3), fitting a cubic polynomial curve H_i(f) (i = 1, 2, ..., n-1) between each pair of points (I_1, I_2), ..., (I_{n-1}, I_n).
For a frequency f requiring a fitted value, locate k with f_k < f ≤ f_{k+1} in the frequency set (f_1, f_2, ..., f_n) and compute the intensity at f with the piecewise-fitted function H_k(f), completing the fitted calculation of the intensity.
Applying this procedure to every pixel of the RGB image, the fitted spectral intensity at frequency f is computed for each pixel, the RGB image is converted into the spectral image at frequency f, and the spectral image at frequency f is reconstructed.
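A sketch of this per-pixel spectral extension; the text fixes only that each cubic segment passes through its two samples and that the derivatives match across segments, so estimating the knot derivatives by finite differences is an added assumption, and the names are illustrative.

```python
import numpy as np

def hermite_intensity(f, freqs, intens):
    """Piecewise-cubic Hermite evaluation of the band intensities at frequency f.

    freqs  : (n,) band center frequencies f_1 < ... < f_n
    intens : (n,) intensity samples I_1 ... I_n of one pixel
    """
    freqs = np.asarray(freqs, dtype=float)
    intens = np.asarray(intens, dtype=float)
    d = np.gradient(intens, freqs)              # knot derivatives estimated by finite differences
    k = np.clip(np.searchsorted(freqs, f) - 1, 0, len(freqs) - 2)   # segment with f_k < f <= f_{k+1}
    h = freqs[k + 1] - freqs[k]
    t = (f - freqs[k]) / h                      # normalized position inside segment k
    h00 = 2 * t**3 - 3 * t**2 + 1               # cubic Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * intens[k] + h10 * h * d[k]
            + h01 * intens[k + 1] + h11 * h * d[k + 1])
```

Applying hermite_intensity to the n band intensities of every pixel produced by the conversion matrix yields the reconstructed spectral image at any frequency f between f_1 and f_n.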

Claims (8)

1. A method for reconstructing a hyperspectral image from an RGB image of a crop, comprising the steps of:
a. crop image data acquisition:
Under the same environment, an ordinary camera and a hyperspectral camera are used to collect crop images respectively;
b. image pixel coordinate system conversion:
calculating coordinate-system conversion parameters with the Molodensky seven-parameter model, so that the coordinate systems of the two types of images are converted into one another;
c. data cleaning:
the two types of pictures are consistent in size and the pixel points are in one-to-one correspondence through data cleaning;
d. conversion of RGB image to hyperspectral image:
Establishing an RGB acquisition matrix for the ordinary camera and an intensity matrix for the hyperspectral camera, and using the least-squares method to compute the conversion matrix that minimizes the mean square error, so as to convert the RGB image into a hyperspectral image;
e. hyperspectral curve expansion fitting:
performing polynomial interpolation by adopting a Hermite piecewise interpolation algorithm to perform spectrum curve expansion fitting to obtain a spectrum image on a hyperspectral expansion frequency point;
The specific steps of step c are as follows:
1. basic image selection: select image A as the base image;
2. image clipping: on image A, select the pixel-coordinate range to be analyzed, X_{A,min} and X_{A,max}, according to the extent of the target crop, forming a rectangular frame; the pixel coordinate conversion and pixel convergence below are performed inside this frame; substitute X_{A,min} and X_{A,max} into the coordinate conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX to obtain X_{B,min} and X_{B,max}, which form the corresponding rectangular frame on image B; collect the pixel data inside the rectangular frames of images A and B to form the image data sets P_A = {R, G, B, X_A} and P_B = {I, X_B}, where R, G, B are the three channel values of a pixel, I is the spectral intensity value, and X_A, X_B are the coordinate vectors in the respective coordinate systems;
3. pixel coordinate conversion: for P_B = {I, X_B}, first rearrange the conversion formula X_B = X_0 + (1 + α)R(X_A - X_0) + dX into X_A = X_0 + R^{-1}(X_B - dX - X_0)/(1 + α), and use it to convert the coordinates X_B of the set P_B into coordinates in the image-A coordinate system, forming the coordinate-converted B-image sampling data set P_{B->A} = {I, X_{B->A}};
4. image pixel convergence: round the coordinates X_{B->A} of P_{B->A} = {I, X_{B->A}} to integers and, for samples that share the same integer coordinates, take their average as the data value of that integer pixel;
through the above steps, the image data acquired by A and B are placed in one-to-one correspondence by pixel coordinate, and the two types of images correspond to the same physical position at the same pixel coordinate point; after this conversion is applied to the pixel samples of the B image set, the coordinate-converted image set D_i (i = 1, 2, ..., n) is formed;
Step d specifically comprises the following steps:
1. for the coordinate-converted and data-cleaned images C and D_i (i = 1, 2, ..., n), the pixel positions and coordinates are in one-to-one correspondence, which identifies the samples taken at the same position point; for image C, traverse all pixels and build the RGB acquisition matrix S of size m × 3, in which each row holds the sampled R, G, B values of the three channels of one pixel and m is the total number of pixels; for the image set D_i (i = 1, 2, ..., n), use the same pixel traversal as for image C and extract the intensity value on each image to form the hyperspectral intensity sample matrix I of size m × n, whose entry I_{i,j} is the intensity sample of pixel i in frequency band j, with m the total number of pixels and n the number of hyperspectral frequency bands; then establish the conversion equation between the S and I matrices as I = S·T, where T is the 3 × n conversion matrix to be solved; transforming this equation gives (S^T S)^{-1} S^T I I^T = T I^T, where the superscript T denotes the matrix transpose and the superscript -1 the matrix inverse; setting R = (S^T S)^{-1} S^T I I^T and P = I^T, both of which can be computed from the sample matrices S and I, converts it into the standard matrix equation R = T·P, from which the conversion matrix T is solved;
2. with the computed conversion matrix T, the RGB channel values of the pixels of an RGB image and I = S·T give the intensity values on the different frequency bands, completing the fitted calculation from the RGB image to the hyperspectral band intensities;
3. traverse every pixel of the RGB image and compute I = S·T to obtain the intensity values of the n hyperspectral bands at each pixel; arrange the pixels by hyperspectral band and reconstruct a spectral image for each band, completing the reconstructed spectral images of the n bands.
2. A method of reconstructing a hyperspectral image from a crop RGB image as claimed in claim 1, wherein: in step a, four marker points Q1, Q2, Q3 and Q4 are deployed around the crops being imaged, for subsequent image positioning and camera coordinate-system conversion; the acquired images are the RGB image of the ordinary camera and a set of hyperspectral images on different frequency bands; the ordinary-camera image is denoted A and the hyperspectral image B, and with n hyperspectral frequency bands the images acquired on the n bands are B_i (i = 1, 2, ..., n).
3. A method of reconstructing a hyperspectral image from a crop RGB image as claimed in claim 2, wherein: in step b, on the image A acquired by the ordinary camera, the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_A = (x_{A,i}, y_{A,i}), where i = 1, 2, 3, 4 is the marker-point index; in the images B_i (i = 1, 2, ..., n) acquired by the hyperspectral camera, one image is selected arbitrarily and the pixel coordinates of the marker points Q1, Q2, Q3, Q4 are recorded as X_B = (x_{B,i}, y_{B,i}), where i = 1, 2, 3, 4 is the marker-point index.
4. A method of reconstructing a hyperspectral image from a crop RGB image as claimed in claim 3, wherein: the Molodensky seven-parameter coordinate conversion formula is X_B = X_0 + (1 + α)R(X_A - X_0) + dX, where X_0 is the transition-point coordinate, taken as the geometric centroid of the points Q1, Q2, Q3 and Q4 and therefore a known value in the equation; X_A, X_B are the coordinate vectors of the points Q1, Q2, Q3 and Q4 in the two image coordinate systems; dX is the translation of the three coordinates; α is the scale parameter of the scaling; and R is the rotation matrix generated by the rotation angles about the three coordinate axes; the translation dX, the scale α and the three rotation angles ω_x, ω_y, ω_z of the rotation matrix R are the seven parameters to be solved in the Molodensky model, and the vector of the seven unknowns can be written Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}; the rotation matrix can be expressed as the product of three sub-matrices, one elementary rotation about each coordinate axis;
it follows that the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX is nonlinear; it is solved iteratively with the Gauss-Newton method, finally yielding the seven-parameter vector Y = {dx, dy, dz, ω_x, ω_y, ω_z, α} and completing the coordinate conversion.
5. The method for reconstructing a hyperspectral image from a crop RGB image as recited in claim 4, wherein the specific calculation steps of the coordinate conversion are as follows:
1. give an initial value Y_0;
2. from the residual form of the Molodensky equation, f(Y) = X_B - [X_0 + (1 + α)R(X_A - X_0) + dX], obtain the Jacobian matrix expression J by differentiating f with respect to the parameters, and substitute Y_0 to compute J(Y_0);
3. compute H(Y_0) = J^T(Y_0)·J(Y_0) and B(Y_0) = -J^T(Y_0)·f(Y_0);
4. solve the equation H·ΔY = B for ΔY;
5. if ΔY is smaller than the set threshold, stop the iteration; the current value of Y gives the seven parameter values to be solved; otherwise set Y = Y_0 + ΔY and repeat steps 2, 3 and 4 iteratively;
having computed the seven coordinate-conversion parameters Y = {dx, dy, dz, ω_x, ω_y, ω_z, α}, high-precision conversion of coordinates between the two images is performed with the matrix equation X_B = X_0 + (1 + α)R(X_A - X_0) + dX in the actual coordinate conversion calculation.
6. The method for reconstructing a hyperspectral image from a crop RGB image as recited in claim 5, wherein: in step e, the conversion matrix T calculated from the data of the n frequency bands generated by the hyperspectral camera and the RGB image photographed in the same environment is used to reconstruct hyperspectral images of the n frequency bands from the RGB image.
7. The method for reconstructing a hyperspectral image from a crop RGB image as recited in claim 6, wherein: in step e, for the RGB channel sample (R, G, B) of a pixel on the RGB image, the spectral intensities (I_1, I_2, ..., I_n) on the n frequency bands are computed with I = S·T; the center frequencies of the n bands are (f_1, f_2, ..., f_n), and a discrete I-f curve I(f) is constructed; a smooth I(f) curve is then generated by interpolation with the piecewise curve-fitting method of Hermite interpolation, and substituting the required frequency value f into this curve gives the spectral intensity at that frequency, completing the fitted calculation of the spectral intensity at frequency f for the RGB pixel.
8. The method for reconstructing a hyperspectral image from a crop RGB image as recited in claim 7, wherein the fitted calculation of the spectral intensity at frequency f from an RGB pixel proceeds as follows:
for the piecewise fitting of the spectral intensity I, two-point cubic Hermite piecewise interpolation is performed on each pair of adjacent data points I_i, I_{i+1} of (I_1, I_2, ..., I_n); the interpolation function is the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3, where f is the frequency variable; in the Hermite piecewise interpolation, the function H_i(f) satisfies:
1. I_i and I_{i+1} lie on the function H_i(f);
2. to keep the fitted function smooth, at the endpoint I_i the derivative of H_i(f) equals the derivative of H_{i-1}(f), and at the endpoint I_{i+1} the derivative of H_i(f) equals the derivative of H_{i+1}(f);
these two conditions provide four known constraints; substituting them into the cubic H_i(f) = a_0 + a_1 f + a_2 f^2 + a_3 f^3 and solving segment by segment determines the coefficients a_i (i = 0, 1, 2, 3), fitting a cubic polynomial curve H_i(f) (i = 1, 2, ..., n-1) between each pair of points (I_1, I_2), ..., (I_{n-1}, I_n);
for a frequency f requiring a fitted value, locate k with f_k < f ≤ f_{k+1} in the frequency set (f_1, f_2, ..., f_n) and compute the intensity at f with the piecewise-fitted function H_k(f), completing the fitted calculation of the intensity;
applying this procedure to every pixel of the RGB image, the fitted spectral intensity at frequency f is computed for each pixel, the conversion from the RGB image to the spectral image at frequency f is completed, and the spectral image at frequency f is reconstructed.
CN202011494670.5A 2020-12-17 2020-12-17 Method for reconstructing hyperspectral image by crop RGB image Active CN112561883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011494670.5A CN112561883B (en) 2020-12-17 2020-12-17 Method for reconstructing hyperspectral image by crop RGB image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011494670.5A CN112561883B (en) 2020-12-17 2020-12-17 Method for reconstructing hyperspectral image by crop RGB image

Publications (2)

Publication Number Publication Date
CN112561883A CN112561883A (en) 2021-03-26
CN112561883B true CN112561883B (en) 2024-06-21

Family

ID=75064451

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011494670.5A Active CN112561883B (en) 2020-12-17 2020-12-17 Method for reconstructing hyperspectral image by crop RGB image

Country Status (1)

Country Link
CN (1) CN112561883B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663785B (en) * 2022-03-18 2024-06-07 华南农业大学 Unmanned aerial vehicle hyperspectral-based litchi disease detection method and system
CN115753691B (en) * 2022-08-23 2024-10-29 合肥工业大学 Water quality parameter detection method based on RGB reconstruction hyperspectrum
CN116188465B (en) * 2023-04-26 2023-07-04 济宁市保田农机技术推广专业合作社 Crop growth state detection method based on image processing technology
CN118505573B (en) * 2024-07-18 2024-10-01 奥谱天成(湖南)信息科技有限公司 Spectral data recovery method, device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133976A (en) * 2017-04-24 2017-09-05 浙江大学 A kind of method and apparatus for obtaining three-dimensional hyperspectral information

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2959092B1 (en) * 2010-04-20 2013-03-08 Centre Nat Rech Scient DIGITAL COMPENSATION PROCESSING OF SIGNALS FROM PHOTOSITES OF A COLOR SENSOR
CN103808286A (en) * 2012-11-08 2014-05-21 谢荣 Total station-based steel structure three dimensional precision detection analysis method and application thereof
US10013811B2 (en) * 2016-06-13 2018-07-03 Umm-Al-Qura University Hyperspectral image visualization in patients with medical conditions
WO2018223267A1 (en) * 2017-06-05 2018-12-13 Shanghaitech University Method and system for hyperspectral light field imaging
CN107783937B (en) * 2017-10-19 2018-08-14 西安科技大学 A method of solving arbitrary rotation angle three-dimensional coordinate conversion parameter in space geodetic surveying
CN108520495B (en) * 2018-03-15 2021-09-07 西北工业大学 Hyperspectral image super-resolution reconstruction method based on clustering manifold prior
CN108759665B (en) * 2018-05-25 2021-04-27 哈尔滨工业大学 Spatial target three-dimensional reconstruction precision analysis method based on coordinate transformation
CN109146787B (en) * 2018-08-15 2022-09-06 北京理工大学 Real-time reconstruction method of dual-camera spectral imaging system based on interpolation
CN109978931B (en) * 2019-04-04 2021-12-31 中科海微(北京)科技有限公司 Three-dimensional scene reconstruction method and device and storage medium
CN111386549B (en) * 2019-04-04 2023-10-13 合刃科技(深圳)有限公司 Method and system for reconstructing hybrid hyperspectral image

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133976A (en) * 2017-04-24 2017-09-05 浙江大学 A kind of method and apparatus for obtaining three-dimensional hyperspectral information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"不同高光谱成像方式的马铃薯内外部品质检测方法研究";库静;《中国优秀硕士学位论文全文数据库 信息科技辑》;20170215;全文 *

Also Published As

Publication number Publication date
CN112561883A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN112561883B (en) Method for reconstructing hyperspectral image by crop RGB image
CN102829739B (en) Object-oriented remote sensing inversion method of leaf area index of crop
CN112418188A (en) Crop growth whole-course digital assessment method based on unmanned aerial vehicle vision
CN108171715B (en) Image segmentation method and device
CN104574347A (en) On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data
CN112200854B (en) Leaf vegetable three-dimensional phenotype measuring method based on video image
CN106097252B (en) High spectrum image superpixel segmentation method based on figure Graph model
CN105069749A (en) Splicing method for tire mold images
CN107274441A (en) The wave band calibration method and system of a kind of high spectrum image
CN115861546A (en) Crop geometric perception and three-dimensional phenotype reconstruction method based on nerve body rendering
CN109934765B (en) High-speed camera panoramic image splicing method
Zhong et al. Identification and depth localization of clustered pod pepper based on improved Faster R-CNN
He et al. A calculation method of phenotypic traits of soybean pods based on image processing technology
CN109360269B (en) Ground three-dimensional plane reconstruction method based on computer vision
CN114972625A (en) Hyperspectral point cloud generation method based on RGB spectrum super-resolution technology
CN112836678B (en) Intelligent planning method for intelligent agricultural park
CN116740703B (en) Wheat phenotype parameter change rate estimation method and device based on point cloud information
CN116052141B (en) Crop growth period identification method, device, equipment and medium
Schneider et al. Towards predicting vine yield: Conceptualization of 3d grape models and derivation of reliable physical and morphological parameters
CN108050929B (en) Method and system for measuring spatial distribution of plant root system
Akila et al. Automation in plant growth monitoring using high-precision image classification and virtual height measurement techniques
CN115330747A (en) DPS-Net deep learning-based rice plant counting, positioning and size estimation method
CN110823311B (en) Method for rapidly estimating volume of rape pod
CN110264433B (en) Depth map interpolation method based on color segmentation guidance
Wang et al. Visual measurement method of crop height based on color feature in harvesting robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant