CN111062927A - Method, system and equipment for detecting image quality of unmanned aerial vehicle - Google Patents
Method, system and equipment for detecting image quality of unmanned aerial vehicle
- Publication number
- CN111062927A (application CN201911310486.8A)
- Authority
- CN
- China
- Prior art keywords
- images
- image
- quality
- unmanned aerial
- aerial vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method, a system and equipment for detecting the image quality of an unmanned aerial vehicle. The method comprises the following steps: acquiring the images shot by the unmanned aerial vehicle that need to be detected, and selecting two of them; calculating the feature points of the selected images with the Harris algorithm; extracting high-quality feature points from the feature points with Lowe's algorithm, and matching the high-quality feature points in pairs; and reselecting two images to be detected and repeating the above steps until the number of high-quality feature point matches has been obtained for all the images to be detected, then grading the quality of the images according to that number. Because the feature points of the images are extracted and matched with the Harris algorithm, the method runs fast and is robust; by quickly extracting the feature points of two photos of the same scene, matching the high-quality feature points on the images in pairs and grading the image quality by the number of high-quality matching points, it greatly improves working efficiency.
Description
Technical Field
The invention relates to the technical field of image quality detection, in particular to a method, a system and equipment for detecting the image quality of an unmanned aerial vehicle.
Background
After the unmanned aerial vehicle returns from an aerial photography operation in the field, the image data it has gathered is at present examined mainly by hand: each photo has to be opened manually, and whether the data is of acceptable quality is judged by whether the image is blurred, whether the object to be photographed appears in the photo, and whether the photo is over- or under-exposed; it is also necessary to check whether the images in successive photos are discontinuous. Checking by hand makes the detection time far too long and the efficiency far too low, and it greatly increases labor costs.
In short, the prior art requires the quality of unmanned aerial vehicle aerial photography images to be checked manually, which poses the technical problem of low efficiency.
Disclosure of Invention
The invention provides a method, a system and equipment for detecting the image quality of an unmanned aerial vehicle, which solve the technical problem in the prior art that the quality of unmanned aerial vehicle aerial photography images must be checked manually, with low efficiency.
The invention provides a method for detecting the image quality of an unmanned aerial vehicle, which comprises the following steps:
step S1: acquiring the images shot by the unmanned aerial vehicle that need to be detected, and selecting two of them;
step S2: calculating the feature points of the selected images with the Harris algorithm;
step S3: extracting high-quality feature points from the feature points with Lowe's algorithm, and matching the high-quality feature points in pairs;
step S4: reselecting two images to be detected and repeating steps S2-S3 until the number of high-quality feature point matches has been obtained for all the images to be detected, and grading the quality of the images according to that number.
Preferably, in step S1, the shooting times of the images to be detected are extracted, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps.
Preferably, in step S1, the position information of the images to be detected is extracted, the images are grouped in pairs by closest position, and one group of images is selected for the subsequent steps.
Preferably, in step S1, the two images are selected from the images to be detected as follows:
position information and shooting time information are extracted from the images to be detected; if no position information is found in the images to be detected, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps;
if the images contain position information, the image closest in position and shooting time is selected according to the position information of each image, the images are grouped in pairs, and one group of images is selected for the subsequent steps.
Preferably, the Harris matrix M in the Harris algorithm is as follows:

$$M=\sum_{(x,y)\in W} w(x,y)\begin{bmatrix} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \end{bmatrix}$$

where W(x, y) is a window centered on the point (x, y) in the image, also called the window function, and I_x, I_y are the partial derivatives of the image at (x, y).
A system for drone image quality detection, the system comprising: the system comprises an image acquisition module, a Harris algorithm calculation module, a Lowe's algorithm calculation module and an image quality grading module;
the image acquisition module is used for acquiring images shot by the unmanned aerial vehicle;
the Harris algorithm calculation module is used for randomly selecting two images from the shot images and calculating the characteristic points of the images by adopting a Harris algorithm on the selected images;
the Lowe's algorithm calculation module is used for extracting high-quality feature points from the feature points and matching the high-quality feature points;
and the image quality grading module is used for grading the quality of the image according to the matching quantity of the high-quality feature points.
Preferably, the system further comprises an image sorting module and an image grouping module;
the image sorting module is used for sorting images;
the image grouping module is used for grouping images.
Preferably, the system further comprises a storage module.
Preferably, the system further comprises a display module.
An apparatus for unmanned aerial vehicle image quality detection, the apparatus comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is used for executing the unmanned aerial vehicle image quality detection method according to the instructions in the program codes.
According to the technical scheme, the embodiment of the invention has the following advantages:
the method extracts and matches the characteristic points of the image by collecting the Harris algorithm, and has the characteristics of high running speed, simple principle, strong robustness and the like. The characteristic points of the two photos in the same scene are extracted quickly, the high-quality characteristic points on the images are matched in pairs, the quality of the images is graded according to the number of the high-quality matching points, the working efficiency is greatly improved, and the technical problems that in the prior art, the quality of the images of the aerial photography operation of the unmanned aerial vehicle needs to be checked manually, and the efficiency is low are solved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of a method, a system, and an apparatus for detecting image quality of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 2 is a system structure diagram of a method, a system, and an apparatus for detecting image quality of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 3 is an apparatus framework diagram of a method, a system, and an apparatus for detecting image quality of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 4 is a diagram of a Harris corner detection process performed by the method, system, and apparatus for detecting image quality of an unmanned aerial vehicle according to the embodiment of the present invention.
Fig. 5 is a diagram of a Harris corner detection process performed by the method, system, and apparatus for detecting image quality of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 6 is a result derivation diagram of the method, system, and apparatus for detecting image quality of an unmanned aerial vehicle according to the embodiments of the present invention.
Detailed Description
The embodiment of the invention provides a method, a system and equipment for detecting the image quality of an unmanned aerial vehicle, which are used to solve the technical problem in the prior art that the quality of unmanned aerial vehicle aerial photography images must be verified manually, with low efficiency.
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of the method for detecting the image quality of an unmanned aerial vehicle according to an embodiment of the present invention.
The invention provides a method for detecting the image quality of an unmanned aerial vehicle, which comprises the following steps:
step S1: acquiring the images shot by the unmanned aerial vehicle that need to be detected, and selecting two of them;
all the images to be detected are selected from the images taken during the drone's field aerial photography; because the original full-resolution photos greatly slow down the feature point computation, the images are scaled down by a certain proportion, provided this does not affect the feature point matching precision, and two images are then selected from them;
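As a minimal sketch of this scaling step (assuming OpenCV, which the patent does not name; the scale factor 0.25 is an illustrative value, not one from the text):

```python
import cv2

def load_scaled(path, scale=0.25):
    # Load one aerial photo in grayscale and shrink it so that the
    # feature point computation runs faster; the patent only says the
    # images are scaled "by a certain proportion", so scale=0.25 is
    # an assumption.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(path)
    return cv2.resize(img, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_AREA)
```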
step S2: calculating the characteristic points of the selected image by using a Harris algorithm;
the feature point-based image registration method is one of the most common methods in image registration. It does not directly use the image pixel value, twenty realizes the image registration by the symbol feature (such as feature point, feature line, feature area) derived by the pixel value, therefore, it can overcome the disadvantage of using the gray information to register the image, mainly embodied in the following three aspects: (1) the characteristic points are used instead of the image gray information, so that the calculated amount in the matching process is greatly reduced; (2) the matching metric value of the feature point is sensitive to the change of the relative position, so that the matching precision can be improved; (3) the extraction process of the feature points can reduce the influence of noise and has better adaptability to gray scale change, image deformation, shielding and the like.
Corner points are an important class of feature points, so the feature points of the image can be obtained by computing the corner points in the image. A corner point is defined as:
(1) a point at which the gray level changes markedly when a local window is moved in any direction;
(2) a point at which the local curvature of the image changes abruptly.
A Harris corner shows a marked change when the window moves in any direction. The human eye usually recognizes a corner within a small local area or window. If the gray level of the area inside a specific window changes greatly when the window is moved in every direction, there is a corner point inside the window; if the gray level of the image inside the window does not change no matter in which direction the window is moved, there is no corner point inside the window; and if the image inside the window changes greatly when the window moves in one direction but not in the other, the image inside the window may be a straight line segment.
For a point (x, y) on the image I, the self-similarity after a translation (Δx, Δy) is given by the autocorrelation function:

$$c(x,y;\Delta x,\Delta y)=\sum_{(u,v)\in W(x,y)} w(u,v)\,\big(I(u,v)-I(u+\Delta x,v+\Delta y)\big)^{2}$$

where W(x, y) is a window centered on the point (x, y), also called the window function; (u, v) runs over the pixels of the window; and w(u, v) is a weighting function, which may be a constant or a Gaussian. In other words, the window function (weight matrix) may be flat or Gaussian (the weight matrix W is typically a Gaussian filter G_σ).
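For a single pixel and shift, this sum can be evaluated literally; a minimal numpy sketch (the window half-width, σ and the [row, column] indexing are assumptions, and the Harris detector never evaluates the sum this way in practice):

```python
import numpy as np

def autocorrelation(I, x, y, dx, dy, half=2, sigma=1.0):
    # Direct evaluation of c(x, y; dx, dy) over a (2*half+1)^2 window
    # with a Gaussian weight w(u, v).  Assumes the window and the
    # shifted window both lie inside the image I, indexed as I[row, col].
    c = 0.0
    for u in range(x - half, x + half + 1):
        for v in range(y - half, y + half + 1):
            w = np.exp(-((u - x) ** 2 + (v - y) ** 2) / (2 * sigma ** 2))
            c += w * (float(I[v, u]) - float(I[v + dy, u + dx])) ** 2
    return c
```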
A first-order Taylor expansion of the image I(x, y) after the translation (Δx, Δy) gives:

$$I(u+\Delta x,v+\Delta y)=I(u,v)+I_x(u,v)\Delta x+I_y(u,v)\Delta y+O(\Delta x^{2},\Delta y^{2})\approx I(u,v)+I_x(u,v)\Delta x+I_y(u,v)\Delta y$$

where O(Δx², Δy²) denotes the higher-order remainder and I_x, I_y are the partial derivatives of the image I(x, y). The autocorrelation function can then be simplified to:

$$c(x,y;\Delta x,\Delta y)\approx\begin{bmatrix}\Delta x & \Delta y\end{bmatrix}M(x,y)\begin{bmatrix}\Delta x \\ \Delta y\end{bmatrix}$$

That is, the autocorrelation function of the image I(x, y) after the translation (Δx, Δy) at the point (x, y) can be approximated by the binomial function:

$$c(x,y;\Delta x,\Delta y)\approx A\Delta x^{2}+2C\Delta x\Delta y+B\Delta y^{2}$$

where:

$$A=\sum_{W}w\,I_x^{2},\qquad B=\sum_{W}w\,I_y^{2},\qquad C=\sum_{W}w\,I_xI_y$$

Finally, the expression for M is as follows:

$$M(x,y)=\sum_{W}w(x,y)\begin{bmatrix} I_x^{2} & I_xI_y \\ I_xI_y & I_y^{2} \end{bmatrix}$$
the matrix M is also called Harris matrix. The width of W determines the region of interest around pixel x. The reason why the matrix M is averaged in the vicinity of the region in this manner is that the feature value changes depending on the local image characteristics.
Therefore, the corner points in the image are detected by the matrix M;
the adaptive function of the image after the point I (x, y) is translated (Δ x, Δ y) at the point (x, y) is substantially a binomial function, which is converted into an elliptic function. It can be obtained that the ellipticity and size of the ellipse are defined by the eigenvalues λ of M (x, y)1、λ2The direction of the ellipse is determined by the feature vector of M (x, y), and the ellipse equation is:
the relationship between the feature value of the elliptic function and the corner, line (edge) and plane in the image is shown in fig. 4, and can be divided into three cases:
a. A straight line (edge) in the image. In this case one eigenvalue is much larger than the other, i.e. λ1 >> λ2 or λ2 >> λ1, indicating that the autocorrelation function value is large in one direction and small in the other.
b. A flat area in the image. In this case both eigenvalues are small and approximately equal, indicating that the autocorrelation function values are small in every direction.
c. A corner point in the image. In this case both eigenvalues are large and approximately equal, indicating that the autocorrelation function increases in every direction.
Image points can therefore be classified according to the two eigenvalues λ1 and λ2 of M: if λ1 and λ2 are both small, the gray level changes little no matter how the image window moves, and there is no corner point there.
A corner response function R is defined:

$$R=\det M-k(\operatorname{trace}M)^{2},\qquad \operatorname{trace}M=\lambda_1+\lambda_2,\qquad \det M=\lambda_1\lambda_2$$

where k is an empirical constant, generally taken as 0.04-0.06. To remove the weighting constant k, the quotient det M/(trace M)² is sometimes used as the indicator instead. Fig. 4 can thus be transformed as shown in fig. 5, where:
R depends only on the eigenvalues of M;
corner point: R is a large positive number;
edge: R is a large negative number;
flat area: |R| is a small number.
Therefore, when a point on the image is evaluated and R turns out to be a large positive number, the point can be judged to be a corner point, that is, a feature point of the image.
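A sketch of the response computation, reusing the structure_tensor sketch above (k = 0.05 sits in the 0.04-0.06 range the text cites; the relative threshold 0.01·max(R) is an assumption, not a value from the patent):

```python
import numpy as np

def harris_response(A, B, C, k=0.05):
    # R = det(M) - k * trace(M)^2 per pixel, with
    # det(M) = A*B - C^2 and trace(M) = A + B.
    return (A * B - C * C) - k * (A + B) ** 2

# Usage:
# A, B, C = structure_tensor(img)
# R = harris_response(A, B, C)
# corners = np.argwhere(R > 0.01 * R.max())  # large positive R -> corner
```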
Step S3: extracting high-quality feature points from the feature points by adopting a Lowe's algorithm, and matching every two high-quality feature points;
the Lowe's algorithm obtains excellent matching points by further screening the matching points, and achieves the effect of ' coarse and fine removal '.
In order to eliminate feature points without matching relationship due to image occlusion and background confusion and extract high-quality feature points, Lowe proposes a feature point matching mode for comparing a nearest neighbor distance with a next nearest neighbor distance: taking a feature point in one image, finding out the first two feature points with the nearest Euclidean distance in the other image, and if the ratio obtained by dividing the nearest distance by the next nearest distance is less than a certain threshold value T, accepting the matching of the pair of feature points to form a matching point. For a false match, due to the high dimensionality of the feature space, a similar distance may have a large number of other false matches, and thus its ratio value is high. Obviously by lowering this ratio threshold T, the number of matching points will be reduced, but more stable, and vice versa.
The threshold of Lowe recommends that the ratio is 0.8, but in this embodiment, a large number of two pictures with arbitrary scale, rotation and brightness variation are matched, and the result shows that the ratio is best between 0.4 and 0.6, few matching points less than 0.4 exist, and a large number of error matching points exist if the ratio is greater than 0.6, so the ratio value principle is as follows:
ratio = 0.4: for matching with a high accuracy requirement;
ratio = 0.6: when a larger number of matching points is required;
ratio = 0.5: for the usual case.
Therefore, Lowe's algorithm is applied with a ratio of 0.4-0.6 to extract the high-quality feature points from the feature points of the images and complete the matching of the high-quality feature points.
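A sketch of the ratio test (SIFT keypoints and descriptors are used here purely for illustration, since the patent detects Harris feature points and does not spell out a descriptor pipeline; ratio = 0.5 is the "usual case" above):

```python
import cv2

def quality_matches(img1, img2, ratio=0.5):
    # Lowe's ratio test: keep a match only when the nearest descriptor
    # distance is below `ratio` times the second-nearest distance.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    return [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
```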
Step S4: the quality of the image is graded according to the matching quantity of the high-quality feature points, and if the data of the matching points is more, the quality of the image is better;
based on the above results, a word document is generated and exported in the form of a report, and the result is shown in fig. 6.
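A toy grading rule in the same spirit (the patent grades by match count but publishes no concrete grade boundaries, so the thresholds `good` and `fair` are hypothetical):

```python
def grade_images(match_counts, good=200, fair=50):
    # match_counts maps an image pair to its number of high-quality
    # matches; more matches -> better grade.
    return {pair: ("good" if n >= good else "fair" if n >= fair else "poor")
            for pair, n in match_counts.items()}
```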
As a preferred embodiment, in step S1, the shooting times of the images to be detected are extracted, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps.
As a preferred embodiment, in step S1, the position information of the images to be detected is extracted, the images are grouped in pairs by closest position, and one group of images is selected for the subsequent steps.
As a preferred embodiment, in step S1, the two images are selected from the images to be detected as follows:
position information and shooting time information are extracted from the images to be detected; if no position information is found in the images to be detected, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps;
if the images contain position information, the image closest in position and shooting time is selected according to the position information of each image, the images are grouped in pairs, and one group of images is selected for the subsequent steps.
As shown in fig. 2, a system for unmanned aerial vehicle image quality detection, the system comprising: the system comprises an image acquisition module 1, a Harris algorithm calculation module 4, a Lowe's algorithm calculation module 5 and an image quality grading module 6;
the image acquisition module 1 is used for acquiring images shot by the unmanned aerial vehicle;
the Harris algorithm calculating module 4 is used for randomly selecting two images from the shot images and calculating the characteristic points of the images by adopting a Harris algorithm on the selected images;
the Lowe's algorithm calculation module 5 is used for extracting high-quality feature points from the feature points and matching the high-quality feature points;
the image quality grading module 6 is used for grading the quality of the image according to the matching number of the high-quality feature points.
As a preferred embodiment, the system further comprises an image sorting module 2 and an image grouping module 3;
the image sorting module 2 is used for sorting the images in order of shooting time;
the image grouping module 3 is used for grouping the images, in one of the following two ways (see the sketch after this list):
if the images carry no drone telemetry data, the images are grouped in pairs in order of shooting time;
if the images contain drone telemetry data, the image whose shooting time is closest in the telemetry is calculated according to the GPS information of each image, and the two images are grouped together.
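A sketch of this pairing logic (the input format, a list of dicts with keys 'path', 'time' and optionally 'gps', is hypothetical; the patent does not define a data structure):

```python
def group_images(records):
    # Pair every photo with its nearest neighbor: by GPS position when
    # telemetry is present, by capture time otherwise, mirroring the two
    # grouping branches above.  'time' is a datetime, 'gps' a (lat, lon)
    # tuple; the lat/lon distance is a flat-earth approximation.
    def closest(rec, others):
        if rec.get("gps"):
            with_gps = [o for o in others if o.get("gps")]
            if with_gps:
                return min(with_gps,
                           key=lambda o: (o["gps"][0] - rec["gps"][0]) ** 2
                                         + (o["gps"][1] - rec["gps"][1]) ** 2)
        return min(others,
                   key=lambda o: abs((o["time"] - rec["time"]).total_seconds()))

    return [(rec, closest(rec, [o for o in records if o is not rec]))
            for rec in records]
```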
As a preferred embodiment, the system further comprises a storage module 7, wherein the storage module 7 is used for storing the image subjected to quality grading;
as a preferred embodiment, the system further comprises a display module 8 for presenting the results of the image quality grading.
As shown in fig. 3, an apparatus 30 for drone image quality detection includes a processor 300 and a memory 301;
the memory 301 is used for storing a program code 302 and transmitting the program code 302 to the processor;
the processor 300 is configured to execute the steps of the method for drone image quality detection described above according to the instructions in the program code 302.
Illustratively, the computer program 302 may be partitioned into one or more modules/units that are stored in the memory 301 and executed by the processor 300 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 302 in the terminal device 30.
The terminal device 30 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 300, a memory 301. Those skilled in the art will appreciate that fig. 3 is merely an example of a terminal device 30 and does not constitute a limitation of terminal device 30 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the terminal device may also include input-output devices, network access devices, buses, etc.
The processor 300 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 301 may be an internal storage unit of the terminal device 30, such as a hard disk or memory of the terminal device 30. The memory 301 may also be an external storage device of the terminal device 30, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 30. Further, the memory 301 may include both an internal storage unit and an external storage device of the terminal device 30. The memory 301 is used for storing the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been or is to be output.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for detecting the image quality of an unmanned aerial vehicle, characterized by comprising the following steps:
step S1: acquiring the images shot by the unmanned aerial vehicle that need to be detected, and selecting two of them;
step S2: calculating the feature points of the selected images with the Harris algorithm;
step S3: extracting high-quality feature points from the feature points with Lowe's algorithm, and matching the high-quality feature points in pairs;
step S4: reselecting two images to be detected and repeating steps S2-S3 until the number of high-quality feature point matches has been obtained for all the images to be detected, and grading the quality of the images according to that number.
2. The method for unmanned aerial vehicle image quality detection according to claim 1, characterized in that in step S1 the shooting time information of the images to be detected is extracted, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps.
3. The method for unmanned aerial vehicle image quality detection according to claim 1, characterized in that in step S1 the position information of the images to be detected is extracted, the images are grouped in pairs by closest position, and one group of images is selected for the subsequent steps.
4. The method for unmanned aerial vehicle image quality detection according to claim 3, characterized in that in step S1 the two images are selected from the images to be detected as follows:
position information and shooting time information are extracted from the images to be detected; if no position information is found in the images to be detected, the images are grouped in pairs by closest shooting time, and one group of images is selected for the subsequent steps;
if the images contain position information, the image closest in position and shooting time is selected according to the position information of each image, the images are grouped in pairs, and one group of images is selected for the subsequent steps.
5. The method for unmanned aerial vehicle image quality detection according to claim 4, characterized in that the Harris matrix M in the Harris algorithm is as follows:

$$M=\sum_{(x,y)\in W} w(x,y)\begin{bmatrix} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \end{bmatrix}$$

where W(x, y) is a window centered on the point (x, y) in the image, also called the window function, and I_x, I_y are the partial derivatives of the image at (x, y).
6. A system for image quality detection of unmanned aerial vehicles, the system comprising: the system comprises an image acquisition module, a Harris algorithm calculation module, a Lowe's algorithm calculation module and an image quality grading module;
the image acquisition module is used for acquiring images shot by the unmanned aerial vehicle;
the Harris algorithm calculation module is used for randomly selecting two images from the shot images and calculating the characteristic points of the images by adopting a Harris algorithm on the selected images;
the Lowe's algorithm calculation module is used for extracting high-quality feature points from the feature points and matching the high-quality feature points;
and the image quality grading module is used for grading the quality of the image according to the matching quantity of the high-quality feature points.
7. The system for unmanned aerial vehicle image quality detection according to claim 6, wherein the system further comprises an image sorting module and an image grouping module;
the image sorting module is used for sorting images;
the image grouping module is used for grouping images.
8. The system of claim 7, further comprising a storage module.
9. The system for unmanned aerial vehicle image quality detection of claim 8, wherein the system further comprises a display module.
10. An apparatus for image quality detection of a drone, the apparatus comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute a method of drone image quality detection according to any one of claims 1-5 according to instructions in the program code.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911310486.8A CN111062927A (en) | 2019-12-18 | 2019-12-18 | Method, system and equipment for detecting image quality of unmanned aerial vehicle |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911310486.8A CN111062927A (en) | 2019-12-18 | 2019-12-18 | Method, system and equipment for detecting image quality of unmanned aerial vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111062927A true CN111062927A (en) | 2020-04-24 |
Family
ID=70302229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911310486.8A Pending CN111062927A (en) | 2019-12-18 | 2019-12-18 | Method, system and equipment for detecting image quality of unmanned aerial vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111062927A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111859022A (en) * | 2020-07-07 | 2020-10-30 | 咪咕文化科技有限公司 | Cover generation method, electronic device and computer-readable storage medium |
CN112258437A (en) * | 2020-10-22 | 2021-01-22 | 广东电网有限责任公司 | Projection image fusion method, device, equipment and storage medium |
CN115511884A (en) * | 2022-11-15 | 2022-12-23 | 江苏惠汕新能源集团有限公司 | Punching compound die surface quality detection method based on computer vision |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886611A (en) * | 2014-04-08 | 2014-06-25 | 西安煤航信息产业有限公司 | Image matching method suitable for automatically detecting flight quality of aerial photography |
CN106447646A (en) * | 2016-06-28 | 2017-02-22 | 中国人民解放军陆军军官学院 | Quality blind evaluation method for unmanned aerial vehicle image |
CN206905745U (en) * | 2017-06-16 | 2018-01-19 | 西安煤航信息产业有限公司 | A kind of aeroplane photography flight reappearance checks platform |
CN109120919A (en) * | 2018-09-10 | 2019-01-01 | 易诚高科(大连)科技有限公司 | A kind of automatic analysis system and method for the evaluation and test of picture quality subjectivity |
- 2019-12-18: CN application CN201911310486.8A filed; published as CN111062927A (status: pending)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886611A (en) * | 2014-04-08 | 2014-06-25 | 西安煤航信息产业有限公司 | Image matching method suitable for automatically detecting flight quality of aerial photography |
CN106447646A (en) * | 2016-06-28 | 2017-02-22 | 中国人民解放军陆军军官学院 | Quality blind evaluation method for unmanned aerial vehicle image |
CN206905745U (en) * | 2017-06-16 | 2018-01-19 | 西安煤航信息产业有限公司 | A kind of aeroplane photography flight reappearance checks platform |
CN109120919A (en) * | 2018-09-10 | 2019-01-01 | 易诚高科(大连)科技有限公司 | A kind of automatic analysis system and method for the evaluation and test of picture quality subjectivity |
Non-Patent Citations (1)
Title |
---|
SHORE5: "Feature detection and matching based on Harris" (基于Harris的特征检测与匹配), CSDN, https://blog.csdn.net/wanjinchang/article/details/49497957 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111859022A (en) * | 2020-07-07 | 2020-10-30 | 咪咕文化科技有限公司 | Cover generation method, electronic device and computer-readable storage medium |
CN112258437A (en) * | 2020-10-22 | 2021-01-22 | 广东电网有限责任公司 | Projection image fusion method, device, equipment and storage medium |
CN115511884A (en) * | 2022-11-15 | 2022-12-23 | 江苏惠汕新能源集团有限公司 | Punching compound die surface quality detection method based on computer vision |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200424 |