
CN109558848A - A UAV life detection method based on multi-source information fusion - Google Patents


Info

Publication number
CN109558848A
CN109558848A (application CN201811458299.XA)
Authority
CN
China
Prior art keywords
image
target
UAV
detection
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811458299.XA
Other languages
Chinese (zh)
Inventor
王生水
韩明华
贺玉贵
衣晓飞
韩乃军
唐良勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HUNAN NOVASKY ELECTRONIC TECHNOLOGY Co Ltd
Original Assignee
HUNAN NOVASKY ELECTRONIC TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HUNAN NOVASKY ELECTRONIC TECHNOLOGY Co Ltd
Priority to CN201811458299.XA
Publication of CN109558848A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a UAV life detection method based on multi-source information fusion. The steps include: S1, carrying multi-source sensors on a UAV to perform detection and search over a target area, the multi-source sensors comprising a radar sensor, a visible-light image sensor and an infrared thermal imaging sensor; S2, receiving and preprocessing the radar image, visible-light image and infrared image respectively to obtain the preprocessed radar, visible-light and infrared images; S3, registering and then fusing the preprocessed visible-light and infrared images, and performing a second fusion of the fusion result with the radar image to obtain and output the target detection result. The present invention has the advantages of simple implementation, strong interference resistance and environmental adaptability, and high detection efficiency and accuracy.

Description

A UAV life detection method based on multi-source information fusion
Technical field
The present invention relates to the technical field of large-scale life detection and rescue, and in particular to a UAV life detection method based on multi-source information fusion.
Background technique
When a large-scale natural disaster such as an earthquake or landslide occurs, the specific positions of personnel trapped under rubble must be detected quickly so that rescue can be organised promptly. Life detection radar is commonly used as the search equipment at present, but it must be operated manually, point by point, in a blanket sweep of the search area. Life detection radar can penetrate non-metallic obstructions such as shrubs, tall grass, shallow loose soil and building ruins to detect human targets with vital signs beneath them, but the area it can cover is limited; it is generally suited to searching for human targets buried under ruins, thin soil, rubble or matted grass. Dangerous collapsed buildings, moreover, carry a risk of secondary collapse, so personnel cannot enter to carry out detection and search. Each detection area of a life detection radar is narrow, and every change of sensing point requires a specialist to survey and relocate the equipment; this is not only inefficient and unable to meet the demand for fast, wide-area personnel search, it may also shake the collapsed ruins and cause secondary harm.
Practitioners have proposed using unmanned platforms to carry out contactless detection and search of the area, which offers advantages such as high controllability, no secondary harm, and no risk to rescuers. However, current UAV life detection methods usually carry a life detection radar directly on the UAV. As noted above, life detection radar is generally suited to subsurface life detection, and its accuracy for surface targets is not high. The information a single type of sensor can obtain is limited, so misses and false alarms occur easily during detection; its interference resistance is also weak, and its detection accuracy is easily affected by environmental factors. The regions that actually need to be searched in a rescue are harsh and present a variety of complex environmental conditions, so fast, accurate, wide-area scanning is difficult to achieve with a single life detection radar carried on a UAV.
Chinese patent application CN201610557419.6 discloses a UAV and a UAV search-and-rescue localisation method in which the UAV performs search and rescue using radio signal strength: the geographical location of a person in distress is judged from the strength of the wireless signal the person emits, as received by differently oriented antennas on the UAV. With such a method the actual localisation accuracy is still not high, and because the judgement relies purely on wireless signal strength, the source of the signal cannot be distinguished, so the probability of misjudgement is high; fast, accurate wide-area scanning is likewise difficult to achieve.
In summary, a UAV-based life detection method that can achieve fast and accurate life detection over a wide area is needed.
Summary of the invention
The technical problem to be solved by the present invention is as follows: in view of the technical problems in the prior art, the present invention provides a UAV life detection method based on multi-source information fusion that is simple to implement, strongly resistant to interference, highly adaptable to the environment, and efficient and accurate in detection, and that can achieve fast and accurate life detection over a wide area.
In order to solve the above technical problems, the technical solution proposed by the present invention is as follows:
A UAV life detection method based on multi-source information fusion, the steps of which include:
S1. Detection and search: a UAV carrying multi-source sensors performs detection and search over a target area, the multi-source sensors including a radar sensor for detecting radar images, a visible-light image sensor for acquiring visible-light images, and an infrared thermal imaging sensor for acquiring infrared images;
S2. Image preprocessing: the radar image, visible-light image and infrared image are received and preprocessed respectively to obtain the preprocessed radar image, visible-light image and infrared image;
S3. Multi-source information fusion: the preprocessed visible-light image and infrared image are registered and then fused, and the fusion result is fused a second time with the radar image to obtain and output the target detection result.
As a further improvement of the present invention, the preprocessing in step S2 includes: performing inter-frame image correlation analysis on the radar image to separate targets from the background; filtering the visible-light image to remove intermittent, discontinuous clutter and noise; and performing background estimation on the infrared image and removing the background from the image according to the estimation result.
As a further improvement of the present invention, the filtering of the visible-light image is realised by combining edge detection, threshold segmentation and Hough line detection. The specific steps are: apply adaptive median filtering to the visible-light image to remove noise; then perform edge detection, the edge detection including templates for the 45° and 135° directions; perform threshold segmentation to remove the intermittent clutter and segment out real targets; and finally apply the Hough transform to complete line detection.
As a further improvement of the present invention, after the visible-light image is filtered, the method further includes an image defogging step using a scale-adaptive dark channel prior defogging method. The specific steps include: adaptively adjusting the neighbourhood scale of the dark channel according to the colour and edge features of the image to be processed, obtaining a pixel-level dark channel solving scale, and making the sky-light estimation point fall in a background region consistent with its physical meaning rather than in the foreground region.
As a further improvement of the present invention, performing background estimation on the infrared image includes: acquiring an original infrared image containing the target; performing background estimation on the original infrared image by Wiener filtering to obtain a background image that does not contain the target; and subtracting the obtained background image from the original infrared image to obtain the preprocessed target image.
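The background-estimate-and-subtract step can be sketched as follows. The patent names Wiener filtering for the background estimate; this sketch substitutes a plain mean filter (an assumption made for brevity), since the point of the step, suppressing the slowly varying background so that warm targets stand out, is the same either way:

```python
def estimate_background(img, k=3):
    """Estimate the slowly varying background of a 2-D image with a
    k x k mean filter (a simple stand-in for the Wiener filter named
    in the patent). Border pixels are handled by edge clamping."""
    h, w = len(img), len(img[0])
    r = k // 2
    bg = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            bg[i][j] = sum(vals) / len(vals)
    return bg

def remove_background(img, k=3):
    """Return img minus its estimated background, clipped at zero:
    warm (bright) targets remain, the smooth background is suppressed."""
    bg = estimate_background(img, k)
    return [[max(p - b, 0.0) for p, b in zip(row_i, row_b)]
            for row_i, row_b in zip(img, bg)]
```

On real infrared frames the Wiener filter would replace `estimate_background`; the subtraction and clipping in `remove_background` are unchanged.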
As a further improvement of the present invention, the specific steps of performing inter-frame image correlation analysis on the radar image and separating targets from the background include:
S21. Perform cross-correlation analysis on each pair of adjacent images in the radar image sequence: extract a moving window of specified size at the same position in the two images and compute the corresponding cross-correlation value, then move the window and recompute the value until the whole image has been covered, forming a correlation image composed of the grey-level cross-correlation values;
S22. Estimate the grey-level probability density distribution function of the background clutter in the correlation image;
S23. Solve an adaptive global threshold from the grey-level probability density distribution function and binarise the correlation image according to it, pixels greater than the adaptive global threshold being taken as candidate target information and pixels less than it as background clutter;
S24. Count the number of pixels in each candidate target region and compare it with a preset minimum target pixel count; candidate target regions smaller than the minimum are removed as false alarms, and the remaining candidate target regions are taken as the target detection result.
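Steps S21 to S24 can be sketched as follows: `window_corr` is one normalised cross-correlation evaluation of step S21, and `detect_targets` performs the binarisation and false-alarm removal of steps S23 and S24. The adaptive threshold of steps S22/S23 is passed in as a given value here; the clutter density estimation is omitted for brevity:

```python
def window_corr(a, b):
    """Normalised cross-correlation of two equal-size windows (flat lists)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def detect_targets(corr_img, thresh, min_pixels):
    """S23-S24: binarise the correlation image with a global threshold,
    then drop 4-connected components smaller than min_pixels as false
    alarms. Returns the surviving components as sets of (row, col)."""
    h, w = len(corr_img), len(corr_img[0])
    mask = [[corr_img[i][j] > thresh for j in range(w)] for i in range(h)]
    seen, comps = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                stack, comp = [(i, j)], set()
                seen.add((i, j))
                while stack:              # flood fill one component
                    y, x = stack.pop()
                    comp.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w and mask[ny][nx]
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                if len(comp) >= min_pixels:
                    comps.append(comp)
    return comps
```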
As a further improvement of the present invention, after step S1 and before step S2 the method further includes an image stabilisation step for the received radar, visible-light and infrared images. The specific steps are: detect the corresponding UAV motion parameters from the inter-frame differences of the image sequences of the received radar, visible-light and infrared images; judge from the UAV motion parameters whether the resulting shake is random jitter, and when it is, obtain the corresponding jitter parameters; then apply motion compensation to the radar, visible-light and infrared images according to the jitter parameters, so as to eliminate or mitigate the interference produced by the random jitter of the UAV.
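The motion-compensation core can be sketched as follows, under the simplifying assumption that jitter appears as a global integer-pixel translation between frames; the patent's classification of shake as random versus intentional motion is described only qualitatively and is not modelled here:

```python
def estimate_shift(prev, cur, max_shift=1):
    """Estimate the global (dy, dx) translation between two frames by
    exhaustive mean-absolute-difference search (a minimal stand-in for
    the patent's inter-frame motion-parameter detection)."""
    h, w = len(prev), len(prev[0])
    best, best_sad = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            sad = n = 0
            for i in range(h):
                for j in range(w):
                    ii, jj = i + dy, j + dx
                    if 0 <= ii < h and 0 <= jj < w:
                        sad += abs(prev[i][j] - cur[ii][jj])
                        n += 1
            if n == 0:
                continue
            sad /= n
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def compensate(cur, shift):
    """Shift the current frame back by the estimated jitter,
    zero-filling pixels that move in from outside the frame."""
    dy, dx = shift
    h, w = len(cur), len(cur[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            ii, jj = i + dy, j + dx
            if 0 <= ii < h and 0 <= jj < w:
                out[i][j] = cur[ii][jj]
    return out
```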
As a further improvement of the present invention, in step S3 the visible-light image and the infrared image are registered by a cross-variance image registration method based on multi-scale, multi-direction edge information within a region of interest. The specific steps include: select a region of interest in each image and perform multi-scale, multi-direction edge detection within it, obtaining the edge detection results of the visible-light image and infrared image respectively; compute the edge cross-variance from the two detection results; determine the registration parameters from the computed edge cross-variance; and register the images with the determined parameters.
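A translation-only sketch of the edge cross-variance idea: the two multi-scale, multi-direction edge maps are taken as given single-scale binary maps, and the registration parameter is reduced to an integer-pixel offset (both simplifying assumptions):

```python
def cross_variance(a, b):
    """Cross-variance (covariance) of two equal-size edge maps,
    flattened: large when the edge structure lines up."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def best_offset(edge_vis, edge_ir, max_off=1):
    """Pick the (dy, dx) translation of the infrared edge map that
    maximises cross-variance with the visible-light edge map; this
    offset plays the role of the registration parameter."""
    h, w = len(edge_vis), len(edge_vis[0])
    best, best_cv = (0, 0), float("-inf")
    for dy in range(-max_off, max_off + 1):
        for dx in range(-max_off, max_off + 1):
            shifted = [[edge_ir[i + dy][j + dx]
                        if 0 <= i + dy < h and 0 <= j + dx < w else 0
                        for j in range(w)] for i in range(h)]
            cv = cross_variance(edge_vis, shifted)
            if cv > best_cv:
                best_cv, best = cv, (dy, dx)
    return best
```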
As a further improvement of the present invention: the visible-light image and the infrared image undergo data fusion after image registration, and the fusion result undergoes a second, decision-level fusion with the radar image.
As a further improvement of the present invention, after step S3 the method further includes a key-area review step. The specific steps include: determine key areas from the target detection result obtained in step S3, control the UAV to perform a second review detection of those key areas, and finally determine the life detection result.
Compared with the prior art, the advantages of the present invention are as follows:
1. The UAV life detection method based on multi-source information fusion of the present invention uses a UAV platform to carry out detection and search for living targets, enabling wide-area cruising and detection operations. The cooperative use of the radar sensor, visible-light image sensor and infrared sensor allows effective detection under a variety of environmental conditions, improving the interference resistance and environmental adaptability of detection. At the same time, the multi-source information from the sensors is preprocessed and registered before being fused into the final detection result, which improves detection accuracy and thus achieves fast, accurate detection and search over a wide area.
2. In the method of the present invention, the image data of the visible-light sensor and the infrared sensor are fused first, making full use of the correlation between the two kinds of sensor data to obtain a more accurate intermediate result; the result of fusing the visible-light and infrared images is then fused again with the data of the radar sensor, realising cooperative detection by visible-light/infrared imaging and bio-radar imaging and outputting the final fused detection result. The visible-light, infrared and radar images are thus fully fused to obtain an accurate detection result.
3. By preprocessing the image from each source, the method can eliminate or reduce the influence of random UAV jitter, further improving detection accuracy, while also making the multi-source fusion of the radar, visible-light and infrared images easier to implement.
4. By further filtering the visible-light image with a combination of edge detection, threshold segmentation and Hough line detection, the method can effectively remove intermittent, discontinuous clutter and isolated noise points, so that real targets are segmented out effectively.
5. By further preprocessing the radar image using the correlation between frames, targets can be detected accurately from the radar image, and the second fusion of this target detection result with the visible-light/infrared fusion result further improves the accuracy of the target detection results.
Detailed description of the invention
Fig. 1 is a schematic flow chart of the UAV life detection method based on multi-source information fusion in this embodiment.
Fig. 2 is a schematic diagram of the principle of image preprocessing in this embodiment.
Fig. 3 is a schematic flow chart of the image defogging processing in this embodiment.
Fig. 4 is a schematic diagram of the principle of preprocessing the infrared image in this embodiment.
Fig. 5 is a schematic flow chart of the image stabilisation processing in this embodiment.
Fig. 6 is a schematic flow chart of neural-network-based data fusion in this embodiment.
Fig. 7 is a schematic flow chart of Bayesian-decision-based decision-level fusion in this embodiment.
Fig. 8 is a schematic structural diagram of the UAV life detection system used to realise life detection in a specific embodiment of the present invention.
Fig. 9 is a schematic flow chart of realising life detection in a specific embodiment of the present invention.
Specific embodiment
The invention is further described below in conjunction with the accompanying drawings and specific preferred embodiments, without thereby limiting the scope of protection of the invention.
As shown in Fig. 1, the steps of the UAV life detection method based on multi-source information fusion of this embodiment include:
S1. Detection and search: a UAV carrying multi-source sensors performs detection and search over a target area, the multi-source sensors including a radar sensor for detecting radar images, a visible-light image sensor for acquiring visible-light images, and an infrared thermal imaging sensor for acquiring infrared images;
S2. Image preprocessing: the radar image, visible-light image and infrared image are received and preprocessed respectively to obtain the preprocessed radar image, visible-light image and infrared image;
S3. Multi-source information fusion: the preprocessed visible-light image and infrared image are registered and then fused, and the fusion result is fused a second time with the radar image to obtain and output the target detection result.
Multi-source information fusion uses computer technology to detect, interconnect, correlate, estimate and combine, according to certain criteria, the observation information obtained over time from multiple sensors distributed in time and space, automatically analysing and optimally synthesising these information resources to obtain more accurate state and identity estimates and a consistent interpretation and description of the measured object, so that the system performs better than any of its component parts. This embodiment uses a UAV platform to carry out detection and search for living targets, enabling wide-area cruising and detection operations; the cooperative use of the radar sensor, visible-light image sensor and infrared sensor allows effective detection under a variety of environmental conditions, improving the interference resistance and environmental adaptability of detection; and by preprocessing and registering the multi-source information from the sensors before fusing it into the final detection result, the multi-source information can be fused effectively and detection accuracy improved, achieving fast, accurate detection and search over a wide area.
The imaging principles of the visible-light image sensor and the infrared sensor are alike: their imaging effect is determined by the geometry and physical properties of the target, so the correlation between the two images is strong. The imaging principle of visible-light/infrared sensors, however, differs from that of life detection radar and yields a different image data structure. This embodiment therefore first fuses the image data of the visible-light and infrared sensors, making full use of the correlation between the two kinds of sensor data to obtain a more accurate intermediate result, and then fuses that result with the data of the radar sensor, realising cooperative detection by visible-light/infrared imaging and bio-radar imaging and outputting the final fused detection result, so that the visible-light, infrared and radar images are fully fused to obtain an accurate detection result.
According to their working mechanisms and the vital-sign parameters they detect, the multi-source sensors carried by the UAV in this embodiment include a radar sensor, a visible-light sensor and an infrared thermal imaging sensor. The radar sensor can search for targets buried at the surface or at shallow depth, while the visible-light sensor and infrared thermal imaging sensor can search for surface targets, so that both shallow-buried targets over a wide area and surface targets can be detected and searched.
In a specific application embodiment, the radar sensor is an ultra-wideband life detection radar. Carried by the UAV, it performs detection and search over areas such as large-area, large-volume, high-risk multi-storey ruins, and can work around the clock. In operation, the radar generates an electromagnetic signal radiated through the transmitting antenna (or array); the signal can penetrate complex obstructions such as building ruins and shallow surface soil, realising the detection, identification and localisation of targets with vital signs, so that trapped personnel hidden behind obstructions can be detected efficiently and accurately.
In a specific application embodiment, the visible-light image sensor is a low-illumination visible-light camera that acquires optical images of the detection scene under both normal and poor illumination (such as at night); the integrated deep/machine learning algorithms can then distinguish human and animal targets. Digital noise reduction can also be applied to the acquired images to eliminate interference sources in the signal, making the image clearer, the contours sharper and the contrast stronger; digital backlight compensation can further be applied, so that image acquisition is suitable for scenes such as low illumination and backlighting.
In a specific application embodiment, the infrared thermal imaging sensor is a thermal imager used to detect the infrared thermal image of the scene. It can work around the clock, unconstrained by the difference in light between day and night; the integrated deep/machine learning algorithms can then distinguish human and animal targets with vital signs.
With the three kinds of sensors of different systems carried by the UAV, the low-illumination visible-light/infrared sensors can perform coarse scanning and wide-area search; since they have no penetration capability, they search for vital-sign targets where there is no occlusion by trees, buildings and the like. The ultra-wideband life detection radar can penetrate non-metallic media, so targets occluded by surface soil, rubble, trees, building ruins and the like can be detected by the ultra-wideband life detection radar sensor. Through the cooperative use of the three sensors, fast target search under complex conditions such as low illumination and occlusion can be achieved, while data fusion and information exchange among the sensors greatly improve detection accuracy and reliability compared with a single sensor, realising seamless detection under all kinds of conditions, around the clock, in low illumination, through occluding media and for hidden targets, and greatly improving the environmental adaptability of detection.
It can be understood that the radar sensor, visible-light image sensor and infrared sensor may also be of other types according to actual needs, and other sensors may be added to further improve detection performance.
In a specific application embodiment, the three sensors are integrated in advance on the same rotating platform of the UAV, with the antenna of the ultra-wideband life detection radar facing downwards and the low-illumination camera and infrared sensor embedded in the radar host; the lens of the low-illumination camera and the probe of the infrared sensor are integrated together, protruding from the centre of the radar antenna face and rotating with it. Integrating the probes of the low-illumination camera and infrared sensor facilitates heterogeneous image registration; as the camera and infrared sensor rotate with the radar, multi-directional detection and search can be realised while keeping the image data acquired by the sensors consistent.
As shown in Fig. 2, the preprocessing in step S2 of this embodiment includes: performing inter-frame image correlation analysis on the radar image to separate targets from the background; filtering the visible-light image to remove intermittent, discontinuous clutter and noise; and performing background estimation on the infrared image and removing the background according to the estimation result. After the image information from the visible-light, infrared and radar sensors is received, considering that the carrying platform of each sensor is a UAV and that the shake of the UAV platform itself affects imaging, this embodiment applies the above preprocessing to the image of each source, which can further improve detection accuracy while enabling the subsequent multi-source information fusion among the radar, visible-light and infrared images.
In this embodiment, the filtering of the visible-light image combines edge detection, threshold segmentation and Hough line detection. The steps are: apply adaptive median filtering to the visible-light image to remove noise; then perform edge detection, including templates for the 45° and 135° directions; perform threshold segmentation to remove the intermittent clutter; and apply the Hough transform after real targets are segmented out to complete line detection. The main factors affecting target detection in the visible-light image are intermittent, discontinuous clutter and isolated noise points. By combining edge detection, threshold segmentation and Hough line detection, this embodiment can effectively remove such clutter and noise: local adaptive threshold segmentation removes the intermittent clutter and segments out real targets, while holes appearing inside targets are filled by morphological processing and residual noise points are removed by connected-component area detection, eliminating their influence on the targets.
When this embodiment preprocesses the visible-light image, an adaptive median filter is first applied to the received original image to remove the heavy noise in the video image. For Sobel edge detection, two direction templates, for 45° and 135°, are added to the traditional Sobel operator, and the weights of the original templates are readjusted; the Hough transform is then applied to complete line detection.
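The extended operator can be sketched as follows. The diagonal kernels shown are the common 45° and 135° Sobel extensions, used here as an assumption because the patent does not give its readjusted weights:

```python
# Four 3x3 Sobel templates: the two classic directions plus the 45 deg
# and 135 deg templates added in this embodiment (weights are the
# common diagonal extension, an illustrative assumption).
KERNELS = {
    "0":   [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],
    "90":  [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],
    "45":  [[0, 1, 2], [-1, 0, 1], [-2, -1, 0]],
    "135": [[-2, -1, 0], [-1, 0, 1], [0, 1, 2]],
}

def sobel_response(img, i, j):
    """Maximum absolute response of the four templates at interior
    pixel (i, j): large on edges in any of the four directions."""
    best = 0
    for k in KERNELS.values():
        s = sum(k[u][v] * img[i - 1 + u][j - 1 + v]
                for u in range(3) for v in range(3))
        best = max(best, abs(s))
    return best
```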
In this embodiment the Hough transform completes line detection by a transformation to parameter space, using the extreme points of that space. In the xOy plane, infinitely many straight lines pass through any point (x_i, y_i), forming a pencil of lines that can be written y_i = a·x_i + b. In the ab plane (the parameter space) this corresponds to a single straight line with independent variable a and dependent variable b, so the mapping between the two planes converts a pencil of lines into a line. Since two points determine a straight line, two points in the xOy plane correspond to two intersecting lines in the ab plane; that is, the set of points on one straight line in the xOy plane maps to a pencil of lines in the ab plane that all intersect at a single point.
Considering that when the slope a of a line in the plane approaches the vertical (90°) direction, computational difficulties arise, the present embodiment uses polar coordinates to represent lines in the plane, as shown in formula (1):
x·cos θ + y·sin θ = ρ (1)
A point in the xy plane is thereby mapped to a curve in the ρθ plane. Each non-background point coordinate (xi, yi) in the image is converted by formula (1) into a curve in the ρθ plane, where the value range of θ is [-π/2, π/2] and the range of ρ is bounded by the maximum and minimum distances between the non-background points and the origin. Each quantized θ value on the θ axis corresponds one-to-one with a ρ value, so for each quantized θ the corresponding ρ can be obtained; the obtained ρ is rounded to the closest value allowed on the ρ axis, and the accumulated value of the corresponding (ρ, θ) cell is incremented accordingly. After all points have been processed, the accumulated value of each cell is examined to complete line detection.
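The accumulation procedure of formula (1) can be sketched as below; the quantization step of θ and the image size are illustrative choices:

```python
import numpy as np

def hough_lines(binary, n_theta=180):
    """Vote in (rho, theta) space for every non-background point,
    using x*cos(theta) + y*sin(theta) = rho from formula (1)."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*binary.shape)))        # bound on |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)  # accumulator cells
    for x, y in zip(xs, ys):
        # rho rounded to the closest value allowed on the rho axis
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# All points of the horizontal line y = 3 vote into one common cell,
# whose accumulated value equals the number of collinear points.
img = np.zeros((10, 10), dtype=bool)
img[3, :] = True
acc, thetas, diag = hough_lines(img)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

The peak of the accumulator identifies the (ρ, θ) parameters of the detected line.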
Severe weather conditions can degrade image quality. In the present embodiment, after the visible-light image is filtered, an image dehazing step using a scale-adaptive dark channel prior method is further included. The specific steps are: adaptively adjust the neighbourhood scale of the dark channel according to the colour and edge features of the image to be processed, obtaining a pixel-level dark-channel solving scale, and make the target estimation points fall in background regions consistent with their physical meaning, so that the skylight estimation point does not fall in a foreground region. By adaptively adjusting the dark-channel scale from the image's colour and edge features, a pixel-level solving scale is obtained that combines the advantages of large scales (small colour distortion) and small scales (small "halo" distortion); at the same time, through the improved skylight estimation method, the estimation point can robustly fall in a background region consistent with its physical meaning.
As shown in Fig. 3, when the present embodiment applies scale-adaptive dark channel prior dehazing, the neighbourhood scale of the dark channel is first adaptively adjusted according to the colour and edge features of the image to obtain a pixel-level solving scale; the atmospheric light is then estimated, the attenuation coefficient is estimated and optimized, and the image is reconstructed with the optimized attenuation coefficient to obtain the dehazed image.
Through the above dehazing procedure, the present embodiment can adaptively change the filtering parameters of local sub-blocks according to the degree of haze in the image and achieve effective dehazing. Compared with traditional methods based on histogram equalization, on the plain dark channel prior, or on atmospheric physics models, it yields sharper detail and better colour fidelity.
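The pipeline of Fig. 3 can be sketched as below. For brevity the per-pixel adaptive scale is reduced to a single per-image patch size, and the parameter values (ω, t0, the 0.1% airlight fraction) are conventional assumptions rather than the embodiment's optimized ones:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch):
    """Per-pixel dark channel: minimum over colour channels followed by a
    local minimum filter of the chosen patch scale."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, patch=7, omega=0.95, t0=0.1):
    """Dark-channel-prior dehazing sketch. The embodiment adapts the patch
    scale per pixel from colour/edge features; here one patch size per
    image stands in for that, and omega/t0 are conventional values."""
    dark = dark_channel(img, patch)
    # Atmospheric light: colour of the brightest 0.1% of dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0).clip(1e-6, None)
    # Transmission (attenuation) estimate, floored to avoid over-amplification
    t = np.maximum(1.0 - omega * dark_channel(img / A, patch), t0)
    # Scene radiance reconstruction from the estimated coefficients
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)

# Synthetic scene: a uniform haze veil over a darker square
hazy = np.full((32, 32, 3), 0.8)
hazy[8:24, 8:24] = 0.5
out = dehaze(hazy)
```

On this synthetic veil the contrast between the square and its surroundings increases after dehazing.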
An infrared image mainly comprises three components: target, background and noise, and the goal of pre-processing the infrared image is to detect dim small targets in it. As shown in Fig. 4, in the present embodiment, performing background estimation on the infrared image comprises: obtaining the original infrared image containing the target; performing background estimation on the original infrared image using a Wiener filtering method to obtain a background image that does not contain the target; and subtracting the obtained background image from the original infrared image to obtain the pre-processed target image.
Wiener filtering restores the image using the minimum mean-square error between the degraded image and the estimated image. Assume the unit impulse response of a linear system is h(n), and let the input be an observed random signal x(n), referred to as the observation, composed of noise w(n) and the useful signal s(n), that is x(n) = w(n) + s(n). The Wiener filter output can then be expressed as:

y(n) = Σm h(m)·x(n − m)

The output y(n) can be regarded as an estimate of the real signal s(n) obtained from the current observation x(n) together with the past observations x(n−1), x(n−2), x(n−3), ….
Based on the above principle, the present embodiment uses Wiener filtering to perform background estimation on the infrared image, which allows dim small targets in the infrared image to be detected effectively.
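The background-subtraction detection described above can be sketched as follows. As a simplification, the Wiener background estimator is stood in for by a local-mean (FIR) estimate of the form y(n) = Σm h(m)·x(n−m) with uniform weights; window size and threshold factor are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_small_target(ir, win=7, k=5.0):
    """Background subtraction for dim small-target detection. A local-mean
    (FIR) estimate stands in for the embodiment's Wiener background
    estimator; window size and threshold factor are illustrative."""
    ir = ir.astype(float)
    background = uniform_filter(ir, size=win)  # estimated target-free background
    residual = ir - background                 # background-subtracted image
    thr = residual.mean() + k * residual.std() # simple adaptive threshold
    return residual, residual > thr

rng = np.random.default_rng(0)
ir = 10.0 + rng.normal(0.0, 0.2, (64, 64))     # flat background + sensor noise
ir[30, 30] += 8.0                              # dim point target
residual, mask = detect_small_target(ir)
```

The smooth background cancels in the subtraction, leaving the point target as the dominant residual.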
When the present embodiment pre-processes the radar image, on the basis of analysing the difference in correlation behaviour between background clutter and target, a fast target detection method based on the radar image sequence is adopted; the specific steps include:
S21. Perform cross-correlation analysis on two adjacent frames in the radar images to be processed: each time, extract a moving window of specified size at the same position in both frames and compute the corresponding cross-correlation function value; move the window and recompute the cross-correlation function value until the whole image has been traversed, forming a correlation image composed of the grey-level cross-correlation function values;
S22. Estimate the grey-level probability density function of the background clutter in the correlation image;
S23. Solve an adaptive global threshold from the grey-level probability density function, and binarise the correlation image according to this adaptive global threshold, pixels greater than the adaptive global threshold being taken as candidate target information and pixels below it as background clutter;
S24. Count the number of pixels in each candidate target region and compare it with a preset minimum target pixel count; candidate target regions smaller than the minimum target pixel count are removed as false alarms, and the remaining candidate target regions constitute the target detection result.
Through the above steps, the target can be accurately detected using the correlation between frames; a secondary fusion is subsequently performed between this target detection result and the fusion result of the visible-light and infrared images, from which the final target detection result is determined.
In a concrete application embodiment, when pre-processing the radar image, cross-correlation analysis is first performed on two adjacent frames: a moving window of a certain size is extracted at the same position in both frames and the cross-correlation function value between them is computed; the window is stepped and the operation repeated until the whole image has been traversed, forming a correlation image composed of the grey-level cross-correlation function values. A probabilistic neural network (PNN) model is then used to estimate the grey-level probability density function (PDF) of the background clutter of the correlation image. Constant false alarm rate (CFAR) techniques are then applied: an adaptive global threshold separating target from background clutter is solved by bisection, and the correlation image is binarised with this threshold, pixels above it being candidate target information and pixels below it sea background clutter. Finally, the pixel count of each candidate target region is obtained using an 8-neighbourhood connectivity criterion and compared with a predefined minimum target pixel count; undersized candidate regions are removed as false alarms, and the remaining candidate target regions are the target detection result.
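Steps S21 to S24 can be sketched as below. The PNN-estimated clutter PDF and the CFAR bisection threshold are replaced here by a fixed correlation threshold, an assumption made for brevity:

```python
import numpy as np
from scipy.ndimage import label

def correlation_image(f1, f2, win=5):
    """S21: sliding-window normalised cross-correlation of two adjacent
    frames. A persistent target stays correlated between frames while
    sea clutter decorrelates."""
    h, w = f1.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = f1[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = f2[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            out[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return out

def detect(corr, thr, min_pixels=3):
    """S23/S24: binarise the correlation image, then drop connected
    regions below the minimum pixel count as false alarms."""
    labels, n = label(corr > thr)
    keep = np.zeros(corr.shape, dtype=bool)
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() >= min_pixels:
            keep |= region
    return keep

rng = np.random.default_rng(1)
f1 = rng.normal(0.0, 1.0, (40, 40))            # frame 1: decorrelated clutter
f2 = rng.normal(0.0, 1.0, (40, 40))            # frame 2: decorrelated clutter
patch = rng.normal(5.0, 1.0, (5, 5))           # persistent target signature
f1[18:23, 18:23] = patch
f2[18:23, 18:23] = patch
mask = detect(correlation_image(f1, f2), thr=0.8)
```

The persistent patch survives both the threshold and the minimum-size test, while isolated clutter exceedances are removed as false alarms.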
The jitter of the unmanned aerial vehicle (UAV) platform itself affects imaging. As shown in Fig. 5, the present embodiment further comprises, after step S1 and before step S2, a stabilization step applied to the received radar image, visible-light image and infrared image. The specific steps are: detect the corresponding UAV motion parameters from the inter-frame differences of the image sequences of the received radar, visible-light and infrared images; judge from the UAV motion parameters whether the resulting shake is random jitter, and obtain the corresponding jitter parameters when random jitter is judged; then motion-compensate the radar, visible-light and infrared images according to the jitter parameters so as to eliminate or mitigate the interference produced by the random jitter of the UAV. Using this electronic image stabilization method, the present embodiment solves the image instability caused by the irregular motion of the UAV and stabilizes the image sequences.
In a concrete application embodiment, when stabilization is performed, a motion estimation algorithm detects, from the inter-frame differences of the image sequence, the UAV motion parameters representing the UAV's movement; it is judged whether these parameters correspond to random jitter or to deliberate scanning motion, and the corresponding jitter parameters are obtained; the interference of the random UAV jitter on the image is then eliminated or mitigated by a motion compensation algorithm.
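One common way to realise the motion estimation and compensation described above is phase correlation for a global translation. This sketch assumes the jitter is a pure inter-frame translation; rotations and scanning-motion discrimination are omitted:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Global inter-frame translation via phase correlation: the peak of
    the inverse-transformed cross-power spectrum gives the shift that
    re-aligns `curr` with `prev`."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    h, w = prev.shape
    if dy > h // 2:
        dy -= h                        # wrap to a signed shift
    if dx > w // 2:
        dx -= w
    return dy, dx

def compensate(frame, dy, dx):
    """Undo the estimated jitter (circularly here; a real implementation
    would crop or pad the exposed borders)."""
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

rng = np.random.default_rng(2)
prev = rng.random((64, 64))
curr = np.roll(prev, shift=(3, -2), axis=(0, 1))  # simulated random jitter
dy, dx = estimate_shift(prev, curr)
stab = compensate(curr, dy, dx)                   # stabilized frame
```

Applying the estimated correction restores the jittered frame to the reference frame.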
In complex land and maritime environments, the image data acquired by the optical and infrared imaging sensors suffer from noise interference such as rain, fog and haze; at the same time, differences in the physical properties of the imaging sensors cause their imaging resolutions to be inconsistent. The pre-processing of the visible-light and infrared images may therefore further include image denoising and image enhancement. For example, the noise contained in infrared and visible-light images is generally additive and can be filtered out by methods such as median filtering, Wiener filtering, Kalman filtering, adaptive filtering, or filtering based on wavelet theory.
For the fusion of the visible-light and infrared images, multi-source image registration is performed after pre-processing. In step S3 of the present embodiment, the visible-light image and the infrared image are registered using a cross-variance image registration method based on multi-scale, multi-directional edge information within a region of interest. The specific steps include: after selecting a region of interest in each image, perform multi-scale multi-directional edge detection to obtain the respective detection results of the visible-light and infrared images; compute the edge cross-variance from the obtained detection results of the visible-light and infrared images; determine the registration parameters from the computed edge cross-variance; and register the images with the determined registration parameters.
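The edge-based registration can be sketched as follows. A single-scale Sobel magnitude stands in for the multi-scale multi-directional edge detection, and only integer translations are searched, both being simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_map(img):
    """Gradient-magnitude edge map; a single-scale stand-in for the
    multi-scale, multi-directional edge detection of the embodiment."""
    img = img.astype(float)
    return np.hypot(sobel(img, axis=1), sobel(img, axis=0))

def register_by_edge_covariance(e1, e2, max_shift=5):
    """Try integer translations of the second edge map and keep the one
    maximising the cross-covariance with the first edge map."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(e2, (dy, dx), axis=(0, 1))
            a = e1 - e1.mean()
            b = shifted - shifted.mean()
            cov = (a * b).mean()          # edge cross-covariance at this shift
            if cov > best:
                best, best_shift = cov, (dy, dx)
    return best_shift

# Visible image: a bright square; "infrared" image: the same scene with
# inverted intensity, displaced by (2, -3) pixels. Edge maps are
# insensitive to the intensity inversion, which is why they suit
# cross-modality registration.
vis = np.zeros((48, 48))
vis[16:32, 16:32] = 1.0
ir = 1.0 - np.roll(vis, (2, -3), axis=(0, 1))
shift = register_by_edge_covariance(edge_map(vis), edge_map(ir))
```

The recovered shift is the registration parameter applied before pixel-level fusion.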
In the present embodiment, the pre-processed visible-light and infrared images are registered and then fused at the data level, and the fusion result undergoes a secondary, decision-level fusion with the radar image. That is, the visible-light sensor data and infrared sensor data are first fused in a pixel-level fusion mode; a second, decision-level fusion is then carried out with the radar sensor, so that a unified decision is formed from the presence-of-target detection obtained by fusing the visible-light and infrared images together with the radar detection result.
Since the visible-light and infrared image sensors are mainly used for the search and localization of injured persons occluded by ground cover such as trees and thick grass, high interference immunity is demanded of the fusion. The present embodiment fuses the visible-light image and the infrared image using a neural network, which can fully exploit the correlation between the visible-light image data and the infrared image data, fuse them into more accurate imaging data with strong interference immunity, and further improve detection accuracy and environmental adaptability.
As shown in Fig. 6, when the present embodiment fuses the visible-light sensor image data with the infrared sensor image data, the two are first pre-processed separately and then registered; after registration the visible/infrared fused image is obtained, which is input into a pre-trained neural network for feature extraction; target recognition is performed on the extracted feature vector to obtain the target detection result.
In the present embodiment, the secondary fusion specifically adopts a Bayesian decision-level fusion mode. Visible/infrared imaging and bio-radar imaging rely on different principles. On the basis of the detection result obtained by fusing the visible-light and infrared images, and because the target information obtained by the visible/infrared thermal sensors and by the bio-radar sensor is independent, so that their inference processes satisfy mutual independence between characteristic parameters, the present embodiment further performs decision-level fusion of the visible/infrared fusion result and the radar detection result using Bayesian decision theory, finally obtaining the decision result and further improving detection accuracy.
As shown in Fig. 7, the present embodiment first registers the visible/infrared fused image with the life-detection radar image; after feature extraction and target recognition are performed on each, a first target detection result is obtained from the visible/infrared fused image and a second target detection result from the life-detection radar image; the first and second target detection results are fused by Bayesian-estimation decision fusion to produce the final detection result output.
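The Bayesian decision-level fusion can be sketched numerically as below; the detection and false-alarm probabilities of the two branches are illustrative assumptions, not measured sensor characteristics:

```python
def bayes_fuse(prior, d_vi, d_radar, pd=(0.90, 0.85), pfa=(0.05, 0.10)):
    """Decision-level fusion of two independent detectors by Bayes' rule.
    d_vi / d_radar: binary decisions of the visible/infrared branch and
    the radar branch; pd / pfa: their assumed detection and false-alarm
    probabilities (illustrative numbers, not measured values)."""
    def lik(d, p):                     # P(decision | hypothesis) per branch
        return p if d else 1.0 - p
    # Independence of the two branches lets the joint likelihood factor
    l_target = lik(d_vi, pd[0]) * lik(d_radar, pd[1])
    l_clutter = lik(d_vi, pfa[0]) * lik(d_radar, pfa[1])
    return prior * l_target / (prior * l_target + (1.0 - prior) * l_clutter)

p_both = bayes_fuse(0.2, True, True)    # both branches declare a target
p_radar = bayes_fuse(0.2, False, True)  # only the radar branch declares
```

When both branches agree, the posterior rises well above the prior; a single-branch declaration carries correspondingly weaker evidence.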
The present embodiment further includes a key-area review step after step S3. The specific steps are: determine key areas according to the target detection result obtained in step S3, control the UAV to perform secondary review detection of these key areas, and finally determine the life detection result; specifically, the UAV is controlled to perform close-approach detection of the key areas with the radar sensor, finally determining the life state and position of the detected target. The multi-source sensors carried by the UAV first perform a rapid coarse scan of the target area: the low-illumination visible-light camera and thermal infrared sensor quickly coarse-scan the area to determine regions on the ground suspected of containing targets, while a large-area scan by the ultra-wideband life-detection radar sensor determines regions suspected of containing buried targets with vital signs below the surface. After the detection data of the multi-source sensors have been fused, the key areas requiring secondary review detection are marked; close-approach detection of the key areas is then performed to obtain clearer images and to confirm the vital-sign state of human targets with the ultra-wideband life-detection radar sensor, finally determining the state and position of the detected targets.
As shown in Figs. 8 and 9, in a concrete application embodiment of the present invention, the system implementing the above UAV life detection method comprises three parts: the multi-source heterogeneous sensor payload, the UAV platform, and the rear command-and-control terminal. The multi-source sensor payload comprises an ultra-wideband life-detection radar, a low-illumination visible-light sensor and an infrared sensor, used to detect and search for targets in the designated region and to pass the acquired data and images back to the rear command-and-control terminal. The multi-source sensors are mounted on the UAV platform; the rear command-and-control terminal serves as the command platform, controlling the flight path of the UAV, controlling the operation of the multi-source sensor payload, and receiving the data/images detected by the multi-source sensors for multi-source information fusion processing. The rear command-and-control terminal may be placed several hundred metres to several kilometres from the target area, communicating with the UAV platform through image-transmission/data-transmission equipment, and may further be configured with a unified display function for the fused data of the three kinds of sensors.
Between the above multi-source heterogeneous sensor payload and the UAV platform of the present embodiment, power, data and control interfaces are provided. The multi-source sensors draw power from the UAV platform, receive the operating instructions issued by the rear command-and-control terminal and relayed by the UAV platform, and pass the results/data detected by each sensor back to the rear command-and-control terminal for display; the UAV platform can directly determine the position information of a target through its onboard positioning module.
As shown in Fig. 9, in a concrete application embodiment, the process of performing life detection with the above UAV life-detection system comprises:
1) The UAV platform, equipped with multi-source sensors such as the life-detection radar, the low-illumination visible-light camera and the infrared sensor, together with a BeiDou satellite positioning system, carries out a cruise search above the target area according to a preset planned flight path;
2) During flight, the UAV platform performs image processing and fusion on the search-area images detected by each sensor and on data such as the position information of moving/static human targets on the ground together with the live scene, and sends the processed images and results back to the rear command-and-control terminal through the image/data transmission link;
3) The rear command-and-control terminal performs image pre-processing on each received sensor detection result and then carries out multi-source information fusion, the image pre-processing and multi-source information fusion steps being as described above: the visible-light image and the infrared image are fused at the data level to obtain a first target detection result, which is fused at the decision level with a second target detection result obtained by target recognition on the radar image, yielding the final target detection result and locking onto the key areas where targets are present;
4) The UAV, carrying the radar life detector, is controlled to perform close-approach detection of all key areas, determining the vital-sign state of the targets and confirming them by precise imaging; for location points where no target was found on the ground, penetrating detection and search is performed to determine whether there are targets beneath suspected ruins or shelter, and if so they are marked and reported back to the rear command-and-control terminal.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention in any form. Although the present invention has been disclosed above in terms of preferred embodiments, they are not intended to limit it. Any simple modifications, equivalent changes and variations made to the above embodiments in accordance with the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, shall all fall within the scope of protection of the technical solution of the present invention.

Claims (10)

1. A UAV life detection method based on multi-source information fusion, characterized in that the steps comprise:
S1. Detection search: performing detection search on a target area with multi-source sensors carried by a UAV, the multi-source sensors comprising a radar sensor for detecting radar images, a visible-light image sensor for acquiring visible-light images, and an infrared thermal imaging sensor for acquiring infrared images;
S2. Image pre-processing: receiving the radar image, the visible-light image and the infrared image respectively and pre-processing them to obtain a pre-processed radar image, visible-light image and infrared image;
S3. Multi-source information fusion: registering and then fusing the pre-processed visible-light image and infrared image, and performing a secondary fusion between the fusion result and the radar image to obtain a target detection result output.
2. The UAV life detection method based on multi-source information fusion according to claim 1, characterized in that the pre-processing in step S2 comprises: performing inter-frame image correlation analysis on the radar image to separate target from background; filtering the visible-light image to filter out the intermittent clutter and noise in the image; and performing background estimation on the infrared image and removing the background in the image according to the estimation result.
3. The UAV life detection method based on multi-source information fusion according to claim 2, characterized in that the filtering of the visible-light image is implemented by combining edge detection, threshold segmentation and Hough line detection, the specific steps being: passing the visible-light image through an adaptive median filter to remove noise; performing edge detection, the edge detection including the 45° and 135° directions; performing threshold segmentation to remove intermittent clutter; and after the real target has been segmented out, executing a Hough transform to complete line detection.
4. The UAV life detection method based on multi-source information fusion according to claim 3, characterized in that, after the visible-light image is filtered, an image dehazing step using a scale-adaptive dark channel prior method is further included, the specific steps comprising: adaptively adjusting the neighbourhood scale of the dark channel according to the colour and edge features of the image to be processed to obtain a pixel-level dark-channel solving scale, and making the target estimation points fall in background regions consistent with their physical meaning, so that the skylight estimation point does not fall in a foreground region.
5. The UAV life detection method based on multi-source information fusion according to claim 2, 3 or 4, characterized in that performing background estimation on the infrared image comprises: obtaining an original infrared image containing the target; performing background estimation on the original infrared image using a Wiener filtering method to obtain a background image not containing the target; and subtracting the obtained background image from the original infrared image to obtain the pre-processed target image.
6. The UAV life detection method based on multi-source information fusion according to claim 2, 3 or 4, characterized in that the specific steps of performing inter-frame image correlation analysis on the radar image and separating target from background comprise:
S21. Performing cross-correlation analysis on two adjacent frames in the radar images to be processed: each time, extracting a moving window of specified size at the same position in both frames and computing the corresponding cross-correlation function value; moving the window and recomputing the cross-correlation function value until the whole image has been traversed, forming a correlation image composed of the grey-level cross-correlation function values;
S22. Estimating the grey-level probability density function of the background clutter in the correlation image;
S23. Solving an adaptive global threshold from the grey-level probability density function, and binarising the correlation image according to this adaptive global threshold, pixels greater than the adaptive global threshold being taken as candidate target information and pixels below it as background clutter;
S24. Counting the number of pixels in each candidate target region and comparing it with a preset minimum target pixel count; candidate target regions smaller than the minimum target pixel count are removed as false alarms, and the remaining candidate target regions constitute the target detection result.
7. The UAV life detection method based on multi-source information fusion according to claim 2, 3 or 4, characterized in that, after step S1 and before step S2, a stabilization step is further performed on the received radar image, visible-light image and infrared image, the specific steps being: detecting the corresponding UAV motion parameters from the inter-frame differences of the image sequences of the received radar image, visible-light image and infrared image; judging from the UAV motion parameters whether the resulting shake is random jitter, and obtaining the corresponding jitter parameters when random jitter is judged; and performing motion compensation on the radar image, visible-light image and infrared image according to the jitter parameters so as to eliminate or mitigate the interference produced by the random jitter of the UAV.
8. The UAV life detection method based on multi-source information fusion according to claim 2, 3 or 4, characterized in that, in step S3, the visible-light image and the infrared image are registered using a cross-variance image registration method based on multi-scale, multi-directional edge information within a region of interest, the specific steps comprising: after selecting a region of interest in each image, performing multi-scale multi-directional edge detection to obtain the respective detection results of the visible-light image and the infrared image; computing the edge cross-variance from the obtained detection results of the visible-light image and the infrared image; determining registration parameters from the computed edge cross-variance; and performing registration with the determined registration parameters.
9. The UAV life detection method based on multi-source information fusion according to any one of claims 1 to 4, characterized in that: the visible-light image and the infrared image undergo data fusion after image registration, and the fusion result undergoes a secondary, decision-level fusion with the radar image.
10. The UAV life detection method based on multi-source information fusion according to any one of claims 1 to 4, characterized in that: a key-area review step is further included after step S3, the specific steps comprising: determining key areas according to the target detection result obtained in step S3, controlling the UAV to perform secondary review detection of the key areas, and finally determining the life detection result.
CN201811458299.XA 2018-11-30 2018-11-30 A kind of unmanned plane life detection method based on Multi-source Information Fusion Pending CN109558848A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811458299.XA CN109558848A (en) 2018-11-30 2018-11-30 A kind of unmanned plane life detection method based on Multi-source Information Fusion

Publications (1)

Publication Number Publication Date
CN109558848A true CN109558848A (en) 2019-04-02

Family

ID=65868471

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811458299.XA Pending CN109558848A (en) 2018-11-30 2018-11-30 A kind of unmanned plane life detection method based on Multi-source Information Fusion

Country Status (1)

Country Link
CN (1) CN109558848A (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109911550A (en) * 2019-04-17 2019-06-21 华夏天信(北京)智能低碳技术研究院有限公司 Scratch board conveyor protective device based on infrared thermal imaging and visible light video analysis
CN110110765A (en) * 2019-04-23 2019-08-09 四川九洲电器集团有限责任公司 A kind of multisource data fusion target identification method based on deep learning
CN110243769A (en) * 2019-07-30 2019-09-17 南阳理工学院 A kind of the high spectrum sub-pixel target identification system and method for multi-source information auxiliary
CN110472658A (en) * 2019-07-05 2019-11-19 哈尔滨工程大学 A kind of the level fusion and extracting method of the detection of moving-target multi-source
CN110547752A (en) * 2019-09-16 2019-12-10 北京数字精准医疗科技有限公司 Endoscope system, mixed light source, video acquisition device and image processor
CN110826503A (en) * 2019-11-08 2020-02-21 山东科技大学 Closed pipeline human body detection method and system based on multi-sensor information fusion
CN111025256A (en) * 2019-12-26 2020-04-17 湖南华诺星空电子技术有限公司 Method and system for detecting weak vital sign signals of airborne radar
CN111175833A (en) * 2020-03-15 2020-05-19 湖南科技大学 Thunder field synchronous detection method based on multi-source information
CN111401203A (en) * 2020-03-11 2020-07-10 西安应用光学研究所 Target identification method based on multi-dimensional image fusion
CN111489330A (en) * 2020-03-24 2020-08-04 中国科学院大学 Weak and small target detection method based on multi-source information fusion
CN111563559A (en) * 2020-05-18 2020-08-21 国网浙江省电力有限公司检修分公司 Imaging method, device, equipment and storage medium
CN111859266A (en) * 2020-07-30 2020-10-30 北京环境特性研究所 Spatial target structure inversion method and device based on multi-source information fusion
CN111967525A (en) * 2020-08-20 2020-11-20 广州小鹏汽车科技有限公司 Data processing method and device, server and storage medium
CN112562011A (en) * 2020-12-16 2021-03-26 浙江大华技术股份有限公司 Image calibration method and device, storage medium and electronic device
CN112633326A (en) * 2020-11-30 2021-04-09 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN113033513A (en) * 2021-05-24 2021-06-25 湖南华诺星空电子技术有限公司 Air-ground collaborative search and rescue system and method
CN113283411A (en) * 2021-07-26 2021-08-20 中国人民解放军国防科技大学 Unmanned aerial vehicle target detection method, device, equipment and medium
CN113438449A (en) * 2021-06-07 2021-09-24 西安恒盛安信智能技术有限公司 Video image transmission method
CN113534093A (en) * 2021-08-13 2021-10-22 北京环境特性研究所 Propeller blade number inversion method for airplane target and target identification method
CN114205564A (en) * 2022-01-27 2022-03-18 濮阳职业技术学院 Monitoring information processing system based on image recognition
CN114266724A (en) * 2021-11-16 2022-04-01 中国航空工业集团公司雷华电子技术研究所 High-voltage line detection method based on radar infrared visible light image fusion
CN114280690A (en) * 2021-12-28 2022-04-05 汇鲲化鹏(海南)科技有限公司 Life signal detection and acquisition processing system
CN115131980A (en) * 2022-04-20 2022-09-30 汉得利(常州)电子股份有限公司 Target identification system and method for intelligent automobile road driving
CN115861366A (en) * 2022-11-07 2023-03-28 成都融达昌腾信息技术有限公司 Multi-source perception information fusion method and system for target detection

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040047518A1 (en) * 2002-08-28 2004-03-11 Carlo Tiana Image fusion system and method
CN104215963A (en) * 2013-05-31 2014-12-17 上海仪电电子股份有限公司 Marine navigation radar enhancing infrared and visible light
CN104408400A (en) * 2014-10-28 2015-03-11 北京理工大学 Indistinguishable multi-target detection method based on single-image frequency domain information
CN106022235A (en) * 2016-05-13 2016-10-12 中国人民解放军国防科学技术大学 Missing child detection method based on human body detection
CA3024580A1 (en) * 2015-05-15 2016-11-24 Airfusion, Inc. Portable apparatus and method for decision support for real time automated multisensor data fusion and analysis
CN107491730A (en) * 2017-07-14 2017-12-19 浙江大学 A kind of laboratory test report recognition methods based on image procossing
CN108832997A (en) * 2018-08-07 2018-11-16 湖南华诺星空电子技术有限公司 A kind of unmanned aerial vehicle group searching rescue method and system

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
刘斌, 黄伟艮, 杨劲松, 范开国, 陈鹏, 丁献文: "Method for detecting ships at sea based on shipborne radar images", Journal of Marine Sciences *
宋颖超, 罗海波, 惠斌, 常铮: "Scale-adaptive dark channel prior dehazing method", Infrared and Laser Engineering *
张文娜: "Research on multi-source image fusion technology", China Master's Theses Full-text Database, Information Science and Technology *
李雪: "Research on roadside fusion algorithms for unstructured roads", China Master's Theses Full-text Database, Information Science and Technology *
栾悉道 et al.: "Multimedia Intelligence Processing Technology", 31 May 2016, National Defense Industry Press *
王兆军: "Research on video-based image de-jittering methods", China Master's Theses Full-text Database, Information Science and Technology *
聂洪山, 沈振康: "An infrared background suppression method based on Wiener filtering", Journal of National University of Defense Technology *
裴璐乾: "Research on registration and fusion algorithms for SAR, infrared and visible light images", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109911550A (en) * 2019-04-17 2019-06-21 华夏天信(北京)智能低碳技术研究院有限公司 Scratch board conveyor protective device based on infrared thermal imaging and visible light video analysis
CN110110765A (en) * 2019-04-23 2019-08-09 四川九洲电器集团有限责任公司 A kind of multisource data fusion target identification method based on deep learning
CN110472658A (en) * 2019-07-05 2019-11-19 哈尔滨工程大学 A kind of the level fusion and extracting method of the detection of moving-target multi-source
CN110472658B (en) * 2019-07-05 2023-02-14 哈尔滨工程大学 Hierarchical fusion and extraction method for multi-source detection of moving target
CN110243769A (en) * 2019-07-30 2019-09-17 南阳理工学院 A kind of the high spectrum sub-pixel target identification system and method for multi-source information auxiliary
CN110547752A (en) * 2019-09-16 2019-12-10 北京数字精准医疗科技有限公司 Endoscope system, mixed light source, video acquisition device and image processor
CN110826503B (en) * 2019-11-08 2023-04-18 山东科技大学 Closed pipeline human body detection method and system based on multi-sensor information fusion
CN110826503A (en) * 2019-11-08 2020-02-21 山东科技大学 Closed pipeline human body detection method and system based on multi-sensor information fusion
CN111025256A (en) * 2019-12-26 2020-04-17 湖南华诺星空电子技术有限公司 Method and system for detecting weak vital sign signals of airborne radar
CN111401203A (en) * 2020-03-11 2020-07-10 西安应用光学研究所 Target identification method based on multi-dimensional image fusion
CN111175833A (en) * 2020-03-15 2020-05-19 湖南科技大学 Thunder field synchronous detection method based on multi-source information
CN111175833B (en) * 2020-03-15 2022-06-28 湖南科技大学 Thunder field synchronous detection method based on multi-source information
CN111489330A (en) * 2020-03-24 2020-08-04 中国科学院大学 Weak and small target detection method based on multi-source information fusion
CN111489330B (en) * 2020-03-24 2021-06-22 中国科学院大学 Weak and small target detection method based on multi-source information fusion
CN111563559A (en) * 2020-05-18 2020-08-21 国网浙江省电力有限公司检修分公司 Imaging method, device, equipment and storage medium
CN111563559B (en) * 2020-05-18 2024-03-29 国网浙江省电力有限公司检修分公司 Imaging method, device, equipment and storage medium
CN111859266A (en) * 2020-07-30 2020-10-30 北京环境特性研究所 Spatial target structure inversion method and device based on multi-source information fusion
CN111967525A (en) * 2020-08-20 2020-11-20 广州小鹏汽车科技有限公司 Data processing method and device, server and storage medium
CN112633326B (en) * 2020-11-30 2022-04-29 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112633326A (en) * 2020-11-30 2021-04-09 电子科技大学 Unmanned aerial vehicle target detection method based on Bayesian multi-source fusion
CN112562011A (en) * 2020-12-16 2021-03-26 浙江大华技术股份有限公司 Image calibration method and device, storage medium and electronic device
CN113033513A (en) * 2021-05-24 2021-06-25 湖南华诺星空电子技术有限公司 Air-ground collaborative search and rescue system and method
CN113438449A (en) * 2021-06-07 2021-09-24 西安恒盛安信智能技术有限公司 Video image transmission method
CN113283411B (en) * 2021-07-26 2022-01-28 中国人民解放军国防科技大学 Unmanned aerial vehicle target detection method, device, equipment and medium
CN113283411A (en) * 2021-07-26 2021-08-20 中国人民解放军国防科技大学 Unmanned aerial vehicle target detection method, device, equipment and medium
CN113534093A (en) * 2021-08-13 2021-10-22 北京环境特性研究所 Propeller blade number inversion method for airplane target and target identification method
CN113534093B (en) * 2021-08-13 2023-06-27 北京环境特性研究所 Method for inverting number of propeller blades of aircraft target and target identification method
CN114266724A (en) * 2021-11-16 2022-04-01 中国航空工业集团公司雷华电子技术研究所 High-voltage line detection method based on radar infrared visible light image fusion
CN114266724B (en) * 2021-11-16 2024-10-25 中国航空工业集团公司雷华电子技术研究所 High-voltage line detection method based on radar infrared visible light image fusion
CN114280690A (en) * 2021-12-28 2022-04-05 汇鲲化鹏(海南)科技有限公司 Life signal detection and acquisition processing system
CN114205564A (en) * 2022-01-27 2022-03-18 濮阳职业技术学院 Monitoring information processing system based on image recognition
CN115131980A (en) * 2022-04-20 2022-09-30 汉得利(常州)电子股份有限公司 Target identification system and method for intelligent automobile road driving
CN115861366A (en) * 2022-11-07 2023-03-28 成都融达昌腾信息技术有限公司 Multi-source perception information fusion method and system for target detection
CN115861366B (en) * 2022-11-07 2024-05-24 成都融达昌腾信息技术有限公司 Multi-source perception information fusion method and system for target detection

Similar Documents

Publication Publication Date Title
CN109558848A (en) A kind of unmanned plane life detection method based on Multi-source Information Fusion
CN109583383A (en) A kind of unmanned plane life detection method and system based on Multiple Source Sensor
JP6858415B2 (en) Sea level measurement system, sea level measurement method and sea level measurement program
CN109471098B (en) Airport runway foreign matter detection method utilizing FOD radar phase coherence information
CN109859247B (en) Near-ground scene infrared small target detection method
CN106845346A (en) A kind of image detecting method for airfield runway foreign bodies detection
CN105225251B (en) Over the horizon movement overseas target based on machine vision quickly identifies and positioner and method
Trinder et al. Aerial images and LiDAR data fusion for disaster change detection
CN110245566B (en) Infrared target remote tracking method based on background features
CN110288623B (en) Data compression method for unmanned aerial vehicle maritime net cage culture inspection image
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
Cai et al. Height estimation from monocular image sequences using dynamic programming with explicit occlusions
CN108563986B (en) Method and system for judging posture of telegraph pole in jolt area based on long-distance shooting image
CN114677531B (en) Multi-mode information fusion method for detecting and positioning targets of unmanned surface vehicle
CN117075112A (en) Unmanned ship radar photoelectric fusion method for azimuth track matching
Yu et al. Automatic extraction of green tide using dual polarization Chinese GF-3 SAR images
CN114037968A (en) Lane line detection method based on depth radar point cloud and image data fusion
Xu et al. Marine radar oil spill monitoring technology based on dual-threshold and c–v level set methods
CN117406234A (en) Target ranging and tracking method based on single-line laser radar and vision fusion
CN111311640B (en) Unmanned aerial vehicle identification and tracking method based on motion estimation
Wang et al. Research on Smooth Edge Feature Recognition Method for Aerial Image Segmentation
Wei et al. Automatic water line detection for an USV system
CN113776676A (en) Infrared small target detection method based on image curvature and gradient
US20230168688A1 (en) Sequential mapping and localization (smal) for navigation
Wang et al. Modification of CFAR algorithm for oil spill detection from SAR data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190402