
CN114964275A - Ground-air cooperative map construction method, device, equipment and storage medium - Google Patents


Info

Publication number
CN114964275A
Authority
CN
China
Prior art keywords
point cloud
map
area
detection
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210567453.7A
Other languages
Chinese (zh)
Inventor
王晓辉
丁佳
单洪伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Electronic 8mile Technology Co ltd
Original Assignee
Wuxi Electronic 8mile Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Electronic 8mile Technology Co ltd filed Critical Wuxi Electronic 8mile Technology Co ltd
Priority to CN202210567453.7A
Publication of CN114964275A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 - Navigation specially adapted for navigation in a road network
    • G01C 21/28 - Navigation with correlation of data from several navigational instruments
    • G01C 21/30 - Map- or contour-matching
    • G01C 21/32 - Structuring or formatting of map data
    • G01C 21/38 - Electronic maps specially adapted for navigation; Updating thereof
    • G01C 21/3804 - Creation or updating of map data
    • G01C 21/3833 - Creation or updating of map data characterised by the source of data
    • G01C 21/3841 - Data obtained from two or more sources, e.g. probe vehicles
    • G01C 21/3852 - Data derived from aerial or satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a ground-air cooperative map construction method, device, equipment and storage medium, relating to the technical field of image vision. The method comprises the following steps: shooting a scene picture with an unmanned aerial vehicle and determining a risk area and a target detection area from the scene picture; performing visual field detection on the risk area with binocular cameras carried by the unmanned aerial vehicle and a ground robot, generating respective three-dimensional point cloud maps, and calibrating the target objects in the target detection area; and splicing and fusing the point cloud data of the first point cloud map and the second point cloud map into a fused point cloud map, then displaying text and/or the highlighted, calibrated target objects in a three-dimensional map constructed from the fused point cloud map. In this scheme the unmanned aerial vehicle and the robot cooperatively survey the accident area, a three-dimensional point cloud map is generated by fusing the captured accident-scene pictures, and the target objects in the map are calibrated and displayed, which improves both the accuracy of accident-scene map construction and the efficiency of evidence collection and inspection.

Description

Ground-air cooperative map construction method, device, equipment and storage medium
Technical Field
The application relates to the technical field of image vision, and in particular to a ground-air cooperative map construction method, device, equipment and storage medium for accident-scene evidence collection.
Background
Inspection is one of the important uses of robots and drones. Freed from the constraints of manual work, robotic devices can break through the limits of time and viewing space, which makes them particularly suitable for fields such as road inspection, accident-scene evidence collection and map surveying.
In the related art, mapping with a single patrolling unmanned aerial vehicle or a single ground vehicle suffers from distorted picture proportions and from omissions and incompleteness in scene construction. For an accident scene in particular, the accuracy of the constructed map directly affects the efficiency and the outcome of accident evidence collection and analysis.
Disclosure of Invention
The application provides a ground-air cooperative map construction method and device, a computer device and a storage medium, which solve the problems of inaccurate accident-scene map construction and difficult evidence collection in the related art.
In one aspect, a ground-air cooperative map construction method is provided, the method comprising the following steps:
shooting a scene picture through an unmanned aerial vehicle, and determining a risk area and a target detection area according to the scene picture; the risk areas are all areas contained in an accident scene, and the target detection area is located in the risk area;
performing visual field detection on the risk area through binocular cameras carried by an unmanned aerial vehicle and a ground robot respectively, generating respective three-dimensional point cloud maps, and calibrating a target object in the target detection area; the unmanned aerial vehicle-generated first point cloud map and the robot-generated second point cloud map have different camera postures;
and splicing and fusing the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying characters and/or the highlighted and calibrated target object in a three-dimensional map constructed based on the fused point cloud map.
In another aspect, a ground-air cooperative map construction device is provided, the device comprising:
the first determining module is used for shooting a scene picture through the unmanned aerial vehicle and determining a risk area and a target detection area according to the scene picture; the risk areas are all areas contained in an accident scene, and the target detection area is located in the risk area;
the point cloud map generation module is used for respectively carrying out visual field detection on the risk area through binocular cameras carried by the unmanned aerial vehicle and the ground robot, generating respective three-dimensional point cloud maps and calibrating a target object in the target detection area; the unmanned aerial vehicle-generated first point cloud map and the robot-generated second point cloud map have different camera postures;
and the map building module is used for splicing and fusing the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying characters and/or the highlighted and calibrated target object in a three-dimensional map built on the basis of the fused point cloud map.
In another aspect, a computer device is provided, which includes a processor and a memory, where at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the ground-air coordination map construction method according to any one of the above aspects.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction, at least one program, a code set, or a set of instructions is stored, and the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by a processor to implement the ground-air coordination map construction method according to any one of the above aspects.
In another aspect, a computer program product or computer program is provided, comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the ground-air cooperation map construction method of any one aspect.
The beneficial effects of the technical scheme provided by the application include at least the following. A risk area and a target detection area are determined from a scene picture taken by an unmanned aerial vehicle; the risk area is scanned and detected by binocular cameras carried by the unmanned aerial vehicle and a ground robot to generate three-dimensional point cloud maps under their respective camera postures, and the target objects in the target detection area are identified and calibrated in each point cloud map, facilitating on-site evidence collection. The point cloud data of the two point cloud maps are then spliced and fused into a fused point cloud map, which compensates for the data omissions and incomplete coverage of a point cloud map generated by a single unmanned aerial vehicle or robot, so the constructed three-dimensional map is more accurate. Text and/or highlighted target objects are displayed in the target detection area, allowing off-site personnel to collect evidence from the map imagery and improving operational efficiency.
Drawings
Fig. 1 is a scene schematic diagram illustrating a ground-air cooperation map construction method provided by an embodiment of the application;
fig. 2 is a flowchart of a ground-air collaborative map construction method according to an embodiment of the present application;
fig. 3 is a flowchart of a ground-air cooperation map construction method according to another embodiment of the present application;
FIG. 4 is a risk area determined by picture content identification provided by an embodiment of the present application;
fig. 5 is a schematic diagram illustrating splicing and fusing point cloud data in a first point cloud map and a second point cloud map according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a ground-air cooperation map construction method provided in the embodiment of the present application;
fig. 7 is a block diagram of a ground-air coordination map building apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Reference herein to "a plurality" means two or more. "And/or" describes the association relationship of the associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 shows a scene schematic diagram of the ground-air cooperative map construction method provided by the embodiment of the application. An accident site such as a fire scene or a hazardous-chemical leakage scene is too dangerous for manual handling; yet for timely disposal, on-site evidence collection and similar purposes, instrument equipment must enter the site to scan it, so that a three-dimensional map can be generated from the data received outside the site, helping personnel become familiar with the terrain and collect evidence. In this scheme a ground robot vehicle and an unmanned aerial vehicle are deployed jointly. As shown in the figure, the ground robot 110 and the unmanned aerial vehicle 120 each carry a binocular camera module capable of color and depth image scanning and imaging. A plurality of target detection areas 130 are divided within the accident scene, and target objects 140, including but not limited to buildings, roads, vehicles, facilities, scattered objects and human bodies, exist in the target detection areas 130. The two binocular camera modules are complementary in scanning and detection angle, that is, the line from each camera to the key scanning point is straight and shortest, and each target detection area 130 is scanned and imaged in a surrounding manner so as to form as complete a three-dimensional perception field of view as possible. The unmanned aerial vehicle 120 and the ground robot 110 are controlled by commands sent from the off-site console 150.
Fig. 2 is a flowchart of a ground-air cooperation map construction method according to an embodiment of the present application. The method comprises the following steps:
step 201, shooting a scene picture through an unmanned aerial vehicle, and determining a risk area and a target detection area according to the scene picture.
The off-site console controls the unmanned aerial vehicle to fly above the accident scene and shoot a scene picture. Off-site personnel can observe the scene picture on the console via the transmitted data and analyze it to delimit a risk area and target detection areas in the picture. The risk area is the whole area covered by the accident-scene picture, for example an entire plant area. A target detection area is a key region of the scene picture, such as the core accident area or a building-dense area, where danger is more likely to occur; the ground robot and the unmanned aerial vehicle focus their identification and imaging on these areas.
In one possible implementation, the boundary line of the risk area and the target detection areas can be identified and calibrated from buildings and roads by directly recognizing the picture content. Alternatively, off-site personnel manually mark the boundary line and the internal target detection areas on the picture; a scanning detection instruction is then sent according to the boundary line to control the ground robot vehicle and the unmanned aerial vehicle to enter the risk area for scanning and imaging.
Step 202, performing visual field detection on the risk area through binocular cameras carried by the unmanned aerial vehicle and the ground robot respectively, generating respective three-dimensional point cloud maps, and calibrating a target object in the target detection area.
Since the unmanned aerial vehicle and the robot each carry a binocular camera, each of them can perform visual field detection on the risk area and generate its own three-dimensional point cloud map. However, because the camera postures and intrinsic parameters differ, the first point cloud map generated by the unmanned aerial vehicle and the second point cloud map generated by the ground robot have different camera postures and different viewing angles, and objects that one device could not scan or omitted may be missing from the map it presents. The point cloud data in the two point cloud maps therefore need to be identified and calibrated. The purpose of calibration is to highlight the target objects in the map and to generate and store annotations, so that key objects can serve as evidence when off-site personnel inspect the three-dimensional map.
And 203, splicing and fusing point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying characters and/or a highlighted and calibrated target object in a three-dimensional map constructed based on the fused point cloud map.
As noted above, the viewing angles of the first point cloud map and the second point cloud map differ, so incomplete scanning and omission of objects may occur, and a calibrated target object may also be missed, which affects the evidence-collection result. In this scheme the point cloud data of the two three-dimensional point cloud maps are therefore fused: the two sets of point cloud data complement each other, and missing point cloud data are added back into the fused point cloud map. Finally, a three-dimensional map of the accident scene is constructed based on the generated fused point cloud map, and text and/or the highlighted calibrated target objects are displayed in the map.
In conclusion, the risk area and the target detection area are determined by the unmanned aerial vehicle from the scene picture; the risk area is scanned and detected by the binocular cameras carried by the unmanned aerial vehicle and the ground robot to generate three-dimensional point cloud maps under their respective camera postures, and the target objects in the target detection area are identified and calibrated in each point cloud map, facilitating on-site evidence collection. The point cloud data of the two point cloud maps are then spliced and fused into a fused point cloud map, which compensates for the data omissions and incomplete coverage of a point cloud map generated by a single unmanned aerial vehicle or robot, so the constructed three-dimensional map is more accurate. Text and/or highlighted target objects are displayed in the target detection area, allowing off-site personnel to collect evidence from the map imagery and improving operational efficiency.
Fig. 3 is a flowchart of a ground-air cooperation map construction method according to another embodiment of the present application. The method comprises the following steps:
step 301, acquiring a scene picture, identifying content, and calibrating a boundary line and an accident grade of a risk area according to an accident scene range.
In one possible implementation, after the unmanned aerial vehicle transmits back an accident-scene picture, the console analyzes and recognizes the picture content and judges the area covered by the accident. Taking a fire as an example, the boundary line of the risk area is delimited by recognizing the fire-point coverage area and judging its distance to streets and buildings in the picture, and the accident level is then set according to the size of the coverage area. In a hazardous-gas leakage scene, the area, the boundary line, the accident level and so on can be delimited according to the density and number of buildings and facilities. Alternatively, off-site personnel manually draw the boundary line of the accident area according to the picture content and determine the accident level.
The risk area and its boundary line can be identified using a polygonal building extraction technique based on frame field learning: the shape, position and density of the buildings in the picture are recognized, the extent of the risk area is delimited from the recognized building shapes and positions, and the corresponding boundary line is then determined.
And step 302, determining the detection number and the detection radius of the target detection area according to the longest edge and the shortest edge of the boundary line and the accident level.
Fig. 4 shows a risk area determined by picture content identification. After the risk area is determined, its area and the length of each boundary edge are calculated, and the shortest edge La and the longest edge Lb are found. The longest and shortest edges are determined in order to later set the detection radius of the target detection areas, while the area size and building density are used to determine the accident level. In one possible implementation the accident level is divided into three grades, and different grades require different numbers of target detection areas and different detection radii.
When the accident Level is 3, the detection number n is more than 8, and the detection radius La < r < Lb;
when the accident Level is 2, the detection number n is greater than 12, and the detection radius La < r <1.2 Lb;
when the accident Level is 1, the detection number n is more than 18, and the detection radius La < r <1.5 Lb;
wherein La represents the shortest side of the boundary line, and Lb represents the longest side; and different target detection areas correspond to different detection radiuses.
The more severe the accident level, the larger the accident area, and the more target detection areas requiring key attention should be delimited, with the target objects within them detected.
In this scheme the circle-center coordinates and the detection radius are selected according to the coordinate positions and density of buildings in the accident scene: for example, a circle-center coordinate is placed at a building or at an equipment-dense area, and the detection radius is set according to the area occupied by that building or equipment, subject to the level requirement above.
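To make the level-dependent rules above concrete, the following is a minimal Python sketch; only the count thresholds and radius bounds come from the scheme, while the function names, the single mid-range radius and the idea of ranking candidate circle centers by building density are illustrative assumptions.

```python
# Sketch of the level-dependent detection-area rules described above.
# Only the thresholds (n, radius bounds) follow the scheme; function names and
# the choice of one mid-range radius are illustrative assumptions.

def detection_parameters(level, la, lb):
    """Return (minimum number of detection areas, (r_min, r_max)) for an accident level.

    la: shortest boundary edge, lb: longest boundary edge (same length unit).
    """
    if level == 3:
        return 9, (la, lb)            # n > 8,  La < r < Lb
    if level == 2:
        return 13, (la, 1.2 * lb)     # n > 12, La < r < 1.2 * Lb
    if level == 1:
        return 19, (la, 1.5 * lb)     # n > 18, La < r < 1.5 * Lb
    raise ValueError("accident level must be 1, 2 or 3")


def plan_detection_areas(level, la, lb, candidate_centers):
    """Assign a circle center and radius to each target detection area.

    candidate_centers: (x, y) positions ranked by building/equipment density,
    an assumed input produced by the picture-content analysis.
    """
    n_min, (r_min, r_max) = detection_parameters(level, la, lb)
    radius = 0.5 * (r_min + r_max)    # one admissible choice inside (r_min, r_max)
    return [(cx, cy, radius) for cx, cy in candidate_centers[:n_min]]


# Example: a Level-2 accident scene with La = 80 m and Lb = 150 m.
areas = plan_detection_areas(2, 80.0, 150.0, [(x * 25.0, x * 10.0) for x in range(20)])
```

In the scheme each target detection area may use its own radius within the allowed range; the single mid-range radius here is only a simplification.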
And 303, controlling the unmanned aerial vehicle to perform surrounding scanning detection in the first camera posture according to the position coordinates of the circle center of the target detection area, performing global scanning detection on other areas in the risk area, and generating a first point cloud map based on the scanned image.
After the target detection areas are set, the console sends a control instruction to the unmanned aerial vehicle, controlling it to perform surrounding scanning detection in the first camera posture according to the position coordinates of the circle center of each target detection area. The unmanned aerial vehicle must set the scanning angle of its binocular camera to tilt downwards at a certain angle; this angle must correspond to the upward viewing angle of the robot so that a straight-line observation field of view can be formed. The unmanned aerial vehicle scans along a default or pre-set path. For a target detection area, it takes the observation point as the center of the circular coordinates and performs surrounding scanning and shooting within the detection radius to generate a three-dimensional observation of the objects.
To generate the point cloud maps, this scheme uses a binocular camera together with the ORB-SLAM3 algorithm for scanning and synthesis; real-time posture estimation during motion provides localization within the map.
The method specifically comprises the following steps:
and step 303a, acquiring two images scanned by the first binocular camera at the same time, and calculating to obtain depth map information of the object in the scanning area according to the position deviation between the pixels of the images.
Within the same time period, the ordinary camera and the depth camera of the binocular module each capture an image, namely a color image and a depth image. Because there is a positional offset between the viewing angles of the camera's two lenses, the depth map information of the objects in the scanning area can be computed from the positional deviation (disparity) between corresponding pixels of the two images. The depth map information is then used to build the three-dimensional point cloud map.
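As a concrete illustration of computing depth from the pixel offset between the two lenses, here is a minimal sketch using OpenCV stereo matching on a rectified image pair; the focal length, baseline, matcher settings and file names are illustrative assumptions, not values from the scheme.

```python
import cv2
import numpy as np

# Sketch: depth from the pixel offset (disparity) between the two rectified
# images of a binocular camera. Focal length fx (pixels) and baseline B (metres)
# come from calibration; the matcher settings below are illustrative.
fx, baseline = 700.0, 0.12

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = fx * baseline / disparity[valid]   # Z = f * B / d
```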
And 303b, performing three-dimensional reconstruction on the object in the scanning area based on the depth map information, the color map information, the first camera posture and the camera internal reference data to obtain a three-dimensional first point cloud map.
After the depth map information is acquired, the objects in the scanning area are three-dimensionally reconstructed from the depth map information, the color map information, the first camera posture and the camera intrinsic parameters, yielding the three-dimensional first point cloud map. The ORB-SLAM3 method is used to realize localization and mapping during motion.
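The back-projection at the heart of this reconstruction step can be sketched as follows with NumPy: each pixel is lifted into camera coordinates using the depth map and the intrinsic matrix, then transformed into the map frame with the camera posture. This is a simplified stand-in for the ORB-SLAM3-based mapping named above, and the function and variable names are assumptions.

```python
import numpy as np

def depth_to_world_points(depth, color, K, T_world_cam):
    """Back-project a depth map into a colored world-frame point cloud.

    depth:       (H, W) depth in metres
    color:       (H, W, 3) RGB image aligned with the depth map
    K:           3x3 camera intrinsic matrix
    T_world_cam: 4x4 pose of the camera in the world (map) frame
    """
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]

    # Pixel grid -> camera-frame coordinates using the pinhole model.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    # Homogeneous points, transformed into the map frame by the camera pose.
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    pts_world = (T_world_cam @ pts_cam.T).T[:, :3]

    valid = depth.reshape(-1) > 0           # drop pixels with no depth
    return pts_world[valid], color.reshape(-1, 3)[valid]
```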
And step 304, controlling the robot to perform surrounding scanning detection in a second camera posture according to the position coordinates of the circle center of the target detection area, performing global scanning detection on other areas in the risk area, and generating a second point cloud map based on the scanned image.
Similar to step 303, the robot performs global scanning detection on the ground, and for each target detection area it likewise performs surrounding scanning around the circular coordinates, so that the target objects and the on-site environment are scanned as completely as possible. The robot's camera view is directed obliquely upwards, complementary to the angle of the unmanned aerial vehicle's camera. For example, with the unmanned aerial vehicle at a depression angle of 40 degrees from the vertical and the robot at an elevation angle of 60 degrees from the horizontal, the robot and the scanned object form the shortest straight-line distance, and the three-dimensional structure of the object can be restored to the greatest extent. A second point cloud map is constructed based on the scanned areas and objects.
It should be noted that the robot and the drone scan according to their respective path plans, but do not scan the same area or object synchronously.
And 305, respectively identifying objects in the target detection areas in the generated first point cloud map and the second point cloud map, calibrating point cloud data of the identified target objects, and associating to generate a first target label.
The purpose of identifying target objects in the point cloud maps is to facilitate on-site evidence collection and survey. In this scheme, elements such as ignition points, roads, buildings, facilities, signs, marking lines, vehicles, trace attachments, pavement traces, scattered objects and human bodies in the scene are taken as target detection objects and are given key attention and identification. In one possible implementation, a YOLOv5 intelligent recognition algorithm performs the recognition, the point cloud data of each target object are calibrated based on the recognition result, and a first target label is generated by association with the geographical location information. In addition, a label list can be generated from the first target labels: when browsing the first or second point cloud map, off-site personnel can quickly locate a target object by clicking its first target label in the list, saving the time of manual browsing and searching.
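A minimal sketch of this detection-and-labeling step is given below, using the public ultralytics/yolov5 hub model; the mapping from a detection box to a map coordinate is delegated to an assumed helper, pixel_to_map_coordinate(), and the label dictionary layout is an illustrative assumption rather than the exact label format of the scheme.

```python
import torch

# Illustrative sketch: detect target objects in a scanned color frame with a
# public YOLOv5 model, then attach a "first target label" to each detection.
# The geo-referencing step (pixel -> map coordinate via depth map and posture)
# is represented by an assumed helper, pixel_to_map_coordinate().
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # pretrained COCO weights

def label_targets(frame, depth, T_world_cam, K, frame_id):
    results = model(frame)
    labels = []
    for *box, conf, cls in results.xyxy[0].tolist():
        u = int((box[0] + box[2]) / 2)        # detection box center pixel
        v = int((box[1] + box[3]) / 2)
        xyz = pixel_to_map_coordinate(u, v, depth, K, T_world_cam)  # assumed helper
        labels.append({
            "tag_id": f"{frame_id}-{len(labels)}",
            "class": model.names[int(cls)],
            "confidence": conf,
            "map_position": xyz,              # location used for the label list
        })
    return labels
```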
And step 306, carrying out point cloud matching association based on the position information and the camera attitude information of the point cloud data in the first point cloud map and the second point cloud map.
Because the first point cloud map and the second point cloud map are obtained under different camera posture data, they have different observation viewing angles, and individual objects may be occluded or incompletely scanned, leaving the point cloud data incomplete; the two sets of point cloud data therefore need to be matched and fused. Point cloud registration uses the ICP (Iterative Closest Point) algorithm to match and associate the two point cloud data sets. The aim is to construct a new coordinate system and place the two point clouds in it together; the main steps are sampling the original point cloud data, determining an initial set of corresponding points, removing wrong corresponding point pairs, and solving the coordinate transformation to obtain the target coordinate system, after which the two point cloud data sets are associated and matched.
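As an illustration of this registration step, the following minimal sketch uses Open3D's point-to-point ICP; the voxel size, correspondence distance and identity initial guess are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

def register_point_clouds(source_pcd, target_pcd, voxel=0.1, max_dist=0.5):
    """Estimate the transform aligning the UAV map (source) to the robot map (target)."""
    # Downsample (the "sampling" step) before matching.
    src = source_pcd.voxel_down_sample(voxel)
    tgt = target_pcd.voxel_down_sample(voxel)

    # Point-to-point ICP: find correspondences, reject outliers, solve the transform.
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 matrix taking source into the target frame
```

The returned transform can then be applied to the first point cloud map so that both clouds share one coordinate system before the fusion described in step 307.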
Step 307, determining an overlapping area in the two point cloud maps, and performing weighted fusion on point cloud data of the overlapping area; and carrying out feature matching on the point cloud data of the non-overlapping area, and screening a point cloud data set to be fused according to a matching feature difference value.
After the association and matching are completed, the two point cloud data sets can be fused; the fusion process involves segmentation and screening of the point clouds. First, the overlapping area is judged by Euclidean distance, and the overlapping regions of the two point cloud maps are found. Under normal conditions a large overlapping area should exist between the two point cloud maps, with non-overlapping point cloud data appearing only for objects in special areas. For the separated overlapping regions, the point cloud data are fused by weighting; the weighting coefficients are determined by the scene and parameters, for example 0.4 for the unmanned aerial vehicle and 0.6 for the robot, because images acquired from a top-down angle are less accurate than the data the robot scans within the environment. A flow chart of this process is shown in fig. 5.
For the non-overlapping areas, feature matching must be performed on the two point cloud data sets. Clustering is generally performed using the feature attributes of the point clouds: feature extraction or transformation is applied to each point cloud or to local spatial point cloud data to obtain multiple attributes, and methods based on normal vectors, density, distance, elevation, intensity and the like are used to segment point clouds with different attributes. The point cloud data set to be fused is then screened according to the matching feature differences; it contains the point cloud data and coordinate information that satisfy the conditions. For example, point cloud data of trees scanned on the ground carry large noise and are removed, while cars, injured persons and the like on the road meet the matching requirement and are added to the point cloud data set to be fused.
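The overlap test, weighted fusion and screening just described can be sketched as follows; the 0.4/0.6 weights follow the example above, while the nearest-neighbor distance threshold, the use of SciPy's KD-tree and the omission of the full attribute-based screening are illustrative simplifications.

```python
import numpy as np
from scipy.spatial import cKDTree

def fuse_point_clouds(uav_pts, robot_pts, overlap_dist=0.2, w_uav=0.4, w_robot=0.6):
    """Fuse two registered point clouds (N x 3 arrays in the common coordinate system).

    Points whose Euclidean nearest neighbor in the other cloud lies within
    overlap_dist form the overlapping area and are merged by weighted averaging;
    the remaining UAV points form the set to be fused (the attribute-based
    feature screening described above is omitted here for brevity).
    """
    tree = cKDTree(robot_pts)
    dist, idx = tree.query(uav_pts, k=1)
    overlap = dist < overlap_dist

    # Weighted fusion in the overlap: the ground robot's data are weighted higher
    # (0.6 vs 0.4) because top-down UAV views are treated as less accurate.
    fused_overlap = w_uav * uav_pts[overlap] + w_robot * robot_pts[idx[overlap]]

    # Robot points already represented by a fused point are dropped to avoid duplicates.
    matched_robot = np.zeros(len(robot_pts), dtype=bool)
    matched_robot[idx[overlap]] = True

    # Non-overlapping UAV points are the candidate set to be fused after screening.
    to_fuse = uav_pts[~overlap]

    return np.vstack([robot_pts[~matched_robot], fused_overlap, to_fuse])
```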
And 308, generating a fusion point cloud map based on the weighted and fused point cloud data, the point cloud data set to be fused and the corresponding position information.
Based on the newly computed coordinate system, the weighted-and-fused point cloud data and the point cloud data set to be fused, each point cloud datum is placed in the corresponding region of the coordinate system according to its position information, generating the fused point cloud map. The fused point cloud map aggregates the point cloud data of both point cloud maps.
Step 309, re-determining and identifying the point cloud data of the target object in the fusion point cloud map, and generating a second target label based on the point cloud data association.
After the fused point cloud map is generated, the point cloud data need to be calibrated again because the coordinates have changed: the point cloud data of the target objects are determined, second target labels are established by association with the position information, and a corresponding label list can be generated, facilitating editing and inspection by personnel.
And 310, constructing a three-dimensional map based on the fused point cloud map, and displaying characters and/or highlighted target objects in the map based on the second target label.
In the embodiment of the application, the boundary line and risk level of the risk area of an accident scene are calibrated by recognizing the content of the picture shot by the unmanned aerial vehicle, and the number of target detection areas, the detection radius, the circle center coordinates and so on are then determined from the computed longest edge, shortest edge and risk level of the boundary line, so that no manual analysis or operation is required and the process is more intelligent. The console controls the unmanned aerial vehicle and the robot to scan cooperatively from the ground and the air by sending instructions; with the set camera posture data they perform global scanning detection and surrounding scanning detection of the target detection areas and generate their respective three-dimensional point cloud maps. The two point cloud maps can be observed and analyzed on the off-site console, and target labels in the target detection areas are calibrated in the point cloud maps according to the position information and the point cloud data extracted for each target, which facilitates evidence collection and inspection by personnel;
for point cloud maps with different viewing angles, the point cloud data are matched and associated: point cloud data in the overlapping area are fused by weighting, point cloud data in the non-overlapping areas undergo feature matching, and the qualifying point cloud data set to be fused is clustered and screened out. A fused point cloud map is then generated from the weighted-fused point cloud data, the point cloud data set to be fused and the matched coordinates, so that the point cloud data of the different point cloud maps complement one another, data loss is avoided and the accuracy of the constructed three-dimensional map is improved. In addition, text and/or highlighted target objects are displayed in the final fused three-dimensional map, which makes inspection and evidence collection by off-site personnel easier and improves the efficiency of accident-scene handling.
Fig. 6 is a schematic structural diagram of a ground-air cooperation map construction method provided in the embodiment of the present application. The control console transmits control instructions to the unmanned aerial vehicle and the robot equipment to carry out total control. The unmanned aerial vehicle device carries out risk assessment through a scene picture and demarcates a plurality of ROI (target detection areas). And the ground robot and the unmanned aerial vehicle respectively carry out global scanning detection and target detection of the ROI area according to path planning. In the scanning process of the unmanned aerial vehicle and the robot, the ORB-SLAM3 method is adopted to realize the functions of positioning and mapping in motion, and the target object in the ROI area is identified and subjected to data analysis so as to establish a target label. And the generated first point cloud map and the second point cloud map are spliced and fused through point clouds to obtain a fused point cloud map, and finally, the constructed three-dimensional map and the calibrated target object are displayed on the console.
Fig. 7 is a block diagram of a ground-air coordination map building apparatus according to an embodiment of the present application. The device includes:
the first determining module 701 is used for shooting a scene picture through an unmanned aerial vehicle and determining a risk area and a target detection area according to the scene picture; the risk areas are all areas contained in an accident scene, and the target detection area is located in the risk area;
a point cloud map generation module 702, configured to perform visual field detection on the risk area through binocular cameras carried by the unmanned aerial vehicle and the ground robot, generate respective three-dimensional point cloud maps, and calibrate a target object in the target detection area; the unmanned aerial vehicle-generated first point cloud map and the robot-generated second point cloud map have different camera postures;
the map building module 703 is configured to splice and fuse the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and display text and/or the highlighted target object in a three-dimensional map built based on the fused point cloud map.
In an embodiment of the present application, there is also provided a computer device, including a processor and a memory; the memory stores at least one instruction, and the at least one instruction is used for being executed by the processor to realize the ground-air cooperation map construction method provided by the above method embodiments.
In an embodiment of the present application, a computer program product or a computer program is also provided, which includes computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the ground-air cooperation map construction method of any one aspect.
The above is a description of the preferred embodiment of the present invention. It should be understood that the invention is not limited to the particular embodiments described above; devices and structures not described in detail are understood to be implemented in a manner common in the art. Any person skilled in the art can make many possible variations and modifications, or derive equivalent embodiments, without departing from the technical solution of the invention and without affecting its essence. Therefore, any simple modification, equivalent change or refinement made to the above embodiments according to the technical essence of the present invention still falls within the scope of protection of the technical solution of the present invention, provided it does not depart from the content of that technical solution.

Claims (10)

1. A ground-air cooperation map construction method is characterized by comprising the following steps:
shooting a scene picture through an unmanned aerial vehicle, and determining a risk area and a target detection area according to the scene picture; the risk areas are all areas contained in an accident scene, and the target detection area is located in the risk area;
performing visual field detection on the risk area through binocular cameras carried by an unmanned aerial vehicle and a ground robot respectively, generating respective three-dimensional point cloud maps, and calibrating a target object in the target detection area; the unmanned aerial vehicle-generated first point cloud map and the robot-generated second point cloud map have different camera postures;
and splicing and fusing the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying characters and/or the highlighted and calibrated target object in a three-dimensional map constructed based on the fused point cloud map.
2. The method of claim 1, wherein the drone and the robot have a first binocular camera and a second binocular camera mounted thereon, respectively;
carry out visual field detection to the risk area through the binocular camera that unmanned aerial vehicle and ground robot carried on respectively, generate respective three-dimensional point cloud map, and to target object in the target detection area is markd, include:
controlling the unmanned aerial vehicle to perform surrounding scanning detection in a first camera posture according to the position coordinates of the circle center of the target detection area, performing global scanning detection on other areas in the risk area, and generating a first point cloud map based on a scanned image;
the control robot carries out surrounding scanning detection in a second camera posture according to the position coordinates of the circle center of the target detection area, carries out global scanning detection on other areas in the risk area and generates a second point cloud map based on a scanning image;
respectively identifying objects in the target detection areas in the generated first point cloud map and the second point cloud map, calibrating the point cloud data of the identified target objects, and generating a first target label according to geographical position information association.
3. The method of claim 2, wherein generating the first point cloud map based on the scanned image comprises:
acquiring two images scanned by the first binocular camera at the same time, and calculating depth map information of an object in a scanning area according to the position deviation between pixels of the images;
and performing three-dimensional reconstruction on the object in the scanning area based on the depth map information, the color map information, the first camera posture and the camera internal reference data to obtain the three-dimensional first point cloud map.
4. The method according to claim 3, wherein the splicing and fusing the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying text and/or the highlighted target object in a three-dimensional map constructed based on the fused point cloud map comprises:
performing point cloud matching association on the basis of the position information and the camera attitude information of point cloud data in the first point cloud map and the second point cloud map;
determining an overlapping area in two point cloud maps, and performing weighted fusion on point cloud data of the overlapping area; carrying out feature matching on the point cloud data of the non-overlapping area, and screening a point cloud data set to be fused according to a matching feature difference value;
generating the fusion point cloud map based on the weighted and fused point cloud data, the point cloud data set to be fused and the corresponding position information;
re-determining and identifying point cloud data of the target object in the fusion point cloud map, and generating a second target label based on point cloud data association;
and constructing a three-dimensional map based on the fused point cloud map, and displaying characters and/or highlighting the calibrated target object in the map based on the second target label.
5. The method of claim 1, wherein the shooting of a scene picture through the unmanned aerial vehicle and the determining of a risk area and a target detection area according to the scene picture comprises:
acquiring a scene picture, identifying the content, and calibrating the boundary line and the accident grade of the risk area according to the accident scene range;
determining the detection number and the detection radius of the target detection area according to the longest edge and the shortest edge of the boundary line and the accident grade; and the target detection range is determined based on the position coordinates of each circle center and the detection radius.
6. The method of claim 5, wherein the number of detections and the radius of detection of the target detection region are determined according to:
when the accident Level is 3, the detection number n is more than 8, and the detection radius La < r < Lb;
when the accident Level is 2, the detection number n is greater than 12, and the detection radius La < r <1.2 Lb;
when the accident Level is 1, the detection number n is more than 18, and the detection radius La < r <1.5 Lb;
wherein La represents the shortest side of the boundary line, and Lb represents the longest side; and different target detection areas correspond to different detection radiuses.
7. The method according to any one of claims 1 to 6, wherein the unmanned aerial vehicle and the robot use a YOLO algorithm to perform detection and identification on the target object in the target detection area;
recognizing and determining the shape, position and density of a building in a picture through a polygonal building extraction technology of frame field learning, determining a boundary line of a risk area according to the recognized shape and position of the building, and selecting a circle center coordinate and a detection radius according to the recognized position and density of the building.
8. An air-ground cooperation map construction device, characterized in that the device comprises:
the first determining module is used for shooting a field picture through the unmanned aerial vehicle and determining a risk area and a target detection area according to the field picture; the risk areas are all areas contained in an accident scene, and the target detection area is located in the risk area;
the point cloud map generation module is used for respectively carrying out visual field detection on the risk area through binocular cameras carried by the unmanned aerial vehicle and the ground robot, generating respective three-dimensional point cloud maps and calibrating a target object in the target detection area; the unmanned aerial vehicle-generated first point cloud map and the robot-generated second point cloud map have different camera postures;
and the map building module is used for splicing and fusing the point cloud data in the first point cloud map and the second point cloud map to generate a fused point cloud map, and displaying characters and/or the highlighted and calibrated target object in a three-dimensional map built on the basis of the fused point cloud map.
9. A computer device comprising a processor and a memory, wherein the memory stores at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the ground-air coordination map construction method according to any one of claims 1 to 7.
10. A computer-readable storage medium, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the computer-readable storage medium, and the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by a processor to implement the ground-air cooperation map construction method according to any one of claims 1 to 7.
CN202210567453.7A 2022-05-24 2022-05-24 Ground-air cooperative map construction method, device, equipment and storage medium Withdrawn CN114964275A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210567453.7A CN114964275A (en) 2022-05-24 2022-05-24 Ground-air cooperative map construction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210567453.7A CN114964275A (en) 2022-05-24 2022-05-24 Ground-air cooperative map construction method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114964275A true CN114964275A (en) 2022-08-30

Family

ID=82985726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210567453.7A Withdrawn CN114964275A (en) 2022-05-24 2022-05-24 Ground-air cooperative map construction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114964275A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311354A (en) * 2022-09-20 2022-11-08 中国铁建电气化局集团有限公司 Foreign matter risk area identification method, device, equipment and storage medium
CN115311354B (en) * 2022-09-20 2024-01-23 中国铁建电气化局集团有限公司 Foreign matter risk area identification method, device, equipment and storage medium
CN116772887A (en) * 2023-08-25 2023-09-19 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium
CN116772887B (en) * 2023-08-25 2023-11-14 北京斯年智驾科技有限公司 Vehicle course initialization method, system, device and readable storage medium
CN116878468A (en) * 2023-09-06 2023-10-13 山东省国土测绘院 Information acquisition system for mapping
CN116878468B (en) * 2023-09-06 2023-12-19 山东省国土测绘院 Information acquisition system for mapping
CN117523431A (en) * 2023-11-17 2024-02-06 中国科学技术大学 Firework detection method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN114964275A (en) Ground-air cooperative map construction method, device, equipment and storage medium
CN112793564B (en) Autonomous parking auxiliary system based on panoramic aerial view and deep learning
CN111275759B (en) Transformer substation disconnecting link temperature detection method based on unmanned aerial vehicle double-light image fusion
US7528938B2 (en) Geospatial image change detecting system and associated methods
EP2917874B1 (en) Cloud feature detection
US7630797B2 (en) Accuracy enhancing system for geospatial collection value of an image sensor aboard an airborne platform and associated methods
US7603208B2 (en) Geospatial image change detecting system with environmental enhancement and associated methods
US8433457B2 (en) Environmental condition detecting system using geospatial images and associated methods
KR101261409B1 (en) System for recognizing road markings of image
CN113870343A (en) Relative pose calibration method and device, computer equipment and storage medium
CN114004977B (en) Method and system for positioning aerial data target based on deep learning
US8547375B2 (en) Methods for transferring points of interest between images with non-parallel viewing directions
CN112184812B (en) Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system
CN113834492A (en) Map matching method, system, device and readable storage medium
CN111709994B (en) Autonomous unmanned aerial vehicle visual detection and guidance system and method
CN113378754B (en) Bare soil monitoring method for construction site
JP2020015416A (en) Image processing device
CN117557931B (en) Planning method for meter optimal inspection point based on three-dimensional scene
JP2024133575A (en) Visual inspection support system and visual inspection support method
CN114326794A (en) Curtain wall defect identification method, control terminal, server and readable storage medium
CN108195359B (en) Method and system for acquiring spatial data
JP3437671B2 (en) Landmark recognition device and landmark recognition method
CN112232272A (en) Pedestrian identification method based on fusion of laser and visual image sensor
Garcia et al. A Proposal to Integrate ORB-Slam Fisheye and Convolutional Neural Networks for Outdoor Terrestrial Mobile Mapping
Tian et al. Fusion of Stereo Aerial Images and Official Surveying Data for Mapping Curbstones Using AI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220830