
CN114743021A - Fusion method and system of power transmission line image and point cloud data - Google Patents


Info

Publication number
CN114743021A
CN114743021A (application CN202210400106.5A)
Authority
CN
China
Prior art keywords
point cloud
image
point
pixel
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210400106.5A
Other languages
Chinese (zh)
Inventor
戴永东
王茂飞
毛锋
姚建光
高超
吴奇伟
王神玉
仲坚
张泽
鞠玲
翁蓓蓓
王星媛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Jiangsu Electric Power Co ltd Innovation And Innovation Center
State Grid Jiangsu Electric Power Co Ltd
Taizhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Original Assignee
State Grid Jiangsu Electric Power Co ltd Innovation And Innovation Center
State Grid Jiangsu Electric Power Co Ltd
Taizhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Jiangsu Electric Power Co ltd Innovation And Innovation Center, State Grid Jiangsu Electric Power Co Ltd, Taizhou Power Supply Co of State Grid Jiangsu Electric Power Co Ltd filed Critical State Grid Jiangsu Electric Power Co ltd Innovation And Innovation Center
Priority to CN202210400106.5A priority Critical patent/CN114743021A/en
Publication of CN114743021A publication Critical patent/CN114743021A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30108: Industrial image inspection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00: Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Optics & Photonics (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method and a system for fusing an optical image of a power transmission line with point cloud data. The method comprises the following steps: acquiring an optical image and point cloud data of a target power transmission line; imaging the point cloud data to obtain a point cloud image; in response to user operations, selecting image feature points from the optical image and selecting the corresponding point cloud feature points from the point cloud image, where each point cloud feature point is associated with the spatial coordinates of the point cloud data; marking the image feature points and their corresponding point cloud feature points and establishing a mapping relation to form reference data pairs; and constructing, from the reference data pairs, a coordinate conversion relation between the optical image and the point cloud data, from which the correspondence between each pixel coordinate in the optical image and the spatial coordinates of the point cloud is obtained. The method fuses the point cloud data with the image data to yield data carrying spatial position information.

Description

Method and system for fusing power transmission line image and point cloud data
Technical Field
The invention relates to the technical field of computers, in particular to a method and a system for fusing an image and point cloud data of a power transmission line.
Background
The safe operation of a power transmission line underpins the stable delivery of electric power. Transmission lines carry large capacity at high voltage, are exposed for long periods to erosion by severe weather such as wind, frost, rain and snow, are distributed over wide areas, and cross complex terrain along the way. Faults such as corrosion, abrasion and outages can trigger complex cascading failures of the power grid, leading in turn to large-area grid shutdown and blackout. Meanwhile, hidden tree-obstacle dangers exist in transmission corridors, so regular inspection of transmission equipment is vital. Traditional inspection methods are affected by the inspectors' personal conditions and measurement angles, which causes large measurement errors, low detection efficiency and low precision, and cannot meet the operational demands of a distribution network whose scale keeps expanding.
Visible-light two-dimensional images collected by unmanned aerial vehicles can only recover the three-dimensional power scene through a reconstruction process, which may deviate from the actual situation. The technical advantage of lidar point cloud mapping is that it accurately restores the spatial information of the power scene and supports distance measurement; its drawbacks are that it cannot restore the color information of the scene, its visualization effect is poor, and accurate classification of power objects is difficult using lidar point cloud data alone.
Therefore, when a power target needs to be inspected, a method fusing visible-light image data and lidar point cloud data can be adopted. For example, patent document CN113111751A discloses a three-dimensional target detection method that adaptively fuses visible light and point cloud data: it takes a camera image and an original point cloud image as input, and adaptively fuses point cloud features with image features through a dual-stream region proposal network. However, the data fusion algorithm in that method is complex and its accuracy is low.
Disclosure of Invention
The invention provides a method and a system for fusing a power transmission line image with point cloud data, which conveniently determine the conversion relation between two-dimensional image data and point cloud data, so that a user can accurately measure distances on the power transmission line using an intuitive two-dimensional image; the method is simple, highly accurate and reliable.
A method for fusing an image and point cloud data of a power transmission line comprises the following steps:
acquiring an optical image and point cloud data of a target power transmission line;
carrying out imaging processing on the point cloud data to obtain a point cloud image;
responding to user operation, selecting image characteristic points from the optical image, and determining point cloud characteristic points corresponding to the image characteristic points from the point cloud image, wherein the point cloud characteristic points correspond to the spatial coordinates of point cloud data;
marking the image characteristic points and the corresponding point cloud characteristic points and establishing a mapping relation to form a reference data pair;
and according to the reference data pair, establishing a coordinate conversion relation between the optical image and the point cloud data, and obtaining a corresponding relation between each pixel coordinate in the optical image and a space coordinate of the point cloud in the point cloud data based on the coordinate conversion relation.
Further, collecting an optical image and point cloud data of the target power transmission line comprises:
acquiring an optical image of the target power transmission line under a first visual angle through a shooting device at a first time period and a first position;
and acquiring point cloud data of the target power transmission line under a second visual angle through a point cloud acquisition device in a second time period and a second position, wherein the first time period is the same as or different from the second time period, the first position is the same as or different from the second position, and the first visual angle is the same as or different from the second visual angle.
Further, selecting image feature points from the optical image in response to a user operation, includes:
taking the pixel coordinates corresponding to the apex of a utility pole and/or a transmission tower as image feature points;
and sequentially selecting, from left to right, a plurality of pixel coordinates from the pixels corresponding to the ground as image feature points.
Further, responding to user operation, selecting image feature points from the optical image, and determining point cloud feature points corresponding to the image feature points from the point cloud image, including:
responding to a click operation of a user, and determining a first click coordinate corresponding to the click operation in the optical image and a second click coordinate corresponding to the click operation in the point cloud image;
determining a first candidate area according to the first click coordinate, and determining a second candidate area according to the second click coordinate;
determining a first sharp point from the first candidate region and a second sharp point from the second candidate region;
and taking the first sharp point as the image characteristic point and the second sharp point as the point cloud characteristic point.
Further, the first candidate region is the region within a preset radius centered on the first click coordinate;
the second candidate region is the region within a preset radius centered on the second click coordinate;
determining a first sharp point from the first candidate region and a second sharp point from the second candidate region, comprising:
calculating an average normal included angle between each pixel in the first candidate region and a k neighborhood of the pixel, and taking the pixel with the largest average normal angle as the first sharp point;
and calculating the average normal included angle between each pixel in the second candidate region and the k neighborhood thereof, and taking the pixel with the maximum average normal angle as the second sharp point.
Further, the average normal angle between each pixel and its k-neighborhood is calculated by the following formula:

$$\bar{\alpha} = \frac{1}{k}\sum_{j=1}^{k}\alpha_j$$

where $\alpha_j$ is the angle between the normal vector of the pixel under calculation and the normal vector of the j-th pixel in its k-neighborhood, and $\bar{\alpha}$ is the average normal-vector angle of the pixel under calculation;

the k-neighborhood is the region formed by the k pixels with the smallest Euclidean distance to the pixel under calculation.
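As an illustrative sketch (not part of the patent text), the sharp-point selection described above can be implemented roughly as follows in pure Python; the point/normal layout and all function names here are hypothetical:

```python
import math

def angle_between(n1, n2):
    # Angle (radians) between two unit normal vectors, clamped for safety.
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))

def k_neighborhood(points, idx, k):
    # Indices of the k points nearest (Euclidean) to points[idx], excluding itself.
    dists = sorted(
        (math.dist(points[idx][:2], p[:2]), j)
        for j, p in enumerate(points) if j != idx
    )
    return [j for _, j in dists[:k]]

def mean_normal_angle(points, normals, idx, k):
    # Average angle between the normal at idx and the normals of its k neighbors.
    nbrs = k_neighborhood(points, idx, k)
    return sum(angle_between(normals[idx], normals[j]) for j in nbrs) / k

def sharpest_point(points, normals, candidate_idx, k=4):
    # The candidate whose mean normal angle is largest is taken as the sharp point.
    return max(candidate_idx, key=lambda i: mean_normal_angle(points, normals, i, k))
```

A pixel whose normal deviates most from its neighborhood (e.g. a pole apex against flat ground) maximizes the average normal angle and is therefore selected.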
Further, according to the reference data pair, constructing a coordinate transformation relationship between the optical image and the point cloud data, including:
establishing a transformation model by using internal parameters of a shooting device;
inputting a plurality of groups of datum data into the transformation model, and calculating a translation vector from a shooting device coordinate system to a point cloud three-dimensional space coordinate system;
and obtaining a coordinate conversion relation between the optical image and the point cloud data according to the translation vector and the internal reference of the shooting device.
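A minimal numerical sketch of this step (an assumption-laden illustration, not the patent's literal solver): with the rotation matrix R and the intrinsics known, each reference data pair contributes two equations that are linear in the translation vector T, so T can be recovered by least squares. All names are hypothetical:

```python
def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 linear system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def estimate_translation(pairs, R, fx, fy, u0, v0):
    # pairs: list of ((u, v), (Xw, Yw, Zw)) reference data pairs.
    # With p = R*Xw + T, the pinhole model gives, per pair, two equations
    # linear in T = (tx, ty, tz):
    #   fx*tx - (u - u0)*tz = (u - u0)*(r3 . Xw) - fx*(r1 . Xw)
    #   fy*ty - (v - v0)*tz = (v - v0)*(r3 . Xw) - fy*(r2 . Xw)
    rows, rhs = [], []
    for (u, v), Xw in pairs:
        r1X, r2X, r3X = (sum(R[i][j] * Xw[j] for j in range(3)) for i in range(3))
        rows.append([fx, 0.0, -(u - u0)]); rhs.append((u - u0) * r3X - fx * r1X)
        rows.append([0.0, fy, -(v - v0)]); rhs.append((v - v0) * r3X - fy * r2X)
    # Normal equations (A^T A) T = A^T b for the stacked overdetermined system.
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(3)]
    return solve3(AtA, Atb)
```

Two reference pairs already overdetermine the three unknowns; using more pairs averages out marking noise.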
Further, the transformation model is as follows:

$$Z_c\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f/d_x & 0 & u_0\\ 0 & f/d_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_W\\ Y_W\\ Z_W\\ 1\end{bmatrix}$$

The rotation matrix R is as follows:

$$R=R_y(\varphi)\,R_x(\omega)\,R_z(\kappa)$$

where $d_x$ and $d_y$ respectively denote the physical size of each pixel along the horizontal axis x and the vertical axis y of the optical image, $(u_0,v_0)$ is the pixel coordinate of the intersection of the optical axis of the shooting device with the image plane, $f$ is the focal length of the shooting device, $R$ is the rotation matrix, $T$ is the translation vector of the shooting device in the point cloud coordinate system, $(u,v)$ is the two-dimensional pixel coordinate of the image feature point in the reference data pair, $(X_W,Y_W,Z_W)$ is the three-dimensional spatial coordinate of the point cloud feature point in the reference data pair, $Z_c$ is the depth of the point in the camera coordinate system, and $\varphi$, $\omega$, $\kappa$ denote the angles by which the coordinate axes of the camera are rotated about the y-axis, x-axis and z-axis of the point cloud coordinate system, respectively.
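For illustration only, a rotation matrix composed of elementary rotations about the y-, x- and z-axes can be built as below. The composition order R_y·R_x·R_z is an assumption on our part, since the text only names the three axes:

```python
import math

def rot_x(omega):
    c, s = math.cos(omega), math.sin(omega)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(phi):
    c, s = math.cos(phi), math.sin(phi)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(kappa):
    c, s = math.cos(kappa), math.sin(kappa)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    # 3x3 matrix product.
    return [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(phi, omega, kappa):
    # R = R_y(phi) @ R_x(omega) @ R_z(kappa) -- assumed composition order.
    return matmul(rot_y(phi), matmul(rot_x(omega), rot_z(kappa)))
```

Whatever the order, the result is orthonormal (R·Rᵀ = I), which is what the fusion pipeline relies on.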
Further, the coordinate conversion relation is as follows:

$$x=u_0-f\,\frac{a_1(X_A-X_S)+b_1(Y_A-Y_S)+c_1(Z_A-Z_S)}{a_3(X_A-X_S)+b_3(Y_A-Y_S)+c_3(Z_A-Z_S)}$$

$$y=v_0-f\,\frac{a_2(X_A-X_S)+b_2(Y_A-Y_S)+c_2(Z_A-Z_S)}{a_3(X_A-X_S)+b_3(Y_A-Y_S)+c_3(Z_A-Z_S)}$$

where $(x,y)$ are the pixel coordinates of the target point, $(u_0,v_0)$ is the pixel coordinate of the intersection of the optical axis of the shooting device with the optical image plane, $f$ is the focal length of the shooting device, $(X_S,Y_S,Z_S)$ are the coordinates of the camera center in the point cloud coordinate system, $(X_A,Y_A,Z_A)$ are the three-dimensional coordinates of the target point, and $a_i$, $b_i$, $c_i$ ($1\le i\le 3$, $i$ an integer) are the elements of the rotation matrix.
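A sketch of evaluating this conversion relation (hypothetical helper, pure Python), taking the rotation-matrix rows as (a1, b1, c1), (a2, b2, c2), (a3, b3, c3):

```python
def project_to_pixel(target, camera_center, R, f, u0, v0):
    # Collinearity-style mapping from a 3D point-cloud coordinate (XA, YA, ZA)
    # to pixel coordinates (x, y), given the camera center (XS, YS, ZS),
    # focal length f, principal point (u0, v0) and rotation matrix R.
    (XA, YA, ZA), (XS, YS, ZS) = target, camera_center
    dX, dY, dZ = XA - XS, YA - YS, ZA - ZS
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = R
    denom = a3 * dX + b3 * dY + c3 * dZ
    x = u0 - f * (a1 * dX + b1 * dY + c1 * dZ) / denom
    y = v0 - f * (a2 * dX + b2 * dY + c2 * dZ) / denom
    return x, y
```

With the identity rotation and the camera at the origin, a point at (1, 2, -10) and f = 100 lands at pixel (10, 20), matching the expected scaling by f/|Z|.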
A system for fusing images and point cloud data of a power transmission line comprises:
the shooting device is used for acquiring an optical image of the target power transmission line;
the point cloud acquisition device is used for scanning a target power transmission line to obtain point cloud data of the target power transmission line;
a computing device comprising a memory storing a plurality of instructions and a processor configured to read the instructions and perform the method of any of claims 1-9.
The method and the system for fusing the power transmission line image and the point cloud data at least have the following beneficial effects:
(1) Based on the correspondence between image feature points in the optical image and point cloud feature points in the point cloud data, the conversion relation between the optical image and the point cloud data is computed automatically, so that the point cloud data and the image data are fused into data with spatial position information. The user can intuitively use this data to measure clearance distances and the like, greatly reducing operation and maintenance costs and enabling wide-coverage, all-weather, high-precision remote safety monitoring of the transmission line corridor;
(2) For scenes where the captured optical image has high resolution or the selected point cloud data is dense, the user only needs to move a cursor (or similar) to the candidate region of the target object, and the sharp point is then determined by computing the average normal angle, which effectively improves marking accuracy; in addition, the number of zoom-in and zoom-out operations the user must perform is reduced, improving the convenience of the marking work;
(3) The optical image and the point cloud data of the target power transmission line can be collected from different viewing angles; the user manually constructs the mapping relation between pixels in the optical image and points in the point cloud data and marks each, so that mutually corresponding point cloud feature points and image feature points can be found from the marks, providing reference data pairs for the automatic marking process;
(4) By determining the transformation relation between the image feature points and the point cloud feature points, the point cloud data and the image data of the target power transmission line can be captured from different viewing angles and/or in different time periods; compared with related-art schemes that require data collection at the same viewing angle and/or in the same time period, this effectively broadens the applicable scenarios of data collection;
(5) The rotation matrix is determined from the internal parameters of the shooting device, which effectively improves the accuracy of the determined transformation relation between image feature points and point cloud feature points, and thereby the accuracy of the mapping relation between them.
Drawings
Fig. 1 is a schematic diagram of an application scenario of the fusion method of the power transmission line image and the point cloud data provided by the invention.
Fig. 2 is a schematic diagram of another application scenario of the fusion method of the power transmission line image and the point cloud data provided by the invention.
Fig. 3 is a flowchart of an embodiment of the method for fusing an image of a power transmission line and point cloud data provided by the present invention.
Fig. 4 is a schematic diagram of an optical image and a point cloud image shown in an embodiment of the method for fusing a power transmission line image and point cloud data provided by the present invention.
Fig. 5 is a schematic diagram of an added marker shown in an embodiment of the method for fusing the power transmission line image and the point cloud data provided by the present invention.
Fig. 6 is a schematic diagram of candidate regions and sharp points shown in an embodiment of the method for fusing the power transmission line image and the point cloud data provided by the present invention.
Fig. 7 is a flowchart of an embodiment of selecting image feature points and point cloud feature points in the method for fusing an image of a power transmission line and point cloud data provided by the present invention.
Fig. 8 is a flowchart of an embodiment of constructing a coordinate transformation relationship between an optical image and point cloud data in the method for fusing an image of a power transmission line and point cloud data provided by the present invention.
Fig. 9 is a schematic structural diagram of an embodiment of the fusion device for the power transmission line image and the point cloud data provided by the present invention.
Fig. 10 is a schematic structural diagram of an embodiment of a fusion system of a power transmission line image and point cloud data provided by the present invention.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to better understand the technical scheme, the technical scheme is described in detail in the following with reference to the attached drawings of the specification and specific embodiments.
In order to facilitate understanding of the present application, some concepts related to the present application will be described.
Point cloud data refers to a collection of vectors in a three-dimensional coordinate system. Scan data are recorded in the form of points, each containing three-dimensional coordinates; some points may also contain color information (e.g., red, green, blue) or intensity information.
In the related art, point cloud data can be collected by airborne lidar or similar means to produce a map containing the position information of many objects. However, point cloud data lacks image data and is poor in intuitiveness; if image data could be used directly to measure distances to a target object, the convenience of power transmission line inspection would be effectively improved.
In order to realize accurate measurement directly using the power transmission line image, each pixel in the image needs to have spatial coordinate information. In some embodiments of the application, two types of data (optical image and point cloud data) for the same target object are fused based on multiple sets of reference data pairs, and because the fusion algorithm combines internal parameters (such as focal length, sensor size, distortion parameters and the like) of a shooting device, the precision of a fusion result is effectively improved. For example, the fused transmission line image can be used for measuring the clearance distance of the transmission line, and the like, and the precision reaches a sub-meter level.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present invention. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present application may be applied to help those skilled in the art understand the technical content of the present application, and does not mean that the embodiments of the present application may not be applied to other devices, systems, environments or scenarios.
Referring to fig. 1, a system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with other mobile devices and the server 105 through the network 104 to receive or send information and the like, such as sending an image data request/point cloud data request of the power transmission line, a clearance request, a transformation relation calculation request, a transformation relation update request, pixel coordinates of an image, group reference data and the like. The terminal devices 101, 102, 103 may be installed with various communication client applications, such as an image processing application, a monitoring application, a web browser application, a database-type application, a search-type application, an instant messaging tool, a mailbox client, social platform software, and the like. It should be noted that the method for selecting the power transmission line image and the point cloud feature point may be executed on a terminal device, and the method for fusing the power transmission line image and the point cloud feature point may be executed on the terminal device, or may be executed on a server.
Terminal devices 101, 102, 103 include, but are not limited to, mapping devices, robots, drones, cameras, tablets, desktops, and the like.
The server 105 may receive an image data request/point cloud data request, a clearance request, a transformation relation calculation request, and the like of the power transmission line, and may also send image data/point cloud data, clearance, a transformation relation, group reference data, and the like to the terminal devices 101, 102, and 103. For example, server 105 may be a back office management server, a cluster of servers, and the like.
It should be noted that the number of servers in the mobile device, the network, and the cloud is merely illustrative. There may be any number of removable devices, networks, and cloud ends, as desired for implementation.
Fig. 2 is a schematic diagram of another application scenario in the method provided by the present invention.
Fig. 2 shows a power transmission line inspection scene. The power transmission line can be monitored by adopting an inspection robot (such as an unmanned aerial vehicle 20) and a shooting device 10. The unmanned aerial vehicle 20 may be provided with a sensing system 21, which may collect point cloud data for the power transmission line. The photographing device 10 may acquire image data of the power transmission line. Wherein the camera 10 can be mounted on an electric tower, and the camera 10 can be directed to another electric tower. The photographing device 10 may be powered by a power supply apparatus such as a solar cell. The camera 10 may transmit the acquired image data to a remote device, a server, or the like in a wired or wireless manner. The photographing device 10 can be installed at a position far from the electric wire, reducing the installation risk, etc. The mounting and position of the camera 10 are shown for exemplary purposes only and should not be construed as limiting the present application.
The aircraft 20 shown in fig. 2 may be an aircraft having a remote control flight function and an autopilot flight function. The aircraft 20 may include a sensing system 21 and a power mechanism 22. In addition, the aircraft 20 may also include a communication system.
The sensing system 21 may include one or more sensors to sense at least one of peripheral obstacles, spatial orientation, velocity, or acceleration of the aircraft 20, etc. Types of sensors include, but are not limited to: a ranging sensor, a position sensor, a motion sensor, an inertial sensor, or an image sensor. The sensed data provided by sensing system 21 may be used to control the spatial orientation, velocity, and/or acceleration of aircraft 20. The sensing system 21 is used to collect information about the aircraft 20. Different types of sensors may sense different kinds of signals or sense signals of different origin. For example, the sensor includes an inertial sensor, a GPS sensor, a distance sensor, or a visual/image sensor (e.g., a camera).
Power mechanism 22 may include one or more rotors, propellers, blades, engines, motors, wheels, bearings, magnets, nozzles, and the like. The aircraft 20 may include one or more powered mechanisms 22. The respective types of all the power mechanisms 22 may be the same or different. The power mechanism 22 may be mounted at any suitable location on the aircraft 20, such as the top, bottom, front, rear, sides, or any combination thereof. For example, the powered mechanism 22 may enable the aircraft 20 to takeoff from a surface at an angle (e.g., obliquely or vertically) relative to the ground plane, or land on a surface. The power mechanism 22 can fly or stop the aircraft 20 in the air at a constant speed, altitude, or the like.
The aircraft 20 may communicate with remote devices via a communication system. Remote devices include, but are not limited to: control terminal, flight control center server, etc. For example, the communication system may transmit at least a portion of the point cloud data collected by the sensing system 21 to a remote device.
It should be noted that the inspection robot may also be an inspection vehicle. At least one laser radar (LIDAR) is mounted on the roof and/or the side of the body of the inspection vehicle. The detection area of the LIDAR may be fixed, e.g. a certain LIDAR may only be used to detect a certain area that is preset. The detection area of the LIDAR may be adjustable, for example, the LIDAR on the vehicle body may scan a plurality of detection areas by adjusting the posture, or may scan a plurality of detection areas by adjusting the field angle range of the LIDAR itself.
The imaging device 10 may be mounted on the vehicle. The imaging device 10 can image the front environment of the angle of view at a predetermined angle of view. For example, the photographing device 10 may be a monocular camera, or the like.
In addition, this application can also adopt fixed position's monitoring facilities (like the camera device 10 on the monitoring pole) etc. to monitor transmission line.
The following are found in the process of monitoring the power transmission line: the image data is intuitive, but has no spatial coordinate information. The point cloud data has spatial coordinate information, but the intuitiveness is not as good as that of the image data. Some embodiments of the application fuse image data and point cloud data to obtain image data with space coordinates, so that a user can obtain accurate spatial position information by using a visual power transmission line image.
Referring to fig. 3, in some embodiments, there is provided a method for fusing a power transmission line image and point cloud data, including:
s1, acquiring an optical image and point cloud data of the target power transmission line;
s2, carrying out imaging processing on the point cloud data to obtain a point cloud image;
s3, responding to user operation, selecting image feature points from the optical image, and selecting point cloud feature points corresponding to the image feature points from the point cloud image, wherein the point cloud feature points correspond to the spatial coordinates of point cloud data;
s4, marking the image characteristic points and the corresponding point cloud characteristic points and establishing a mapping relation to form a reference data pair;
s5, according to the reference data pair, a coordinate conversion relation of the optical image and the point cloud data is established, and a corresponding relation between each pixel coordinate in the optical image and a space coordinate of the point cloud in the point cloud data is obtained based on the coordinate conversion relation.
Specifically, in step S1, the optical image of the target transmission line may be collected data, for example, referring to fig. 2, an optical image collected by the camera 10, such as data collected by a monocular camera in real time. The optical image of the target transmission line may also be read data, such as historical acquisition data stored in a database, provided that the pose of the target object has not changed between the time of acquisition and the time monitoring is required.
The point cloud data of the target transmission line may be collected data, such as point cloud data collected by a lidar on the drone 20, see fig. 2, and may be real-time data. The point cloud data of the target transmission line may also be read data, such as point cloud data downloaded from a commercial database.
It should be noted that the optical image includes, but is not limited to: optical images in various optical wavelength bands such as visible light image data, infrared image data, ultraviolet image data, and X-ray image data.
In certain embodiments, acquiring the optical image and the point cloud data of the target power transmission line comprises at least one of:
and acquiring image data of the target power transmission line under a first visual angle through the shooting device in a first time period and a first position.
And acquiring point cloud data of the target power transmission line under a second visual angle through a radar in a second time period and a second position, wherein the first time period is the same as or different from the second time period, the first position is the same as or different from the second position, and the first visual angle is the same as or different from the second visual angle.
Referring to fig. 4 and 5, it can be seen that the optical image and the point cloud data are obtained at different viewing angles. Further, the optical image and the point cloud data may be acquired in different time periods. The method and the device of the present application can determine the corresponding relation between the point cloud data and the optical image so as to realize multi-space-time data fusion, such that an object in the two-dimensional image has coordinate information in the point cloud coordinate system and spatial measurement can be carried out based on the two-dimensional image.
Further, in step S2, the point cloud data is imaged to obtain a point cloud image, and the coordinates of each pixel in the point cloud image correspond to the point cloud space coordinates of the point cloud data.
In the process of converting the point cloud data into the image, a corresponding relationship between the point cloud data and a certain pixel in the image exists, and when a user clicks the certain pixel, a point in the point cloud selected by the user can be determined based on the corresponding relationship, so that coordinate information of a clicked target in a point cloud coordinate system is obtained.
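The conversion described above can be sketched in code. The following is a minimal, hypothetical sketch (the embodiment does not specify a particular projection scheme, so a simple top-down rasterization is assumed here): each point cloud point is projected to a pixel of a depth image while a pixel-to-point index map is kept, so that a user click on a pixel can be resolved back to a point in the point cloud.

```python
def rasterize_point_cloud(points, width, height, cell):
    """Project 3D points onto an XY grid image (a simple top-down
    orthographic scheme, assumed for illustration).  Returns a depth
    image plus a pixel -> point-index map so that a click on a pixel
    can be traced back to a 3D point in the cloud."""
    depth = [[None] * width for _ in range(height)]
    index_map = {}  # (row, col) -> index into `points`
    for i, (x, y, z) in enumerate(points):
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < height and 0 <= col < width:
            # keep the highest point per pixel (e.g. tower tops)
            if depth[row][col] is None or z > depth[row][col]:
                depth[row][col] = z
                index_map[(row, col)] = i
    return depth, index_map

def pick_point(points, index_map, row, col):
    """Resolve a user click at pixel (row, col) to a point cloud coordinate."""
    i = index_map.get((row, col))
    return points[i] if i is not None else None
```

A click on a pixel thus yields the spatial coordinates of the corresponding point without searching the cloud.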
Further, in step S3, in response to a user operation, image feature points are selected from the optical image, and point cloud feature points corresponding to the image feature points are selected from the point cloud image, where the point cloud feature points correspond to spatial coordinates of the point cloud data.
In this embodiment, the user operations include, but are not limited to: clicking, double-clicking, long-pressing, continuous clicking, scrolling, inputting an operation instruction, and other human-machine interaction operations. The operation instruction may include target object identification information and the like, so that the computer can determine the image feature point or the point cloud feature point.
The image feature points may be characterized by pixel coordinates, such as the pixel coordinates (x1, y1) obtained by taking a vertex of the target transmission line image or of the display screen as the coordinate system origin (0, 0). Point cloud feature points may be represented by the spatial coordinates (x2, y2, z2) of the point cloud space. For example, the highest vertex of the electric tower can be used as a feature point: the pixel coordinate corresponding to this vertex in the optical image represents the image feature point, and the point coordinate corresponding to this vertex in the point cloud represents the point cloud feature point.
In some embodiments, selecting image feature points from the optical image in response to a user operation includes at least one of the following.
For example, pixel coordinates corresponding to the vertices of a utility pole and/or a tower are used as the image feature points. The feature points of an electric tower or electric pole can, for example, be located at the vertices of the cross arms of the tower.
For example, a plurality of pixel coordinates are sequentially selected from pixels corresponding to the ground in order from left to right as image feature points. The characteristic points of the ground can be uniformly selected from left to right. It is understood that, in order to ensure the accuracy of the mapping relationship, feature points uniformly distributed in the scene should be selected as much as possible.
In some embodiments, the image size of the target transmission line image is large, for example the image has a high resolution, and quickly and conveniently selecting the target object (for example, the top of the cross arm of the electric tower) from the image becomes a bottleneck restricting the accuracy and efficiency of the fusion work. Referring to fig. 4, the vertex of the first layer of the cross arm of the electric tower may correspond to a plurality of pixels. In order to add the mark information to the vertex accurately, the user may first zoom in on the image, then move the mouse to the corresponding pixel and add the mark. When the vertex on the other side needs to be marked, the image may need to be zoomed out, moved to the vertex on the other side and zoomed in again before the mark is added. During this process, the accuracy of the manual operation cannot be guaranteed and errors are likely.
To at least partially solve the above problem, selecting image feature points from the optical image and point cloud feature points from the point cloud image in response to a user operation may include the following operations.
Referring to fig. 7, selecting image feature points from the optical image and point cloud feature points corresponding to the image feature points from the point cloud image includes:
s31, responding to a click operation of a user, and determining a first click coordinate corresponding to the click operation in the optical image and a second click coordinate corresponding to the click operation in the point cloud image;
s32, determining a first candidate area according to the first click coordinate, and determining a second candidate area according to the second click coordinate;
s33, determining a first sharp point from the first candidate region, and determining a second sharp point from the second candidate region;
and S34, taking the first sharp point as the image characteristic point and taking the second sharp point as the point cloud characteristic point.
In some embodiments, the first candidate region is a region within a preset radius range around the first click coordinate as a center of a circle;
the second candidate area is an area which takes the second click coordinate as a circle center and is within a preset radius range around the second click coordinate;
determining a first sharp point from the first candidate region and a second sharp point from the second candidate region, comprising:
calculating the average normal included angle between each pixel in the first candidate region and its k neighborhood, and taking the pixel with the largest average normal included angle as the first sharp point;
and calculating the average normal included angle between each pixel in the second candidate region and its k neighborhood, and taking the pixel with the largest average normal included angle as the second sharp point.
Wherein the average normal included angle between each pixel and its k neighborhood is calculated by the following formula:

$$\bar{\alpha}_i = \frac{1}{k}\sum_{j=1}^{k}\alpha_j \qquad (1)$$

wherein $\alpha_j$ is the included angle between the normal vector of the pixel point to be calculated and the normal vector of the j-th pixel point in its k neighborhood, and $\bar{\alpha}_i$ is the average normal vector included angle of the pixel point to be calculated;

and the k neighborhood is the region formed by the k pixel points having the smallest Euclidean distance to the pixel point to be calculated.
Thus, whether the current point $Q_i$ is a sharp point can be judged based on the average normal included angle, so that the computer can assist the user in determining a precise target pixel.
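As a concrete illustration of formula (1), the following sketch computes each candidate pixel's average normal included angle over its k nearest neighbours and selects the pixel with the largest angle as the sharp point. It assumes per-pixel unit normal vectors have already been estimated elsewhere; the function names are hypothetical.

```python
import math

def mean_normal_angle(i, pixels, normals, k):
    """Average normal included angle (formula (1)) between pixel i and
    its k neighborhood: the k pixels closest in Euclidean distance."""
    px, py = pixels[i]
    # k nearest neighbours by Euclidean distance (excluding pixel i itself)
    neighbors = sorted(
        (j for j in range(len(pixels)) if j != i),
        key=lambda j: (pixels[j][0] - px) ** 2 + (pixels[j][1] - py) ** 2,
    )[:k]
    total = 0.0
    for j in neighbors:
        # angle between the two unit normal vectors
        dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(normals[i], normals[j]))))
        total += math.acos(dot)
    return total / k

def find_sharp_point(pixels, normals, k):
    """Index of the pixel whose average normal included angle is largest."""
    return max(range(len(pixels)), key=lambda i: mean_normal_angle(i, pixels, normals, k))
```

In use, `pixels` would be restricted to the candidate region around the click coordinate, so only a small neighbourhood is searched.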
Fig. 6 is a schematic diagram illustrating candidate regions and sharp points according to an embodiment of the present application.
Referring to fig. 6, an enlarged view of position P1 in the left-hand view of fig. 5 is provided. In order to mark the vertex accurately, the related art requires the user to manually move the mouse precisely to the vertex and click to add a mark. This places extremely high demands on the user's mouse precision, and the act of clicking itself may cause the mouse to drift away from the accurate position. In this embodiment, after the user moves the mouse to a roughly accurate candidate region (shown by the dotted circle in fig. 6), the sharp-point pixel is found automatically according to the above method, which effectively improves the convenience of operation while improving the accuracy of feature point selection. For example, a shift of one pixel in the image may correspond to a shift of several to tens of centimeters in the point cloud coordinate system (the shift depends on the distance between the target object and the camera).
It should be noted that, in a scene of monitoring the power transmission line, because the power transmission line is long (for example, several kilometers to thousands of kilometers), the amount of point cloud data to be stored is huge, and point cloud data with higher resolution is needed to ensure data accuracy. By fusing the optical image and the point cloud data, the fused point cloud data can further comprise geometric figure information, so that interpolation processing can be performed on low-resolution point cloud data; low-resolution point cloud data can then be used while still obtaining ranging results comparable to those of high-resolution point cloud data. For example, if two points in the point cloud data correspond to the vertical rod of a telegraph pole, and the point corresponding to the cross arm is lost in the lower-resolution point cloud data, the height information of the cross arm can be obtained by calculation based on the geometric figure information in the image and the data of the two points in the point cloud data, and the point cloud data can be updated accordingly.
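The cross-arm example above can be sketched as a simple linear interpolation. This is a minimal sketch under assumptions: the function name is hypothetical, and the cross arm's attachment position is expressed as a fractional ratio along the pole as measured in the fused image.

```python
def interpolate_crossarm_point(bottom, top, image_ratio):
    """Estimate a lost cross-arm point on a vertical pole from the two
    surviving point cloud points of the pole.  `image_ratio` is the
    fractional position of the cross arm along the pole as measured in
    the fused image: 0.0 = bottom point, 1.0 = top point."""
    return tuple(b + (t - b) * image_ratio for b, t in zip(bottom, top))
```

The interpolated point can then be written back into the cloud, updating the low-resolution point cloud data as described.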
Further, in step S4, the image feature points and the corresponding point cloud feature points are labeled and a mapping relationship is established, so as to form a reference data pair.
Fig. 5 is a schematic diagram of adding a mark according to an embodiment of the present application.
Referring to fig. 5, the first marker for the image feature point is P1, and the second marker for the point cloud feature point is P1, both markers being the same. In addition, the first marker and the second marker may be different, but a mapping relationship needs to exist between the two markers, so as to find the point cloud feature point corresponding to the second marker based on the first marker, or to find the image feature point corresponding to the first marker based on the second marker. For example, the marker P1 is set at the position of the same feature point in the two-dimensional image and the three-dimensional point cloud, respectively.
The above operations are repeated to obtain multiple groups of reference data pairs; the number of reference data pairs can be determined according to user requirements, algorithm precision and the like. For example, the greater the number of reference data pairs, the higher the accuracy of the obtained conversion relation.
For example, the feature points should be chosen to ensure uniqueness and be evenly distributed throughout the scene. In addition, 10 sets of reference data pairs can be acquired, and the coordinate information of the corresponding image feature points and point cloud feature points in the two-dimensional image and the three-dimensional point cloud respectively can be acquired.
Further, referring to fig. 8, in step S5, constructing a coordinate transformation relationship between the optical image and the point cloud data according to the reference data pair, specifically including:
s51, establishing a transformation model by using the internal reference of the shooting device;
s52, inputting a plurality of groups of reference data into the transformation model, and calculating a translation vector from a shooting device coordinate system to a point cloud three-dimensional space coordinate system;
and S53, obtaining the coordinate conversion relation between the optical image and the point cloud data according to the translation vector and the internal reference of the shooting device.
Specifically, the transformation model is related to a focal length of a shooting device for shooting the image of the power transmission line, a coordinate of the shooting device under a point cloud coordinate system, a rotation matrix of the shooting device under the point cloud coordinate system, and a translation vector of the shooting device under the point cloud coordinate system.
Specifically, the transformation model is as follows:

$$Z_c\begin{bmatrix}u\\v\\1\end{bmatrix}=\begin{bmatrix}\dfrac{f}{d_x}&0&u_0\\0&\dfrac{f}{d_y}&v_0\\0&0&1\end{bmatrix}\begin{bmatrix}R&T\end{bmatrix}\begin{bmatrix}X_W\\Y_W\\Z_W\\1\end{bmatrix} \qquad (2)$$

The rotation matrix R is as follows:

$$R=R_y(\varphi)\,R_x(\omega)\,R_z(\kappa)=\begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\end{bmatrix} \qquad (3)$$

wherein $d_x$ and $d_y$ respectively represent the physical dimensions of one pixel along the horizontal axis x and the vertical axis y of the optical image, $(u_0, v_0)$ is the pixel coordinate of the intersection point of the optical axis of the shooting device and the image plane, and $f$ represents the focal length of the shooting device; these parameters are the internal parameters of the shooting device. $R$ represents the rotation matrix, $T$ represents the translation vector of the shooting device in the point cloud coordinate system, $(u, v)$ are the two-dimensional pixel coordinates of an image feature point in a reference data pair, and $(X_W, Y_W, Z_W)$ are the three-dimensional space coordinates of the corresponding point cloud feature point in the reference data pair.

The angles $\varphi$, $\omega$ and $\kappa$ are the angles by which the coordinate axes of the shooting device are rotated around the y axis, the x axis and the z axis of the point cloud coordinate system, respectively.
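Step S52 (calculating the translation vector from the reference data pairs) can be illustrated with the following pure-Python sketch. It rests on assumptions: the rotation matrix R and the internal parameters are already known, the pinhole model of formula (2) holds, and the function name is hypothetical. Writing m = ((u-u0)·dx/f, (v-v0)·dy/f, 1), the model Zc·m = R·P + T yields, after eliminating the unknown depth Zc, two linear equations in T per reference pair, which are solved in the least-squares sense.

```python
def solve_translation(pairs, R, f, dx, dy, u0, v0):
    """Least-squares translation vector T from reference data pairs,
    assuming R and the camera internal parameters are known.
    Each pair is ((u, v), (Xw, Yw, Zw))."""
    # accumulate normal equations (A^T A) T = A^T b for the 3 unknowns of T
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3

    def add_row(row, rhs):
        for a in range(3):
            for b in range(3):
                ata[a][b] += row[a] * row[b]
            atb[a] += row[a] * rhs

    for (u, v), P in pairs:
        q = [sum(R[r][c] * P[c] for c in range(3)) for r in range(3)]  # q = R.P
        mx = (u - u0) * dx / f
        my = (v - v0) * dy / f
        add_row([1.0, 0.0, -mx], mx * q[2] - q[0])  # Tx - mx*Tz = mx*qz - qx
        add_row([0.0, 1.0, -my], my * q[2] - q[1])  # Ty - my*Tz = my*qz - qy

    # solve the 3x3 system by Cramer's rule
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(ata)
    T = []
    for col in range(3):
        mcol = [row[:] for row in ata]
        for r in range(3):
            mcol[r][col] = atb[r]
        T.append(det3(mcol) / d)
    return T
```

With at least two reference pairs the system is overdetermined, which is why the patent's recommendation of about 10 well-distributed pairs improves the accuracy of the recovered translation.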
The coordinate transformation relationship is as follows:
$$x=u_0-f\,\frac{a_1(X_A-X_S)+b_1(Y_A-Y_S)+c_1(Z_A-Z_S)}{a_3(X_A-X_S)+b_3(Y_A-Y_S)+c_3(Z_A-Z_S)},\qquad y=v_0-f\,\frac{a_2(X_A-X_S)+b_2(Y_A-Y_S)+c_2(Z_A-Z_S)}{a_3(X_A-X_S)+b_3(Y_A-Y_S)+c_3(Z_A-Z_S)} \qquad (4)$$

wherein $(x, y)$ are the pixel coordinates of the target point, $(u_0, v_0)$ is the pixel coordinate of the intersection point of the optical axis of the shooting device and the optical image plane, $f$ is the focal length of the shooting device, $(X_S, Y_S, Z_S)$ are the coordinates of the center of the shooting device in the point cloud coordinate system, $(X_A, Y_A, Z_A)$ are the three-dimensional coordinates of the target point, and $a_i, b_i, c_i$ are elements of the rotation matrix, with $1 \le i \le 3$ and $i$ an integer.
Specifically, the target point needs to be present both in the optical image returned by the shooting device and in the point cloud data scanned by the point cloud obtaining device, and the target object must not change in position or shape in the scene. For example, the focal length of the camera lens may be 3.8 (millimeters), the sensor size of the camera may be 0.00094 × 0.00094 (meters), and the image size 5280 × 2992 (pixels). In addition, the image can be corrected by further considering the distortion of the image captured by the shooting device.
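The computation expressed by formulas (3) and (4) can be sketched as follows. Since the original formulas appear as images in the source, this sketch follows the standard photogrammetric form; the exact axis order and sign conventions of the rotation are assumptions. It composes R from rotations about the y, x and z axes and then applies the collinearity equations to map a 3D target point to pixel coordinates.

```python
import math

def rotation_matrix(phi, omega, kappa):
    """Rotation matrix composed from rotations about the y, x and z axes
    (phi-omega-kappa convention, assumed here)."""
    def ry(a):
        return [[math.cos(a), 0, -math.sin(a)], [0, 1, 0], [math.sin(a), 0, math.cos(a)]]
    def rx(a):
        return [[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]]
    def rz(a):
        return [[math.cos(a), -math.sin(a), 0], [math.sin(a), math.cos(a), 0], [0, 0, 1]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return matmul(matmul(ry(phi), rx(omega)), rz(kappa))

def project(target, cam_center, R, f, u0, v0):
    """Collinearity equations (formula (4)): 3D target point -> pixel (x, y).
    The columns of R supply the (a_i, b_i, c_i) coefficient triples."""
    dX = [target[i] - cam_center[i] for i in range(3)]
    num_x = sum(R[i][0] * dX[i] for i in range(3))  # a1, b1, c1 terms
    num_y = sum(R[i][1] * dX[i] for i in range(3))  # a2, b2, c2 terms
    den   = sum(R[i][2] * dX[i] for i in range(3))  # a3, b3, c3 terms
    return (u0 - f * num_x / den, v0 - f * num_y / den)
```

Applying `project` to every point of the cloud yields the pixel-to-spatial-coordinate correspondence described in step S5.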
Referring to fig. 9, in some embodiments, there is further provided a fusion apparatus of a power transmission line image and point cloud data, including:
the acquisition module 201 is used for acquiring an optical image and point cloud data of a target power transmission line;
an imaging processing module 202, configured to perform imaging processing on the point cloud data to obtain a point cloud image;
a selecting module 203, configured to respond to a user operation, select an image feature point from the optical image, and determine a point cloud feature point corresponding to the image feature point from the point cloud image, where the point cloud feature point corresponds to a spatial coordinate of point cloud data;
a data pair forming module 204, configured to mark the image feature points and the corresponding point cloud feature points and establish a mapping relationship, so as to form a reference data pair;
a conversion module 205, configured to construct a coordinate conversion relationship between the optical image and the point cloud data according to the reference data pair, and obtain a correspondence between each pixel coordinate in the optical image and a spatial coordinate of the point cloud in the point cloud data based on the coordinate conversion relationship.
Wherein, the collection module 201 collects the optical image and the point cloud data of the target transmission line, and comprises:
acquiring an optical image of the target power transmission line under a first visual angle through a shooting device at a first time period and a first position;
and acquiring point cloud data of the target power transmission line under a second visual angle through a point cloud acquisition device in a second time period and a second position, wherein the first time period is the same as or different from the second time period, the first position is the same as or different from the second position, and the first visual angle is the same as or different from the second visual angle.
Further, the selecting module 203 is further configured to:
taking pixel coordinates corresponding to the top point of a telegraph pole and/or a telegraph tower as the image feature point;
and sequentially selecting a plurality of pixel coordinates from the pixels corresponding to the ground as the image feature points according to the sequence from left to right.
Further, the selecting module 203 is further configured to:
responding to a click operation of a user, and determining a first click coordinate corresponding to the click operation in the optical image and a second click coordinate corresponding to the click operation in the point cloud image;
determining a first candidate area according to the first click coordinate, and determining a second candidate area according to the second click coordinate;
determining a first sharp point from the first candidate region and a second sharp point from the second candidate region;
and taking the first sharp point as the image characteristic point, and taking the second sharp point as the point cloud characteristic point.
Further, the first candidate region is a region within a preset radius range around the first click coordinate as a circle center;
the second candidate area is an area within a preset radius range around the second click coordinate as the circle center.
Further, the selecting module 203 is further configured to:
calculating the average normal included angle between each pixel in the first candidate region and its k neighborhood, and taking the pixel with the largest average normal included angle as the first sharp point;
and calculating the average normal included angle between each pixel in the second candidate region and its k neighborhood, and taking the pixel with the largest average normal included angle as the second sharp point.
The average normal included angle between each pixel and its k neighborhood is calculated by formula (1), and is not described herein again.
Further, the conversion module 205 is further configured to:
establishing a transformation model by using internal parameters of a shooting device;
inputting a plurality of groups of datum data into the transformation model, and calculating a translation vector from a shooting device coordinate system to a point cloud three-dimensional space coordinate system;
and obtaining a coordinate conversion relation between the optical image and the point cloud data according to the translation vector and the internal reference of the shooting device.
The transformation model is shown in formula (2) and formula (3), and the coordinate transformation relationship is shown in formula (4), which are not described herein again.
The method and the device for fusing the power transmission line image and the point cloud data provided by the embodiment at least have the following beneficial effects:
(1) based on the corresponding relation between the image feature points in the optical image and the point cloud feature points in the point cloud data, the conversion relation between the optical image and the point cloud data is calculated automatically, so that the point cloud data and the image data are fused to obtain data with spatial position information. The user can intuitively use the data with spatial position information to measure clearance distances and the like, thereby greatly reducing operation and maintenance costs and realizing wide-coverage, all-weather, high-precision remote safety monitoring of the transmission line channel;
(2) aiming at the scenes that the resolution of a shot optical image is high or the selected point cloud data is more, a user moves a cursor and the like to a candidate area of a target object, and a sharp point is determined through calculation of an average normal included angle, so that the marking accuracy is effectively improved; in addition, the times of operations such as amplification, reduction and the like used by a user are effectively reduced, and the convenience of marking work of the user is improved;
(3) the optical image and the point cloud data of the target power transmission line can be data collected at different viewing angles; the user manually constructs the mapping relation between pixels in the optical image and point data in the point cloud data and sets marks respectively, so that the mutually corresponding point cloud feature points and image feature points can be found based on the marks to provide reference data pairs for the automatic marking process;
(4) by determining the transformation relation between the image feature points and the point cloud feature points, the point cloud data and the image data of the target power transmission line can be captured at different viewing angles and/or in different time periods; compared with schemes in the related art that require data acquisition of the target power transmission line at the same viewing angle and/or in the same time period, this effectively broadens the applicable scenarios of data acquisition;
(5) the rotation matrix is determined through the internal reference of the shooting device, the accuracy of the determined transformation relation between the image characteristic points and the point cloud characteristic points is effectively improved, and the accuracy of the mapping relation between the point cloud characteristic points and the image characteristic points is further improved.
Referring to fig. 10, in some embodiments, there is further provided a fusion system 1100 of the power transmission line image and the point cloud data, including:
a photographing device 1110 for acquiring an optical image of a target power transmission line;
the point cloud obtaining device 1120 is used for scanning a target power transmission line to obtain point cloud data of the target power transmission line;
the computing device 1130 includes a memory storing a plurality of instructions and a processor configured to read the instructions and perform the method described above.
The photographing device 1110 is located in a space at a specific pose, and is used for acquiring image data of the target power transmission line. For example, the camera 1110 may be a camera fixedly installed on a tower, a camera installed on a monitoring pole, or the like.
The point cloud obtaining device 1120 is configured to scan the target power transmission line to obtain point cloud data of the target power transmission line. For example, point cloud acquisition device 1120 may be a variety of surveying equipment, such as an automobile, drone, robot, etc. provided with a lidar.
The computing device 1130 includes a memory having stored thereon executable code that, when executed by a processor, causes the processor to perform the selection method as described above or causes the processor to perform the fusion method as described above.
Another aspect of the present application also provides an electronic device.
Fig. 11 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 11, an electronic device 1200 may include a memory 1210 and a processor 1220. In addition, at least one of a random number generation circuit, a random number detection circuit, or a radar may be provided on the electronic apparatus 1200.
The Processor 1220 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 1210 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions for the processor 1220 or other modules of the computer. The permanent storage device may be a read-write storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, the permanent storage device employs a mass storage device (e.g., magnetic or optical disk, flash memory). In other embodiments, the permanent storage device may be a removable storage device (e.g., floppy disk, optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random access memory. The system memory may store instructions and data that some or all of the processors require at runtime. In addition, the memory 1210 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (e.g., DRAM, SRAM, SDRAM, flash memory, programmable read-only memory) as well as magnetic and/or optical disks. In some embodiments, the memory 1210 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-dense disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 1210 has stored thereon executable code that, when processed by the processor 1220, may cause the processor 1220 to perform some or all of the methods described above.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a computer-readable storage medium (or non-transitory machine-readable storage medium or machine-readable storage medium) having executable code (or a computer program or computer instruction code) stored thereon, which, when executed by a processor of an electronic device (or server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for fusing an image and point cloud data of a power transmission line is characterized by comprising the following steps:
acquiring an optical image and point cloud data of a target power transmission line;
carrying out imaging processing on the point cloud data to obtain a point cloud image;
responding to user operation, selecting image characteristic points from the optical image, and selecting point cloud characteristic points corresponding to the image characteristic points from the point cloud image, wherein the point cloud characteristic points correspond to the spatial coordinates of point cloud data;
marking the image characteristic points and the corresponding point cloud characteristic points and establishing a mapping relation to form a reference data pair;
and according to the reference data pair, constructing a coordinate conversion relation between the optical image and the point cloud data, and obtaining a corresponding relation between each pixel coordinate in the optical image and a space coordinate of the point cloud in the point cloud data based on the coordinate conversion relation.
2. The method of claim 1, wherein collecting optical images and point cloud data of a target transmission line comprises:
acquiring an optical image of the target power transmission line under a first visual angle through a shooting device at a first time period and a first position;
and acquiring point cloud data of the target power transmission line under a second visual angle through a point cloud acquisition device in a second time period and a second position, wherein the first time period is the same as or different from the second time period, the first position is the same as or different from the second position, and the first visual angle is the same as or different from the second visual angle.
3. The method of claim 1, wherein selecting image feature points from the optical image in response to a user operation comprises:
taking the pixel coordinates corresponding to the vertices of utility poles and/or transmission towers as image feature points;
and sequentially selecting, in left-to-right order, a plurality of pixel coordinates from the pixels corresponding to the ground as image feature points.
4. The method of claim 1, wherein selecting image feature points from the optical image and selecting point cloud feature points corresponding to the image feature points from the point cloud image in response to a user operation comprises:
in response to a click operation of a user, determining a first click coordinate corresponding to the click operation in the optical image and a second click coordinate corresponding to the click operation in the point cloud image;
determining a first candidate region according to the first click coordinate, and determining a second candidate region according to the second click coordinate;
determining a first sharp point from the first candidate region and a second sharp point from the second candidate region;
and taking the first sharp point as the image feature point and the second sharp point as the point cloud feature point.
5. The method of claim 4, wherein the first candidate region is the region within a preset radius centered on the first click coordinate;
the second candidate region is the region within a preset radius centered on the second click coordinate;
and determining a first sharp point from the first candidate region and a second sharp point from the second candidate region comprises:
calculating the average normal angle between each pixel in the first candidate region and its k neighborhood, and taking the pixel with the largest average normal angle as the first sharp point;
and calculating the average normal angle between each pixel in the second candidate region and its k neighborhood, and taking the pixel with the largest average normal angle as the second sharp point.
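The circular candidate region around a click coordinate can be sketched as a simple distance mask. This is an illustrative numpy sketch, not part of the claimed method; function and parameter names are invented:

```python
import numpy as np

def candidate_region(height, width, click, radius):
    """Return the (row, col) indices of all pixels whose Euclidean
    distance to the click coordinate is within the preset radius,
    i.e. a circular candidate region centered on the click."""
    rows, cols = np.mgrid[0:height, 0:width]
    dist = np.hypot(rows - click[0], cols - click[1])
    return np.argwhere(dist <= radius)

# Example: pixels within radius 2 of a click at (5, 5) in a 10x10 image.
region = candidate_region(10, 10, click=(5, 5), radius=2.0)
```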
6. The method of claim 5, wherein the average normal angle between each pixel and its k neighborhood is calculated by the following formula:

ᾱ = (1/k) · Σ_{j=1}^{k} α_j

wherein α_j is the angle between the normal vector of the pixel to be calculated and the normal vector of the j-th pixel in its k neighborhood, and ᾱ is the average normal vector angle of the pixel to be calculated;

and the k neighborhood is the region formed by the k pixels having the smallest Euclidean distance to the pixel to be calculated.
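A minimal numpy sketch of the average normal angle computation and sharp point selection of claims 5 and 6. The neighbor search, normalization, and tie-breaking details are assumptions for illustration:

```python
import numpy as np

def average_normal_angle(normals, positions, idx, k):
    """Mean angle between the normal at index `idx` and the normals of its
    k nearest neighbours (smallest Euclidean distance in `positions`)."""
    d = np.linalg.norm(positions - positions[idx], axis=1)
    d[idx] = np.inf                      # exclude the point itself
    neigh = np.argsort(d)[:k]            # the k neighborhood
    n0 = normals[idx] / np.linalg.norm(normals[idx])
    angles = []
    for j in neigh:
        nj = normals[j] / np.linalg.norm(normals[j])
        cosang = np.clip(np.dot(n0, nj), -1.0, 1.0)
        angles.append(np.arccos(cosang))
    return float(np.mean(angles))

def sharpest_point(normals, positions, k):
    """Index of the candidate whose average normal angle is largest."""
    scores = [average_normal_angle(normals, positions, i, k)
              for i in range(len(positions))]
    return int(np.argmax(scores))
```

A point whose normal disagrees most with its neighbourhood (e.g. a pole tip or tower vertex) scores the highest average angle and is selected as the sharp point.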
7. The method of claim 2, wherein constructing the coordinate transformation relationship between the optical image and the point cloud data according to the reference data pairs comprises:
establishing a transformation model using the intrinsic parameters of the photographing device;
inputting a plurality of reference data pairs into the transformation model, and calculating the translation vector from the photographing device coordinate system to the three-dimensional point cloud coordinate system;
and obtaining the coordinate transformation relationship between the optical image and the point cloud data from the translation vector and the intrinsic parameters of the photographing device.
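Assuming the rotation matrix R and the intrinsic parameters are already known, the translation vector of claim 7 can be estimated by linear least squares from the reference data pairs: each pixel/point pair yields two equations that are linear in (Tx, Ty, Tz). This is an illustrative sketch only; names and the known-rotation assumption are not from the claims:

```python
import numpy as np

def solve_translation(K, R, pixels, points):
    """Least-squares estimate of T in  x_cam = R @ X + T,  given the
    intrinsic matrix K, a known rotation R, and pixel/point pairs.
    From u - u0 = fx*(Xc_x + Tx)/(Xc_z + Tz) each pair contributes
    two rows linear in (Tx, Ty, Tz)."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    A, b = [], []
    for (u, v), X in zip(pixels, points):
        Xc = R @ np.asarray(X, float)    # rotated point, translation unknown
        A.append([fx, 0.0, -(u - u0)])
        b.append((u - u0) * Xc[2] - fx * Xc[0])
        A.append([0.0, fy, -(v - v0)])
        b.append((v - v0) * Xc[2] - fy * Xc[1])
    T, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return T
```

With three or more non-degenerate reference pairs the system is overdetermined and the least-squares solution averages out marking noise.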
8. The method of claim 7, wherein the transformation model is as follows:

Z_c · [u, v, 1]^T = [[1/d_x, 0, u_0], [0, 1/d_y, v_0], [0, 0, 1]] · [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]] · [[R, T], [0^T, 1]] · [X_w, Y_w, Z_w, 1]^T

the rotation matrix R is as follows:

R = [[cos φ, 0, −sin φ], [0, 1, 0], [sin φ, 0, cos φ]] · [[1, 0, 0], [0, cos ω, −sin ω], [0, sin ω, cos ω]] · [[cos κ, −sin κ, 0], [sin κ, cos κ, 0], [0, 0, 1]]

wherein d_x and d_y respectively represent the physical size of a pixel along the horizontal axis x and the vertical axis y of the optical image; (u_0, v_0) is the pixel coordinate of the intersection of the optical axis of the photographing device with the image plane; f is the focal length of the photographing device; R is the rotation matrix; T is the translation vector of the photographing device in the point cloud coordinate system; Z_c is the depth of the point in the photographing device coordinate system; (u, v) is the two-dimensional pixel coordinate of the image feature point in the reference data pair; (X_w, Y_w, Z_w) is the three-dimensional spatial coordinate of the point cloud feature point in the reference data pair; and φ, ω and κ are the angles through which the coordinate axes of the photographing device are rotated about the y axis, the x axis and the z axis of the point cloud coordinate system, respectively.
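An illustrative numpy sketch of the transformation model of claim 8: building R from the three axis rotations and projecting a point cloud coordinate to a pixel. The sign convention of the individual axis rotations is an assumption:

```python
import numpy as np

def rotation_matrix(phi, omega, kappa):
    """R = Ry(phi) @ Rx(omega) @ Rz(kappa): successive rotations about
    the y, x and z axes of the point cloud coordinate system."""
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Ry @ Rx @ Rz

def project(point_w, R, T, f, dx, dy, u0, v0):
    """Pinhole projection of a point cloud coordinate to pixel (u, v)."""
    Xc = R @ np.asarray(point_w, float) + T   # world -> camera frame
    u = u0 + (f / dx) * Xc[0] / Xc[2]
    v = v0 + (f / dy) * Xc[1] / Xc[2]
    return u, v
```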
9. The method of claim 7, wherein the coordinate transformation relationship is as follows:

x = u_0 − f · [a_1(X_A − X_S) + b_1(Y_A − Y_S) + c_1(Z_A − Z_S)] / [a_3(X_A − X_S) + b_3(Y_A − Y_S) + c_3(Z_A − Z_S)]
y = v_0 − f · [a_2(X_A − X_S) + b_2(Y_A − Y_S) + c_2(Z_A − Z_S)] / [a_3(X_A − X_S) + b_3(Y_A − Y_S) + c_3(Z_A − Z_S)]

wherein (x, y) is the pixel coordinate of the target point; (u_0, v_0) is the pixel coordinate of the intersection of the optical axis of the photographing device with the optical image plane; f is the focal length of the photographing device; (X_S, Y_S, Z_S) is the coordinate of the photographing device center in the point cloud coordinate system; (X_A, Y_A, Z_A) is the three-dimensional coordinate of the target point; and a_i, b_i, c_i are elements of the rotation matrix, with 1 ≤ i ≤ 3 and i an integer.
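A minimal numpy sketch of the collinearity-style coordinate transformation of claim 9, assuming the i-th row of the rotation matrix carries the coefficients (a_i, b_i, c_i); the row layout and sign convention are assumptions:

```python
import numpy as np

def collinearity(target, camera, R, f, u0, v0):
    """Pixel coordinate of `target` (X_A, Y_A, Z_A) as seen from the
    camera centre `camera` (X_S, Y_S, Z_S), with rotation matrix
    R = [[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]]."""
    d = np.asarray(target, float) - np.asarray(camera, float)
    x = u0 - f * (R[0] @ d) / (R[2] @ d)
    y = v0 - f * (R[1] @ d) / (R[2] @ d)
    return x, y
```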
10. A system for fusing an image and point cloud data of a power transmission line, characterized by comprising:
a photographing device configured to acquire an optical image of a target power transmission line;
a point cloud acquisition device configured to scan the target power transmission line to obtain point cloud data of the target power transmission line;
and a computing device comprising a memory storing a plurality of instructions and a processor configured to read the instructions and perform the method of any one of claims 1-9.
CN202210400106.5A 2022-04-15 2022-04-15 Fusion method and system of power transmission line image and point cloud data Pending CN114743021A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210400106.5A CN114743021A (en) 2022-04-15 2022-04-15 Fusion method and system of power transmission line image and point cloud data


Publications (1)

Publication Number Publication Date
CN114743021A true CN114743021A (en) 2022-07-12

Family

ID=82281414

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210400106.5A Pending CN114743021A (en) 2022-04-15 2022-04-15 Fusion method and system of power transmission line image and point cloud data

Country Status (1)

Country Link
CN (1) CN114743021A (en)


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115240093A (en) * 2022-09-22 2022-10-25 山东大学 Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion
CN115240093B (en) * 2022-09-22 2022-12-23 山东大学 Automatic power transmission channel inspection method based on visible light and laser radar point cloud fusion
CN115329111A (en) * 2022-10-11 2022-11-11 齐鲁空天信息研究院 Image feature library construction method and system based on point cloud and image matching
WO2024099431A1 (en) * 2022-11-11 2024-05-16 华为技术有限公司 Calibration method based on point cloud data and image, and related apparatus
CN116878396A (en) * 2023-09-06 2023-10-13 国网山西省电力公司超高压输电分公司 Sag measurement method and system based on remote laser
CN116878396B (en) * 2023-09-06 2023-12-01 国网山西省电力公司超高压输电分公司 Sag measurement method and system based on remote laser
CN116935234A (en) * 2023-09-18 2023-10-24 众芯汉创(江苏)科技有限公司 Automatic classification and tree obstacle early warning system and method for power transmission line corridor point cloud data
CN116935234B (en) * 2023-09-18 2023-12-26 众芯汉创(江苏)科技有限公司 Automatic classification and tree obstacle early warning system and method for power transmission line corridor point cloud data
CN116929232A (en) * 2023-09-19 2023-10-24 安徽送变电工程有限公司 Power transmission line clearance distance detection method and line construction model
CN116929232B (en) * 2023-09-19 2024-01-09 安徽送变电工程有限公司 Power transmission line clearance distance detection method and line construction model

Similar Documents

Publication Publication Date Title
WO2022170878A1 (en) System and method for measuring distance between transmission line and image by unmanned aerial vehicle
CN114743021A (en) Fusion method and system of power transmission line image and point cloud data
CN112489130B (en) Distance measurement method and device for power transmission line and target object and electronic equipment
CN110580717A (en) Unmanned aerial vehicle autonomous inspection route generation method for electric power tower
CN112904877A (en) Automatic fan blade inspection system and method based on unmanned aerial vehicle
CN110084785B (en) Power transmission line vertical arc measuring method and system based on aerial images
CN109931909B (en) Unmanned aerial vehicle-based marine fan tower column state inspection method and device
CN112381935B (en) Synthetic vision generates and many first fusion device
KR102557775B1 (en) Drone used 3d mapping method
CN110706273B (en) Real-time collapse area measurement method based on unmanned aerial vehicle
CN114004977A (en) Aerial photography data target positioning method and system based on deep learning
CN113415433A (en) Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
CN116129064A (en) Electronic map generation method, device, equipment and storage medium
CN110415292A (en) Movement attitude vision measurement method of ring identification and application thereof
Li et al. Prediction of wheat gains with imagery from four-rotor UAV
CN116755104A (en) Method and equipment for positioning object based on three points and two lines
WO2023040137A1 (en) Data processing
CN112860946B (en) Method and system for converting video image information into geographic information
Zhou et al. Three dimensional fully autonomous inspection method for wind power employing unmanned aerial vehicle based on 5G wireless communication and artificial intelligence
Qingting et al. Lidar and visual information fusion position system of living work robot for distribution network
CN118351469B (en) Vision-based vehicle positioning method under road side view angle
CN118710256B (en) Intelligent inspection recording system and method for production equipment
Madokoro et al. Calibration and 3D Reconstruction of Images Obtained Using Spherical Panoramic Camera
Binbin et al. Line feature extraction from LiDAR point cloud of unmanned vehicle platform
Sun et al. Review on Algorithm for Fusion of Oblique Data and Radar Point Cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination